---
abstract: |
  [**Abstract**]{} We have investigated the geodetic precession and the strong gravitational lensing in the slowly-rotating black hole in the dynamical Chern-Simons modified gravity theory. We present formulas for the orbital period $T$ and the geodetic precession angle $\Delta\Theta$ of timelike particles on circular orbits around the black hole, which show that the change of the geodetic precession angle with the Chern-Simons coupling parameter $\xi$ is opposite to the change of the orbital period with $\xi$ for fixed $a$. We also discuss the effects of the Chern-Simons coupling parameter on the strong gravitational lensing when light rays pass close to the black hole, and find that for stronger Chern-Simons coupling the prograde photons are captured more easily, while the retrograde photons are captured less easily by the slowly-rotating black hole in the dynamical Chern-Simons modified gravity. Supposing that the gravitational field of the supermassive central object of the Galaxy can be described by this metric, we estimate the numerical values of the main observables for gravitational lensing in the strong field limit.
author:
- Songbai Chen
- Jiliang Jing
title: '[**Geodetic precession and strong gravitational lensing in the dynamical Chern-Simons modified gravity**]{}'
---

Introduction
============

Since Einstein's general relativity was set forth in the last century, there has been continued interest in possible modifications of his theory. One of the most promising extensions of general relativity is Chern-Simons modified gravity [@Lue; @Jackiw; @Alexander], in which the Einstein-Hilbert action is modified by adding a parity-violating Chern-Simons term, which couples to gravity via a scalar field. The parity-violating Chern-Simons term is the product of a quantity second order in the curvature tensor and the Chern-Simons scalar field. Thus, Chern-Simons modified gravity is a high-energy extension of Einstein's general relativity. In fact, the Chern-Simons correction is necessary in string theory as an anomaly-canceling term required to preserve unitarity [@Alexander1; @Svrcek; @Alvarez; @Campbell]. In loop quantum gravity, it is required to ensure gauge invariance of the Ashtekar variables [@Ashtekar]. Moreover, Chern-Simons modified gravity could help explain several problems in cosmology, such as dark energy and dark matter [@Konno2], baryon asymmetry [@Alexander1; @Alexander2; @Garcia], and so on. It is well known that there exist two formulations of Chern-Simons modified gravity. In the non-dynamical formulation, the Chern-Simons scalar field is an *a priori* prescribed function, so that its effective evolution equation reduces to a differential constraint on the space of allowed solutions [@Alexander3; @Alexander4; @Alexander5]. In the dynamical formulation, by contrast, the Chern-Simons scalar field is treated as a dynamical field possessing its own stress-energy tensor and an evolution equation [@Smith; @Yunes1; @Konno3]. It must be pointed out that although the non-dynamical Chern-Simons action can be obtained as a certain limit of the dynamical Chern-Simons action, the non-dynamical and dynamical theories are inequivalent and independent. In general, the solutions of the non-dynamical Chern-Simons gravity cannot be obtained from the solutions of the dynamical Chern-Simons gravity [@Yunes1].
The characteristic observational signatures of Chern-Simons modified gravity could allow us to discriminate the effects of this theory from other phenomena. Since the Schwarzschild solution is unaffected by Chern-Simons modified gravity, solar system tests of general relativity do not put strict bounds on the magnitude of this correction. Recently, Cardoso *et al.* [@Cardoso] studied the evolution of dynamical Chern-Simons perturbations in the background of a Schwarzschild black hole and found that the quasinormal modes could offer a way to detect the correction from the dynamical Chern-Simons terms. A rotating black hole, however, is allowed to possess a non-vanishing Chern-Simons scalar field, which makes it convenient for us to probe the observational signature of the Chern-Simons modified gravity. Thus, a lot of attention has been focused on the study of rotating black holes in the Chern-Simons modified gravity [@Konno2; @Alexander4; @Alexander5; @Yunes1; @Konno3; @Ciufolini1; @Ciufolini2; @Yunes3; @Harko; @Amarilla; @Ahmedov1; @Carlos]. In the non-dynamical formulation, Alexander and Yunes [@Alexander4; @Alexander5] adopted a far-field approximation and obtained a rotating black hole solution in the Chern-Simons modified gravity. In the non-dynamical framework, it was found that the Chern-Simons modified theory predicts an anomalous precession effect [@Alexander5], which was tested with LAGEOS [@Smith; @Ciufolini1; @Ciufolini2]. Using double binary pulsar data, Yunes and Spergel [@Yunes3] obtained a bound on the non-dynamical model with a canonical Chern-Simons scalar field. In the dynamical Chern-Simons modified gravity, Yunes and Pretorius [@Yunes1] found a rotating black hole solution by using the small-coupling and slow-rotation approximations. This rotating black hole solution was also obtained by Konno *et al.* in [@Konno3]. Harko [@Harko] studied the properties of the thin accretion disk around this black hole and probed the effect of the Chern-Simons coupling parameter on the flux and the emission spectrum of the accretion disk. Amarilla *et al.* [@Amarilla] studied the null geodesics of a slowly-rotating black hole in the dynamical Chern-Simons gravity and discussed the effect of the Chern-Simons term on the shadow cast by the black hole. These results are very useful for understanding the properties of black holes in the Chern-Simons modified gravity.

The main purpose of this paper is to study the geodetic precession and the strong gravitational lensing in the slowly-rotating black hole in the dynamical Chern-Simons modified gravity theory, and to see what effects the Chern-Simons coupling parameter has on the geodetic precession angle of timelike particles and on the coefficients of gravitational lensing in the strong field limit. The paper is organized as follows. In Sec. II we introduce briefly the dynamical formulation of the Chern-Simons modified gravity and present the slowly-rotating black hole solution found in [@Yunes1; @Konno3]. In Sec. III, we calculate the effects of the Chern-Simons term on the geodetic precession in circular orbits around this black hole. In Sec. IV, we adopt Bozza's method [@Bozza2; @Bozza3] to obtain the deflection angles for light rays passing close to the slowly-rotating black hole in the dynamical Chern-Simons modified gravity, and then probe the effects of the Chern-Simons coupling parameter on the deflection angle and on the coefficients in the strong field limit. Finally, we present a summary.
Slowly-rotating black hole in the dynamical Chern-Simons modified gravity ========================================================================= In this section, we introduce briefly the slowly-rotating black hole in the dynamical Chern-Simons modified gravity [@Yunes1; @Konno3]. The action in the dynamical Chern-Simons modified gravity theory can be expressed as $$\begin{aligned} S=\int d^4x\sqrt{-g}\bigg[\kappa R+\frac{\alpha}{4}\vartheta\; ^{*}RR -\frac{\beta}{2}\bigg(g^{\mu\nu}\nabla_{\mu}\vartheta \nabla_{\nu}\vartheta+V(\vartheta)\bigg)+\mathcal{L}_{matt}\bigg]. \label{action}\end{aligned}$$ The first term in the right side of Eq.(\[action\]) is the standard Einstein-Hilbert term with $\kappa^{-1}=16\pi G$. The second term denotes the Chern-Simons correction, which consists of the product of a Chern-Simons scalar field $\vartheta$ and the Pontryagin density $^{*}RR$, defined via $^{*}RR= \;^{*}R^{a\;cd}_{\;\;b}R^{b}_{\;\;acd}$. The dual Riemann-tensor $ \;^{*}R^{a\;cd}_{\;\;b}$ is given by $$\begin{aligned} ^{*}R^{a\;cd}_{\;\;b}=\frac{1}{2}\epsilon^{cdef}R^{a}_{\;\;bef},\end{aligned}$$ with $\epsilon^{cdef}$ is the 4-dimensional Levi-Civita tensor. The parameters $\alpha$ and $\beta$ are dimensional coupling constants. The coupling constant $\beta$ is allowed to be arbitrary in the dynamical formulation of the Chern-Simons modified gravity, but it is set to zero in the non-dynamical framework [@Yunes1]. Varying the action S with respect to the metric and the Chern-Simons coupling field, one can find that the modified gravitational field equation and the motion equation of the scalar field $\vartheta$ obey $$\begin{aligned} G_{\mu\nu}+\frac{\alpha}{\kappa}C_{\mu\nu}=\frac{1}{2\kappa}(T^{matt}_{\mu\nu}+T^{\vartheta}_{\mu\nu}),\label{grav1}\end{aligned}$$ and $$\begin{aligned} \beta\nabla_{\mu}\nabla^{\mu}\vartheta=\beta\frac{dV(\vartheta)}{d\vartheta}-\frac{\alpha}{4}\;^{*}RR,\label{moto1}\end{aligned}$$ respectively. Here $G_{\mu\nu}$ is the Einstein tensor and $C_{\mu\nu}$ is the Cotton tensor. Obviously, the evolution of the scalar field $\vartheta$ depends not only on its stress-energy tensor, but also on the curvature of spacetime. Employing the small-coupling and slow-rotating approximation, one can obtain a black-hole solution with non-zero coupling constants in the dynamical Chern-Simons modified gravity, which can be expressed as [@Yunes1; @Konno3] $$\begin{aligned} ds^2&=&-g_{tt}dt^2+g_{rr}dr^2+g_{\theta\theta}d\theta^2+g_{\phi\phi} d\phi^2-2g_{t\phi}dtd\phi, \label{metric0} \\ \vartheta &=&\frac{5}{8}\frac{\alpha}{\beta}\frac{a}{M}\frac{ \cos{\theta}}{r^2}\bigg(1+\frac{2M}{r}+\frac{18M^2}{5r^2}\bigg),\end{aligned}$$ with $$\begin{aligned} g_{tt}&=&1-\frac{2M}{r}+\frac{2a^2M}{r^3}\cos^2{\theta},\nonumber\\ g_{rr}&=&\bigg(1-\frac{2M}{r}\bigg)^{-1}\bigg[1+\frac{a^2}{r}\bigg(\cos^2{\theta}-\bigg(1-\frac{2M}{r}\bigg)^{-1}\bigg)\bigg],\nonumber\\ g_{\theta\theta}&=&r^2+a^2\cos^2{\theta},\nonumber\\ g_{t\phi}&=&\frac{2Ma}{r}\sin^2{\theta}-\frac{5\xi a}{8r^4}\bigg(1+\frac{12M}{7r}+\frac{27M^2}{10r^2}\bigg)\sin^2{\theta},\nonumber\\ g_{\phi\phi}&=&r^2\sin^2{\theta}+a^2\sin^2{\theta}\bigg(1+\frac{2M}{r}\sin^2{\theta}\bigg).\end{aligned}$$ Here the parameter $\xi$ is related to the coupling constants $\alpha$ and $\beta$ by $\xi=\alpha ^2/(\beta\kappa)$, which has an exact dimension $[L]^4$. As in ref.[@Yunes1], we can define a dimensionless parameter $\zeta$ by re-scaling by a factor $(2M)^4$, i.e., $\zeta=\xi/(2M)^4$. 
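As a concrete illustration, the short Python sketch below evaluates the metric components of Eq. (\[metric0\]) exactly as transcribed above, together with the dimensionless coupling $\zeta=\xi/(2M)^4$. We use geometric units $G=c=1$; the function names and the sample numbers are our own and purely illustrative.

```python
import numpy as np

# Illustrative evaluation of the slowly-rotating Chern-Simons metric components,
# transcribed directly from Eq. (metric0) above.  Sign conventions follow that
# line element: ds^2 = -g_tt dt^2 + g_rr dr^2 + ... - 2 g_tphi dt dphi.

def metric_components(r, theta, M=1.0, a=0.1, xi=0.0):
    f = 1.0 - 2.0 * M / r
    cos2, sin2 = np.cos(theta) ** 2, np.sin(theta) ** 2
    g_tt = 1.0 - 2.0 * M / r + 2.0 * a**2 * M * cos2 / r**3
    g_rr = (1.0 / f) * (1.0 + (a**2 / r) * (cos2 - 1.0 / f))
    g_thth = r**2 + a**2 * cos2
    g_tphi = (2.0 * M * a / r) * sin2 \
        - (5.0 * xi * a / (8.0 * r**4)) * (1.0 + 12.0 * M / (7.0 * r)
                                           + 27.0 * M**2 / (10.0 * r**2)) * sin2
    g_phiphi = r**2 * sin2 + a**2 * sin2 * (1.0 + 2.0 * M / r * sin2)
    return g_tt, g_rr, g_thth, g_tphi, g_phiphi

if __name__ == "__main__":
    M, a = 1.0, 0.1
    zeta = 0.2                      # dimensionless coupling, zeta = xi/(2M)^4
    xi = zeta * (2.0 * M) ** 4
    for label, x in [("GR limit (xi = 0)", 0.0), ("with CS coupling ", xi)]:
        g = metric_components(6.0 * M, np.pi / 2, M, a, x)
        print(label, " g_tphi at r = 6M:", g[3])
```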
If $\alpha$ tends to zero, one can find that Chern-Simons scalar field $\vartheta$ disappears and the metric (\[metric0\]) return to that of the slow-rotating Kerr black hole in the general relativity. Since the Chern-Simons scalar field $\vartheta$ has positive energy [@Yunes1], it is natural that the parameter $\xi$ is non-negative. Here we limit ourselves to the case where $\xi\geq 0$ and study the effect of $\xi$ on the geodetic precession and the strong gravitational lensing in the background (\[metric0\]). Geodetic precession in the slowly-rotating black hole in the dynamical Chern-Simons modified gravity ==================================================================================================== The timelike geodesics in the slowly-rotating black hole in dynamical Chern-Simons modified gravity were considered in [@Harko; @Carlos]. Harko [@Harko] studied the properties of the thin accretion disk around the black hole (\[metric0\]). Sopuerta *et al*[@Carlos] considered the timelike geodesic equations for the massive particles and found that the location of the innermost stable circular orbit (ISCO) and the three physical fundamental frequencies associated with the time $\tau$ for the particles are modified in the Chern-Simons modified gravity. However, in Ref.[@Carlos] the geodesic precession of orbits around Chern-Simons black holes is only illustrated numerically for a few examples, while an analytic expression for this physical quantity is still missing. In this paper, we will present an clear expression of $T$ and study the effects of the Chern-Simons term on the Kepler’s third law, and then study the geodetic precession of the massive particles around the black hole (\[metric0\]). Let us start with the condition $\theta=\pi/2$, which set the orbits on the equatorial plane. In this case, one can find that the timelike geodesics take the form $$\begin{aligned} &&u^{t}=\frac{dt}{d\tau}=\frac{Eg_{\phi\phi}-Lg_{t\phi}}{g^2_{t\phi}+g_{tt}g_{\phi\phi}},\label{u1}\\ &&u^{\phi}=\frac{d\phi}{d\tau}=\frac{Eg_{t\phi}+Lg_{tt}}{g^2_{t\phi}+g_{tt}g_{\phi\phi}},\label{u2}\\ &&\bigg(\frac{dr}{d\tau}\bigg)^2+V_{eff}(r)=E^2,\end{aligned}$$ with the effective potential $$\begin{aligned} V_{eff}(r)=\frac{1}{g_{rr}}\bigg(1+\frac{E^2[g_{rr}(g^2_{t\phi}+g_{tt}g_{\phi\phi})-g_{\phi\phi}]+2ELg_{t\phi}+L^2g_{tt}}{g^2_{t\phi}+g_{tt}g_{\phi\phi}}\bigg),\end{aligned}$$ where $E$ and $L$ are the specific energy and the specific angular momentum of particles moving in the orbits, respectively. For the stable circular orbit in the equatorial plane, the effective potential $V(r)$ must obey $$\begin{aligned} V_{eff}(r)=E^2, \;\;\;\;\;\;\;\frac{dV_{eff}(r)}{dr}=0.\end{aligned}$$ Solving above equations, one can obtain $$\begin{aligned} &&E=\frac{g_{tt}+g_{t\phi}\Omega}{\sqrt{g_{tt}+2g_{t\phi}\Omega-g_{\phi\phi}\Omega^2}},\nonumber\\ &&L=\frac{-g_{t\phi}+g_{\phi\phi}\Omega}{\sqrt{g_{tt}+2g_{t\phi}\Omega-g_{\phi\phi}\Omega^2}},\nonumber\\ &&\Omega=\frac{d\phi}{dt}=\frac{g_{t\phi,r}+\sqrt{(g_{t\phi,r})^2+g_{tt,r}g_{\phi\phi,r}}}{g_{\phi\phi,r}},\label{jsd}\end{aligned}$$ where $\Omega$ is the angular velocity of particle moving in the orbits. 
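As a quick numerical consistency check of Eq. (\[jsd\]), the sketch below evaluates $\Omega$, $E$ and $L$ for equatorial circular orbits using central-difference derivatives of the metric functions; for $a=0$ it reproduces the Keplerian value $\Omega=\sqrt{M/R^{3}}$. The code is our own illustration, not part of the derivation.

```python
import numpy as np

# Circular-orbit quantities of Eq. (jsd), using the equatorial (theta = pi/2)
# metric functions of Eq. (metric0).  Derivatives by central differences.

def g_eq(r, M, a, xi):
    g_tt = 1.0 - 2.0 * M / r
    g_tphi = 2.0 * M * a / r - (5.0 * xi * a / (8.0 * r**4)) * (
        1.0 + 12.0 * M / (7.0 * r) + 27.0 * M**2 / (10.0 * r**2))
    g_phiphi = r**2 + a**2 * (1.0 + 2.0 * M / r)
    return g_tt, g_tphi, g_phiphi

def d_dr(f, r, h=1e-6):
    return (f(r + h) - f(r - h)) / (2.0 * h)

def circular_orbit(R, M=1.0, a=0.1, xi=0.0):
    g_tt, g_tphi, g_phiphi = g_eq(R, M, a, xi)
    dg_tt = d_dr(lambda r: g_eq(r, M, a, xi)[0], R)
    dg_tphi = d_dr(lambda r: g_eq(r, M, a, xi)[1], R)
    dg_phiphi = d_dr(lambda r: g_eq(r, M, a, xi)[2], R)
    Omega = (dg_tphi + np.sqrt(dg_tphi**2 + dg_tt * dg_phiphi)) / dg_phiphi
    root = np.sqrt(g_tt + 2.0 * g_tphi * Omega - g_phiphi * Omega**2)
    E = (g_tt + g_tphi * Omega) / root
    L = (-g_tphi + g_phiphi * Omega) / root
    return Omega, E, L

if __name__ == "__main__":
    # Schwarzschild check: Omega should reduce to sqrt(M/R^3) when a = 0.
    Om, E, L = circular_orbit(10.0, a=0.0)
    print("Omega =", Om, " vs Kepler:", np.sqrt(1.0 / 10.0**3))
    # Effect of the Chern-Simons coupling for a prograde orbit (a > 0):
    for xi in (0.0, 2.0, 4.0):
        print("xi =", xi, circular_orbit(10.0, a=0.1, xi=xi))
```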
From Eq.(\[jsd\]), one can obtain Kepler’s third law in the slowly-rotating black-hole spacetime in the dynamical Chern-Simons modified gravity $$\begin{aligned} T^2&=&\frac{4\pi^2}{M}R^3\bigg[1+a\frac{112MR^5-\xi(567M^2+300MR+140R^2)}{56M^{1/2}R^{13/2}}\nonumber\\&+& \frac{a^2M}{R^3}\bigg(1-\xi\frac{567M^2+300MR +140R^2}{28MR^5}+\xi^2\frac{(567M^2+300MR +140R^2)^2}{6272 M^2R^{10}}\bigg)+\mathcal{O}(a^3)\bigg],\label{Time}\end{aligned}$$ where $T$ is the orbital period and $R$ is the radius of the circular orbit. The later terms in the right hand side is the correction by the $a$ and the Chern-Simons term. Obviously, the correction term disappears as $a$ approaches zero. It is reasonable because that as $a$ vanishes the metric (\[metric0\]) reduces to that of the Schwarzschild black hole in the general relativity. Since the black hole is slowly rotating, the correction is dominated by the first-order terms in $a$. Thus, when the black hole rotates in the same direction as the particle, i.e., $a>0$, the orbital period $T$ decreases with the Chern-Simons coupling parameter $\xi$. But when the black hole rotates in the converse direction as the particle, i.e., $a<0$, the orbital period $T$ increases with the Chern-Simons coupling parameter $\xi$. Now, we are in the position to study the geodetic precession of a timelike particle in the circular orbits around the slowly-rotating black hole in the dynamical Chern-Simons modified gravity. As in [@Ksq], we regard the rotating axis of the gyroscope carried by a satellite as a spacelike spin-vector $S^{\mu}$, which parallely transported along a timelike geodesic with the four-velocity $u^{\mu}$. Thus, the parallel transporting equation of $S^{\mu}$ in the direction of $u^{\mu}$ can be expressed as $$\begin{aligned} u^{\mu}\nabla_{\mu}S^{\nu}=0.\label{S11}\end{aligned}$$ From the orthogonality and normalization conditions, one can find $$\begin{aligned} u^{\mu}S_{\mu}=0,\;\;\;\;\;\;\;S^{\mu}S_{\mu}=1.\label{con}\end{aligned}$$ In the slowly-rotating black hole in the dynamical Chern-Simons modified gravity, the parallel transporting equation (\[S11\]) along the circular orbits in the equatorial plane reads $$\begin{aligned} \baselineskip=1 cm &&\frac{dS^{t}}{d\tau}+\mathcal{A}S^{r}=0,\label{sge1}\\ &&\frac{dS^{r}}{d\tau}+\mathcal{B}S^{t}+\mathcal{C}S^{\phi}=0,\label{sge2}\\ &&\frac{dS^{\theta}}{d\tau}=0,\label{sge3}\\ &&\frac{dS^{\phi}}{d\tau}+\mathcal{D}S^{r}=0,\label{sge4}\end{aligned}$$ with $$\begin{aligned} \mathcal{A}&&=\frac{1}{2}\bigg[\bigg(\frac{g_{tt,r}g_{\phi\phi}+g_{t\phi,r}g_{t\phi}}{g^2_{t\phi}+g_{tt}g_{\phi\phi}}\bigg)u^t +\bigg(\frac{g_{t\phi,r}g_{\phi\phi}-g_{t\phi}g_{\phi\phi,r}}{g^2_{t\phi}+g_{tt}g_{\phi\phi}}\bigg)u^{\phi}\bigg]\bigg|_{r=R},\\ \mathcal{B}&&=\frac{1}{2}\bigg[\bigg(\frac{g_{tt,r}}{g_{rr}}\bigg)u^t+\bigg(\frac{g_{t\phi,r}}{g_{rr}}\bigg)u^{\phi}\bigg]\bigg|_{r=R},\\ \mathcal{C}&&=\frac{1}{2}\bigg[\bigg(\frac{g_{t\phi,r}}{g_{rr}}\bigg)u^t-\bigg(\frac{g_{\phi\phi,r}}{g_{rr}}\bigg)u^{\phi}\bigg]\bigg|_{r=R},\\ \mathcal{D}&&=\frac{1}{2}\bigg[\bigg(\frac{g_{tt,r}g_{t\phi}-g_{tt}g_{t\phi,r}}{g^2_{t\phi}+g_{tt}g_{\phi\phi}}\bigg)u^t +\bigg(\frac{g_{t\phi,r}g_{t\phi}+g_{tt}g_{\phi\phi,r}}{g^2_{t\phi}+g_{tt}g_{\phi\phi}}\bigg)u^{\phi}\bigg]\bigg|_{r=R}.\end{aligned}$$ Combining above equations with Eqs. 
(\[u1\]) and (\[u2\]), we can obtain the spin-vector $S^{\mu}$ $$\begin{aligned} \baselineskip=1 cm &&S^{t}=C^{t}\sin{(\varpi \tau)},\\ &&S^{r}=C^{r}\cos{(\varpi \tau)},\\ &&S^{\theta}=C^{\theta},\\ &&S^{\phi}=C^{\phi}\sin{(\varpi \tau)},\end{aligned}$$ with $$\begin{aligned} \varpi=\sqrt{-(\mathcal{A}\mathcal{B}+\mathcal{C}\mathcal{D})}.\label{om1}\end{aligned}$$ Here we have imposed initial condition $S^t=S^{\phi}=0$ at $\tau=0$. The coefficients $C^{t}$, $C^{r}$, $C^{\theta}$ and $C^{\phi}$ can be constrained by the orthogonality and normalization conditions (\[con\]). As $\phi$ goes from $0$ to $2\pi$, one can obtain that the proper time $\tau$ goes from $0$ to $\tau_p= 2\pi/u^{\phi}$. Thus, the geodetic precession angle $\Delta\Theta$ during one orbital period can be expressed as $$\begin{aligned} \Delta\Theta=|\varpi\tau_p-2\pi|=\bigg|2\pi\bigg(\frac{\varpi}{u^{\phi}}-1\bigg)\bigg|.\label{sc1}\end{aligned}$$ Substituting (\[u2\]) and (\[om1\]) into (\[sc1\]), one can expand Eq. (\[sc1\]) as a power series in $M/R$ up to $\mathcal{O}(\frac{M^5}{R^5})$ and obtain the geodetic precession angle $\Delta\Theta$ in the slowly-rotating black hole in dynamical Chern-Simons modified gravity $$\begin{aligned} \Delta\Theta&=&\frac{3\pi M}{R}\bigg[\bigg(1+\frac{3M}{4R}+\frac{9M^2}{8R^2}+\frac{135M^3}{64R^3}\bigg) \nonumber\\ &&-\frac{a}{\sqrt{MR}}\bigg(\frac{2}{3}+\frac{M}{R} +\frac{9M^2}{4R^2}+\frac{45M^3}{8R^3}-\frac{5\xi}{6MR^3}\bigg)+\frac{a^2}{R^2}\bigg(\frac{1}{3}+\frac{3M}{2R}+\frac{45M^2}{8R^2}\bigg)+\mathcal{O}(a^3)\bigg]. \label{sc2}\end{aligned}$$ From Eq. (\[sc2\]), it is easy to find that the geodetic precession angle $\Delta\Theta$ increases with the Chern-Simons coupling parameter $\xi$ if $a>0$ and it decreases with $\xi$ if $a<0$. Comparing with Eq. (\[Time\]), we find that the dependent of the geodetic precession angle on the $\xi$ is converse to the dependent of the orbital period on the $\xi$. It is understandable by a fact that the increase of the orbital period $T$ leads to the decrease of the angular velocity $\omega$ of particle and then it results in the decrease of the precession angle $\Delta\Theta$. Deflection angle in the slowly-rotating black hole in the dynamical Chern-Simons modified gravity ================================================================================================= The null geodesics was considered to study the properties of the shadows casted by a slowly-rotating black hole in the dynamical Chern-Simons gravity [@Amarilla]. In this section, we will study deflection angles of the light rays when they pass close to the slowly-rotating black hole in dynamical Chern-Simons modified gravity, and then probe the effects of the Chern-Simons coupling parameter $\xi$ on the deflection angle and the coefficients in the strong field limit. Formulas in the strong gravitational lensing -------------------------------------------- As in the former, we also consider only the case the light ray is limited in the equatorial plane. 
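Before turning to the lensing analysis, the two closed-form results of the previous section, the orbital period of Eq. (\[Time\]) and the precession angle of Eq. (\[sc2\]), can be evaluated directly. The sketch below (our own illustrative code, in geometric units $G=c=1$) exhibits the opposite trends with $\xi$ for prograde and retrograde orbits described above.

```python
import numpy as np

# Direct evaluation of Eq. (Time) for the orbital period and Eq. (sc2) for the
# geodetic precession angle per orbit.  The parameter values are illustrative.

def period(R, M=1.0, a=0.1, xi=0.0):
    poly = 567.0*M**2 + 300.0*M*R + 140.0*R**2
    first = a * (112.0*M*R**5 - xi*poly) / (56.0 * M**0.5 * R**6.5)
    second = (a**2 * M / R**3) * (1.0 - xi*poly/(28.0*M*R**5)
                                  + xi**2 * poly**2 / (6272.0*M**2*R**10))
    return np.sqrt((4.0*np.pi**2 / M) * R**3 * (1.0 + first + second))

def precession(R, M=1.0, a=0.1, xi=0.0):
    zeroth = 1.0 + 3*M/(4*R) + 9*M**2/(8*R**2) + 135*M**3/(64*R**3)
    linear = (a/np.sqrt(M*R)) * (2.0/3.0 + M/R + 9*M**2/(4*R**2)
                                 + 45*M**3/(8*R**3) - 5.0*xi/(6.0*M*R**3))
    quadratic = (a**2/R**2) * (1.0/3.0 + 3*M/(2*R) + 45*M**2/(8*R**2))
    return (3.0*np.pi*M/R) * (zeroth - linear + quadratic)

if __name__ == "__main__":
    R = 10.0
    for a in (+0.1, -0.1):                      # prograde / retrograde orbits
        for xi in (0.0, 2.0, 4.0):
            print(f"a={a:+.1f} xi={xi:.1f}  T={period(R, a=a, xi=xi):.5f}"
                  f"  dTheta={precession(R, a=a, xi=xi):.6f}")
    # T decreases with xi while dTheta increases with xi for a > 0,
    # and vice versa for a < 0, as stated in the text.
```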
With this condition, the reduced metric for the slowly-rotating black hole in dynamical Chern-Simons modified gravity can be expressed as $$\begin{aligned} ds^2&=&-A(x)dt^2+B(x)dx^2+C(x) d\phi^2-2D(x)dtd\phi, \label{metric1}\end{aligned}$$ where we adopt to a new radial coordinate $x= r/2M$ and the metric coefficients have the form $$\begin{aligned} A(x)&=&1-\frac{1}{x},\\ B(x)&=&\frac{x^2-x-\hat{a}^2}{(x-1)^2},\\ C(x)&=&x^2+\frac{\hat{a}^2(x+1)}{x},\\ D(x)&=&\frac{\hat{a}}{x}-\frac{\hat{a}\zeta(280x^2+189x+240)}{448x^6}.\end{aligned}$$ Here the quantities $\zeta=\frac{\xi}{(2M)^4}$ and $\hat{a}=\frac{a}{2M}$ are the re-scaled Chern-Simons coupling parameter and the re-scaled rotation parameter of black hole, respectively. Obviously, the parameters $\zeta$ and $\hat{a}$ are dimensionless. For simplicity, we set $2M=1$ in the following calculations. As in Ref. [@Bozza3], the null geodesics take the form $$\begin{aligned} &&\frac{dt}{d\lambda}=\frac{C(x)-JD(x)}{D(x)^2+A(x)C(x)},\label{u3}\\ &&\frac{d\phi}{d\lambda}=\frac{D(x)+JA(x)}{D(x)^2+A(x)C(x)},\label{u4}\end{aligned}$$ where $\lambda$ is an affine parameter along the geodesics and $J$ is the angular momentum of the photon. For the null geodesics, the Lagrangian $\mathcal{L}=\frac{1}{2}g_{\mu\nu}\dot{x^{\mu}}\dot{x^{\nu}}$ vanishes. This implies that $$\begin{aligned} \dot{x}=\pm\sqrt{\frac{C(x)-J[2D(x)+JA(x)]}{B(x)[D(x)^2+A(x)C(x)]}}.\end{aligned}$$ Clearly, $\dot{x}$ is equal to zero at the minimum distance of approach of the light ray. Combining with Eqs. (\[u3\]) and (\[u4\]), one can obtain that [@Bozza3] $$\begin{aligned} J=u=\frac{-D(x_0)+\sqrt{A(x_0)C(x_0)+D^2(x_0)}}{A(x_0)}.\end{aligned}$$ where $x_0$ is the closest approach distance and $u$ is the impact parameter. In the slowly-rotating black-hole spacetime in the dynamical Chern-Simons modified gravity, the photon-sphere equation is given by $$\begin{aligned} A(x)C'(x)-A'(x)C(x)+2J[A'(x)D(x)-A(x)D'(x)]=0.\label{root}\end{aligned}$$ Obviously, this equation is more complex than that in the background of a static and spherical black hole [@Vir2]. It is difficult to obtain an analytical form for the photon-sphere radius in this case. However, we can expand Eq.(\[root\]) as a power series in $\hat{a}$ and find that the photon-sphere radius in the slowly-rotating approximation can be expressed as $$\begin{aligned} x_{ps}=\frac{3}{2}-\hat{a}\bigg(\frac{2\sqrt{3}}{3}-\frac{62\sqrt{3}}{243}\zeta\bigg) -\hat{a}^2\bigg(\frac{4}{9}-\frac{17924}{5103}\zeta+\frac{896024}{413343}\zeta^2\bigg)+\mathcal{O}(\hat{a}^3).\label{rps}\end{aligned}$$ ![Variety of the quantity $r_{ps}=2M x_{ps}$ with the Chern-Simons coupling parameter $\zeta$ in the slowly-rotating black hole in the dynamical Chern-Simons modified gravity. The solid, dashed, dash-dotted and dotted curves are for $\zeta=0$, $0.1$, $0.2$ and $0.3$, respectively. Here we set $2M=1$.[]{data-label="f1"}](ps-a.eps){width="7cm"} From Eq.(\[rps\]), one can obtain that the photon-sphere radius $x_{ps}$ increases with the parameter $\zeta$ if the photons are winding in the same direction of the black-hole rotation (i.e., $\hat{a}>0$), while the radius $x_{ps}$ decreases with the parameter $\zeta$ if the photons rotate in converse direction to the black hole (i.e., $\hat{a}<0$). This is also shown in Fig.(\[f1\]) in which we plotted the variety of the photon-sphere radius $x_{ps}$ with the Chern-Simons coupling parameter $\zeta$ and the rotating parameter $\hat{a}$ by solving Eq. (\[root\]) numerically. 
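The photon-sphere radius can also be checked numerically. The sketch below first evaluates the slow-rotation expansion Eq. (\[rps\]) and then solves Eq. (\[root\]) with a root finder, setting $J$ equal to the impact parameter evaluated at the same radius and taking the derivatives by central differences (units $2M=1$). The code and its tolerances are our own and serve only as a rough cross-check.

```python
import numpy as np
from scipy.optimize import brentq

# Photon-sphere radius in the units 2M = 1 of Eq. (metric1): (i) from the
# slow-rotation expansion Eq. (rps), and (ii) by solving Eq. (root) directly.
# B(x) is not needed, since it drops out of the photon-sphere condition.

def A(x):        return 1.0 - 1.0 / x
def C(x, ah):    return x**2 + ah**2 * (x + 1.0) / x
def D(x, ah, z): return ah / x - ah * z * (280.0*x**2 + 189.0*x + 240.0) / (448.0 * x**6)

def x_ps_series(ah, z):
    return (1.5 - ah * (2.0*np.sqrt(3.0)/3.0 - 62.0*np.sqrt(3.0)*z/243.0)
            - ah**2 * (4.0/9.0 - 17924.0*z/5103.0 + 896024.0*z**2/413343.0))

def x_ps_numeric(ah, z, h=1e-6):
    def J(x):            # impact parameter for closest approach x
        return (-D(x, ah, z) + np.sqrt(A(x)*C(x, ah) + D(x, ah, z)**2)) / A(x)
    def d(f, x):         # central-difference derivative
        return (f(x + h) - f(x - h)) / (2.0 * h)
    def root_eq(x):      # Eq. (root) with J -> J(x)
        return (A(x)*d(lambda y: C(y, ah), x) - d(A, x)*C(x, ah)
                + 2.0*J(x)*(d(A, x)*D(x, ah, z) - A(x)*d(lambda y: D(y, ah, z), x)))
    return brentq(root_eq, 1.05, 2.5)

if __name__ == "__main__":
    for ah in (+0.05, -0.05):
        for z in (0.0, 0.1, 0.2, 0.3):
            print(f"a^={ah:+.2f} zeta={z:.1f}: series={x_ps_series(ah, z):.5f}"
                  f"  numeric={x_ps_numeric(ah, z):.5f}")
```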
Moreover, we find that as $\zeta\rightarrow 0$, the photon-sphere radius $x_{ps}$ reduces to that in Kerr black hole. As the rotation parameter $\hat{a}$ tends to zero, $x_{ps}$ is independent of the Chern-Simons coupling parameter $\zeta$. Following ref. [@Ein1], we can obtain the deflection angle for the photon coming from infinite in the slowly-rotating black hole in the dynamical Chern-Simons modified gravity. $$\begin{aligned} \alpha(x_{0})=I(x_{0})-\pi,\end{aligned}$$ and $I(x_0)$ is $$\begin{aligned} I(x_0)=2\int^{\infty}_{x_0}\frac{\sqrt{B(x)|A(x_0)|}[D(x)+JA(x)]dx}{\sqrt{D^2(x)+A(x)C(x)} \sqrt{A(x_0)C(x)-A(x)C(x_0)+2J[A(x)D(x_0)-A(x_0)D(x)]}},\label{int1}\end{aligned}$$ As in the Schwarzschild black hole spacetime, the deflection angle increases when parameter $x_0$ decreases. For a certain value of $x_0$ the deflection angle becomes $2\pi$, so that the light ray makes a complete loop around the black hole before reaching the observer. If $x_0$ is equal to the radius of the photon sphere $x_{ps}$, one can find that the deflection angle diverges and the photon is captured on a circular orbit. From the above discussion about the variety of the photon-sphere radius $r_{ps}$ with the Chern-Simons parameter $\zeta$, it is easy to obtain that for the larger $\zeta$ the prograde photons may be captured more easily, and conversely, the retrograde photons is harder to be captured. In order to find the behavior of the deflection angle very close to the photon sphere, we adopt to the evaluation method for the integral (\[int1\]) proposed by Bozza [@Bozza2], which has been widely used in studying of the strong gravitational lensing of various black holes [@Vir2; @Gyulchev; @Gyulchev1; @Darwin; @Vir; @Vir1; @Vir3; @Fritt; @Bozza1; @Eirc1; @whisk; @Bhad1; @Song1; @Song2; @TSa1; @AnAv]. Let us now to define a variable $$\begin{aligned} z=1-\frac{x_0}{x},\end{aligned}$$ and rewrite the Eq.(\[int1\]) as $$\begin{aligned} I(x_0)=\int^{1}_{0}R(z,x_0)f(z,x_0)dz,\label{in1}\end{aligned}$$ with $$\begin{aligned} R(z,x_0)&=&2\frac{1-A(x_0)}{A'(z)\sqrt{C(z)}}\frac{\sqrt{B(z)|A(x_0)|}[D(z)+JA(z)]}{\sqrt{D^2(z)+A(z)C(z)}},\end{aligned}$$ $$\begin{aligned} f(z,x_0)&=&\frac{1}{\sqrt{A(x_0)-A(z)\frac{C(x_0)}{C(z)}+\frac{2J}{C(z)}[A(z)D(x_0)-A(x_0)D(z)]}}.\end{aligned}$$ The function $R(z, x_0)$ is regular for all values of $z$ and $x_0$. However, the function $f(z, x_0)$ diverges as $z$ tends to zero, i.e., as the photon approaches the photon sphere. Thus, we can split the integral (\[in1\]) into the divergent part $I_D(x_0)$ and the regular one $I_R(x_0)$ $$\begin{aligned} I_D(x_0)&=&\int^{1}_{0}R(0,x_{ps})f_0(z,x_0)dz, \nonumber\\ I_R(x_0)&=&\int^{1}_{0}[R(z,x_0)f(z,x_0)-R(0,x_{ps})f_0(z,x_0)]dz \label{intbr}.\end{aligned}$$ ![The minimum impact parameter $u_{ps}$ changes with the Chern-Simons coupling parameter $\zeta$ in the slowly-rotating black hole in the dynamical Chern-Simons modified gravity. The solid, dashed, dash-dotted and dotted curves are for $\zeta=0$, $0.1$, $0.2$ and $0.3$, respectively. Here we set $2M=1$. 
[]{data-label="fs2"}](um-a.eps){width="7cm"} Expanding the argument of the square root in $f(z,x_0)$ to the second order in $z$, we have $$\begin{aligned} f_0(z,x_0)=\frac{1}{\sqrt{p(x_0)z+q(x_0)z^2}},\end{aligned}$$ where $$\begin{aligned} p(x_0)&=&\frac{1-A(x_0)}{A'(x_0)C(x_0)}\bigg\{A(x_0)C'(x_0)-A'(x_0)C(x_0)+2J[A'(x_0)D(x_0)-A(x_0)D(x_0)]\bigg\}, \nonumber\\ q(x_0)&=&\frac{(1-A(x_0))^2}{2A'^3(x_0)C^2(x_0)}\bigg(2C(x_0)C'(x_0)A'^2(x_0)+[C(x_0)C''(x_0)-2C'^2(x_0)]A(x_0)A'(x_0)\nonumber\\ &&-C(x_0)C'(x_0)A(x_0)A''(x_0)+ 2J\{A(x_0)C(x_0)[A''(x_0)D'(x_0)-A'(x_0)D''(x_0)]\nonumber\\&&+2A'(x_0)C'(x_0)[A(x_0)D'(x_0)-A'(x_0)D(x_0)]\}\bigg).\label{al0}\end{aligned}$$ Comparing Eq.(\[root\]) with Eq.(\[al0\]), one can find that if $x_{0}$ approaches to the radius of photon sphere $x_{ps}$ the coefficient $p(x_{0})$ vanishes and the leading term of the divergence in $f_0(z,x_{0})$ is $z^{-1}$. This means that the integral (\[in1\]) diverges logarithmically. The coefficient $q(x_0)$ takes the form $$\begin{aligned} q(x_{ps})&=&\frac{(1-A(x_{ps}))^2}{2A'^2(x_{ps})C(x_{ps})}\bigg\{A(x_{ps})C''(x_{ps})-A''(x_{ps})C(x_{ps}) 2J[A''(x_{ps})D(x_{ps})-A(x_{ps})D''(x_{ps})]\bigg\}.\end{aligned}$$ Therefore the deflection angle in the strong field region can be expanded in the form [@Bozza2] $$\begin{aligned} \alpha(\theta)=-\bar{a}\log{\bigg(\frac{u}{u_{ps}}-1\bigg)}+\bar{b}+\mathcal{O}(u-u_{ps}), \label{alf1}\end{aligned}$$ with $$\begin{aligned} &\bar{a}&=\frac{R(0,x_{ps})}{\sqrt{q(x_{ps})}}, \nonumber\\ &\bar{b}&= -\pi+b_R+\bar{a}\log{\bigg\{\frac{2q(x_{ps})C(x_{ps})}{u_{ps}A(x_{ps})[D(x_{ps}+JA(x_{ps})]}\bigg\}}, \nonumber\\ &b_R&=I_R(x_{ps}), \nonumber\\ &u_{ps}&=\frac{-D(x_{ps})+\sqrt{A(x_{ps})C(x_{ps})+D^2(x_{ps})}}{A(x_{ps})}.\label{coa1}\end{aligned}$$ ![Variation of the coefficients of the strong field limit $\bar{a}$ (the left) and $\bar{b}$ (the right) with the Chern-Simons coupling parameter $\zeta$ in the slowly-rotating black hole in the dynamical Chern-Simons modified gravity. The solid, dashed, dash-dotted and dotted curves are for $\zeta=0$, $0.1$, $0.2$ and $0.3$, respectively. Here we set $2M=1$. []{data-label="f3"}](1a-a.eps "fig:"){width="7cm"}![Variation of the coefficients of the strong field limit $\bar{a}$ (the left) and $\bar{b}$ (the right) with the Chern-Simons coupling parameter $\zeta$ in the slowly-rotating black hole in the dynamical Chern-Simons modified gravity. The solid, dashed, dash-dotted and dotted curves are for $\zeta=0$, $0.1$, $0.2$ and $0.3$, respectively. Here we set $2M=1$. []{data-label="f3"}](1b-a.eps "fig:"){width="7cm"} ![Deflection angles evaluated at $u=u_{ps}+0.003$ is a function of the Chern-Simons coupling parameter $\zeta$ in the slowly-rotating black hole in the dynamical Chern-Simons modified gravity. The solid, dashed, dash-dotted and dotted curves are for $\zeta=0$, $0.1$, $0.2$ and $0.3$, respectively. Here we set $2M=1$.[]{data-label="f4"}](se-a.eps){width="7cm"} Making use of Eqs.(\[alf1\]) and (\[coa1\]), we can study the properties of strong gravitational lensing in the slowly-rotating black hole in the dynamical Chern-Simons modified gravity. 
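To illustrate Eq. (\[alf1\]), the snippet below evaluates the strong-field deflection angle in the non-rotating limit, using the $\hat{a}\rightarrow 0$ values $\bar{a}=1$, $\bar{b}\simeq-0.40023$ and $u_{ps}=3\sqrt{3}/2$ quoted in Eq. (\[coa21\]) below (units $2M=1$). It simply displays the logarithmic growth of the deflection angle, and hence of the winding number, as $u\rightarrow u_{ps}$.

```python
import numpy as np

# Strong-deflection expansion Eq. (alf1) in the non-rotating limit.
# abar, bbar and u_ps are the a^ -> 0 values from Eq. (coa21); 2M = 1.

abar, bbar = 1.0, -0.40023
u_ps = 3.0 * np.sqrt(3.0) / 2.0

def alpha(u):
    """alpha(u) = -abar*log(u/u_ps - 1) + bbar."""
    return -abar * np.log(u / u_ps - 1.0) + bbar

if __name__ == "__main__":
    for du in (1e-1, 1e-2, 1e-3, 1e-4):
        a = alpha(u_ps + du)
        print(f"u - u_ps = {du:.0e}:  alpha = {a:7.4f} rad  (~{a/(2*np.pi):.2f} turns)")
```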
Neglecting terms of order $\mathcal{O}(\hat{a}^3)$ and higher term, we can expand the the coefficients ($\bar{a}$ and $\bar{b}$ ) in the strong gravitational lensing and the minimum impact parameter $u_{ps}$ as a power series in $\hat{a}$ $$\begin{aligned} &\bar{a}&=1+\frac{2\sqrt{3}\hat{a}(1134-2003\zeta)}{5103}+ \hat{a}^2\bigg(\frac{10}{9}-\frac{133288\zeta}{15309}+\frac{57184906\zeta^2}{8680203}\bigg)+ \mathcal{O}(\hat{a}^3), \nonumber\\ &\bar{b}&=-0.40023-(0.190505-2.06119\zeta)\hat{a}-\hat{a}^2(14.903\zeta^2-13.9387\zeta+0.541507)+\mathcal{O}(\hat{a}^3), \nonumber\\ &b_R&=\log{6}+ \frac{\sqrt{3}\hat{a}}{3}\bigg[\frac{4(1+\log{6})}{3}+\frac{\zeta (3754-4006\log{6})}{1701}\bigg]\nonumber\\&&+ \hat{a}^2\bigg[\frac{4}{3}+\frac{10}{9}\log{6}+\zeta\bigg(\frac{7528}{1701}-\frac{133288\log{6}}{15309}\bigg)- \zeta^2\bigg(\frac{67484654}{8680203}-\frac{57184906 \log{6}}{8680203}\bigg)\bigg] +\mathcal{O}(\hat{a}^3), \nonumber\\ &u_{ps}& =\frac{3\sqrt{3}}{2}-\hat{a}\bigg(2-\frac{131\zeta}{189}\bigg)-\frac{\sqrt{3}\hat{a}^2}{3}\bigg(1-\frac{2948\zeta}{567 }+\frac{701941\zeta^2}{321489}\bigg)+\mathcal{O}(\hat{a}^3).\label{coa21}\end{aligned}$$ In figs.(\[fs2\])-(\[f3\]), we plotted numerically the changes of the minimum impact parameter $u_{ps}$ and the coefficients ($\bar{a}$ and $\bar{b}$ ) with $\hat{a}$ and $\zeta$ in the slowly-rotating black hole in the dynamical Chern-Simons modified gravity. From Eqs.(\[coa21\]) and figs. (\[fs2\])-(\[f3\]), we find that the minimum impact parameter $u_{ps}$ and the coefficients ($\bar{a}$ and $\bar{b}$ ) in the strong field limit are functions of the rotation parameter $\hat{a}$ and the Chern-Simons coupling parameter $\zeta$. The minimum impact parameter has similar behavior as the radius of the photon sphere $x_{ps}$. The coefficient $\bar{a}$ increases with $\hat{a}$ for fixed $\zeta$. For fixed $\hat{a}$, the coefficient $\bar{a}$ decreases with the Chern-Simons coupling parameter $\zeta$ if $\hat{a}>0$ and it increases if $\hat{a}<0$. The coefficient $\bar{b}$ decreases with $\hat{a}$ for the smaller $\zeta$, but it increases with $\hat{a}$ for the larger $\zeta$. For fixed $\hat{a}$, the variety of $\bar{b}$ with $\zeta$ is converse to the variety of $\bar{a}$ with $\zeta$. In fig. (\[f4\]), we plotted the change of the deflection angles evaluated at $u=u_{ps}+0.003$ with $\zeta$. It is shown that in the strong field limit the deflection angles have the similar properties of the coefficient $\bar{a}$. This means that the deflection angles of the light rays are dominated by the logarithmic term in the strong gravitational lensing. Moreover, we also find that for larger $\zeta$ the deflection angle is larger for the retrograde photon, while it is smaller for the prograde photon. These imply that for the larger $\zeta$ the prograde photons may be captured more easily, and conversely, the retrograde photons is harder to be captured in the slowly-rotating black hole in the dynamical Chern-Simons modified gravity. Observables in the strong deflection limit ------------------------------------------ Let us now to study the effect of the Chern-Simons parameter $\zeta$ on the observational gravitational lensing parameters. We start by assuming that the gravitational field of the supermassive black hole at the Galactic center of Milky Way can be described by the slowly-rotating black hole in the dynamical Chern-Simons modified gravity, and then estimate the numerical values for the main observables of gravitational lensing in the strong field limit. 
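The expansions in Eq. (\[coa21\]) are straightforward to tabulate. The sketch below evaluates $\bar{a}$, $\bar{b}$ and $u_{ps}$ on a grid of $(\hat{a},\zeta)$ and, anticipating the observables $\theta_{\infty}$ and $s$ defined in the next subsection, converts them into angles for the Galactic-center lens using the mass-to-distance ratio $M/D_{OL}\approx 2.4734\times10^{-11}$ quoted there. The code is our own; for $\hat{a}=0$ it gives $\theta_{\infty}\approx 26.5$ $\mu$as and $s\approx 0.033$ $\mu$as, in agreement with Table I below.

```python
import numpy as np

# Strong-field coefficients of Eq. (coa21) and the lensing observables
# theta_inf = u_ps/D_OL and s = theta_inf*exp((bbar - 2*pi)/abar).
# u_ps is in units 2M = 1, so a factor 2M is restored in theta_inf.

RAD_TO_MUAS = (180.0 / np.pi) * 3600.0 * 1.0e6   # radians -> microarcseconds
M_OVER_DOL = 2.4734e-11                          # Galactic-centre black hole

def abar(ah, z):
    return (1.0 + 2.0*np.sqrt(3.0)*ah*(1134.0 - 2003.0*z)/5103.0
            + ah**2*(10.0/9.0 - 133288.0*z/15309.0 + 57184906.0*z**2/8680203.0))

def bbar(ah, z):
    return (-0.40023 - (0.190505 - 2.06119*z)*ah
            - ah**2*(14.903*z**2 - 13.9387*z + 0.541507))

def u_ps(ah, z):
    return (3.0*np.sqrt(3.0)/2.0 - ah*(2.0 - 131.0*z/189.0)
            - (np.sqrt(3.0)*ah**2/3.0)*(1.0 - 2948.0*z/567.0 + 701941.0*z**2/321489.0))

def observables(ah, z):
    theta_inf = u_ps(ah, z) * (2.0 * M_OVER_DOL) * RAD_TO_MUAS
    s = theta_inf * np.exp((bbar(ah, z) - 2.0*np.pi) / abar(ah, z))
    return theta_inf, s

if __name__ == "__main__":
    for ah in (-0.10, -0.05, 0.0, 0.05, 0.10):
        for z in (0.0, 0.1, 0.2, 0.3):
            th, s = observables(ah, z)
            print(f"a^={ah:+.2f} zeta={z:.1f}: abar={abar(ah,z):.4f} "
                  f"bbar={bbar(ah,z):+.4f} u_ps={u_ps(ah,z):.4f} "
                  f"theta_inf={th:6.3f} muas  s={s:.4f} muas")
```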
In the strong deflection limit, from the lensing geometry we can rewrite the lens equation as [@Bozza3] $$\begin{aligned} \gamma=\frac{D_{OL}+D_{LS}}{D_{LS}}\theta-\alpha(\theta) \; mod \;2\pi\end{aligned}$$ where $D_{LS}$ is the lens-source distance and $D_{OL}$ is the observer-lens distance. $\gamma$ is the angle between the direction of the source and the optical axis. $\theta=u/D_{OL}$ is the angular separation between the lens and the image. Following ref.[@Bozza3], one can find that the angular separation between the lens and the n-th relativistic image is $$\begin{aligned} \theta_n\simeq\theta^0_n\bigg(1-\frac{u_{ps}e_n(D_{OL}+D_{LS})}{\bar{a}D_{OL}D_{LS}}\bigg),\end{aligned}$$ with $$\begin{aligned} \theta^0_n=\frac{u_{ps}}{D_{OL}}(1+e_n),\;\;\;\;\;\;e_{n}=e^{\frac{\bar{b}+|\gamma|-2\pi n}{\bar{a}}}.\end{aligned}$$ The quantity $\theta^0_n$ is the image positions corresponding to $\alpha=2n\pi$, and $n$ is an integer. According to the past oriented light ray which starts from the observer and finishes at the source the resulting images stand on the eastern side of the black hole for direct photons ($\hat{a}>0$) and are described by positive $\gamma$. Retrograde photons ($\hat{a}<0$) have images on the western side of the black hole and are described by negative values of $\gamma$. In the limit $n\rightarrow \infty$, we find that $e_n\rightarrow 0$, and then the relation between the minimum impact parameter $u_{ps}$ and the asymptotic position of a set of images $\theta_{\infty}$ can be simplified as $$\begin{aligned} u_{ps}=D_{OL}\theta_{\infty}.\label{uhs1}\end{aligned}$$ In order to obtain the coefficients $\bar{a}$ and $\bar{b}$, one needs to separate at least the outermost image from all the others. As in Refs.[@Bozza2; @Bozza3], we consider here the simplest case in which only the outermost image $\theta_1$ is resolved as a single image and all the remaining ones are packed together at $\theta_{\infty}$. Thus the angular separation between the first image and other ones can be expressed as [@Bozza2; @Bozza3; @Gyulchev1] $$\begin{aligned} s=\theta_1-\theta_{\infty}=\theta_{\infty}e^{\frac{\bar{b}-2\pi}{\bar{a}}}.\label{ss1}\end{aligned}$$ Through measuring $s$ and $\theta_{\infty}$, we can obtain the strong deflection limit coefficients $\bar{a}$, $\bar{b}$ and the minimum impact parameter $u_{ps}$. Comparing their values with those predicted by the theoretical models, we can obtain information about the parameters of the lens object stored in them. The mass of the central object of our Galaxy is estimated recently to be $4.4\times 10^6M_{\odot}$ [@Genzel1] and its distance is around $8.5kpc$, so that the ratio of the mass to the distance $M/D_{OL} \approx2.4734\times10^{-11}$. Making use of Eqs. (\[coa21\]), (\[uhs1\]) and (\[ss1\]) we can estimate the values of the coefficients and observables for gravitational lensing in the strong field limit. For the different $\zeta$ and $\hat{a}$, the numerical value for the angular position of the relativistic images $\theta_{\infty}$ and the angular separation $s$ are listed in the Table I. The dependence of these observables on the parameters $\zeta$ and $\hat{a}$ are also shown in Fig. (5). ![Gravitational lensing by the Galactic center black hole. Variation of the values of the angular position $\theta_{\infty}$, the angular separation $s$ with parameter $\hat{a}$ in the slowly-rotating black hole in the dynamical Chern-Simons modified gravity. 
The solid, dashed, dash-dotted and dotted curves are for $\zeta=0$, $0.1$, $0.2$ and $0.3$, respectively.](sth-a.eps "fig:"){width="6cm"}![Gravitational lensing by the Galactic center black hole. Variation of the values of the angular position $\theta_{\infty}$, the angular separation $s$ with parameter $\hat{a}$ in the slowly-rotating black hole in the dynamical Chern-Simons modified gravity. The solid, dashed, dash-dotted and dotted curves are for $\zeta=0$, $0.1$, $0.2$ and $0.3$, respectively.](snh-a.eps "fig:"){width="6cm"} \[51\] ----------- ----------- ------------- ------------- ------------- ----------- ------------- ------------- ------------- $\hat{a}$ $\zeta=0$ $\zeta=0.1$ $\zeta=0.2$ $\zeta=0.3$ $\zeta=0$ $\zeta=0.1$ $\zeta=0.2$ $\zeta=0.3$ -0.10 28.497 28.449 28.400 28.350 0.0226 0.0233 0.0242 0.0253 -0.05 27.516 27.487 27.458 27.428 0.0271 0.0278 0.0286 0.0294 0 26.510 26.510 26.510 26.510 0.0332 0.0332 0.0332 0.0332 0.05 25.474 25.518 25.561 25.603 0.0411 0.0395 0.0380 0.0365 0.10 24.403 24.514 24.618 24.717 0.0513 0.0469 0.0429 0.0393 ----------- ----------- ------------- ------------- ------------- ----------- ------------- ------------- ------------- : Numerical estimation for main observables in the strong field limit for the black hole at the center of our Galaxy, which is supposed to be described by in the slowly-rotating black hole in the dynamical Chern-Simons modified gravity. \[tab1\] Obviously, the observables $\theta_{\infty}$ and $s$ are independent of the parameter $\zeta$ as the rotation parameter $\hat{a}=0$. From Table I and Fig. (5), we find that with the increase of $\zeta$, the angular position of the relativistic images $\theta_{\infty}$ increases for the direct photons $(\hat{a}>0)$ and decrease for the retrograde photons $(\hat{a}<0)$. The change of the angular separation $s$ with $\zeta$ is converse to that of $\theta_{\infty}$. Theoretically, we could detect the effects of parameter $\zeta$ on the strong gravitational lensing through the astronomical observations and then make a constraint on the parameter $\xi$. From Table I, we find that the values for $\theta_{\infty}$ is very small, which leads to constrain the parameter $\xi$ more difficultly. If one can constrain $\zeta<0.1$ with lensing observations, it is easy to obtain that the bound of the Chern-Simons coupling parameter is $$\begin{aligned} \xi^{1/4}=\zeta^{1/4}2M<8.70\times10^6\text{km},\end{aligned}$$ which is not stronger than that from the binary pulsar PSR J0737-3039 A/B [@Burgay] obtained by Yunes *et al* [@Yunes1] $$\begin{aligned} \xi^{1/4}<1.5\times10^4\text{km}.\end{aligned}$$ In order that the strong gravitational lensing bound can beat the binary pulsar one, we would have to constrain $\zeta\sim 8.44\times 10^{-13}$, which seems impossible in the near future. summary ======= In this paper we have extensively studied the geodetic precession and the strong gravitational lensing in the slowly-rotating black hole in the dynamical Chern-Simons modified gravity theory. We present the formulas of the orbital period $T$ and the geodetic precession angle $\Delta\Theta$ for the timelike particles in the circular orbits around the black hole. When the black hole rotates in the same direction as the particle ( $a>0$), the orbital period $T$ decreases with the Chern-Simons coupling parameter $\xi$. While the black hole rotates in the converse direction as the particle ($a<0$), the orbital period $T$ increases with the parameter $\xi$. 
Moreover, it is shown that the change of the geodetic precession angle with $\xi$ is opposite to the change of the orbital period with $\xi$. We also discuss the effects of the Chern-Simons coupling parameter on the strong gravitational lensing when light rays pass close to the black hole. We find that the photon-sphere radius, the minimum impact parameter, and the coefficients in the strong field limit depend on the Chern-Simons coupling parameter. With the increase of $\zeta$ (i.e., the re-scaled Chern-Simons coupling parameter $\zeta=\xi/(2M)^4$) the deflection angle increases for the prograde photon, while it decreases for the retrograde photon. This means that for larger $\zeta$ the prograde photons may be captured more easily, and conversely, the retrograde photons are captured less easily by the slowly-rotating black hole in dynamical Chern-Simons modified gravity.

The model was applied to the supermassive black hole in the Galactic center. Our results show that with the increase of the parameter $\zeta$ the angular position of the relativistic images $\theta_{\infty}$ increases for the direct photons $(a>0)$ and decreases for the retrograde photons $(a<0)$. The change of the angular separation $s$ with $\zeta$ is opposite to that of $\theta_{\infty}$. Our results also show that the bound on $\xi$ from the strong gravitational lensing is not stronger than that from the binary pulsar PSR J0737-3039 A/B [@Burgay]. In particular, for the case $\zeta = 0.1$, we get $\xi^{1/4}\sim 8.70\times10^{6}$ km, which is ruled out by the binary pulsar constraints. If one were to choose the parameter $\xi$ small enough not to be ruled out by these constraints, the effect would be much smaller than microarcseconds. Perhaps, with the development of technology, the effects of the parameter $\zeta$ on gravitational lensing may be detected in the future. It would be of interest to study the Hawking radiation and quasinormal modes of the slowly-rotating black hole in dynamical Chern-Simons modified gravity. Work in this direction will be reported in the future.

We thank the referees for their constructive comments and suggestions, which helped us make the meaning of our manuscript clearer. This work was partially supported by the National Natural Science Foundation of China under Grant No.10875041, the Program for Changjiang Scholars and Innovative Research Team in University (PCSIRT, No. IRT0964) and the construct program of key disciplines in Hunan Province. J. Jing's work was partially supported by the National Natural Science Foundation of China under Grant Nos.10875040 and 10935013; 973 Program Grant No. 2010CB833004 and the Hunan Provincial Natural Science Foundation of China under Grant No.08JJ3010.

[99]{} Lue A, Wang L M and Kamionkowski M 1999 Phys. Rev. Lett. [**83**]{} 1506 Jackiw R and Pi S Y 2003 Phys. Rev. D [**68**]{} 104012 Alexander S and Yunes N 2009 Phys. Rept. [**480**]{} 1 Alexander S H S, Gates J and James S 2006 JCAP [**0606**]{} 018 Svrcek P and Witten E 2006 JHEP [**0606**]{} 051 Alvarez-Gaume L and Witten E 1984 Nucl. Phys. B [**234**]{} 269 Campbell B A, Kaloper N, Madden R and Olive K A 1993 Nucl. Phys. B [**399**]{} 137 Ashtekar A, Balachandran A P and Jo S 1989 Int. J. Mod. Phys. A [**4**]{} 1493 Konno K, Matsuyama T, Asano Y and Tanda S 2008 Phys. Rev. D [**78**]{} 024037 Alexander S, Peskin M E and Sheikh-Jabbari M M 2006 Phys. Rev. Lett.
[**96**]{} 081301 Garcia-Bellido J, Garcia-Perez M and Gonzalez-Arroyo A 2004 Phys. Rev. D [**69**]{} 023504 Alexander S, Finn L S and Yunes N 2008 Phys. Rev. D [**78**]{} 066005 Alexander S and Yunes N 2007 Phys. Rev. Lett. [**99**]{} 241101 Alexander S and Yunes N 2007 Phys. Rev. D [**75**]{} 124022 Smith T L, Erickcek A L, Caldwell R R and Kamionkowski M 2008 Phys. Rev. D [**77**]{} 024015 Yunes N and Pretorius F 2009 Phys. Rev. D [**79**]{} 084043 Konno K, Matsuyama T and Tanda S 2009 Prog. Theor. Phys. [**122**]{} 561 Cardoso V and Gualtieri L 2009 Phys. Rev. D [**80**]{} 064008 Molina C, Pani P, Cardoso V and Gualtieri L 2010 arXiv:1004.4007 Ciufolini I 2007 arXiv: 0704.3338. Ciufolini I and Pavlis E C 2004 Nature [**431**]{} 958 Yunes N and Spergel D N 2009 Phys. Rev. D [**80**]{} 042004 Harko T, Kovács Z and Lobo F S N 2010 Class. Quant. Grav. [**27**]{} 105010 Amarilla L, Eiroa E F and Giribet G 2010 Phys. Rev. D [**81**]{} 124045 Ahmedov H and Aliev A N 2010 Phys. Lett. B [**690**]{} 196 Ahmedov H and Aliev A. N, 2010 Phys. Rev. D[**82**]{} 024043 Sopuerta C F and Yunes N 2009 Phys. Rev. D [**80**]{} 064006 Bozza V 2002 Phys. Rev. D [**66**]{} 103001 Bozza V 2003 Phys. Rev. D [**67**]{} 103006 Bozza V, De Luca F, Scarpetta G and Sereno M 2005 Phys. Rev. D [**72**]{} 08300 Bozza V, De Luca F and Scarpetta G 2006 Phys. Rev. D [**74**]{} 063001 Harko T, Kovacs Z and Lobo F S N 2008 Phys. Rev. D [**78**]{} 084005 Harko T, Kovacs Z and Lobo F S N 2009 Phys. Rev. D [**79**]{} 064001 Matsuno K and Ishihara H 2009 Phys. Rev. D [**80**]{} 104037 Claudel C M, Virbhadra K S and Ellis G F R 2001 J. Math. Phys. [**42**]{} 818 Einstein A 1936 Science [**84**]{} 506 Gyulchev G N and Yazadjiev S S 2007 Phys. Rev. D [**75**]{} 023006 Gyulchev G N and Yazadjiev S S 2008 Phys. Rev. D [**78**]{} 083004 Darwin C 1959 Proc. of the Royal Soc. of London [**249**]{} 180 Virbhadra K S, Narasimha D and Chitre S M 1998 Astron. Astrophys. [**337**]{} 18 Virbhadra K S and Ellis G F R 2000 Phys. Rev. D [**62**]{} 084003 Virbhadra K S and Ellis G F R 2002 Phys. Rev.D [**65**]{} 103004 Frittelly S, Kling T P and Newman E T 2000 Phys. Rev. D [**61**]{} 064021 Bozza V, Capozziello S, lovane G and Scarpetta G 2001 Gen. Rel. and Grav. [**33**]{} 1535 Eiroa E F, Romero G E and Torres D F 2002 Phys. Rev. D [**66**]{} 024010 Eiroa E F 2005 Phys. Rev. D [**71**]{} 083010 Eiroa E F 2006 Phys. Rev. D [**73**]{} 043002 Whisker R 2005 Phys. Rev. D [**71**]{} 064004 Bhadra A 2003 Phys. Rev. D [**67**]{} 103009 Chen S and Jing J 2009 Phys. Rev. D [**80**]{} 024036 Liu Y, Chen S and Jing J 2010 Phys. Rev. D [**81**]{} 124017 Ghosh T andSengupta S 2010 Phys. Rev. D [**81**]{} 044013 Aliev A N and Talazan P 2009 Phys. Rev. D [**80**]{} 044023 Genzel R, Eisenhauer F and Gillessen S 2010 arXiv:1006.0064 Burgay M *et al* 2003 Nature. [**426**]{} 531
---
author:
- Netta Engelhardt
- Aron C. Wall
bibliography:
- 'all.bib'
title: Coarse Graining Holographic Black Holes
---

Introduction {#sec:intro}
============

One of the primary goals of quantum gravity is a complete description of the black hole interior. This description is being pursued via numerous methods, from the AdS/CFT correspondence [@Mal97; @GubKle98; @Wit98a], to black hole microstate counting (see literature starting with [@StrVaf96]), and the generalized holographic principle [@Tho93; @Sus95; @Bou99d] among others (see [@Pol16] for a review). As an ostensible nonperturbative quantum theory of gravity, AdS/CFT in particular has tremendous potential for shedding light on the physics inside the black hole.

The fine-grained entropy of a holographic black hole is given by the HRT formula [@RyuTak06; @HubRan07]. As applied to the case of an eternal black hole (which represents an entangled state of two boundary CFT's [@Mal01]), the HRT formula tells us that the von Neumann entropy $S_{vN}$ of either CFT is given by the area of a certain compact "extremal" surface $X$ (whose area is stationary under variations) lodged inside the throat, which separates the two boundaries:
$$S_{vN} = \frac{\mathrm{Area}[X]}{4G\hbar}. \label{HRTintro}$$
The HRT entropy is time-independent (in the sense that it is independent of the choice of Cauchy slice), so it does not evolve even if we send matter into the black hole. And for a classical black hole that forms from collapse, $X$ is given by the empty set so $S_{vN}$ vanishes. This is because the HRT is a fine-grained quantity, i.e. it does not involve any kind of coarse-graining over the thermalized degrees of freedom. Hence, it does not allow us to define a nontrivial second law, nor does it allow us to interpret the changing area of a black hole horizon as an entropy. For this, we need a definition of coarse-grained entropy.

A natural framework of coarse-graining, advocated by Jaynes [@Jay57a; @Jay57b], is to maximize the von Neumann entropy $S_{vN} = -{\operatorname{tr}}(\rho \ln \rho)$ while holding certain quantities fixed. In our case, we wish to hold fixed the classical bulk data outside of some surface $\sigma$. The information outside of $\sigma$ will play the role of the "macrostate", i.e. the information that is accessible to an exterior observer. The information inside of $\sigma$ will play the role of the "microstate", i.e. the forgotten information which must be coarse-grained over[^1]. From a bulk perspective, this is justified insofar as an observer living outside of $\sigma$ will find the data in the exterior of $\sigma$ easy to measure, while the data in the interior of $\sigma$ is hard to measure. Although *in principle*, the data in the interior must be holographically encoded in the boundary CFT, in practice recovery is difficult since the data is encoded in subtle thermalized correlations. This explains why, at the level of the classical bulk, there is an effective notion of causality in which information appears to be lost once it falls behind a black hole horizon. As we will see, this notion of causal coarse-graining is holographically dual to a thermodynamic coarse-graining on the boundary.

To be a little more precise, any compact surface $\sigma$ that splits a time slice into two pieces induces a natural division of the spacetime into four components: the past of $\sigma$, denoted $I^{-}[\sigma]$, the future of $\sigma$, denoted $I^{+}[\sigma]$, the inner wedge of $\sigma$, $I_{W}[\sigma]$, and the outer wedge $O_{W}[\sigma]$ [@BouEng15b].
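The Jaynes maximization described above can be illustrated in a simple finite-dimensional toy model, which has nothing to do with holography per se: among all density matrices with a fixed expectation value of a single Hermitian "macroscopic" observable $A$, the entropy maximizer is a Gibbs-like state $e^{-\lambda A}/Z$. The Python sketch below, with an arbitrary $4\times4$ observable and target value of our own choosing, solves for the Lagrange multiplier and cross-checks the result against a direct constrained maximization.

```python
import numpy as np
from scipy.linalg import expm
from scipy.optimize import brentq, minimize

# Toy illustration of Jaynes coarse-graining: maximize -tr(rho ln rho) subject
# to a fixed expectation value tr(rho A).  The maximizer is exp(-lam*A)/Z.
# The observable A and the target value are arbitrary choices for illustration.

rng = np.random.default_rng(0)
d = 4
H = rng.normal(size=(d, d)) + 1j * rng.normal(size=(d, d))
A = (H + H.conj().T) / 2.0                          # random Hermitian observable
target = 0.7 * np.linalg.eigvalsh(A)[0] + 0.3 * np.trace(A).real / d

def gibbs(lam):
    rho = expm(-lam * A)
    return rho / np.trace(rho).real

def expval(rho):
    return np.trace(rho @ A).real

def entropy(rho):
    p = np.linalg.eigvalsh(rho)
    p = p[p > 1e-12]
    return float(-(p * np.log(p)).sum())

# Lagrange multiplier reproducing the fixed expectation value.
lam = brentq(lambda l: expval(gibbs(l)) - target, -50.0, 50.0)

# Cross-check: direct maximization over rho = B B^dag / tr(B B^dag) with the
# same constraint, using SLSQP on a real parametrization of B.
def unpack(x):
    B = (x[:d*d] + 1j * x[d*d:]).reshape(d, d)
    rho = B @ B.conj().T
    return rho / np.trace(rho).real

res = minimize(lambda x: -entropy(unpack(x)), x0=rng.normal(size=2*d*d),
               constraints=[{"type": "eq", "fun": lambda x: expval(unpack(x)) - target}],
               method="SLSQP", options={"maxiter": 500})

print("fixed <A>               :", target)
print("max S over Gibbs family :", entropy(gibbs(lam)))
print("max S by direct search  :", -res.fun)
```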
This decomposition is illustrated in Fig. \[fig:innerouterdecomp\] for a surface inside a black hole.

We will now define the outer entropy as the maximum of the boundary von Neumann entropy given ignorance of the interior:
$$S^{(\mathrm{outer})}[\sigma] \equiv \max_{\rho\,\in\, C}\,\big(S_{vN}\big),$$
where $C$ is the set of all density matrices in the CFT whose classical bulk dual exists and contains the fixed region $O_{W}[\sigma]$. The classical "microstates" of $S^{(\mathrm{outer})}[\sigma]$ are all possible spacetime regions that are allowed in $I_{W}[\sigma]$ given that $O_{W}[\sigma]$ is fixed.

But which surface $\sigma$ should we use? At this point, we need to define more carefully what we mean by the interior of a black hole. The event horizon of a black hole is defined as the boundary of the region which is inaccessible to future infinity. Hawking proved that the area of the event horizon is increasing with time [@Haw71]. But the event horizon is teleological, in the sense that its location can depend on what is going to happen in the future. There have been a number of proposed definitions of a more local version of a black hole horizon (and consequently, a black hole interior) in classical gravity [@HawEll; @Hay93; @AshKri02; @BouEng15a]. These definitions all exploit the concept of marginally trapped surfaces, for which the area is stationary to first order along the outgoing lightrays. Classically, marginally trapped surfaces always lie inside of event horizons. The local horizons in [@HawEll; @Hay93; @AshKri02; @BouEng15a] are all defined so that they are always foliated by marginally trapped surfaces satisfying certain additional inequalities. These definitions are nonunique (on the same black hole spacetime, one can usually find infinitely many surfaces satisfying the criteria), but they do obey laws of thermodynamics similar to the event horizon (see [@HaywardBook] for a review); in particular, these local horizons obey various area-increase theorems [@Hay93; @AshKri02; @BouEng15a].

In previous work, we showed that the outer entropy of a slice of the event horizon is *not*, in general, given by its area [@EngWal17a]; in fact in some situations the outer entropy vanishes, while the area of the event horizon does not. Although the area of the event horizon is generically greater than the area of the HRT surface [@Wal12], it remains unclear what coarse graining procedure, if any, corresponds to its area [@HubRan12; @KelWal13]. More happily, in [@EngWal17b] we showed that for an apparent horizon (a codimension-two outermost marginally trapped surface on a time slice) the outer entropy *is* proportional to its area. This allowed us to explain the area increase theorem for certain spacelike or null holographic screens. Besides providing a holographic interpretation for the area of non-minimal extremal surfaces, this shows that there is a natural notion of coarse graining associated with the area of the apparent horizon. We also proposed a boundary dual to the outer entropy of the apparent horizon, called the simple entropy.

This article will give a more detailed and formal version of these arguments. We extend [@EngWal17b] by generalizing the notion of an apparent horizon to a "minimar surface" $\mu$, satisfying weaker conditions than an apparent horizon. In addition to being a compact marginally trapped surface, a minimar surface must satisfy certain minimality inequalities given in Section \[sec:Prel\].
The main result of this article is that for a minimar surface, the outer entropy of $\mu$ equals its Bekenstein-Hawking entropy, i.e. the area over four in Planck units:
$$S^{(\mathrm{outer})}[\mu] = \frac{\mathrm{Area}[\mu]}{4G\hbar}, \label{eq:coarse}$$
where $S_{vN}$ will be calculated using the HRT formula (\[HRTintro\]). This equality automatically implies a second law for certain kinds of local horizons. Suppose we have a spacelike (or null) local horizon foliated by minimar surfaces. We will show from Eq. (\[eq:coarse\]) that $S^{(\mathrm{outer})}[\mu]$ is monotonically increasing as we move spatially outwards, since we are maximizing $S_{vN}$ subject to fewer constraints. This gives a statistical explanation for the area increase theorem obeyed by such local horizons.

We now briefly summarize our derivation of (\[eq:coarse\]) in the main text. We must show that, among all bulk states whose classical gravity dual contains $O_{W}[\mu]$, the maximum possible area of the HRT surface $X$ is equal to the area of $\mu$. This is done in two steps:

1. We show that in any spacetime, the area of the HRT surface is bounded from above by the area of any minimar surface $\mu$. Thus, even if we vary the interior of $\mu$, the von Neumann entropy remains bounded by the area of $\mu$:
$$S[\rho'] \leq \frac{\mathrm{Area}[\mu]}{4G\hbar}, \label{eq:ineq}$$
for any state $\rho'$ with a bulk dual whose outer wedge $O_{W}[\mu]$ agrees with $\rho$.

2. We explicitly construct an interior for $\mu$ in which this bound is saturated; we do this by patching an interior of $\mu$ to $O_{W}[\mu]$ such that the resulting spacetime has an HRT surface whose intrinsic geometry (and hence area) is the same as that of $\mu$.

Because $S^{(\mathrm{outer})}$ is defined as the maximum of the von Neumann entropy, the fact that (\[eq:ineq\]) is saturated immediately implies that
$$S^{(\mathrm{outer})}[\mu]= \frac{\mathrm{Area}[\mu]}{4G\hbar}. \label{eq:sat}$$

Point (1) is a simple consequence of the focusing theorem and the maximin formulation of covariant holographic entanglement entropy [@Wal12]. Point (2) is more involved; to execute it, we will make use of the initial data problem on characteristic surfaces (i.e. lightfronts) in General Relativity. This will require us to develop junction conditions for gluing data across a codimension-two surface. As a consequence, we also obtain a general procedure for matching two initial data sets in General Relativity at a codimension-two boundary.[^2]

(As a special degenerate case of this construction, we can take our minimar surface to be an extremal surface $X$ which is *not* the one of minimal area (HRT). This will satisfy our minimar conditions as long as there are no extremal surfaces with lesser area closer to the boundary. In this case $X = \mu$ and we construct a new spacetime in which $X$ *is* the HRT surface. This spacetime is simply $O_{W}[X]$ glued to its CPT conjugate. This provides an interpretation for the area of a class of non-minimal extremal surfaces as the von Neumann entropy in a coarse-grained state, or equivalently the outer entropy in the original state.[^3])

So far, our coarse graining has been defined almost entirely on the bulk side. The only part of the construction which is "holographic" was the interpretation of the HRT surface as the fine-grained entropy of the modified spacetime (and hence, as the coarse-grained entropy of the original spacetime). However, under certain assumptions, we can also provide a boundary dual, the *simple entropy*. The term "simple" denotes operators or sources whose corresponding bulk excitations propagate locally into the bulk[^4].
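For orientation only, the Bekenstein-Hawking combination $\mathrm{Area}/4G\hbar$ appearing in Eq. (\[eq:coarse\]) can be evaluated in physical units. The elementary sketch below (not part of the argument; the masses are arbitrary examples) computes it for a Schwarzschild horizon, giving $S/k_{B}\sim 10^{77}$ for a solar-mass black hole.

```python
import math

# Bekenstein-Hawking entropy S = A/(4 l_p^2) = A c^3 / (4 G hbar), in units of
# k_B, for a Schwarzschild horizon of mass M.  Order-of-magnitude aside only.

G = 6.674e-11        # m^3 kg^-1 s^-2
c = 2.998e8          # m / s
hbar = 1.055e-34     # J s
M_sun = 1.989e30     # kg

def bekenstein_hawking_entropy(M_kg):
    r_s = 2.0 * G * M_kg / c**2          # Schwarzschild radius
    area = 4.0 * math.pi * r_s**2        # horizon area
    l_p2 = G * hbar / c**3               # Planck length squared
    return area / (4.0 * l_p2)           # entropy in units of k_B

if __name__ == "__main__":
    for m in (1.0, 4.4e6):               # a solar mass, and a Sgr A*-like mass
        print(f"M = {m:.1e} M_sun  ->  S/k_B ~ {bekenstein_hawking_entropy(m * M_sun):.2e}")
```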
(In the classical regime, we can restrict attention to cases where these sources and operators are integrals of local operators, i.e. one-point functions.) We then define the simple entropy as the maximum of the von Neumann entropy subject to fixing the expectation values of all simple operators $\mathcal{O}$ after some initial time $t_i$, where we are also allowed to turn on arbitrary simple sources $J$ after $t_i$: $$S^{(\mathrm{simple})}[t_i] \equiv \max_{\rho'}\Big\{ S_{vN}[\rho'] \;:\; \langle \mathcal{O}(t)\rangle_{\rho',\,J(t > t_i)} = \langle \mathcal{O}(t)\rangle_{\rho,\,J(t > t_i)} \ \ \forall\ t > t_i \Big\},$$ for all simple operators $\mathcal{O}$ and all simple sources $J$ turned on after $t_i$. This simple entropy automatically obeys a second law when the slice $t_i$ is pushed to the future. We can associate a particular minimar surface to a time slice $t_i$ by following light rays fired inward from $t_i$ until they reach a marginally trapped surface. For a black hole near equilibrium, $O_{W}[\mu]$ can be identified with the exterior of the event horizon up to perturbative corrections due to matter falling across the horizon. As we shall show in Section \[sec:simple\], we can remove this matter by turning on some “simple” operators in the bulk, which allows us to measure all of the information in the outer wedge $O_{W}[\mu]$ from the one-point functions on the boundary after time $t_i$. Since it is not possible to measure the information behind $\mu$ by turning on simple sources, this proves that $S^{\mathrm{(simple)}}[t_i] = S^{(\mathrm{outer})}[\mu]$ at least to all orders in perturbation theory. This paper is structured as follows: Section \[sec:Prel\] introduces assumptions and conventions, reviews some of the relevant geometric constructions, and defines minimar surfaces. In Section \[sec:Junctions\], we review the Israel junction conditions and then derive matching conditions for a codimension-two surface. Section \[sec:outer\] defines the outer entropy. Section \[sec:main\] is the main bulk construction containing the proof that the outer entropy is proportional to the area of minimar surfaces. Section \[sec:others\] discusses the outer entropy of extremal surfaces and non-minimar surfaces. In Section \[sec:simple\], we define the simple entropy and argue that it is equal to the outer entropy of a minimar surface. Section \[sec:secondlaw\] gives an explanation of the second law for holographic screens foliated by minimar surfaces; we also motivate a new perspective on how to think about coarse-graining and the second law in ordinary (non-holographic) field theories. Finally, Section \[sec:prospects\] discusses the prospects for extending our work beyond classical AdS/CFT. Preliminaries {#sec:Prel} ============= This section establishes terminology, definitions, and assumptions that will be used throughout the paper. Assumptions, Conventions, and Definitions {#sec:Defs} ----------------------------------------- We will assume the AdS/CFT correspondence, and we work in the large-$N$, large-$\lambda$ limit, in which the bulk $M$ is well-approximated by classical gravity and the RT [@RyuTak06] and HRT [@HubRan07] proposals are valid. We will further assume the Null Convergence Condition (NCC): the requirement that $$\label{eq:NCC} R_{ab}k^{a}k^{b}\geq 0,$$ where $R_{ab}$ is the spacetime Ricci tensor, for every null vector field $k^{a}$ on $M$.
For a spacetime satisfying the Einstein equation, this is equivalent to the Null Energy Condition, which requires positivity of null energy: \[NEC\] T\_[ab]{}k\^[a]{}k\^[b]{}0, where $T_{ab}$ is the stress-energy tensor in $M$ and as before, $k^{a}$ is any null vector field.\ We will use the following terminology: - A spacetime $(M,g)$ is a $D$-dimensional Lorentzian manifold, whose metric $g$ is continuous everywhere and smooth almost everywhere.[^5] For shorthand, we will often refer to $(M,g)$ as just $M$. - A *surface* will refer to a connected codimension-2 spacelike (embedded) submanifold of $M$ which is compact in the topological interior Int$[M]$. - A *hypersurface* will refer to a connected codimension-1 (embedded) submanifold of $M$ which is smooth almost everywhere. A hypersurface will be *splitting* if it divides $M$ into two disjoint components. - Two surfaces $s_1,s_2$ are *homologous* if there exists a hypersurface $H$ such that $\partial H = s_1 \cup s_2$. - A hypersurface $N$ is *achronal* if no two points on $N$ are timelike separated. - An achronal hypersurface $N$ is *null* if there exists a null vector field $k^{a}$ which is tangent to $N$ at every point where $N$ is smooth. One way of obtaining a null hypersurface is by firing geodesics in a null direction $k^{a}$ from a surface and allowing the geodesics to leave the hypersurface after intersections and caustics. We will call such hypersurfaces null congruences. - The *causal future* (past) of $p$, denoted $J^{+}(p)$ ($J^{-}(p)$), is the union of all past- (future) directed causal curves fired from $p$. The *chronological future* (past) of $p$, denoted $I^{+}(p)$ ($I^{-}(p)$) is the union of all past- (future-) directed timelike curves fired from $p$. We can similarly talk about the past or future of a set $S$: $I^{\pm}[S] = \cup_{p\in S} I^{\pm}(p)$. - $M$ is said to be *globally hyperbolic* if there are no closed causal curves in $M$ and for every pair $p$, $q$ in $M$, the intersection $J^{-}(p)\cap J^{+}(q)$ is compact. Note that by this definition, an asymptotically AdS spacetime $M$ fails to be globally hyperbolic. This is easily circumvented by applying this definition to the conformal compactification of $M$ [@Ger70] on its asymptotically AdS boundaries. Thus, in this paper we will refer to such spacetimes as globally hyperbolic. - The *domain of dependence* of an achronal hypersurface $\Sigma$, denoted $D[\Sigma]$, is the smallest region satisfying the criterion that every timelike curve that enters $D[\Sigma]$ must intersect $\Sigma$. We will always take $D[\Sigma]$ to be an open set. - A *Cauchy slice* $\Sigma$ of a globally hyperbolic spacetime $M$ is an achronal hypersurface whose domain of dependence is $M$: $D[\Sigma]=M$. One can also define a Cauchy slice of a globally hyperbolic region $R \subset M$. - A surface $\sigma$ is said to be *Cauchy-splitting* if it divides a Cauchy slice $\Sigma$ into two disjoint components, which we shall call $\mathrm{In}_{\Sigma}[\sigma]$ and $\mathrm{Out}_{\Sigma}[\sigma]$. A Cauchy-splitting surface induces a natural division of the spacetime $M$ into four regions: $I^{+}[\sigma]$, $I^{-}[\sigma]$, $D[\mathrm{In}_{\Sigma}[\sigma]]$ and $D[\mathrm{Out}_{\Sigma}[\sigma]]$ [@BouEng15b]. In the introduction, we discussed the *outer wedge* of $\sigma$: $O_{W}[\sigma]\equiv D[\mathrm{Out}_{\Sigma}[\sigma]]$ and the *inner wedge* of $\sigma$: $I_{W}[\sigma]\equiv D[\mathrm{In}_{\Sigma}[\sigma]]$. Henceforth, we will take all surfaces to be Cauchy-splitting. 
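For orientation, it may help to see this decomposition spelled out in the simplest example; this is purely an illustration on our part and is not needed for the general arguments. Take $M$ to be maximally extended Schwarzschild-AdS (the spacetime of Fig. \[fig:Schw\] below), $B$ its right asymptotic boundary, $\Sigma$ the time-symmetric slice through the Einstein-Rosen bridge, and $\sigma$ the bifurcation surface. Then $\mathrm{Out}_{\Sigma}[\sigma]$ and $\mathrm{In}_{\Sigma}[\sigma]$ are the right and left halves of $\Sigma$, and the four-region decomposition reads $$O_{W}[\sigma] = \text{right exterior wedge}, \quad I_{W}[\sigma] = \text{left exterior wedge}, \quad I^{+}[\sigma],\ I^{-}[\sigma] = \text{black/white hole interiors}.$$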
Geometry of Null Hypersurfaces {#sec:NullGeom} ------------------------------ Here we review some properties and definitions in the geometry of null hypersurfaces. Let $N_{k}$ be a null hypersurface with generating vector field $k^{a}$ in a globally hyperbolic spacetime $(M,g)$. By definition, $k^{a}k_{a}=0$. Let $\ell^{a}$ be a null vector field satisfying $\ell^{a}k_{a}=-1$. The vector field $\ell^{a}$, often called the “rigging” vector field, captures a notion of transversality to $N$. Note that in general $\ell^{a}$ is not unique. The induced metric on $N_{k}$ is degenerate; the induced metric on spacelike slices of $N_{k}$ orthogonal to $\ell^{a}$ is given by: $$\label{nullmet} h_{ab} = g_{ab} + 2\ell_{(a}k_{b)}.$$ This metric allows us to define the null and transverse extrinsic curvatures of $N_{k}$, respectively: $$\begin{aligned} & B_{ab}\,_{(k)} = h_{a}^{c}h_{b}^{d} \nabla_{c}k_{d},\\ & B_{ab}\,_{(\ell)}=h_{a}^{c}h_{b}^{d} \nabla_{c}\ell_{d}.\end{aligned}$$ The null extrinsic curvature $B_{ab}\,_{(k)}$ can be decomposed into its trace and traceless parts: $$\label{eq:Bdecomp} B_{ab}\,_{(k)} = \frac{1}{D-2}\theta_{(k)} h_{ab} + \varsigma_{ab}\,_{(k)},$$ where $ \varsigma_{ab}\,_{(k)} = B_{ab}\,_{(k)} - \frac{1}{D-2} \theta_{(k)} h_{ab}$ is a rank-2 tensor that measures the shearing of the congruence with evolution along $k^{a}$; the expansion $\theta_{(k)}=\mathrm{Tr}(B_{ab}\,_{(k)} )$ is a scalar that measures the rate of change of the cross-sectional area of $N_{k}$ with evolution along an affinely-parametrized $k^{a}$: $$\theta_{(k)}=h^{ab}B_{ab}\,_{(k)} = \frac{1}{2}h^{ab}{\cal L}_{k}h_{ab} = {\cal L}_{k}\ln\sqrt{h} = \frac{1}{\delta A}\frac{d(\delta A)}{d\lambda},$$ where $\lambda$ is a parameter along the $k^{a}$ geodesics generating $N_{k}$, $\delta A$ is the infinitesimal area element of cross-sections of $N_{k}$, and ${\cal L}_{k}$ is the Lie derivative in the $k^{a}$ direction. The shear and expansion are related via the Raychaudhuri equation[^6]: $$\nabla_{k}\theta_{(k)} = \kappa_{(k)}\theta_{(k)} - \frac{1}{D-2}(\theta_{(k)})^{2} - \varsigma^{ab}\,_{(k)} \varsigma_{ab}\,_{(k)} - R_{ab}k^{a}k^{b}.$$ Here $R_{ab}$ is the *spacetime* Ricci tensor and $\kappa_{(k)}$ is the inaffinity of the $k^{a}$ congruence: it measures the failure of the $k^{a}$ geodesics to be affinely parametrized: $$k^{a}\nabla_{a}k^{b} = \kappa_{(k)}k^{b}.$$ For null geodesic congruences that are affinely parametrized, $\kappa_{(k)}=0$. In such cases, the NCC, Eq. \[eq:NCC\], is sufficient by the Raychaudhuri equation to guarantee that gravitational curvature can only cause $\theta_{(k)}$ to decrease. The physical interpretation is that gravity satisfying the NCC can only cause light rays to focus, as $A'(\lambda)$ can only decrease. In a spacetime satisfying the Einstein equation, the NEC guarantees that once light rays begin to focus, they must continue to do so: the derivative $k^{a} \nabla_{a}\theta_{(k)}$ is nonpositive, so $\theta_{(k)}$ is monotonically nonincreasing. One final player remains missing: the extrinsic twist potential (a.k.a. the normal fundamental form), *twist* for short. It is a 1-form defined using both $\ell^{a}$ and $k^{a}$: $$\chi_{a}\,_{(k)} = h^{c}\,_{a}\,\ell^{d}\,\nabla_{c}k_{d}.$$ Intuitively the twist measures the frame dragging of a rotating mass; it is simple to see this in the Lense-Thirring effect in the weak field limit (see e.g. [@Hay06] for a derivation). Note that the twist is antisymmetric under exchange of $\ell$ and $k$: $\chi^{a}\,_{(k)} =-\chi^{a}\,_{(\ell)}$, so it can also be written $\chi_{a} = (\chi_{a}\,_{(k)} -\chi_{a}\,_{(\ell)})/2$.
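As a concrete illustration of the expansion (a worked example we include only for orientation; the coordinates and the particular normalization of $k^{a}$ and $\ell^{a}$ below are choices made for this illustration), consider four-dimensional Schwarzschild-AdS in ingoing Eddington-Finkelstein coordinates, $ds^{2} = -f(r)\,dv^{2} + 2\,dv\,dr + r^{2}d\Omega^{2}$ with $f(r) = 1 - \frac{2GM}{r} + \frac{r^{2}}{L_{\mathrm{AdS}}^{2}}$, whose horizon radius $r_{h}$ is the positive root of $f$. For the round sphere of radius $r$, one may take $k^{a}\partial_{a} = \partial_{v} + \tfrac{1}{2}f\,\partial_{r}$ and $\ell^{a}\partial_{a} = -\partial_{r}$, which satisfy $k^{a}k_{a} = \ell^{a}\ell_{a} = 0$ and $\ell^{a}k_{a} = -1$. Then $$\theta_{(k)} = \frac{2}{r}\,k^{a}\nabla_{a}r = \frac{f(r)}{r}, \qquad \theta_{(\ell)} = \frac{2}{r}\,\ell^{a}\nabla_{a}r = -\frac{2}{r},$$ so, in the terminology of the next subsection, spheres with $r>r_{h}$ are untrapped, spheres with $r<r_{h}$ are trapped, and the horizon spheres $r=r_{h}$ are marginally trapped.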
Marginal, Extremal, and Minimar Surfaces {#sec:MTS} ---------------------------------------- A surface $\sigma$ has by definition two linearly independent null normals $\ell^{a}$ and $k^{a}$. Along each of these null vector fields, we may fire congruences of null geodesics $N_{\ell}$ and $N_{k}$. These are illustrated in Fig. \[fig:NullDec\]. ![A cartoon showing the different ways of denoting the orthogonal null vector fields and hypersurfaces generated from a surface $\sigma$. The left panel figure shows the vectors $\ell^{a}$ and $k^{a}$ at a point on $\sigma$ in $D=3$ dimensions. In the center panel, the null congruences $N_{\ell}$ and $N_{k}$ are shown with $(D-2)$ spacetime dimensions suppressed. The final panel figure shows $N_{\ell}$ (orange) and $N_{k}$ (purple) in $D=3$.[]{data-label="fig:NullDec"}](NullHyper.pdf){width="90.00000%"} **Surface type** $\boldsymbol{\theta_{(\ell)}}$ $\boldsymbol{ \theta_{(k)}}$ ------------------------- -------------------------------- ------------------------------ Untrapped $-$ $+$ Trapped $-$ $-$ Marginally Trapped $-$ $0$ Anti-Trapped $+$ $+$ Marginally Anti-Trapped $+$ $0$ Extremal $0$ $0$ It is convenient to classify surfaces based on the expansions $\theta_{(\ell)}$ and $\theta_{(k)}$ at $\sigma$. When $\theta_{(\ell)}$ and $\theta_{(k)}$ are both positive or both negative, under the right assumptions (including the NCC), we are guaranteed that the spacetime is geodesically incomplete [@Pen65; @Haw65; @Haw66; @Pen69]. When both expansions are negative, this corresponds to a crunching geometry, where null geodesics are trapped; when both expansions are positive, this corresponds to an expanding geometry, where null geodesics are “anti-trapped”. “Untrapped” surfaces have positive $\theta_{(k)}$ and negative $\theta_{(\ell)}$ or vice versa. The natural boundary between untrapped and trapped or anti-trapped regions are “marginal” surfaces with one expansion identically zero on the whole surface. The terminology is summarized in Table \[fig:table\]. Fig. \[fig:Schw\] illustrates the different types of surfaces in the Schwarzschild black hole spacetime. Note that we will largely confine the discussion to marginally trapped surfaces (MTSs), with the understanding that the same statements apply in the time reverse to marginally anti-trapped surfaces, which are more useful in cosmology. ![A conformal diagram of maximally-extended Schwarzschild-AdS, which contains spherically-symmetric surfaces of all types under the classification of Table \[fig:table\]. There are trapped surfaces in the black hole region, anti-trapped surfaces in the white hole region, and untrapped surfaces in each asymptotic region. As with all stationary black holes, the future event horizons are foliated by marginally trapped surfaces; the past event horizons are foliated by marginally anti-trapped surfaces. The bifurcation surface (black dot) is extremal.[]{data-label="fig:Schw"}](Schwarzschild.pdf){width="40.00000%"} We now launch into a discussion of the most oft-used type of surface in holography: #### Extremal Surface: A surface $X$ is *extremal* if the expansions of the two null orthogonal congruences fired from it both vanish: $$\begin{aligned} \label{eq:expansion} &\theta_{(\ell)}= 0\\ &\theta_{(k)}= 0. \label{eq:expansion2}\end{aligned}$$ Since any vector orthogonal to $X$ can be written as a linear combination of $\ell^{a}$ and $k^{a}$, it immediately follows that the area of $X$ is stationary to first order perturbations in any direction. 
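As a quick check of this definition (again an illustrative aside, in coordinates introduced only for this purpose), consider the bifurcation surface of the maximally extended Schwarzschild-AdS spacetime of Fig. \[fig:Schw\] in Kruskal-type double-null coordinates, $ds^{2} = -F(UV)\,dU\,dV + r^{2}(UV)\,d\Omega^{2}$, in which the area-radius $r$ depends on $U$ and $V$ only through the product $UV$. For the round sphere at $(U,V)$, $$\theta_{(\partial_{U})} \propto \frac{2}{r}\,\partial_{U}r = \frac{2V}{r}\,\frac{dr}{d(UV)}, \qquad \theta_{(\partial_{V})} \propto \frac{2}{r}\,\partial_{V}r = \frac{2U}{r}\,\frac{dr}{d(UV)},$$ and whether these vanish is independent of how the null normals are normalized. Both expansions therefore vanish at $U=V=0$: the bifurcation surface is extremal, as indicated by the black dot in Fig. \[fig:Schw\].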
The *HRT surface* of a connected component of the asymptotic boundary $B$ is the minimal area surface homologous to $B$ satisfying Eqs. \[eq:expansion\] & \[eq:expansion2\]. The HRT prescription [@RyuTak06; @HubRan07; @LewMal13; @DonLew16] for computing the von Neumann entropy of an entire connected component $B$ of the CFT at one time is the following formula: $$\label{eq:HRT} S_{vN}= -\mathrm{tr} (\rho_{B} \ln \rho_{B})=\frac{\text{Area}[X]}{4 G \hbar},$$ where $\rho_{B}$ is the density matrix of $B$, and $X$ is the minimal area extremal surface homologous to $B$.[^7] Note that in the case of a one-sided black hole (e.g. a black hole formed from collapse), $X$ is given by the empty set so $S_{vN} = 0$ at classical order. An equivalent formulation of the HRT surface of which we will make frequent use is the maximin construction [@Wal12]. In the maximin construction, one first identifies the minimal area surface homologous to $B$ on a given Cauchy slice $\Sigma$; we shall denote this surface by min$(B,\Sigma)$. One then chooses $\Sigma$ so as to maximize the area of the minimal area surface over all possible Cauchy slices. Using the NCC together with some global assumptions,[^8] one can show that this maximin surface coincides with the HRT surface [@Wal12]. The following is a very useful consequence of the maximin formalism: #### Lemma: [@Wal12] An HRT surface $X$ is the minimal area surface homologous to $B$ on some Cauchy slice containing $X$. The region between $X$ and $B$ is commonly referred to as the entanglement wedge: #### Entanglement Wedge: The entanglement wedge $E_{W}[B]$ of $B$, referred to also as the exterior of the HRT surface $X$, is defined as the domain of dependence of any hypersurface $\text{Out}_{\Sigma}[X]$ connecting $X$ to $B$ [@CzeKar12; @Wal12; @HeaHub14]: $$E_{W}[B]=D\left[\text{Out}_{\Sigma}[X]\right].$$ It is now understood that $E_{W}[B]$ is the region dual to the CFT density matrix $\rho_{B}$, so that field data in $E_{W}[B]$ can be fully reconstructed from operators in $B$, and commutes with operators on the complementary boundary $\widetilde{B}$ [@JafLew15; @DonHar16]. More generally, for any surface $\sigma$ homologous to $B$, we can give a natural generalization of the entanglement wedge to the region bounded between arbitrary surfaces $\sigma$ and $B$, which we shall call the *outer wedge* of $\sigma$: #### Outer Wedge: Let $\sigma$ be a surface homologous to $B$. Let $\Sigma$ be a Cauchy slice containing $\sigma$, with decomposition into disjoint components as given in Sec. \[sec:Defs\]. Then $\Sigma=\text{In}_{\Sigma}[\sigma]\cup \sigma \cup \text{Out}_{\Sigma}[\sigma]$, where $\text{Out}_{\Sigma}[\sigma]$ is any homology slice connecting $\sigma$ to $B$. The outer wedge of $\sigma$, denoted $O_{W}[\sigma]$, is the domain of dependence of $\text{Out}_{\Sigma}[\sigma]$: $$O_{W}[\sigma]=D\left[\text{Out}_{\Sigma}[\sigma]\right].$$ We define the inner wedge of a surface in an analogous way, as the domain of dependence of $\text{In}_{\Sigma}[\sigma]$: #### Inner Wedge: Let $\sigma$ be as above. The inner wedge of $\sigma$ is defined as the domain of dependence of $\text{In}_{\Sigma}[\sigma]$: $$I_{W}[\sigma] = D\left[\text{In}_{\Sigma}[\sigma]\right].$$ The outer and inner wedges of a surface $\sigma$ are illustrated in Fig. \[fig:OutIn\]. Note that the union of the outer and inner wedges with $\sigma$ necessarily contains a complete Cauchy slice of the spacetime: specifying data in the two wedges is sufficient to fix the entire spacetime, and the data can be independently specified so long as one solves the constraint equations across $\sigma$.
Note that spacetime points that are timelike separated from $\sigma$ do not lie in either wedge; these are the points that are causally related to both wedges. ![Decomposition of the outer and inner wedges of $\sigma$. $\Sigma$ is a Cauchy slice of the full spacetime, and In$_{\Sigma}[\sigma]$, Out$_{\Sigma}[\sigma]$ are the components of $\Sigma$ as split by $\sigma$. []{data-label="fig:OutIn"}](OuterWedgeInnerWedge.pdf){width="40.00000%"} We now consider a natural generalization of extremal surfaces: namely marginal surfaces. Rather than requiring stationarity of the area in both orthogonal null directions as is the case when the surface is extremal (cf. Eqs. \[eq:expansion\] & \[eq:expansion2\]), marginal surfaces are only required to be stationary in one null direction. #### Marginal Surface: A surface $\mu$ is *marginal* if the expansions of the two null orthogonal congruences fired from $\mu$ satisfy: $$\begin{aligned} &\theta_{(\ell)}\leq 0 \mathrm{ \ \ or \ \ } \theta_{(\ell)}\geq 0 \label{def:marNeg}\\ &\theta_{(k)}= 0\end{aligned}$$ where the degenerate case in which $\theta_{(\ell)}=0$ is simply the situation in which $\mu$ is an extremal surface. The first equation requires $\theta_{(\ell)}$ to have the *same* sign on all of $\mu$; i.e. the “or” is exclusive. When $\theta_{(\ell)}\leq 0$, $\mu$ is said to be marginally trapped, and when $\theta_{(\ell)}\geq 0$, $\mu$ is said to be marginally anti-trapped. Guided by intuitions from the entanglement picture, we define a *minimal* marginal surface, or minimar for short; as in the case of the HRT surface, the area of this surface will turn out to measure the entropy associated with ignorance about its interior. Whereas the HRT surface measures the fine-grained entropy of $B$ (i.e. the only ignorance is that of anything outside of $B$), this defines a coarse-grained entropy of $B$ (i.e. we are also forgetting some of the information in $B$ itself). #### Minimar surface: A marginal surface $\mu$ will be called a *minimar surface* if it additionally satisfies the following criteria: 1. $\mu$ is homologous to $B$, and there exists a Cauchy slice $\Sigma_\text{min}[\mu]$ of $O_W[\mu]$ on which $\mu$ is a minimal area surface homologous to $B$. \[def:minimarMin\] 2. There exists a choice of normalization for $\ell^a$ such that $\nabla_{k}\theta_{(\ell)}\equiv k^{a}\nabla_{a}\theta_{(\ell)}\le0$ on $\mu$, with equality allowed only if $\theta_{(\ell)} = 0$ everywhere on $\mu$. \[def:minimarCross\] Condition (1) is a weaker version of the global HRT minimality: we do not require that $\mu$ be the minimal area marginal surface homologous to the boundary (this will in general not be well-defined); instead we require that it be minimal on a partial Cauchy slice. Condition (2) may appear to be a new additional condition with no parallel in the minimality condition of HRT surfaces. However, we will prove in Appendix \[sec:HRT\] that $\nabla_{k}\theta_{(\ell)}\leq 0$ on an HRT surface, with equality being highly nongeneric. When $\mu$ is only marginal, we must impose condition (2) separately. Condition (2) is also known as “strict spacetime stability” of a marginal surface [@AndMar05]; it guarantees that small deformations of the surface inwards in a null direction can result in a trapped (anti-trapped) surface, while small deformations outwards in a null direction can result in an untrapped surface. It is possible to prove that generic apparent horizons are minimar surfaces, so that our results apply to the case originally investigated in our earlier work.
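Continuing the Eddington-Finkelstein illustration above (still only an aside of ours): every constant-$v$ cross-section of the event horizon $r=r_{h}$ has $\theta_{(k)}=0$ and $\theta_{(\ell)}=-2/r_{h}<0$, so the future horizon of the static black hole is foliated by marginally trapped surfaces, as depicted in Fig. \[fig:Schw\]; the bifurcation surface, where both expansions vanish as computed in the Kruskal coordinates above, is the degenerate case $\theta_{(\ell)}=0$ noted in the definition of a marginal surface. The additional minimar conditions (1) and (2) are what single out, among all such marginal surfaces, those to which the coarse-graining results below apply.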
Junction Conditions for Initial Data {#sec:Junctions} ==================================== A description of the possible spacetimes that can constitute $I_{W}[\sigma]$ requires a set of conditions that dictate whether a given spacetime region $V$ can be sewn onto $\sigma$ in such a way that the resulting spacetime, patched together from $V$ and $O_{W}[\sigma]$, is a manifold with a continuous metric and a well-behaved causal structure, and solves the (distributionally well-defined) Einstein equation with a stress-energy tensor that satisfies the NEC. The procedure is twofold: first, the region $V$ is patched onto $O_{W}[\sigma]$; then the initial data on a Cauchy slice of the patching (see Fig. \[fig:outline\] for an illustration) must be evolved to give rise to a new spacetime $\widetilde{M}$. Note that because $V$ and $O_{W}[\sigma]$ are by definition spacelike-separated, they must separately satisfy the Einstein equation; the constraints on $V$ must come from the junction at $\sigma$ itself. The task at hand thus involves both junction conditions for spacetime regions and initial data engineering. The problem of gluing together two spacetimes satisfying the Einstein equation has been studied extensively for junctions across codimension-one hypersurfaces [@Dar27; @ObrSyn52; @Lic55; @Isr58; @Isr66; @Rob72; @BonVic81; @ClaDra87; @BarIsr91; @MarSen93], but as far as we are aware, gluing across a codimension-two surface has received relatively little attention: initial data set patching conditions are normally given via an intermediate region rather than over a surface. To derive the constraints imposed on $I_{W}[\sigma]$ by $O_{W}[\sigma]$, we instead employ a twofold application of the junction conditions across two codimension-one null hypersurfaces, and then invoke the initial data formulation of general relativity. A rough sketch is as follows: let $\widetilde{M}$ be a consistent spacetime containing $O_{W}[\sigma]$. Using the decomposition of $\widetilde{M}$ induced by $\sigma$, we can divide $\widetilde{M}$ into two spacetime regions: $O_{W}[\sigma]\cup J^{+}[\sigma]$ and $I_{W}[\sigma]\cup J^{-}[\sigma]$. This is illustrated in Fig. \[fig:Ns\]. As explained above, the two regions are separated by a null hypersurface $N_{1}$; the Barrabès-Israel junction conditions [@BarIsr91] give the requisite constraints on $N_{1}$ for $\widetilde{M}$ to be consistent. We then repeat the procedure, this time breaking up $\widetilde{M}$ into $O_{W}[\sigma]\cup J^{-}[\sigma]$ and $I_{W}[\sigma]\cup J^{+}[\sigma]$; this gives conditions that must be satisfied by $N_{2}$. Together, these give junction conditions at the intersection $N_{1}\cap N_{2}=\sigma$. The conditions give a precise constraint on the spacetimes that are allowed to be inside $\sigma$. Finally, to obtain a full spacetime, we invoke the initial data formulation of general relativity on an achronal slice containing $\sigma$; this guarantees that $\widetilde{M}$ exists[^9]. Review: the Barrabès-Israel Junction Conditions {#sec:Israel} ----------------------------------------------- Let $(M^{+},g^{+})$, $(M^{-},g^{-})$ be two $C^{3}$ globally hyperbolic spacetimes satisfying the Einstein equation, and let $N^{+}\subset M^{+}$, $N^{-}\subset M^{-}$ be two splitting null hypersurfaces. Let $V^{+}=J^{+}[N^{+}]$ and $V^{-}=J^{-}[N^{-}]$. These are illustrated in Fig. \[fig:sewing\].
![The Barrabès-Israel junction conditions prescribe when two spacetime regions $V^{+}$, $V^{-}$ with null boundaries $N^{+}$, $N^{-}$ can be sewn together by identifying $N^{+}$ with $N^{-}$.[]{data-label="fig:sewing"}](sewing.pdf){width="45.00000%"} Suppose that we wanted to construct a new spacetime by identifying $N^{+}$ and $N^{-}$. What conditions must be imposed across the junction so the new geometry satisfies the Einstein equation? The most basic requirement for a patched spacetime of $V^{+}$ and $V^{-}$ across $N^{+}$ and $N^{-}$ is that the resulting set be smooth as a topological manifold: $N^{+}$ and $N^{-}$ must be diffeomorphic so that they can be identified as one (embedded) submanifold $N$ of a joined smooth topological space $M\equiv V^{+} \cup V^{-}$. The second requirement is that this submanifold $N$ have a well-defined intrinsic geometry. This requires $\left . h_{ab}\right |_{N^{+}}=\left . h_{ab}\right |_{N^{-}}$, where $h_{ab}$ is the induced metric on a spatial slice of $N$. As differences in quantities across $N$ will be appearing a lot, we will use the standard convention to denote them: . F|\_[N\^[+]{}]{} - . F|\_[N\^[-]{}]{}, where $F$ is any spacetime field. In this convention, $[h_{ab}]=0$. Thus $N^{-}$ and $N^{+}$ must be isometric. This is the first junction condition:\ *First Junction Condition:* The null hypersurfaces $N^{+}\subset M^{+}$ and $N^{-}\subset M^{-}$ are isometric (with respect to their induced metrics from $g^{+}$ and $g^{-}$, respectively).\ A theorem by Clarke and Dray [@ClaDra87] then guarantees that the joined topological space $M$ is a smooth manifold with metric $g$ which is continuous on all of $M$ and $C^{2}$ everywhere except possibly on $N$ itself. Recall that the end goal is a spacetime that satisfies the Einstein equation. To understand the conditions imposed by the Einstein equation, we must study derivatives of the metric across $N$. We expect that we can always choose the vectors $\ell^{a}$ and $k^{a}$ to be continuous when $V^{\pm}$ are sufficiently regular and the first junction condition is satisfied: $[\ell^{a}]=[k^{a}]=0$; furthermore, derivatives of $g_{ab}$ along directions tangential to $N$ are continuous as well. In particular: k\^[c]{}=0. Any junction conditions would therefore have to result from derivatives in the transverse direction $\ell^{a}$, as defined in Sec. \[sec:NullGeom\]. The quantity of interest is therefore the change of $g_{ab,c}\ell^{c}$ across $N$. As this is the primary quantity of study, it is worthwhile to give it a name: \_[ab]{} \^[c]{}. Let us now pause and ask what behavior would be desirable for us to call $(M,g)$ a physical spacetime. At a minimum, the stress-energy tensor sourcing this geometry should be well-defined as a distribution: the worst singularities allowed would be Dirac $\delta$-functions. However, we will be stricter and require all stress-energy tensors to be finite, while still allowing finite discontinuities. We now ask the question: what are the contributions of $\gamma_{ab}$ to the Einstein equation? This requires a straightforward if tedious computation of the discontinuities in the connection coefficients across $N$: = k\_[c]{}\^[a]{}\_[b]{} +k\_[b]{}\^[a]{}\_[c]{} -k\^[a]{}\_[bc]{}, which allows us to compute the discontinuities in the stress-energy tensor via the Einstein equation. The expression is easiest to parse in terms of geometric quantities of the null congruence generated by the $\ell^{a}$ vector field on a spacelike slice $S$ of $N_{k}$. 
In terms of the expansion $\theta_{(\ell)}$ and twist $\chi_{(\ell) \, a}$ of the transverse null congruence $N_{\ell}$ and the inaffinity $\kappa_{(k)}$ of the null congruence $N_{k}$: *The Second Junction Condition [@Dar27; @ObrSyn52; @Lic55; @Isr58; @Isr66; @Rob72; @BonVic81; @ClaDra87; @BarIsr91; @MarSen93]:* $$\label{eq:stressShell} T_{ab}^{\mathrm{shell}}(x) = -\frac{1}{8\pi G}\left([\theta_{(\ell)}]\, k_{a}k_{b} + [\chi_{(\ell)\,(a}]\,k_{b)} + [\kappa_{(k)}]\, h_{ab}\right),$$ where $T_{ab}^{\mathrm{shell}}$ is the stress-energy tensor that supports the junction. It is also possible to rewrite $T_{ab}$ in terms of intrinsic coordinates on $N_{k}$; that form is independent of the choice of $\ell^{a}$ [@MarSen93]. The reader may notice that not all components of $\gamma_{ab}$ must vanish for the stress tensor to be finite; in particular, the shear $\varsigma$ of either $N_{\ell}$ or $N_{k}$ does not appear in the above equation. This is special to a junction across a null hypersurface; if there is nonvanishing shear across the junction, the spacetime may include an impulsive gravitational wave (which is not sourced by $T_{ab}$) [@Pen72]. The stress-energy tensor on $N_{k}$ has a physical interpretation as a surface layer of a shell of null matter. If $[\theta_{(\ell)}]+ [\kappa_{(k)}]\geq 0$, this shell will satisfy the NEC (Eq. \[NEC\]). Such shells are used to construct new solutions in General Relativity, including in the context of AdS/CFT (see e.g. [@FreHub05; @FisMar14]). If we now demand that the stress tensor be finite, we obtain the following junction conditions: $$[\theta_{(\ell)}] = [\chi_{(\ell)\,a}] = [\kappa_{(k)}]=0.$$ The last condition in particular guarantees that $N$ is an affinely parametrizable null geodesic congruence in the full patched spacetime $M$. Multiple Junctions {#sec:MultiJunction} ------------------ We can now make use of the Barrabès-Israel junction conditions to derive conditions on initial data matching. Instead of taking $V^{+}$ and $V^{-}$ to be spacetime regions in the future and past of a null hypersurface, we take $V_{\mathrm{out}}$ and $V_{\mathrm{in}}$ to be domains of dependence of initial data in $M_{\text{out}}$ and $M_{\text{in}}$. More precisely, let $\Sigma_{\text{out}}, \ \Sigma_{\text{in}}$ be Cauchy slices of $M_{\text{out}},\ M_{\text{in}}$ (which are maximally extended) and let $\sigma_{\text{out}}\subset \Sigma_{\text{out}}$, $\sigma_{\text{in}}\subset \Sigma_{\text{in}}$ be two surfaces, as defined in Sec. \[sec:Defs\]. We take $V_{\text{out}}$, $V_{\text{in}}$ to be the domains of dependence of one side of $\Sigma_{\text{out}}$, $\Sigma_{\text{in}}$ each: $$\begin{aligned} \label{Vpm} & V_{\text{out}}= D\left[\text{Out}_{\Sigma_{\text{out}}}[\sigma_{\text{out}}]\right],\\ & V_{\text{in}} = D\left[\text{In}_{\Sigma_{\text{in}}}[\sigma_{\text{in}}]\right],\end{aligned}$$ where In and Out are chosen arbitrarily. This is illustrated in Fig. \[fig:Multi\]. Suppose now that we want to patch $V_{\mathrm{out}}$ onto $V_{\text{in}}$ across the *codimension-2* surfaces $\sigma_{\text{out}}$, $\sigma_{\text{in}}$. To determine the appropriate conditions at this surface, we will make use of the Barrabès-Israel junction conditions twice. ![The patching construction illustrated in detail. On the left panel, a surface $\sigma_{\text{in}}$ (purple) splits a Cauchy slice $\Sigma_{\text{in}}$ (not shown) in two. The side In$_{\Sigma_{\text{in}}}[\sigma_{\text{in}}]$ is on the interior of $\sigma_{\text{in}}$; the region $V_{\text{in}}$, which is the interior of $\sigma_{\text{in}}$, is obtained by taking the domain of dependence of In$_{\Sigma_{\text{in}}}[\sigma_{\text{in}}]$. The middle panel illustrates the same construction in $M_{\text{out}}$ for $\sigma_{\text{out}}$.
The right panel shows the gluing, with $F$ and $P$ the fiducial spacetime regions that exist only when the junction conditions are satisfied.[]{data-label="fig:Multi"}](MultiJunction.pdf){width="\textwidth"} Let us imagine that there is some fiducial spacetime region $F$ such that $F=J^{+}[\sigma_{\text{out}}]$, and some fiducial spacetime $P$ such that $P=J^{-}[\sigma_{\text{in}}]$. What conditions must $F\cap P$ satisfy so that the entire spacetime $F\cup V_{\mathrm{out}}\cup P \cup V_{\text{in}}$ is consistent and satisfies the Einstein equation? In order for the topological space $F\cup V_{\mathrm{out}}\cup P \cup V_{\text{in}}$ to be a manifold with a continuous metric, the boundaries of all touching sets must be isometric by the Clarke-Dray theorem. This is equivalent to requiring that $\sigma_{\text{out}}$ and $\sigma_{\text{in}}$ be isometric. This is our first multi-junction condition; we now identify $\sigma_{\text{out}}$ and $\sigma_{\text{in}}$ as a single surface $\sigma$. The second condition requires an application of the Barrabès-Israel junction conditions twice. First, we consider joining $V_{\mathrm{out}}\cup F$ and $V_{\text{in}}\cup P$ along their mutual boundary. Let us call this hypersurface $N_{k}$, and as above we require that the generator $k^{a}$ be $C^{0}$ across $\sigma$. We pick the rigging vector $\ell^{a}$ so it is normal to $\sigma$ and $C^{0}$. Then Eq. \[eq:stressShell\] tells us that in order to have a regular stress-energy tensor in the null-null directions, we must require that $\theta_{(\ell)}$, $\chi_{a}\,_{(\ell)}$, and $\kappa_{(k)}$ all be continuous across $N_{k}$. Next, we consider joining $V_{\text{out}}\cup P$ and $V_{\text{in}}\cup F$ along the new boundary, which is the null hypersurface generated by the vector $\ell^{a}$, also taken to be $C^{0}$ across $\sigma$. The Barrabès-Israel junction condition requires that $\theta_{(k)}$, $\chi_{a}\,_{(k)}$, and $\kappa_{(\ell)}$ all be continuous across $N_{\ell}$. This is illustrated in Fig. \[fig:Ns\]. However, unlike in the case of codimension-one hypersurfaces, the condition on the inaffinities is actually vacuous: by an appropriate rescaling, we can always pick the inaffinity to be continuous at $\sigma$. Because we now consider differences when crossing two null hypersurfaces simultaneously at a codimension-2 surface, the symbol $[F]$ will now denote the discontinuities of a quantity $F$ across $\sigma$ in crossing from $O_{W}[\sigma]$ to $I_{W}[\sigma]$ in the following way: $$[F] \equiv \left. F\right|_{O_{W}[\sigma]} - \left. F \right|_{I_{W}[\sigma]}.$$ Finally, the conditions on $V_{\mathrm{out}}$ and $V_{\text{in}}$ are: #### Codimension-Two Junction Conditions: Let $(V_{\mathrm{out}},g_{\text{out}})$, $(V_{\text{in}}, g_{\text{in}})$ be defined as in Eq. \[Vpm\], with $g_{\text{out}}$, $g_{\text{in}}$ smooth. Then we may glue $V_{\mathrm{out}}$ and $V_{\text{in}}$ to one another with a finite stress-energy tensor under the following conditions: 1. The surfaces $\sigma_{\text{out}}$ and $\sigma_{\text{in}}$ are isometric and can thus be identified as a single surface $(\sigma, h)$. 2. There exists a choice of $k^{a}$ and $\ell^{a}$ null normals (satisfying $\ell^{a}k_{a}=-1$) defined on both sides of $\sigma$ such that the following conditions hold: $$\begin{aligned} & [\theta_{(k)}]=0, \label{contThK}\\ & [\theta_{(\ell)}] =0, \label{contThL}\\ & [\chi_{a}\,_{(k)}] =- [\chi_{a}\,_{(\ell)}]=0, \label{contChi}\end{aligned}$$ for some $k^{a}$ and $\ell^{a}$ that are $C^{0}$ on $N_{k}$ and $N_{\ell}$ respectively. As before, no continuity condition is imposed on the shear.
Then the null-null components of the stress tensor are finite, and the Einstein equation is distributionally well-defined. Now, because the data on $\mathrm{Out}_{\Sigma_{\text{out}}}[\sigma_{\text{out}}]$ and on $\mathrm{In}_{\Sigma_{\text{in}}}[\sigma_{\text{in}}]$ is guaranteed to satisfy the constraint equations separately, the conditions Eqs. - guarantee that the entire slice $\Sigma=\mathrm{Out}_{\Sigma_{\text{out}}}[\sigma_{\text{out}}]\cup \sigma \cup\mathrm{In}_{\Sigma_{\text{in}}}[\sigma_{\text{in}}]$ satisfies the constraint equations with a finite stress-energy tensor. Because there is no contribution to the stress-energy tensor from $\sigma$, the initial data on $\Sigma$ has a stress-energy tensor that satisfies the NEC whenever $(M_{\text{out}},g_{\text{out}})$ and $(M_{\text{in}},g_{\text{in}})$ do. [^10] We should now ensure that the data available is sufficient to prescribe a Cauchy evolution of our data into the “fiducial” spacetime regions $F$ and $P$. It is simplest to see that the specified data is sufficient via the characteristic initial data formalism, in which data is specified on a piecewise null hypersurface and evolved forward, as illustrated in Fig. \[fig:outline\]. This differs from the standard Cauchy evolution, which requires a smooth spacelike hypersurface. For our purposes here, the characteristic initial data problem of interest is that of two intersecting cones: $N_{k}$ and $N_{\ell}$ intersect on $\sigma$. The requisite geometric data for a characteristic initial data evolution is precisely the data at hand: a conformal metric on $N_{k}$ and $N_{\ell}$, an intrinsic metric $h_{ab}$ on $\sigma$, the twist $\chi^{a}$ on $\sigma$, expansions $\theta_{(\ell)}$ and $\theta_{(k)}$ on $\sigma$, and the inaffinities $\kappa_{(\ell)}$ and $\kappa_{(k)}$ on $\sigma$ [@Ren90; @Hay93EFE; @BraDro95; @Luk12]. We are not quite done yet, as we have not yet addressed the issue of existence of evolution of the initial data. Here we may take the approach of either the characteristic or Cauchy initial data problem. Rigorous theorems for the local existence and uniqueness of evolution of initial data often impose certain regularity conditions on the initial data. Consider first the standard (spacelike) Cauchy problem. Choquet-Bruhat’s original 1952 theorem for (vacuum) Cauchy evolution [@Cho52] required a triplet $(\Sigma, \gamma_{ab}, K_{ab})$, where $\Sigma$ is the initial (spacelike) Cauchy slice, $\gamma_{ab}$ its induced metric, and $K_{ab}$ its extrinsic curvature tensor, where $\gamma_{ab}$ is $C^{5}$ and $K_{ab}$ is $C^{4}$. Since then, these requirements have been progressively reduced to the requirement that the second partial weak derivatives of $g_{ab}$ be square integrable and of $K_{ab}$ once differentiable [@HugKat77; @ChoChr78; @ChoIse00; @KlaRod02; @Cho04; @Max04; @Max04b; @SmiTat05]; the precise statement is that $g_{ab}$ is in the Sobolev space $H_{loc}^{3/2+\epsilon}$ and $K_{ab}$ in $H_{loc}^{1/2+\epsilon}$ for $\epsilon>0$. More recently, studies of low-regularity metrics in the context of junction conditions have produced limited proofs of local well-posedness for metrics with only *first* partial weak derivatives being square integrable: $g_{ab}\in H_{loc}^{1}$ and $K_{ab}\in H_{loc}^{0}$ [@Cla97; @VicWil01; @GraMay08; @SanVic15; @SanVic16]. This is precisely the regularity regime of our desired results, and we will assume existence, in accordance with expectations partially borne out in this class of cases for the Cauchy problem. 
For the characteristic problem (where only local existence and uniqueness results are rigorously established in broad generality, but see [@CacFra04; @CacNic06] for some limited global results), the original proof of Rendall for existence of a neighborhood at the intersection of two null hypersurfaces is given for $C^{\infty}$ data, but the expectation is that rougher initial data should behave similarly [@Ren90], while Hayward’s proof in [@Hay93EFE] shows that a unique solution exists up to caustics. More recently, Luk proved the existence of a neighborhood of the *union* of both null hypersurfaces assuming the data is $C^{\infty}$ [@Luk12], although followup work on impulsive gravitational waves in the characteristic problem has been able to accommodate a curvature with a Dirac $\delta$-function singularity in the vacuum [@LukRod12]. Note that uniqueness of the Cauchy evolution may be more questionable than existence, as it is possible that the initial data could develop into a Cauchy horizon (this is not expected to occur for vacuum initial data [@ChrPae12]). In that case, we simply adopt the approach in [@Wald] and use the maximal Cauchy development. Finally, a brief comment on the constraint equations for matter fields. The original approach of Rendall [@Ren90] for proving (local) well-posedness of the characteristic initial data extends to scalar, Maxwell, and Yang-Mills fields coupled to gravity [@ChrPae12]; a generalization of the method also works for Vlasov fields [@ChoChr12]. This works well when the matter fields in $I_{W}[\sigma]$ and $O_{W}[\sigma]$ have the same matter Lagrangian. The Outer Entropy {#sec:outer} ================= We have thus far focused on giving a precise definition of minimar surfaces as generalizations of HRT surfaces. In what follows, we will give a definition of our generalization of the von Neumann entropy to the outer entropy. Like the outer wedge, the outer entropy is defined for any bulk surface homologous to the boundary. Although this is a purely classical bulk construction, its relation to the boundary entropy via the HRT formula will justify its interpretation as a coarse-grained entropy. We are interested in the entropy associated to our ignorance of the inner wedge $I_{W}[\sigma]$ subject to knowledge of all of the field data (including the metric) in the outer wedge $O_{W}[\sigma]$. Consider all possible field data $\{\alpha\}$ for possible inner wedges $I_{W}^{(\alpha)}[\sigma]$ that could be patched onto $\sigma$ without altering $O_{W}[\sigma]$ (in such a way as to preserve all global conditions on the spacetime necessary to define the HRT/maximin surface). This is of course constrained by the matching conditions at $\sigma$ itself, derived in Sec. \[sec:MultiJunction\]. By AdS/CFT, each allowed spacetime obtained by some interior $I_{W}^{(\alpha)}[\sigma]$ then corresponds to some boundary state $\rho_{B}^{(\alpha)}$ whose von Neumann entropy is given by: \[eq:tomax\] S\[\^[()]{}\] = -(\_[B]{}\^[()]{} \^[()]{}) = , where $X^{(\alpha)}$ is the HRT surface homologous to the boundary component $B$ in the spacetime with $I_{W}^{(\alpha)}[\mu]$. We would like to define an entropy associated with coarse graining over all such states $\rho_{B}^{(\alpha)}$. A simple way to do this is via a maximization of Eq.  
over all states $\rho_{B}^{(\alpha)}$: #### Outer Entropy: The outer entropy associated to a surface $\sigma$ homologous to $B$ is defined by maximizing the von Neumann entropy over the possible inner wedge data $\{\alpha\}$: $$S^{\text{(outer)}}[\sigma]\equiv \max\limits_{\{\alpha\}}[ -\mathrm{tr}(\rho_{B}^{(\alpha)} \ln \rho_{B}^{(\alpha)})].$$ In other words, for any spacetime with $O_{W}[\sigma]$, we minimize the area of extremal surfaces homologous to $B$, and we then maximize over all possible inner wedges. This follows a familiar theme of min-max proposals for computing entropy in AdS/CFT [@Wal12; @FreHea16]. *A priori*, the outer entropy of a surface is not related to that surface’s area. However, for an HRT surface $X_{B}$, $S^{(\mathrm{outer})}[X_{B}] = -\mathrm{tr}(\rho_{B}\ln \rho_{B})= \text{Area}[X_{B}]/4 G\hbar$. Our main result, derived in Sec. \[sec:main\], is an analogous relation for minimar surfaces $\mu$: $S^{(\mathrm{outer})}[\mu] = \text{Area}[\mu]/4 G\hbar$. On the other hand, for a large class of trapped and untrapped surfaces, we will show in Sec. \[sec:general\] that $S^{\text{(outer)}}$ is, respectively, larger and smaller than the area of the surfaces. This shows that minimar surfaces play a very special role in gravitational thermodynamics. Main Construction {#sec:main} ================= In this section, we prove that the outer entropy of a minimar surface is proportional to the area of that surface. We do this in three steps: first, we show that the outer entropy of a minimar surface $\mu$ is bounded from above by the area of $\mu$. This is done by showing that \[eq:inequality\] S\[\_[B]{}\^[()]{}\]=, where as before $X_{B}^{(\alpha)}$ is the HRT surface of the connected component $B$ in the modified spacetime dual to $\rho^{(\alpha)}$. We will drop the superscript when the context is clear. The first equality follows by HRT, and the inequality will be shown via maximin techniques below. Next, we show that for some choice of $\alpha$, there exists a spacetime $(M',g')$ with outer wedge $O_{W}[\mu]$ and an extremal surface $X$ whose area is exactly the same as the area of $\mu$. Finally, we prove that $X$ is in fact the HRT surface of $(M',g')$. This constructs a spacetime dual to a boundary state $\rho'$ whose von Neumann entropy $S[\rho']$ is given by the area of $\mu$. We conclude that Eq.  is in fact an equality. Bounding the Outer Entropy {#sec:bound} -------------------------- To show that $S^{(\mathrm{outer})}[\mu]$ is bounded from above by Area$[\mu]/4G \hbar$, we use a technique from [@Wal12] of representing a marginal surface (e.g. $\mu$) on any Cauchy slice of the spacetime via a null congruence fired from $\mu$. We will assume for the rest of this section that $\mu$ is marginally trapped ($\theta_{(\ell)}\leq 0$, $\theta_{(k)}=0$); the construction when $\mu$ is marginally anti-trapped ($\theta_{(\ell)}\geq 0$, $\theta_{(k)}=0$) is simply a time reverse. #### Representative: Let $\mu$ be a minimar surface in $M$, and let $\Sigma$ be a Cauchy slice of $M$, not necessarily containing $\mu$. Let $N_{k}(\mu)$ be the null congruence generated from $\mu$ by firing null geodesics in the $+k^{a}$ and $-k^{a}$ directions. Since we are assuming global hyperbolicity, $N_{k}(\mu)$ splits $M$ into two pieces. The representative of $\mu$ on $\Sigma$ can then be defined as () =N\_[k]{}(). where by construction $\overline{\mu}(\Sigma)$ is homologous to $\mu$, $B$, and the HRT surface $X_B$. 
(In some cases, when the generators of $N_{k}(\mu)$ all intersect prior to reaching $\Sigma$, $\overline{\mu}(\Sigma)$ may be the empty set. In these cases, the HRT surface is also the empty set, and the following result will hold trivially.) An immediate consequence of the NCC is that the area of $\overline{\mu}(\Sigma)$ is bounded from above by the area of $\mu$ [@Wal12]: $$\label{eq:bd} \text{Area}[\overline{\mu}(\Sigma)]\leq \text{Area}[\mu].$$ Consider now any spacetime $(M^{(\alpha)},g^{(\alpha)})$ containing $O_{W}[\mu]$. As before, let $X_{B}^{(\alpha)}$ be the HRT surface of $B$. By the maximin formulation [@Wal12] there exists a Cauchy slice $\Sigma$ of $(M^{(\alpha)},g^{(\alpha)})$ on which $X_{B}^{(\alpha)}$ is the minimal area surface homologous to $B$. This immediately implies that the representative $\overline{\mu}(\Sigma)$ has greater area than that of $X_{B}^{(\alpha)}$. Using Eq. \[eq:bd\] and HRT, we obtain $$S[\rho_{B}^{(\alpha)}] = \frac{\text{Area}[X_{B}^{(\alpha)}]}{4G\hbar} \leq \frac{\text{Area}[\overline{\mu}(\Sigma)]}{4G\hbar} \leq \frac{\text{Area}[\mu]}{4G\hbar}.$$ This establishes that for any spacetime $(M^{(\alpha)},g^{(\alpha)})$ with fixed outer wedge $O_{W}[\mu]$, the von Neumann entropy of $\rho_{B}^{(\alpha)}$ is bounded from above by one quarter of the area of $\mu$. Since $S^{\mathrm{(outer)}}[\mu]$ is obtained by maximizing $S$ subject to fixed $O_{W}[\mu]$, this immediately implies the desired result: $$\label{eq:outerbound} S^{\mathrm{(outer)}}[\mu] \leq \frac{\text{Area}[\mu]}{4G\hbar}.$$ Existence of Extremal Surface {#sec:exist} ----------------------------- We now proceed to give an explicit construction for the inner wedge $I_{W}[\mu]$ that maximizes $S[\rho_B]$. In this new auxiliary spacetime $(M',g')$, we first show that there exists an extremal surface $X$ homologous to $B$ whose area is the same as the area of $\mu$. (Later, in Sec. \[subsec:min\], we will prove that $X$ is in fact the extremal surface of least area (i.e. the HRT surface) of $(M',g')$, so that the von Neumann entropy of the new spacetime is Area$[\mu]/4G\hbar$. This shows that the maximum defining $S^{\mathrm{(outer)}}[\mu]$ is attained, and that Eq. \[eq:outerbound\] is in fact an equality.) The spacetime is constructed via the initial data gluing procedure described in Sec. \[sec:MultiJunction\]. The data in $O_{W}[\mu]$, and hence its past boundary $N_{-\ell}$, is already fixed. To this (1) we glue a stationary null hypersurface $N_{-k}$ shot in the inwards-past direction from $\mu$; the assumption of stationarity fully fixes the data on $N_{-k}$. We show (2) that there exists an extremal surface $X$ on $N_{-k}$ and calculate its location. Finally, (3) we complete the spacetime on the other side of $X$ by assuming that it is CPT-reflection symmetric across $X$ (this introduces an additional AdS conformal boundary $\widetilde{B}$, and the reflection of the outer wedge $\widetilde{O}_{W}[\widetilde{\mu}]$ and its future boundary $\widetilde{N_{-\ell}}$). The geometry constructed so far includes a piecewise null Cauchy slice $\Sigma$ composed of three null hypersurfaces $N_{-\ell}$, $N_{-k}$, and $\widetilde{N_{-\ell}}$; this is illustrated in Fig. \[fig:maximizing\]. The resulting hypersurface satisfies all constraint equations (and corresponding junction conditions) derived in Sec. \[sec:MultiJunction\]. We can now define the entire spacetime $(M', g')$ by Cauchy evolution from this slice (the characteristic initial data problem). ![A figure of the full maximizing spacetime, adapted from [@EngWal17b]. The characteristic Cauchy slice that we construct to obtain this geometry consists of $N_{-\ell}\cup N_{-k}\cup \widetilde{N_{-\ell}}$.[]{data-label="fig:maximizing"}](MainConstruction.pdf){width="50.00000%"} Recall from Sec.
\[sec:MultiJunction\] that in the characteristic problem, the data needed to be specified on $\Sigma$ to yield a well-defined and (at least locally) deterministic evolution is the following: the conformal metric on the null hypersurfaces, the null expansions of the null hypersurfaces at $\mu$, the twist, and the intrinsic metric on $\mu$. All of this information is fixed by the construction outlined above. ### Step 1: Constructing $N_{-k}$ {#step-1-constructing-n_-k .unnumbered} Let us partially fix a gauge in null coordinates $u$ and $v$ and spatial coordinates $\{y_{i}\}_{i=1}^{d-1}$. Our choices below will not fully fix the gauge – we still have some gauge freedom left in the spatial directions. The indices $\{a,b\}$ will run over all spacetime, while the indices $\{i,j\}$ are restricted to the transverse $y_{i}$ directions. We fix $\mu$ to be at $u=0$ and $v=0$, so $N_{-k}$ is at $u=0$ and $N_{-\ell}$ is at $v=0$. In terms of the covariant definitions above, $\ell^{a}=(\partial/\partial u)^{a}$ and $k^{a}=(\partial/\partial v)^{a}$. We will use $\theta_{u}$ and $\theta_{v}$ below to emphasize our choice of gauge. Our gauge conditions are: $$\begin{aligned} \label{eq:gauge} g_{uv}&=-1\\ g_{uu}&=g_{ui}=0\\ \left . g_{vv} \right | _{u=0} &= \left . g_{vi}\right | _{u=0} = \left . g_{vv,u}\right |_{u=0}=0.\end{aligned}$$ In this gauge, the twist and null extrinsic curvatures are: $$\begin{aligned} & g_{vi,u}= 2\chi_{i}\\ & \left . g_{ij,u}\right |_{v=\mathrm{const}} = 2B_{ij}\phantom{}_{(u)}\\ & \left . g_{ij,v}\right |_{u=0}=2 B_{ij}\phantom{}_{(v)}\end{aligned}$$ where $\chi_{i}$ is the (gauge-dependent) twist and $B_{ij(q)}$ is the null extrinsic curvature in the $q=u$ or $q=v$ direction, which is related to the expansion and shear via Eq. . In this gauge, the constraint equations reduce to (see e.g. [@Hay01; @Hay04; @GouJar06; @Hay06; @Cao10] for the gauge-independent constraints): $$\begin{aligned} &\theta_{u,v}=-\frac{1}{2} \mathcal{R} +\nabla\cdot \chi -\theta_{u}\theta_{v} + 8\pi G T_{uv}+\chi^{2} \label{eq:GUV} ,\\ & \theta_{v,v} =- \frac{1}{D-2}\theta_{v}^{2} -\varsigma_{v}^{2} -8\pi G_{N} T_{vv}, \label{eq:Raych}\\ &\chi_{i,v}= -\theta_{v}\chi_{i} + \left(\frac{D-3}{D-2} \right)\nabla_{i} \theta_{v} - (\nabla\cdot \varsigma_{v})_{i} +8 \pi T_{iv}\end{aligned}$$ where $\theta_{u,v}$ is the $v$ derivative of $\theta_{u}$ for constant $v$ slices and $T_{vv}$ denotes the $v-v$ component of $T_{ab}$. We now specify initial data on $N_{-k}$. Because we are fixing $O_{W}[\mu]$ and will implement a symmetry transformation to complete the spacetime, this is the only hypersurface on which we need to specify initial data. We will require: $$\begin{aligned} & \varsigma_{v}[N_{-k}]=0, \label{shear} \\ & T_{vv}[N_{-k}]=0, \label{focus}\\ &T_{iv}[N_{-k}] = 0, \label{twist} \\ &T_{uv}[N_{-k}]= \text{const}. \label{cross}\end{aligned}$$ Inserting the first two equalities into the Raychaudhuri Eq. implies that $\theta_{v}=0$; hence $N_{-k}$ is stationary, and $\mathcal{R}$ is also constant along $N_{-k}$. Eq. is a condition on the twist, since on $N_{-k}$, $T_{iv}=\chi_{i,v}$; this fixes the twist to be constant along the $v$-direction on $N_{-k}$. Using the above, Eq. requires $\left . \theta_{u,v}\right |_{v=\text{const}}$ to be a constant along the $v$-direction via Eq. . The reader may ask whether we can always choose the stress tensor to obey our requirements. 
Let us briefly justify this in the special case where the bulk matter is a scalar field $\phi$ (with the standard kinetic action) minimally coupled to a Maxwell field $A_{a}$. The Lagrangian density is $${\cal L} = -\sqrt{-g} \left ( \frac{1}{4} F_{ab}F_{cd}\, g^{ac}g^{bd}+ \bar{\nabla}_{a}\phi^* \bar{\nabla}_{b}\phi \, g^{ab} \right),$$ where $\bar{\nabla}_{a}$ is the covariant derivative with respect to the gauge potential $A_{a}$. With the corresponding stress-energy tensor: $$\begin{aligned} & T_{vv}=2\bar{\nabla}_{v}\phi^*\bar{\nabla}_{v}\phi + F_{vi}F_{v}\,^{i}, \\ &T_{vi}=2\bar{\nabla}_{v}\phi^*\bar{\nabla}_{i}\phi + F_{vj}F_{i}\,^{j} + F_{vi}F_{uv},\\ & T_{uv} = \bar{\nabla}_{i}\phi^* \bar{\nabla}_{i}\phi + F_{ij}F^{ij} + \frac{1}{2}F_{uv}F_{uv}. \label{eq:Tuv}\end{aligned}$$ By setting $ \bar{\nabla}_{v}\phi = F_{iv} = 0$ on $N_{-k}$, these being free data in the characteristic problem, we immediately recover Eqs. -. (To prove constancy of $T_{uv}$, use the Bianchi identity and the Gauss Law on the null hypersurface.) These conditions are analogous to Eq.  for gravitational radiation. We expect that a similar prescription exists for other reasonable matter fields to satisfy Eqs. -. Assuming so, it is always possible to construct a stationary hypersurface null $N_{-k}$ satisfying the constraint equations. We now show that we can also satisfy the junction conditions Eq. - across $\mu$, as well as the corresponding continuity conditions for the matter fields:[^11] & \[\] =0, \[m1\]\ & \[F\_[ij]{}\]=0, \[m2\]\ & \[F\_[uv]{}\]=0 \[m3\]. The first junction Eq.   for $\theta_v$ is already satisfied because it vanishes on both $N_{-k}$ and $\mu$. To see that we can satisfy the remaining conditions, note that so far we have only fixed the transverse geometry $g_{ij}$, the twist $\chi_{i}$ and $\left. \theta_{u,v}[N_{-k}]\right|_{v=\text{const}}$ up to functions of the transverse $y_{i}$ directions. Even after fixing $\theta_{u,v}$ we can still choose $\theta_{u}$ at $v = 0$ on $N_{-k}$. Similarly, $\phi$ and $F_{ij}$ are defined up to transverse functions. We are therefore free to choose all of these quantities to be continuous across the junction at $\mu$: $$\begin{aligned} & g_{ij}[N_{-k}]=g_{ij}[\mu],\\ &\chi_{i}[N_{-k}]=\chi_{i}[\mu], \\ &\left. \theta_{u,v}[N_{-k}]\right|_{v=\text{const}}= \theta_{u,v}[\mu], \\ &\lim_{v\rightarrow 0}\left. \theta_{u}[N_{-k}]\right|_{v=\text{const}}= \theta_{u}[\mu],\\ & \phi[N_{-k}]=\phi[\mu],\\ & F_{ij}[N_{-k}]=F_{ij}[\mu],\\ & F_{uv}[N_{-k}]=F_{uv}[\mu],\end{aligned}$$ where the last three conditions also imply $T_{uv}[N_{-k}]=T_{uv}[\mu]$. Note that the cross-focusing constraint continues to be satisfied because all of its terms are by construction the same on $N_{-k}$ and $\mu$; this is only possible because $\theta_{v}=0$ on $\mu$. These choices satisfy the junction conditions for the metric Eqs. &, as well as for matter Eqs. -. We have therefore succeeded at our goal of gluing a stationary null hypersurface $N_{-k}$ to $\mu$.[^12] ### Step 2: Finding $X$ {#step-2-finding-x .unnumbered} Let us now proceed to find an extremal surface on $N_{-k}$. By construction, $\theta_{(k)}[N_{-k}]=0$. Finding an extremal surface $X$ therefore reduces to finding a cross-section of $N_{-k}$ on which $\theta_{(\ell)}=0$. On a constant $v$ slice, all of the terms on the RHS of Eq.  — including the contributions to $T_{uv}$ from Eq.  — are constant, and equal to their values at $\mu$. 
By definition of a minimar surface (Requirement \[def:minimarCross\]) $\theta_{u,v}[\mu]<0$, so $\left. \theta_{u,v}\right |_{v=\text{const}}$ is strictly negative and constant with respect to $v$. Hence $\left. \theta_{u}\right |_{v=\text{const}}$ increases at a constant rate as we move to more negative $v$ values. Since $\left. \theta_{u}\right |_{v=\text{const}}$ starts out negative (by Eq. \[def:marNeg\]), this indicates that it attains zero on some slice $\sigma$ of $N_{-k}$. However, this need not be a constant-$v$ slice, and if not, then generally $\left. \theta_{u}\right |_\sigma \ne \left. \theta_{u}\right |_{v=\text{const}}$, since (as described explicitly below) $\theta_{u}$ is also sensitive to the derivatives of $v$ with respect to the transverse directions. We must therefore work harder to find the slice $X$ on which $\theta_{u}$ vanishes; this slice will have two vanishing null expansions, and is thus by definition an extremal surface. To find this putative slice, we first compare $\theta_{u}$ of a varying-$v$ slice $\beta$ with that of a constant-$v$ slice $\alpha$. Let $v=f(y^{i})$ be the equation for the location of $\beta$ on $N_{-k}$. By definition, $v^{a}$ is normal to both $\alpha$ and $\beta$, but $u^a$ is normal only to $\alpha$. The second null normal to $\beta$, denoted $w^{a}$, can be computed from the defining equation for $\beta$: $$w^{a} = u^{a} + \sum_{i} y_{i}^{a}\,\nabla_{i}f + \frac{1}{2}\left(\nabla_{i}f\,\nabla^{i}f\right) v^{a},$$ where $\nabla_{i}\equiv y^{b}_{i}\nabla_{b}$. The null expansion of $\beta$ in the $w^{a}$ direction is given by $\theta_{w}[\beta]=h^{ab}\nabla_{a}w_{b}$, where $h_{ab}$ is the induced metric on $\beta$. Transforming this into the coordinates on $\alpha$ yields the following relation: $$\label{eq:relation} \theta_{w}[\beta]= \theta_{u}[\alpha]+\Box f(y_{i}) + 2\chi\cdot\nabla f(y_{i}),$$ where we have used stationarity of $N_{-k}$ to simplify the equation. This is illustrated in Fig. \[fig:affineslices\]. Because $\theta_{u,v}$ (i.e. $\partial_{v}\theta_{u}$) is independent of $v$ on constant-$v$ slices, $\theta_{u}$ of $\alpha$ can be simply written in terms of the expansion at $\mu$: $$\label{eq:affineexp} \theta_{u}[\alpha] = \theta_{u}[\mu]+v\,\theta_{u,v}[\mu].$$ Thus we obtain $$\label{eq:loc} \theta_{w}[\beta]= \theta_{u}[\mu] +\theta_{u,v}[\mu]\, f(y_{i}) +\Box f(y_{i}) + 2\chi\cdot\nabla f(y_{i}) \equiv L^{\mu}[f] + \theta_{u}[\mu],$$ where $L^{\mu}$ is an operator that depends only on quantities evaluated on the minimar surface $\mu$. This operator is known as the *stability operator* [@AndMar05; @AndMar08]. Recall now that we are searching for an extremal surface: that is, we are looking for a surface, which we shall call $X$, where $\theta_{w}[X]$ vanishes. Eq. \[eq:loc\] then becomes the following linear equation for $f$: $$L^{\mu}[f] = -\theta_{u}[\mu].$$ ![A (1+1)-dimensional cartoon illustrating the hypersurface $N_{-k}$, which is located at $u=0$. The minimar surface $\mu$ is at $v=0=u$. The horizontal black lines are slices of constant $v$, on which $\theta_{u,v}<0$, and $v<0$ on the entire drawn slice. The extremal surface $X$ is in general not a constant-$v$ slice, but rather some function $v=f(y^{i})$, drawn above in orange.[]{data-label="fig:affineslices"}](LocationOfX.pdf){width="50.00000%"} It is a known result that the eigenvalue of $L^{\mu}$ with the smallest real part is real; furthermore, this eigenvalue is strictly positive if and only if the marginal surface $\mu$ is “strictly stable” (equivalent to Requirement \[def:minimarCross\] for a minimar surface) [@AndMar05]. Hence, because $\mu$ is minimar, $L^{\mu}$ has no zero eigenvalues, and is thus invertible.
A nontrivial solution exists, and therefore the sought-after extremal surface $X$ exists and may be found by solving Eq. . A final property that we will need is that $f[y]$ is negative (or zero): otherwise, it could lie partly on $N_{+k}$; in such a case, we would find that we need to replace data on $O_{W}[\mu]$ to get an extremal surface with the area of $\mu$, which would ruin our construction. That $f$ must be nonpositive can be shown from stability of $\mu$ by invoking the Krein-Rutman theorem [@KreRutEnglish; @KreRutRussian; @AndMar08], but a simple geometric argument proves this result as well. Suppose that $f$ is indeed positive somewhere, so that it lies at least partly on $N_{+k}$. Because $X$ is compact, $f$ has a maximum. At the maximum, $\nabla_{i} f=0$, $\Box f\leq0$, and $f>0$. But in such a case, Eq.  for an extremal $\beta$ has a strictly negative quantity on the right hand side but is zero on the left hand side. So we have a contradiction, and thus $f \le 0$. We conclude that there exists an extremal surface $X$ on $N_{-k}$. Because $N_{-k}$ is stationary, Area$[X]=\text{Area}[\mu]$. ### Step 3: CPT Reflecting {#step-3-cpt-reflecting .unnumbered} So far we have constructed initial data on a partial Cauchy slice $N_{-k} \cup N_{-\ell}$ terminating on $X$ in the interior. Assuming that $X$ is indeed an HRT surface, then by evolving from this initial data (using the AdS boundary conditions) we expect to be able to construct the entanglement wedge $E_W[B]=O_W[X]$, which is dual to some mixed state $\rho_{B}'$ of the boundary CFT associated with $B$. The advantage of this is that it gives us a state associated with a single spacetime boundary region $B$, which is natural if our original spacetime $M$ had only a single boundary CFT (e.g. in the case of a collapsing black hole). However, this leads to the oddity of a bulk spacetime which simply ends at $X$. In order to construct a complete spacetime $M'$, and to facilitate our proof that $X$ is indeed an HRT surface, we will find it convenient to construct a spacetime with an additional auxiliary boundary $\widetilde{B}$. In the CFT dual, this corresponds to adding a second CFT that purifies the state, similar to the thermofield double interpretation of the Einstein-Rosen wormhole geometry [@Mal01]. Accordingly, to complete our construction, we must specify initial data on a complete Cauchy slice. We will generate this slice by acting with a CPT-reflection across $X$ that takes $v \to -v$, $u \to -u$, $y_i \to y_i$. This transformation acts on the initial data at a surface as follows: **CPT Odd** **CPT Even** --------------- -------------- $ \theta_{v}$ $\chi_{i}$ $\theta_{u}$ $ \phi $ $F_{iv}$ $F_{ij}$   $F_{uv}$ All quantities that are odd under CPT vanish on $X$ by construction.[^13] Therefore the CPT-conjugate data satisfies the requisite junction conditions Eqs. - and -.[^14] The result is a second boundary $\widetilde{B}$, with time running in the opposite direction from $B$. $B$ and $\tilde{B}$ are connected by a Cauchy slice with three null segments: $\Sigma=N_{-\ell}[\mu]\cup N_{-k}[\mu]\cup \widetilde{N_{-\ell}}[\widetilde{\mu}]$. We are using tildes to represent CPT-conjugated submanifolds. This is illustrated in Fig. \[fig:maximizing\].[^15] We have now specified all data necessary to uniquely evolve characteristic initial data via the Einstein field equation. The resulting spacetime $(M',g')$ has a minimar surface $\mu$ whose outer wedge $O_{W}[\mu]$ is the same as in $(M,g)$. 
$M'$ contains an extremal surface $X$ on the boundary of the inner wedge $I_{W}[\mu]$, which is homologous to $\mu$ and therefore to the boundary $B$. Thus it is a candidate for the HRT surface, although we have not yet shown it is the extremal surface of least area in $M'$. That will be accomplished in the next section.

Minimality of the Extremal Surface {#subsec:min}
----------------------------------

We now proceed to show that the von Neumann entropy of $(M',g')$ is actually given by Area$[\mu] = \text{Area}[X]$. This amounts to showing that the area of any other extremal surface $X'$ in $(M',g')$ cannot be smaller than the area of $X$. If there are no other extremal surfaces in $(M',g')$, we are done. So suppose that there exists an extremal surface $X'\neq X$ homologous to $B$.

Let $\Sigma$ be the Cauchy slice $\Sigma_\text{min} \cup N_{-k}\cup\widetilde{\Sigma_\text{min}}$ of $(M',g')$. Recall that $\Sigma_\text{min}[\mu]$ is the partial Cauchy slice on which the minimar surface $\mu$ is minimal (Requirement \[def:minimarMin\]), and $\widetilde{\Sigma_\text{min}}[\widetilde{\mu}]$ is its CPT-conjugate. Let $\overline{X'}(\Sigma)$ be the representative of $X'$ on $\Sigma$. We will first treat the case where $\overline{X'}(\Sigma)$ lies on just one of $\Sigma_\text{min}$ or $N_{-k}$ (the $\widetilde{\Sigma_\text{min}}$ case is symmetrical). This is illustrated in Fig. \[fig:MinExtorInt\]. If $\overline{X'}(\Sigma)$ lies on $N_{-k}$, then: $$\mathrm{Area}[\overline{X'}(\Sigma)]=\mathrm{Area}[X].$$ This follows from the fact that $N_{-k}$ is stationary. If $\overline{X'}(\Sigma)$ lies on $\Sigma_\text{min}$, then by definition of $\mu$, the area of $\overline{X'}(\Sigma)$ must be larger than the area of $\mu$. Altogether, if $\overline{X'}(\Sigma)$ lies on either side of $\mu$, we have: $$\mathrm{Area}[\overline{X'}(\Sigma)]\geq\mathrm{Area}[X].$$ Since the area of an extremal surface is always at least as large as the area of its representative, we immediately find: $$\mathrm{Area}[X']\geq\mathrm{Area}[X].$$ This shows that $X$ is the minimal area extremal surface homologous to $B$. It is possible that $X'$ has the same area, but this does not affect our conclusion that the area of $\mu$ gives the von Neumann entropy of $(M', g')$.

The case where $\overline{X'}(\Sigma)$ intersects multiple regions is only slightly more complicated. Suppose for example that $\overline{X'}(\Sigma)$ lies on both $N_{-k}$ and $\Sigma_\text{min}$. Let $$\begin{aligned}
&x_{1}=\overline{X'}\cap \Sigma_\text{min}\\
&x_{2}=\overline{X'}\cap N_{-k},\end{aligned}$$ and further divide $\mu$ into two subsets, where $\mu_{1}=\mu\cap O_{W}[X']$ and $\mu_{2}$ is the complement in $\mu$. See Fig. \[fig:mixed\]. Note that $\mu_{1}\cup x_{2}$ and $\mu_{2}\cup x_{1}$ are both homologous to $\mu$. Then we find:

![The case of a surface $X'$ whose representative lies partly in $\Sigma_{\text{out}}$ and partly in $\Sigma_{\text{in}}$. This illustration is a planar projection of the spacetime and should be thought of as being periodically identified. []{data-label="fig:mixed"}](minimalityproofMixed.pdf){width="50.00000%"}

$$\begin{aligned}
\mathrm{Area}[\mu_1] +\mathrm{Area}[x_{1}]&\geq\mathrm{Area}[\mu]\\
\mathrm{Area}[\mu_{2}]+\mathrm{Area}[x_{2}]& = \mathrm{Area}[\mu],\end{aligned}$$ where the first line follows from Requirement \[def:minimarMin\], and the second follows by stationarity of $N_{-k}$. Altogether, we find: $$\mathrm{Area}[\overline{X'}(\Sigma)]= \mathrm{Area}[x_{1}]+\mathrm{Area}[x_{2}]\geq \mathrm{Area}[\mu].$$ (If $\overline{X'}(\Sigma)$ intersects all three components of $\Sigma$, the argument works the same way.) This completes the proof.
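For the mixed case, the way the last two displays combine may be spelled out explicitly (this is only a restatement of the inequalities above, not an additional assumption): since $\mathrm{Area}[\mu_{1}]+\mathrm{Area}[\mu_{2}]=\mathrm{Area}[\mu]$, adding the two relations gives
$$\mathrm{Area}[X']\;\geq\;\mathrm{Area}[\overline{X'}(\Sigma)]\;=\;\mathrm{Area}[x_{1}]+\mathrm{Area}[x_{2}]\;\geq\;2\,\mathrm{Area}[\mu]-\mathrm{Area}[\mu_{1}]-\mathrm{Area}[\mu_{2}]\;=\;\mathrm{Area}[\mu]\;=\;\mathrm{Area}[X],$$
so the mixed case leads to the same conclusion as the single-region case.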
This shows that $X$ is minimal among all extremal surfaces in the auxiliary spacetime $(M',g')$ and therefore by HRT: $$S[\rho']=\frac{\mathrm{Area}[X]}{4G\hbar},$$ where $\rho'$ is the state dual to $(M',g')$. We already established in Sec. \[sec:bound\] that the outer entropy is bounded above by $\mathrm{Area}[\mu]/4G\hbar$. This construction shows that the maximum can indeed be attained. This proves the desired claim: $$S^{(\mathrm{outer})}[\mu] \;=\; \max\, S_{vN}\big[\rho_{B}\big] \;=\; \frac{\mathrm{Area}[\mu]}{4G\hbar},$$ where the maximum is taken over all states $\rho_{B}$ consistent with the data in the outer wedge $O_{W}[\mu]$.

Outer Entropy of Other Surfaces {#sec:others}
===============================

Extremal Surfaces as Minimar Surfaces {#sec:extremal}
-------------------------------------

A special case of our result is when the minimar surface is itself an extremal surface $X$, so that $\theta_{(\ell)} = 0$ as well as $\theta_{(k)} = 0$. To be minimar, the surface must either be HRT already (in which case $S^{(\text{outer})} = S_{vN}$ trivially), or else a non-minimal extremal surface lying closer to the boundary than any other extremal surface of lesser area. (As shown in Appendix \[sec:HRT\], in the extremal case the minimality on the partial Cauchy slice $\Sigma_\text{min}$ automatically implies that $\theta_{(\ell),k} \le 0$.) In this case, there is no need to construct $N_{-k}$. We only need to perform the CPT-reflection about $X$, which is now guaranteed by Sec. \[subsec:min\] to be the HRT surface in the new spacetime. Hence $$S^{(\text{outer})}[X] = \frac{\mathrm{Area}[X]}{4G\hbar},$$ allowing us to interpret the area of a non-minimal extremal surface as a coarse-grained entropy.

This construction works even if we take $X$ to be an extremal surface anchored to the boundary of a subregion $R \subset B$. Because $X$ and the original HRT surface are locally minimal on some Cauchy slice $\Sigma$ of the original spacetime $(M,g)$, it follows that the divergence structure of their areas agrees. However, in general, as investigated in [@MarWhi17], the divergences in the area of general boundary-anchored marginal surfaces are local to the boundary region in question only at leading order. This complicates any attempt to give a similar prescription for the outer entropy of a non-extremal minimar surface. Hence we do not address the general case of boundary-anchored $\mu$’s in this paper.

Non-Marginal Surfaces {#sec:general}
---------------------

How does the outer entropy of non-minimar surfaces compare with their area? There are no general grounds to expect a particular relationship between the area and outer entropy of arbitrary surfaces. However, for untrapped and certain trapped surfaces, the area turns out to be a bound on the outer entropy.

### Untrapped Surfaces {#sec:untrapped}

Recall that an untrapped surface satisfies the following relation: $$\begin{aligned}
&\theta_{(\ell)}<0\\
&\theta_{(k)}>0.\end{aligned}$$ An example of such a surface is a cross-section of a generic causal horizon, for which $\theta_{(k)} > 0$ due to Hawking’s area-increase theorem [@Haw71] while $\theta_{(\ell)}< 0$ if the cross-section is outside the past horizon. An even more special case is a (generic) causal surface, which is the intersection of the past and future causal horizons [@HubRan12]. It has been proposed that the areas of these surfaces correspond to some notion of coarse-grained entropy [@HubRan12; @KelWal13], with specific proposals being given in [@FreMos13; @KelWal13]. However, the proposal in [@FreMos13] was refuted by [@KelWal13], while [@KelWal13] was refuted in [@EngWal17a]. The counterexample in [@EngWal17a] involved a causal surface with $S^{\mathrm{outer}} = 0$ but $\text{Area} >0$.
Hence the outer entropy was strictly less than the area. Below we prove that this relationship in fact holds more generally for any untrapped surface that lives outside of the horizons[^16]:

#### An Upper Bound on $S^{(\mathrm{outer})}$:

If $\upsilon$ is an untrapped surface homologous to $B$ and lying outside the past and future horizons of $\partial M$, then: $$\label{untrapped}
S^{(\mathrm{outer})}[\upsilon] < \frac{\mathrm{Area}[\upsilon]}{4G\hbar}.$$

Because $\upsilon$ lies outside of the event horizons, a nontrivial compact extremal surface can only live in $I_{W}[\upsilon]$ [@Pen69]. Let $X$ be the HRT surface corresponding to a given choice of $I_{W}[\upsilon]$, and let $\Sigma_{\text{min}}$ be the Cauchy slice on which $X$ is minimal. We find: $$\mathrm{Area}[X]\leq\mathrm{Area}[\Sigma_{\text{min}}\cap \partial I_{W}[\upsilon]]\leq\mathrm{Area}[\upsilon],$$ where the first inequality follows by minimality of $X$ on $\Sigma_{\text{min}}$, and the second inequality follows from the fact that $\upsilon$ is untrapped, so that the area decreases as we move inwards on the null hypersurface $\partial I_{W}$. Equality in the first relation can only happen if there are multiple minima on $\Sigma_{\text{min}}$ and $\Sigma_{\text{min}}\cap \partial I_{W}[\upsilon]$ is another of those minima; equality in the second relation can only happen if $\upsilon$ lies on $\Sigma_{\text{min}}$ since otherwise there is some focusing. But $\upsilon$ cannot be a minimum area surface on any Cauchy slice, since the outward spacelike expansion on $\Sigma$ (which is a linear combination of $\theta_{(k)}$ and $-\theta_{(\ell)}$) is nonzero. Hence the inequality is strict.

The situation is more complicated for untrapped surfaces inside an event horizon. But it is likely that a maximin argument [@Wal12] can be made towards the same conclusion.[^17]

### Trapped Surfaces {#sec:trapped}

An opposite bound can be proven for a class of trapped surfaces, i.e. surfaces with: $$\begin{aligned}
&\theta_{(\ell)}<0\\
&\theta_{(k)}<0.\end{aligned}$$ Here we wish to show that $S^{\text{(outer)}}$ always exceeds the area, but we need some additional assumption to rule out cases where the trapped surface lies in a “bag of gold” region [@Whe64; @Mar08] behind another black hole with small area. For this reason, we require our trapped surface to be to the outward-null future of a minimar surface:

#### Lower Bound on $S^{\mathrm{(outer)}}$:

Let $\mu$ be a minimar surface (with $\theta_{(\ell)}<0$) and let $\tau$ be a trapped surface on the null congruence $N_{+k}$ fired in the $+k^{a}$ direction from $\mu$. Then: $$S^{(\mathrm{outer})}[\tau] \geq \frac{\mathrm{Area}[\tau]}{4G\hbar}.$$

This is almost immediate from the definition of $\tau$: $$\frac{\mathrm{Area}[\tau]}{4G\hbar} < \frac{\mathrm{Area}[\mu]}{4G\hbar} = S^{(\mathrm{outer})}[\mu] \leq S^{(\mathrm{outer})}[\tau],$$ where the first inequality follows by focusing, the equality follows from the fact that $\mu$ is minimar; the second inequality follows from $O_{W}[\tau]\subset O_{W}[\mu]$, so to obtain $S^{\mathrm{(outer)}}[\tau]$ we must coarse grain over fewer constraints than we do to obtain $S^{\mathrm{(outer)}}[\mu]$. Therefore the former must be at least as large as the latter. This establishes that for all trapped surfaces on $\partial O_{W}[\mu]$ for a minimar surface $\mu$, the area gives a lower bound on the outer entropy.

The construction is similar for anti-trapped surfaces ($\theta_{(\ell)}>0$). We expect, however, that $S^{(\mathrm{outer})}[\tau]$ cannot be made arbitrarily large, since $O_W[\tau]$ includes a boundary slice $B$, and in AdS/CFT we expect that there is a maximum entropy state compatible with a finite ADM mass.
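As a compact summary of this section together with the main result (nothing new is being claimed here), for surfaces homologous to $B$ we have
$$S^{(\mathrm{outer})}[\upsilon]<\frac{\mathrm{Area}[\upsilon]}{4G\hbar} \;\;(\text{untrapped } \upsilon \text{ outside the horizons}), \qquad
S^{(\mathrm{outer})}[\mu]=\frac{\mathrm{Area}[\mu]}{4G\hbar} \;\;(\text{minimar } \mu), \qquad
S^{(\mathrm{outer})}[\tau]\geq\frac{\mathrm{Area}[\tau]}{4G\hbar} \;\;(\text{trapped } \tau \text{ on } N_{+k}[\mu]),$$
so the area computes the coarse-grained entropy exactly only in the marginally trapped case.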
Boundary Perspective: The Simple Entropy {#sec:simple} ======================================== Our focus has thus far been on proving that the outer entropy of a minimar surface $\mu$ — the entropy associated with coarse graining over inner wedge of $\mu$ subject to knowing its outer wedge — is proportional to the area of $\mu$. Aside from the use of HRT to interpret the area of the extremal surface $X$ as $S_{vN}$, this statement has been defined entirely on the *bulk* side of the duality. Yet to get a fully holographic definition of the coarse-grained entropy of a black hole, we need to define the dual quantity on the *boundary* side, using as little of the bulk physics as possible. We therefore give a proposal for the boundary interpretation, *the simple entropy*, and we prove that it holds under a set of assumptions. The simple entropy is defined as a coarse graining of $S_{vN}$ obtained by maximizing $S_{vN}$ subject to fixing the expectation values of “simple” boundary operators with “simple” sources turned on. We refer to sources defined on some set of boundary points $V \in \partial M$ as *simple* if the bulk fields they produce propagates causally into the bulk from the points in $V$. We will define a boundary operator *simple* if the corresponding infinitesimal sources are simple.[^18] The reason why we call these operators “simple” is that sufficiently complicated operators in a region $R$, e.g. precursers [@PolSus99; @FreGid02], should be able to access data arbitrarily deep in the entanglement wedge of $R$, including in the inner wedge behind a minimar surface $\mu$. In our classical bulk regime, all local operators are simple, and it should be sufficient to fix the one-point functions of these local operators (since the higher $n$-point functions are determined from these). Furthermore it is sufficient to restrict attention to local simple sources, although not all local sources are simple, e.g. the exponentiation of the Hamiltonian $H$ can change the time-localization of fields acausally [@RobSta14]. Our coarse-graining procedure to define the simple entropy is: 1. Choose a boundary initial time slice $t=t_{i}$, and a very late-time cutoff $t = t_f$ (in order to prevent recurrences),[^19] 2. Fix the one-point functions of local operators after $t_i$ (but before $t_f$) in the presence of all possible simple sources turned on after time $t_{i}$, but without changing the state $\rho$ at $t_i$ (so that there is retarded propagation from the sources). \[2\] 3. Find the state $\rho'$ the maximizes the von Neumann entropy $S_{vN}$ over all of the states with the same simple one-point functions as defined in \[2\] for $\rho$. In short: \[eq:simple\] S\^\[t\_[i]{}\]\_[’]{} , where \[eq:E\] E = [T]{} , $J[t']$ is a simple source, and ${\cal O}_{J}$ the corresponding simple operator.[^20] (Note that $S^{\mathrm{(simple)}}[t_{i}]$ is not quite a purely boundary construct, as the definition of simple sources references the behavior of the corresponding bulk fields. We hope that in the future, a purely boundary description of “simple” sources can be provided.) We now wish to relate the simple entropy to the outer entropy of some minimar surface $\mu$ in the bulk. The following construction is natural: take the slice $t_i$ and shoot in a future-directed null hypersurface $N_\ell[t_i] \equiv \partial I^+[t_i]$. See Fig. \[fig:simple\]. 
In a black hole spacetime, there ought to exist some slice $\mu$ of $N_\ell[t_i]$ for which the outgoing expansion $\theta_{(k)}$ vanishes.[^21] We expect that the outermost such slice $\mu$ generically satisfies $\nabla_\ell \theta_{(k)} < 0$;[^22] by focusing, $\mu$ has minimal area on the part of $N_\ell$ outside $\mu$. Hence, $\mu$ should be minimar, at least generically. Note that any such $\mu$ lies outside of any past horizon. ![Generating a minimar surface by firing a null congruence $N_{\ell}[t_{i}]$ into the bulk in the $+\ell^{a}$ direction from $t_{i}$. This will not always coincide with $N_{-\ell}$ fired from $\mu$, but our procedure guarantees that there is no matter thrown into the bulk between $N_{-\ell}\cap \partial M$ and $t_{i}$.[]{data-label="fig:simple"}](SimpleEntropy.pdf){width="50.00000%"} In this section, we consider only minimar surfaces $\mu$ that are obtained from boundary time slices $t_i$ in this manner. There can exist other minimar surfaces which cannot be constructed in this way. Note also that if $N_\ell[t_i]$ forms caustics before it reaches $\mu$, then $N_{-\ell}[\mu] $ (the past boundary of $O_{W}[\mu]$) will not coincide with $N_\ell[t_i]$! But by bulk causality, $N_{-\ell}[\mu]$ lies nowhere to the future of $N_\ell[t_i]$. However, in this case the domains of dependence agree: $N_{-\ell}[\mu] = N_{-\ell}[\mu]$. This implies that the data in $O_W[\mu]$ can be reconstructed from the part of $O_W[\mu]$ which is to the future of $N_\ell[t_i]$, so long as we also know what the sources are between $t_\mu = N_{-\ell}[\mu] \cap \partial M$ and $t_i$. See Fig. \[fig:simple\]. We now evaluate $S^{\mathrm{(simple)}}[t_{i}]$ by a three-step argument: 1. First we hold the sources $J(t > t_i)$ fixed, and identify the “reconstructible region” $R[t_i]_J = D[I^+[t_i] \cap I^-[\partial M]]$ of the bulk which can be fully reconstructed from the one-point boundary data,[^23] and that furthermore no bulk information independent of $R[t_i]_J$ can be recovered. 2. Next we allow the sources to vary; in this case the size of $R[t_i]_J$ may depend on $J$, but it always remains within the outer wedge $O_W[\mu]$ of the new spacetime $M'$. Hence we can reconstruct at most $O_W[\mu]$, and it follows that $S^{\mathrm{(simple)}}[t_{i}] \ge S^{\mathrm{(outer)}}[\mu]$. 3. Finally we wish to vary the sources $J(t > t_i)$ in such a way as to maximize the extent of $R[t_i]_J$. We will argue that, for certain classes of states, there exists a $J$ such that $R[t_i]_J = O_W[\mu]$, so that *all* of the data in $\partial^-O_W[\mu]$ is visible to the boundary, and hence \[SisO\] S\^\[t\_[i]{}\] = S\^\[\]. We can prove these results to all orders of perturbation theory around equilbrium (e.g. at late times in an AdS-Kerr ringdown process) — and also of course for states that differ from such near-equilbrium states by the addition of large simple sources after time $t_i$. We therefore conjecture that the equality in fact holds for *all* classical states in the holographic regime. #### Step 1: $R[t_{i}]_{J}$ with fixed $J$: Given the one-point data in a fixed state with fixed simple sources, we make use of the HKLL prescription [@HamKab05; @HamKab06; @HamKab06b] to reconstruct $I^+[t_i] \cap I^-[\partial M]$, the region causally accessible from the boundary after $t_i$. 
Using the bulk equations of motion, we can then reconstruct the domain of dependence of this region, which we call $R[t_{i}]_{J} = R[t_i]_J = D[I^+[t_i] \cap I^-[\partial M]]$.[^24] The HKLL procedure solves a non-standard Cauchy problem by evolving the boundary one-point data “sideways” into the bulk via the bulk equations of motion; in this way the one-point functions can be used to reconstruct all local bulk operators[^25] anywhere in $R[t_{i}]_{J}$. The HKLL procedure has been rigorously established for free field evolution at least if we assume any of (a) spherical symmetry [@HamKab06], (b) Killing symmetry [@Tat07], or (c) analytic bulk sources [@Holmgren].[^26] It is also possible to include interactions, at least perturbatively in 1/N [@Kab11; @HeeMar]. Below we will assume that the HKLL procedure can be performed perturbatively on general globally hyperbolic spacetimes, outside of event horizons. Aside from the information in $R[t_i]_{J}$, no additional independent information about the spacetime $M$ can be reconstructed via the one-point data to the future of $t_{i}$. To see this, consider a modification of the fields localized in the spatially complementary region to $R[t_i]_{J}$, which we term $R^{c}[t_i]_J$. Since we are in a regime where bulk fields propagate causally, it is clear that such a modification cannot affect local operators on the boundary after the time $t_i$. Hence it is not encoded in our one-point data, and all of the reconstructible data is in $R[t_i]_J$.[^27] #### Step 2: Upper bound on $R[t_{i}]_{J}$ with varying $J$: We now allow arbitary simple boundary sources $J$ to be turned on after time $t_{i}$ (while holding fixed all sources before $t_{i}$). Since the sources are simple, the resulting matter fields propagate causally into the bulk. Hence the change in the geometry is localized in the region $I^+[t_{i}]$, and in particular the null hypersurface $N_\ell[t_i]$ is unaffected. This allows us to compare the size of the invariant region $R[t_{i}]_{J}$ for two different choices of $J$, by comparing how much of the invariant hypersurface $N_\ell[t_i]$ is included in $R[t_{i}]_{J}$. Turning on certain sources introduces more infalling matter and causes the event horizon to move outwards along $N_\ell[t_i]$; turning on other sources removes the infalling matter and push the event horizon inwards along $N_\ell[t_i]$. However, there is no set of sources that can shift the event horizon so far inwards that $\mu$ lies outside of it [@Haw71; @EngWal14]. This is a consequence of a theorem of Hawking [@Haw71], which states that marginally trapped surfaces always lie behind event horizons. Thus, there is no set of simple sources $J'$ that we can turn on to produce a geometry in which $\mu$ lies inside $R[t_{i}]_{J'}$. For the same reason, the null hypersurface $N_{+k}[\mu]$, which is foliated by trapped surfaces, also lies behind the event horizon. And obviously $R[t_{i}]_{J'}$ also cannot dip to the past of $N_{-\ell}[\mu]$. This shows that R(t\_[i]{})\_[J’]{}O\_[W]{}\[\], for any modified sources $J'$. So far we have defined the reconstructible region $R[t_{i}]_{J'}$ on $M'$, the geometry corresponding to the modified sources $J'$. Since the hypersurface $N_\ell[t_i]$ is invariant, we can use it to define the corresponding reconstructible region $R[t_{i}]_{J'}$ on $M$, the original manifold with sources $J$. This is simply the domain of dependence of the part of $N_\ell[t_i]$ that we can reconstruct, which contains the same data on both spacetimes. 
(This may well be larger than the region $R[t_{i}]_{J} \subset M$ which could have been reconstructed using the original sources, but in no case can it be larger than $O_{W}[\mu]$ since it still does not extend past $\mu$ on $N_\ell[t_i]$.) In other words, to reconstruct a field $\phi$ somewhere in $R[t_{i}]_{J} \subset M$, we simply evolve it back to initial data on $N_\ell[t_i]$ using the *original* boundary sources $J$, and then we turn on the *new* sources $J'$ that move the causal horizon inwards, making it visible to the boundary after time $t_i$. The one-point functions $\langle O\rangle_{J}$ thus allow us to fully reconstruct (via HKLL) *at most* the outer wedge $O_{W}[\mu]$ of $M$. That is, the set of data used to compute the simple entropy is a (possibly improper) subset of the set of data used to compute the outer entropy. Since both entropies involve maximization subject to these constraints, we conclude that $S^{\mathrm{(simple)}}[t_{i}] \geq S^{\mathrm{(outer)}}[\mu]$, i.e. the simple entropy is either equivalent to, or else coarser than, the outer entropy. #### Step 3: Maximizing $R[t_{i}]_{J}$: We start by considering the case where $\rho$ is perturbatively close to a state $\rho_{stat}$ of thermal equilibrium, which is dual to a stationary geometry $M_{stat}$. In $M_{stat}$, $R(t_{i})=O_{W}[\mu]$ (because $\mu$ lies on the Killing horizon). In the perturbed state $\rho$, there exists matter falling across the event horizon $H_{EH}$, which we need to remove to cause $\mu$ to sit on $H_{EH}$. To any order in perturbation theory, we can regard this matter as crossing $H_{EH}$ on the original background $M_{stat}$. We modify the state by removing the matter crossing $H_{EH}$ while keeping the data on $N_{\ell}[t_i]$ fixed.[^28] This can be done in the bulk by attaching a stationary null hypersurface $N_{+k}[\mu]$ to $\mu$ via a similar construction as given in Sec. \[sec:main\]; we call this modified spacetime $M'$ (as in the previous step). We can now use HKLL to solve for the corresponding boundary sources $J'[t>t_i]$, which must differ from the original sources $J[t>t_i]$ since otherwise the spacetimes would be the same (and $\mu$ would already be a cross-section of $H_{EH}$). These are the sources that are needed to “turn off” matter falling across the horizon $H_{EH}$. Note that due to caustics and intersections, $N_{\ell}[t_{i}]$ need not coincide with $N_{-\ell}[\mu]$. This is fine, as $N_{-\ell}[\mu]$ will always lie to the past of $N_{\ell}[t_{i}]$, and thus no new sources are present in th region between the two congruences. We now consider the boundary state which agrees with $\rho$ prior to $t_{i}$, but in which we turn on the $J'[t>t_i]$ rather than $J[t>t_i]$. Because $J'$ is simple, a classical bulk dual exists. This bulk dual is none other than $M'$, because the data on $N_{\ell}[t_i]$ together with the boundary sources $J[t>t_i]$ allows us to determine (via future-directed Cauchy evolution) the data on the horizon $H_{EH}$. This allows us to recover $O_{W}[\mu]$ to whichever order in perturbation theory we are working in. As explained in step 2, we can recover the outer wedge in $M$ as well as $M'$. Thus, we find that for $\rho$ perturbatively close to $\rho_{stat}$, the outer and simple entropies agree: $S^{\mathrm{(simple)}}[t_{i}] = S^{\mathrm{(outer)}}[\mu]$. To see why we only work perturbatively, consider the case where $\mu$ is a finite distance away from $H_{EH}$. 
In this case even if we turn off the matter on $H_{EH}$, we are not guaranteed that $\mu$ lies on $H_{EH}$, as there may be another minimar surface in the way. This is not possible in perturbation theory because on $N_{\ell}[t_i]$ there is a unique minimar surface on the Killing horizon $H_{EH}$ of the background spacetime $M_{stat}$, which remains unique under small perturbations.[^29] Furthermore, it is possible that HKLL is valid only perturbatively, in which case we not justified in using it when $\mu$ is deep inside the black hole. It is also clear that this equality holds for a state $\rho'$ that is prepared from the near-equilibrium state $\rho$ above by turning on any additional simple sources after $t_i$, even if these sources are large (so that perturbation theory is not valid). Explanation for the Second Law {#sec:secondlaw} ============================== One consequence of the equivalence between area and entropy for minimar surfaces is a statistical interpretation of the area law for certain sorts of local horizons [@Hay93; @AshKri02; @BouEng15a], as a second law of thermodynamics. A hypersurface $\mathcal{H}$ foliated by marginally trapped surfaces (and satisfying certain regularity conditions) is known as a future holographic screen  [@BouEng15a; @BouEng15b] (or a future trapping horizon [@Hay93]). They can be defined in a way which is local in time, but are highly nonunique. Such a hypersurface can be timelike, spacelike, or null in different parts of $\mathcal{H}$. But if a black hole settles down to a stationary configuration, then at late times its causal horizon coincides with a null holographic screen $\mathcal{H}$. These holographic screens obey an area law: the area of the marginally trapped surfaces increases with evolution along $\mathcal{H}$ [@Hay93; @AshKri02; @GouJar06; @BouEng15a; @SanWei16] when moving towards the *past* (on a timelike segment) or *outwards* (on a spacelike segment). To apply our results, we need to consider the case where the holographic screen is foliated by minimar surfaces, which by Requirement \[def:minimarCross\] satisfy $\nabla_\ell \theta_{(k)} < 0$. From this inequality, it follows from the NCC that $\mathcal{H}$ must be spacelike or null [@Hay93], in which case they are also known as dynamical horizons [@AshKri02]. In general there will be multiple holographic screens foliated by minimar surfaces on the same black hole background. As an example, for any slicing of of a black hole spacetime $M$ into Cauchy hypersurfaces $\Sigma[t]$, the apparent horizon (i.e. the outermost marginally trapped surface on each $\Sigma(t)$) satisfies $\nabla_\ell \theta_{(k)} \le 0$ [@HawEll], and thus — assuming there is no homologous surface of lesser area outside of it[^30] — generically satisfies the criteria to be a minimar surface. The evolution with $t$ would then define a holographic screen $\mathcal{H}$ foliated by minimar surfaces. In general, the location of $\mathcal{H}$, and hence the outer wedges $O_W[\mu]$ used for coarse graining, will depend on the choice of foliation $\Sigma[t]$. Our derivation of the second law will hold for *any* holographic screen $\mathcal{H}$ foliated by minimar surfaces, whether or not it is obtained by one of the construction described in this paragraph. As stated above, the area of the minimar surfaces $\mu$ will increase as we evolve outwards along $\mathcal{H}$: \[areainc\] \[\_2\] \[\_1\], where $\mu_2$ is further outwards than $\mu_1$. 
This can be proven by geometrical means using the NCC, but we will now provide a simple statistical-mechanical explanation for the area increase in terms of the outer entropy. First note that the corresponding outer wedges are nested: $O_W[\mu_2] \subset O_W[\mu_1]$. Hence there is less data in $O_W[\mu_2]$ than $O_W[\mu_1]$. This is illustrated in Fig. \[fig:nesting\]. It follows that $$S^{\text{(outer)}}[\mu_2] \geq S^{\text{(outer)}}[\mu_1],$$ since we are maximizing $S_{vN}$ with respect to fewer constraints at $\mu_2$. But $S^{\text{(outer)}}[\mu] = \text{Area}[\mu]/4G\hbar$ for each surface, so the area increase inequality follows automatically.[^31]

![Reproduced from [@EngWal17b]. The outer wedges of the marginally trapped surfaces constituting a holographic screen are nested: evolution along the screen from the leaves labeled 1, 2, and 3 corresponds to progressively larger outer entropy. On the boundary, this translates into a timelike law of outer entropy increase: $S^{\mathrm{(outer)}}[t_{1}]<S^{\mathrm{(outer)}}[t_{2}]<S^{\mathrm{(outer)}}[t_{3}]$.[]{data-label="fig:nesting"}](OuterWedgeNesting.pdf){width="33.00000%"}

This second law also has an appealing interpretation on the boundary side. Suppose that we obtain our holographic screen $\mathcal{H}$ by shooting in null surfaces $N_\ell(t)$ from a Cauchy foliation $\Sigma[t]$ of the *boundary* $\partial M$, as in Sec. \[sec:simple\]. (Such a holographic screen always lies outside of any past horizons.) In this case, the boundary interpretation of the growth of $\mathcal{H}$ is an increase in the simple entropy $S^{\text{(simple)}}[t_i = t]$. (Here we are holding the very late time cutoff $t_f$ fixed as we vary $t_i$.) Consider two different initial times $t_i = t_1$ and $t_i = t_2$, with $t_2 > t_1$. Because we can only turn on simple sources and measure simple operators after $t_i$, it follows that there are fewer simple experiments that can be done after $t_2$ than after $t_1$. Hence, $$S^{\text{(simple)}}[t_2] \geq S^{\text{(simple)}}[t_1],$$ since the later entropy is maximized subject to fewer constraints. Since at any time $S^{\text{(simple)}}[t_i] = S^{\text{(outer)}}[\mu[t_i]]$, this second law is equivalent to the previous ones, but now it is expressed in terms of boundary quantities.

Our construction suggests a natural perspective on proving the ordinary second law of thermodynamics even in non-holographic theories. The reader may think it odd that $S^{\text{(simple)}}[t_i]$ has been defined relative to a set of measurements taking place at *all* times later than $t_i$ (up to some very late cutoff $t_f$), rather than being restricted to times near $t_i$. But this is actually very natural from the perspective of proving the second law. First recall the standard (not completely satisfactory) textbook analysis of the second law. Suppose for example we start with a pure state at time $t_0$ and then allow it to begin to thermalize. To define a nontrivial second law (where entropy increases with time), we need a notion of coarse-graining which allows us to “forget” some information that was available at $t_0$, once we have arrived at a later time $t_1$ when this information is no longer accessible to macroscopic measurements. This allows us to define a coarse-grained entropy $S^{\text{(coarse)}}[t_1] > 0$. However, if the forgotten information has not fully thermalized, then there is the danger that at a still later time $t_2$, the forgotten information may re-emerge into the macroscopic degrees of freedom, causing a decrease of $S^{\text{(coarse)}}$ from $t_1$ to $t_2$!
It is very hard to prove rigorously that this cannot happen in reasonable matter systems. Our approach neatly sidesteps this issue by defining the coarse-grained entropy relative to observations made *anywhere* in the time interval $(t_i, t_f)$, with the late time cutoff $t_f$ taken very large. That is equivalent to saying, that if any reasonable future experiment could have recovered some piece of information, then (almost by definition) we ought *not* to have coarse-grained over that piece of information for purposes of the second law, since it has not been irreversibly thermalized into the microscopic degrees of freedom. Maximizing $S_{vN}$ subject to to all information accessible in $(t_i, t_f)$ automatically excludes such pathological cases of information return, and makes it easy to prove a second law mathematically for all systems, without the use of additional postulates that are difficult to justify. The price that we pay is that such an increasing coarse-grained entropy may be hard to evaluate, in situations where it is unclear whether some information is permanently lost. Fortunately, this turns out not to be an issue in the holographic context, since the duality to black hole horizons makes a sharp division between the information that is accessible, and the information that is (classically) lost forever. Prospects {#sec:prospects} ========= We have shown that in black hole physics, the area of certain marginally trapped “minimar surfaces” have a natural interpretation as a coarse-grained holographic entropy, which we have called the “outer entropy”. We have given a statistical explanation for why the corresponding holographic screens obey a second law, and (at least perturbatively) have shown that this is equivalent to a second law on the boundary, expressed in terms of the “simple entropy”. As described at the end of the previous section, this boundary second law provides an interesting new perspective on the thermodynamics of ordinary (not necessarily holographic) systems. Leaving aside our proposed boundary dual, our only use of holography in the bulk has been the HRT formula for the holographic entanglement entropy . Thus, all of our bulk results about the outer entropy also extend to the case of asymptotically flat bulk black holes, assuming that (as is plausible) the area of extremal surfaces also corresponds to some von Neumann entropy in this case (perhaps, defined in terms of a hypothetical flat space holographic dual – see [@BarCom06; @Bag10; @BagDet12; @Com12; @BagBas14; @JiaSon17] and references therein). Another natural extension of our work is to boundary-anchored marginally trapped surfaces in AdS. We expect that similar results will hold, but in this article we have only covered the case of nonminimal extremal surfaces (see Sec. \[sec:extremal\] for the details). This article has not explained the second laws that are known to be obeyed by timelike holographic screens [@BouEng15a; @BouEng15b] and by causal horizons [@Haw71]. Although holographic screens obey a second law in cosmology [@BouEng15c], it is especially unclear how to extend our results to this case, since in a closed universe the minimal area surface dual to e.g. a de Sitter horizon is always the empty set. Another direction that remains to be addressed is the extension to semiclassical settings, in which the black hole is coupled to quantum field theory. 
In this case, we expect that we need to replace the area with a *generalized entropy* which includes bulk entropy corrections [@FauLew13; @EngWal14]. Hence, we will need to consider *quantum* marginally trapped surfaces [@EngWal14], and we will end up with a second law for certain *Q-screens* [@BouEng15c]. However, in order to construct our stationary null surface $N_{-k}$, we will need a better understanding of when, given the data outside the surface, we can saturate inequalities such as monotonicity of relative entropy. See [@Wal17PRL] for discussion of a relevant conjecture. If our results can be extended to the semiclassical regime, they are likely to provide an interesting perspective on the firewalls puzzle [@AMPS; @AMPSS; @Mat09]. Recall that the paradox here is that strong subadditivity seems to prevent old black holes (that are highly entangled with their early Hawking radiation) from having a normal interior. A quantum version of our result could be used to construct the “best possible” (i.e. entropy maximizing) interior of the black hole as a function of time, which might reveal interesting behavior across the transition to the “firewall” phase. Finally, we would like to speculate on what our results mean for nonperturbative quantum gravity. It is natural to suppose that the Bekenstein-Hawking entropy of any surface $\sigma$ corresponds to the entropy of some set of Planck-scale boundary qubits sitting on $\sigma$ [@Sor83; @Jac95; @Sor05; @BiaMye12]. If these qubits can be approximately localized, this explains why the entropy is an extensive (geometric) intergral on $\sigma$. Our results show that if $\sigma$ is a minimar surface, it is possible to act on the state in a way that maximally mixes the qubits, without changing the classical geometry outside of $\sigma$. These degrees of freedom can therefore be regarded as independent degrees of freedom. On the other hand, for an untrapped surface, it is *not* usually possible to fully mix the qubits without changing the geometry outside (see Sec. \[sec:untrapped\]). So these degrees of freedom cannot become fully mixed without adding energy from outside. Finally, in the case of a trapped surface, the outer entropy can exceed the total entropy of the surface qubits. In this case, there must be some other source of boundary entropy which is not fully accounted for by the Planckian qubits near $\sigma$. For a model of holographic quantum gravity in the bulk to be successful, it must be able to explain why there is a match between the area and the outer entropy for minimar surfaces, but not for these other classes of surfaces. Acknowledgments {#acknowledgments .unnumbered} =============== It is a pleasure to thank S. Alexakis, R. Bousso, S. Fischetti, G. Horowitz, H. Kunduri, J. Maldacena, D. Marolf, F. Pretorius, G. Remmen, A. Shao, D. Stanford, H. Verlinde, S. Weinberg, E. Witten, and B. White for discussions. The work of NE was supported in part by NSF grant PHY-1620059 and in part by the Simons Foundation, Grant 511167 (SSG). NE thanks the Stanford Institute for Theoretical Physics for hospitality during the final stages of this work. The work of AW was supported in part by NSF grant PHY-1314311, the Stanford Institute for Theoretical Physics, the Simons Foundation (“It from Qubit”), the Institute for Advanced Study, and the S. Raymond and Beverly Sackler Foundation Fund. HRT Surfaces are Minimar Surfaces {#sec:HRT} ================================= We prove in this section that HRT surfaces are automatically minimar. 
By definition, HRT surfaces are homologous to the boundary; by the maximin method [@Wal12], they are also minimal on a complete Cauchy slice. Hence they satisfy Requirement \[def:minimarMin\] for a minimar surface. To show Requirement \[def:minimarCross\], that $\nabla_{k}\theta_{(\ell)} \le 0$, we prove the following:

#### Theorem:

Let $X$ be an extremal surface homologous to $B$ which is minimal on a Cauchy slice $\Sigma$ of $O_{W}[X]$. Then $\nabla_{k}\theta_{(\ell)} \le 0$.

Consider firing out the null congruence $N_{+k}[X]$ in the $k$ direction from $X$, and let $\sigma$ be a cross-section of $N_{+k}[X]$. We may now fire the null congruence $N_{-\ell}(\sigma)$ from $\sigma$ in the $-\ell$ direction (i.e. towards the past and away from $X$), towards the slice $\Sigma$ on which $X$ is the minimal area surface. We know that $\sigma'=N_{-\ell}(\sigma)\cap \Sigma$ must have area greater than or equal to the area of $X$. By taking $\sigma$ to be sufficiently close to $X$, we can guarantee that $\sigma'\subset U$, where $U$ is any open neighborhood of $X$ on $\Sigma$. Then: $$\mathrm{Area}[\sigma']\geq\mathrm{Area}[X]\geq\mathrm{Area}[\sigma],$$ where the first inequality follows from the minimality of $X$ and the second inequality follows by the focusing theorem. This means that the area of cross-sections of $N_{-\ell}(\sigma)$ has to grow (or remain unchanged) in moving from $\sigma$ to $\sigma'$, i.e. the expansion $\theta_{(\ell)}$ at some point on $N_{-\ell}(\sigma)$ between $\sigma$ and $\sigma'$ has to be nonpositive. By the null energy condition, once the expansion is negative, it remains nonpositive. This means that $\left. \theta[N_{-\ell}(\sigma)]\right|_{\sigma}\leq0$. Since $\sigma$ may be taken to be arbitrarily close to $X$, we find: $$\theta_{(\ell),k}[X]\leq 0.$$

[^1]: Our coarse-grained entropy depends on the choice of the surface $\sigma$. We believe that this is analogous to the ambiguity in thermodynamics, where it is also necessary to devise a prescription for a demarcation between macrostate and microstate, which to some extent is dependent on the scheme.

[^2]: Some techniques in numerical relativity, such as the “turducken black hole” [@Turducken; @Stuffed] or the characteristic-Cauchy matching of [@GomWin97], do use initial data matching across a surface (rather than across a buffer region); but in the former case, the matched initial data is taken to be arbitrary, with allowed violations of the Einstein constraint equations; in the latter case, the matching is adapted specifically to the apparent horizon’s exterior (and to finding the apparent horizon). While some case-by-case examples of initial data matching exist (see [@Bona2009] for an example), we are not aware of existing algorithmic junction conditions besides the specialized ones of [@GomWin97].

[^3]: We will also extend this construction to the case of nonminimal extremal surfaces anchored to the boundary, in order to define a coarse-grained entropy for subregions of the CFT.

[^4]: This is in contrast with complicated nonlocal CFT operators that could modify fields deep in the bulk. For this reason we call these operators “simple”.

[^5]: i.e. except on a measure zero subset. This will be important to allow the gluing constructions that are an essential part of this paper.

[^6]: This equation assumes, as stated in section \[sec:Defs\], that $k^a$ is orthogonal to the hypersurface, i.e. that the vorticity $\omega_{ab}$ vanishes.
[^7]: The HRT prescription can also be used to calculate the von Neumann entropy of subregions of the boundary that do not constitute a complete connected component, but except in Sec.\[sec:extremal\] our results will be shown in the case where $R = B$. However, we believe that much of what we say could be extended to the case of a general region $R$. [^8]: This excludes spacetimes featuring e.g. an inflating de Sitter asymptotic region behind the horizon (see [@FisMar14]), where maximin/HRT surfaces do not necessarily exist in the (real, nonconformally compactified) spacetime geometry, and the holographic interpretation is unclear. [^9]: Technically, the proofs of the initial data problem guarantee only local existence and normally assume a somewhat higher differentiability order than we do. We will discuss these subtleties in Sec. \[sec:MultiJunction\]. [^10]: As a sanity check, one could test our initial data against the null constraint equations on $N_{k}$. Eqs. - and continuity of quantities tangent to $N_{k}$ and $N_{\ell}$ together with the constraint equations imply that the stress-energy tensor has at most step-function discontinuities, as desired. [^11]: When $\mu$ is not simply connected, we should also demand matching of the integral of $A_i$ along noncontractible Wilson loops. [^12]: The shear $\varsigma_{v}$ may be discontinuous across $\mu$, but such solutions are believed to be valid characteristic initial data [@Ren90]; indeed, Ref. [@LukRod12; @LukRod13] studied a discontinuous shear sourcing a $\delta$-function in the curvature, which was still distributionally well-behaved. (We expect that this continues to be true even if the shear discontinuity reflects off of the AdS boundary.) [^13]: $F_{iu}$ is excluded from the table because it is not needed as initial data for the characteristic initial data formulation of electromagnetism. Furthermore, continuity of $F_{iu}$ and $F_{iv}$ is not required as junction conditions. [^14]: Note that the full CPT is required for this, since the twist $\chi_i$ is odd under $P$ and $T$ separately, while the gauge potential $A_a$ has an extra sign in its transformation under $C$ and $T$ separately. [^15]: We believe that the resulting spacetime is the bulk dual of the GNS construction [@GelNeu43; @Seg47] acting on the state $\rho_{B}'$ in the algebra of $B$. The GNS construction is a natural purification of the state respecting all symmetries, including a $\mathbb{Z}_2$ antiunitary symmetry relating $B$ to a complementary system $\widetilde{B}$. [^16]: A similar conclusion is reached in [@NomRem18]. [^17]: The argument would roughly work by constructing a surface which is maximin in $I_{W}[\upsilon]$, and showing that its area is smaller than that of $\upsilon$; the conclusion then follows immediately. A possible issue, however, is the equivalence of maximin and HRT if the maximin surface lies on $\partial I_{W}[\mu]$; a complete proof would likely require showing that the maximin surface cannot do so. [^18]: The effects of finite sized sources are given by a time-ordered exponential, whereas in the case of operators we are only be interested in their expectation value; that is why we only require “infinitesimal” causality in the definition of simplicity for operators. Thus the simple operators lie in a vector subspace of operators, while the set of simple sources may not have a vector space structure. [^19]: We will assume that $t_f - t_i$ is much longer than any other time scale in the problem. 
[^20]: As stated above, in the classical bulk regime, the ${\cal O}_{J}$’s can themselves be written as spatial integrals over local operators ${\cal O}_{J}(t',x')$, but for ease of notation we have not written the spatial dependence explicitly in Eq. . [^21]: To prove this rigorously, we would need to analyze the analogue of Eq. when $\theta_v \ne 0$. [^22]: On a *smooth, spacelike* Cauchy slice, an outermost marginally trapped surface is generically guaranteed to exist [@AndMarMet08], and satisfies the “stability” property $\nabla_\ell \theta{(k)} \le 0$ [@AndMar08]. Since there always exist spacelike slices very close to $N_\ell[t_i]$, we therefore expect our strict form of stability to hold generically. [^23]: This is equivalent to the one-point data of [@KelWal13], where their domain of dependence is taken to be the whole boundary. That work erroneously conjectured that, in the case where there are no boundary sources, maximizing $S_{vN}$ subject to this one-point data would give Area$[N_\ell[t_i] \cap H^+]$, where $H^+ = \partial I^-[\partial M]$ is the future event horizon. [^24]: In situations involving caustics, the domain of dependence is larger. [^25]: In a regime where gravitational back-reaction is important, it is necessary to “dress” these local bulk operators with suitable gravitational field lines extending out to the boundary $B$. [^26]: There exists a counterexample to HKLL for a scalar field with evolution equation $\Box \phi = V \phi$, where $V$ is a complex potential (without analyticity, spherical symmetry, or stationarity) [@AliBao95]. However, this theory is unphysical due to $V$ being complex. [^27]: This does not exclude the possibility that we may be able to deduce *some* information outside of $R[t_i]_J$ from $R[t_i]_J$ using the bulk equations of motion and/or constraint equations, but since this information is determined by $R[t_i]_J$ it is not independent data. That is why, in the argument above, we consider only valid initial data that does not change $R[t_i]_J$. [^28]: In order to comply with the No Hair Theorem [@MTW; @Isr67; @Isr68; @Car70], it may be necessary to leave some matter crossing the horizon to the future of some very late time $t_f$, but this should make an exponentially small difference to the event horizon location at early times. Note also that, since an equilbrium black hole should be stable under small perturbations, the perturbations to the fields should not diverge at late times. [^29]: This last point follows from the stability requirement that $\nabla_{k}\theta_{(\ell)}<0$ for a minimar surface. [^30]: This ought to be true at least for small perturbations to a stationary black hole. [^31]: Unfortunately, we do not know how to give an explanation of the area law for the timelike (or mixed signature) parts of holographic screens. In this case, the marginally trapped surfaces on the timelike component have $\nabla_{k}\theta_{(\ell)}>0$ and are thus not minimar surfaces, so we cannot prove that their area corresponds to the entropy associated to ignorance of their interior. Another obstacle is that in the timelike case, their outer wedges $O_W[\mu]$ are not nested: we cannot yet explain thermodynamically why the entropy on the timelike component of the holographic screen increases *towards the past*. This apparent contradiction with our usual intuitions about the direction of entropy increase remains a mystery.
--- abstract: 'The *assignment flow* is a smooth dynamical system that evolves on an elementary statistical manifold and performs contextual data labeling on a graph. We derive and introduce the *linear assignment flow* that evolves nonlinearly on the manifold, but is governed by a linear ODE on the tangent space. Various numerical schemes adapted to the mathematical structure of these two models are designed and studied, for the geometric numerical integration of both flows: embedded Runge-Kutta-Munthe-Kaas schemes for the nonlinear flow, adaptive Runge-Kutta schemes and exponential integrators for the linear flow. All algorithms are parameter free, except for setting a tolerance value that specifies adaptive step size selection by monitoring the local integration error, or fixing the dimension of the Krylov subspace approximation. These algorithms provide a basis for applying the assignment flow to machine learning scenarios beyond supervised labeling, including unsupervised labeling and learning from controlled assignment flows.' address: - 'Image and Pattern Analysis Group, Heidelberg University, Germany' - 'Image and Pattern Analysis Group, Heidelberg University, Germany' - 'Mathematical Imaging Group, Heidelberg University, Germany' - 'Image and Pattern Analysis Group, Heidelberg University, Germany' author: - 'Alexander Zeilmann, Fabrizio Savarino, Stefania Petra, Christoph Schnörr' bibliography: - 'ExponentialIntegrators.bib' title: Geometric Numerical Integration of the Assignment Flow --- Introduction {#sec:Introduction} ============ The Assignment Flow {#sec:AssignmentFlow} =================== Geometric Runge-Kutta Integration {#sec:Runge-Kutta} ================================= Linear Assignment Flow, Exponential Integrator {#sec:ExponentialIntegrators} ============================================== Step Sizes, Adaptivity {#sec:Adaptivity} ====================== Experiments and Discussion {#sec:Experiments} ========================== Conclusion {#sec:Conclusion} ========== **Acknowledgements.** This work was supported by the German Research Foundation (DFG), grant GRK 1653.
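As a rough, self-contained illustration of the kind of exponential-integrator step the abstract refers to — advancing a linear ODE $\dot{v} = Av + b$ on the tangent space exactly by the action of a matrix exponential — here is a minimal Python sketch. All function names and the toy data below are ours, and SciPy's `expm_multiply` (a truncated-Taylor action-of-the-exponential routine) merely stands in for the Krylov subspace approximation mentioned in the abstract; the paper's actual flow, parametrization, and step-size control are not reproduced here.

```python
import numpy as np
from scipy.sparse.linalg import expm_multiply


def exponential_euler_step(A, b, v, h):
    """One exact step of the linear ODE  dv/dt = A v + b.

    Uses the standard augmented-matrix trick: the update
        v(t+h) = exp(hA) v + h * phi_1(hA) b
    equals the first n components of exp(hM) [v; 1] with
    M = [[A, b], [0, 0]], so a single action of a matrix
    exponential on a vector suffices.
    """
    n = A.shape[0]
    M = np.zeros((n + 1, n + 1))
    M[:n, :n] = A
    M[:n, n] = b
    w = np.append(v, 1.0)
    return expm_multiply(h * M, w)[:n]


# Toy usage with made-up data (illustrative only).
rng = np.random.default_rng(0)
A = -np.eye(4) + 0.1 * rng.standard_normal((4, 4))
b = rng.standard_normal(4)
v = np.zeros(4)
for _ in range(10):
    v = exponential_euler_step(A, b, v, h=0.1)
print(v)
```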
---
abstract: 'Day-ahead scheduling of electricity generation or unit commitment is an important and challenging optimization problem in power systems. Variability in net load arising from the increasing penetration of renewable technologies has motivated study of various classes of stochastic unit commitment models. In two-stage models, the generation schedule for the entire day is fixed while the dispatch is adapted to the uncertainty, whereas in multi-stage models the generation schedule is also allowed to dynamically adapt to the uncertainty realization. Multi-stage models provide more flexibility in the generation schedule; however, they require significantly higher computational effort than two-stage models. To justify this additional computational effort, we provide theoretical and empirical analyses of the *value of multi-stage solution* for risk-averse multi-stage stochastic unit commitment models. The value of multi-stage solution measures the relative advantage of multi-stage solutions over their two-stage counterparts. Our results indicate that, for unit commitment models, the value of multi-stage solution increases with the level of uncertainty and number of periods, and decreases with the degree of risk aversion of the decision maker.'
author:
- 'Ali İrfan Mahmutoğulları, Shabbir Ahmed, Özlem Çavuş and M. Selim Aktürk [^1] [^2] [^3] [^4]'
title: 'The Value of Multi-stage Stochastic Programming in Risk-averse Unit Commitment under Uncertainty'
---

Unit commitment, risk-averse optimization, stochastic programming.

Introduction {#sec:intro}
============

*Unit commitment* (UC) is a challenging optimization problem used for day-ahead generation scheduling given net load forecasts and various operational constraints [@kazarlis1996genetic]. The output schedule includes on-off status of generators and the production amounts, called *economic dispatch* [@huang2017electrical], for every time step. There has been a great deal of research on deterministic UC models where the problem parameters are assumed to be known exactly [@padhy2004unit]. These models cannot capture variability and uncertainty. Common sources of uncertainty are departures from forecasts and unreliable equipment. The departures from forecasts generally stem from the variability in net load and production amounts, whereas unreliable equipment may result in generator and transmission line outages [@huang2017electrical], [@ruiz2009uncertainty]. The penetration of renewable energy has increased the volatility of power systems in recent years. The production amounts of energy from wind and solar power are not controllable but can only be forecasted [@brown2008optimization].

*Robust optimization* and *stochastic programming* are two common frameworks used to address the uncertainty in UC problems. In robust optimization models, it is assumed that the uncertain parameters take values in some uncertainty sets and the objective is to minimize the worst case cost (cf. [@bertsimas2013adaptive], [@lorca2017multistage], [@zhao2012robust], [@jiang2012robust] and [@wang2013two]). In stochastic programming models, the uncertainty is represented by a probability distribution (cf.
[@cheung2015toward], [@tahanan2015], [@papavasiliou2015applying], [@wang2008security] and [@takriti1996stochastic]). In *two-stage* stochastic programming UC models, the generation schedule is fixed for the entire day before the beginning of the day while dispatch is adapted to uncertainty as in [@caroe1998two; @wang2012chance] and [@zheng2013decomposition]. On the other hand, in *multi-stage* stochastic programming UC models both the generation schedule and dispatch are allowed to dynamically adapt to uncertainty realization at each hour (see for example, [@takriti1996stochastic; @takayuki2004stochastic] and [@jiang2016cutting]). Therefore, they incorporate multistage forecasting information with varying accuracy and express relation between time periods appropriately. However, in general, the multi-stage models are computationally difficult. A detailed comparison of two- and multi-stage models can be found in [@zheng2015stochastic] and [@lorca2016multistage]. The computational challenge of multi-stage models motivates the question on whether the effort to solve them is worthwhile. In [@huang2009value], this question is addressed for a risk-neutral stochastic capacity planning problem. In the present paper, we address this question for risk-averse UC (RA-UC) problems where the objective is a dynamic measure of risk. We provide theoretical and empirical analysis on the value of the *multi stage solution* (VMS) where VMS measures the relative advantage to solve the multi-stage models over their two-stage counterparts. The rest of the paper is organized as follows: In Section \[sec:problem\], we define the RA-UC problem and present two- and multi-stage stochastic models. In Section \[sec:vms\], we define VMS and provide analytical bounds for it. In Section \[sec:computation\], we present results of computational experiments. In Section \[sec:conclusion\], we discuss possible future extensions of the current work. Risk-averse Unit Commitment Problem {#sec:problem} =================================== Deterministic UC formulation ---------------------------- We first present an abstract deterministic formulation of the UC problem. Let $I$ be the number of generators and $T$ be the number of periods. Also, let $\mathcal{I} := \{1,\ldots,I\}$ and $\mathcal{T} := \{1,\ldots,T\}$ be the sets of generators and time periods, respectively. A formulation of the UC problem is as follows: $$\begin{aligned} \text{min} \; & \sum_{t=1}^T f_t(\boldsymbol{u}_t, \boldsymbol{v}_t, \boldsymbol{w}_t) \label{det-obj}\\ \text{s.t.} \; & \sum_{i = 1}^I v_{it} \geq d_t, \; \forall t \in \mathcal{T} \label{det-dem} \\ & \underline{q}_i u_{it} \leq v_{it} \leq \overline{q}_i u_{it}, \; \forall i \in \mathcal{I}, t \in \mathcal{T} \label{det-cap} \\ & (\boldsymbol{u}_{1},\boldsymbol{v}_{1},\boldsymbol{w}_{1}) \in \mathcal{X}_1, \label{det-set1} \\ & (\boldsymbol{u}_{t},\boldsymbol{v}_{t},\boldsymbol{w}_{t}) \in \mathcal{X}_t(\boldsymbol{u}_{t-1},\boldsymbol{v}_{t-1},\boldsymbol{w}_{t-1}), \; \forall t \in \mathcal{T} \setminus \{1\} \label{det-set} \\ & \boldsymbol{u}_t \in \{0,1\}^I, \boldsymbol{v}_t \in \mathbb{R}_+^I, \boldsymbol{w}_t \in \mathbb{R}^k, \; \forall t \in \mathcal{T} \label{det-dom} \end{aligned}$$ Decision variables $u_{it}$ and $v_{it}$ represent the binary on/off status and production of generator $i \in \mathcal{I}$ in period $t \in \mathcal{T}$, respectively. 
The bold symbols $\boldsymbol{u}_t := (u_{1t},u_{2t},\ldots,u_{It})$ and $\boldsymbol{v}_t := (v_{1t},v_{2t},\ldots,v_{It})$ are the vectors of status and production decisions in period $t \in \mathcal{T}$, respectively. The vector $\boldsymbol{w}_t$ denotes auxiliary variables associated with period $t \in \mathcal{T}$. These variables are used to model various operational constraints. The objective (\[det-obj\]) is the sum of production, start-up and shut-down costs in all periods. The function $f_t(\cdot)$ represents the total cost in period $t \in \mathcal{T}$. Constraint (\[det-dem\]) ensures satisfaction of the power demand. Constraint (\[det-cap\]) enforces lower and upper production limits on the generators. Other operational restrictions are represented by constraints (\[det-set1\]) and (\[det-set\]). The temporal relationships between consecutive periods, such as start-up, ramp-up, shut-down and ramp-down restrictions, are modeled by the set constraint (\[det-set\]). Domain restrictions of the decision variables are given by constraint (\[det-dom\]). A concrete version of the above abstract formulation is presented in Appendix \[app:model\]. Uncertainty and Risk models --------------------------- In the deterministic formulation above, net load values are assumed to be known exactly. This is a restrictive assumption in practice. We assume that the net load in period $t \in \mathcal{T}$ is random and is denoted by a random variable $\widetilde{d}_t$ defined on a probability space $(\Omega,\mathcal{F},P)$. Here $\Omega$ is a sample space equipped with the sigma algebra $\mathcal{F}$ and the probability measure $P$. An element of the sample space $\Omega$ is called a *scenario* (or a sample path) and represents a possible realization of the net load values in all periods. The sequence of sigma algebras $ \{\emptyset,\Omega\}=\mathcal{F}_1 \subseteq \mathcal{F}_2 \subseteq \cdots \subseteq \mathcal{F}_T = \mathcal{F}$ is called a *filtration* and represents the gradually increasing information through the decision horizon $1,2,\ldots,T$. The set of $\mathcal{F}_t-$measurable random variables is denoted by $\mathcal{Z}_t$ for $t \in \mathcal{T}$. The random demand $\widetilde{d}_t$ in period $t$ is $\mathcal{F}_t-$measurable, that is, $\widetilde{d}_t \in \mathcal{Z}_t$ for $t \in \mathcal{T}$. Note that since $\mathcal{F}_1 = \{\emptyset,\Omega\}$ by definition, $\mathcal{Z}_1 = \mathbb{R}$ and the demand in the first period is deterministic. To extend the deterministic UC model to this uncertainty setting, we allow the decisions in period $t$ to depend on the realization of the history of the net load process $\widetilde{d}_{[t]} := (\widetilde{d}_{1},\ldots,\widetilde{d}_{t})$ up to period $t$. Therefore, we use the $\mathcal{F}_t-$measurable vectors $\widetilde{\boldsymbol{u}}_t(\widetilde{d}_{[t]})$, $\widetilde{\boldsymbol{v}}_t(\widetilde{d}_{[t]})$ and $\widetilde{\boldsymbol{w}}_t(\widetilde{d}_{[t]})$ to represent status, production and auxiliary decisions in period $t \in \mathcal{T}$, respectively. The total cost in period $t$ is also $\mathcal{F}_t-$measurable, i.e., $f_t(\widetilde{\boldsymbol{u}}_t(\widetilde{d}_{[t]}), \widetilde{\boldsymbol{v}}_t(\widetilde{d}_{[t]}), \widetilde{\boldsymbol{w}}_t(\widetilde{d}_{[t]}))\in \mathcal{Z}_t$. We use conditional risk measures in order to quantify the risk involved in a random cost at period $t+1$ based on the information available at period $t$, for $t \in \mathcal{T}\setminus\{T\}$.
The mapping $\rho_t : \mathcal{Z}_{t+1} \rightarrow \mathcal{Z}_t$ is called a *conditional risk measure* if it satisfies the following four axioms of coherent risk measures (the subscript $t$ is suppressed for notational brevity): - (A1) *Convexity*: $\rho(\alpha Z + (1-\alpha)W) \leq \alpha \rho(Z) + (1-\alpha)\rho(W)$ for all $Z,W \in \mathcal{Z}$ and $\alpha \in [0,1]$, - (A2) *Monotonicity*: $Z \succeq W$ implies $\rho(Z) \geq \rho(W)$ for all $Z,W \in \mathcal{Z}$, - (A3) *Translational Equivariance*: $\rho(Z + c) = \rho(Z) + c$ for all $c \in \mathbb{R}$ and $Z \in \mathcal{Z}$, - (A4) *Positive Homogeneity*: $\rho(cZ) = c\rho(Z)$ for all $c > 0$ and $Z \in \mathcal{Z}$, where $Z \succeq W$ denotes the point-wise partial order on the set $\mathcal{Z}$. See [@artzner1999coherent] and [@shapiro2009lectures] for detailed discussions of coherent and conditional risk measures. An example of a conditional risk measure is the *conditional mean-upper semideviation* $$\label{musd} \rho_t(Z_{t+1}) = \mathbb{E}[Z_{t+1}|\mathcal{F}_t] + \lambda\mathbb{E}[(Z_{t+1}-\mathbb{E}[Z_{t+1}|\mathcal{F}_t])_+|\mathcal{F}_t],$$ where, for all $Z_{t+1} \in \mathcal{Z}_{t+1}$, $\mathbb{E}[\cdot|\mathcal{F}_t]$ is the conditional expectation with respect to the sigma algebra $\mathcal{F}_t$, $\lambda \in [0,1]$ is a parameter controlling the degree of risk aversion, and $(\cdot)_{+}$ is the positive part function. The objective of the risk-averse UC (RA-UC) problem is to minimize the risk involved with the cost sequence $\{Z_t\}_{t=1}^T$ where $Z_t := f_t(\widetilde{\boldsymbol{u}}_t(\widetilde{d}_{[t]}), \widetilde{\boldsymbol{v}}_t(\widetilde{d}_{[t]}), \widetilde{\boldsymbol{w}}_t(\widetilde{d}_{[t]}))$ is a shorthand notation for the total cost in period $t \in \mathcal{T}$. Thus, as in [@collado2012scenario; @shapiro2009lectures], we define the dynamic coherent risk measure $\varrho: \mathcal{Z}_1 \times \mathcal{Z}_2 \times \cdots \times \mathcal{Z}_T \rightarrow \mathbb{R}$ by using the nested composition of the conditional risk measures $\rho_{1}(\cdot),\rho_{2}(\cdot),\ldots,\rho_{T-1}(\cdot)$, that is, $$\varrho(Z_1,Z_2,\ldots,Z_T) := Z_1 + \rho_1(Z_2 + \cdots \rho_{T-1}(Z_T) \cdots )$$ is the risk associated with this cost sequence. Due to the translational equivariance property of conditional risk measures, we have an alternative representation of the dynamic coherent measure of risk $\varrho(\cdot)$ as $$\label{rhobar} \rho\left(\sum_{t=1}^T Z_t \right) := \varrho(Z_1,Z_2,\ldots,Z_T)$$ where $\rho = \rho_1 \circ \rho_2 \circ \cdots \circ \rho_{T-1} : \mathcal{Z} \rightarrow \mathbb{R}$ is called a *composite risk measure* and $\mathcal{Z} := \mathcal{Z}_T$. The composite risk measure $\rho(\cdot)$ satisfies the coherence axioms (A1)-(A4). Therefore, $\rho(\cdot)$ is a coherent risk measure, as shown in [@shapiro2009lectures Eqn. 6.234]. Two-stage and Multi-stage models -------------------------------- We consider two different models for the RA-UC problem. In the *two-stage model*, the on/off status decisions are fixed at the beginning of the day and production (or dispatch) decisions are adapted to uncertainty in the random demand. On the other hand, in the *multi-stage model*, both the status and production decisions are fully adapted to uncertainty in net load. In order to clarify the distinction between the two models, the decision dynamics of the two- and multi-stage models are depicted in and , respectively.
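For concreteness, the following small sketch (illustrative only and not part of the original study; the tree, stage costs and probabilities are invented) evaluates the conditional mean-upper semideviation (\[musd\]) and the nested composition $\varrho(Z_1,\ldots,Z_T) = Z_1 + \rho_1(Z_2 + \cdots \rho_{T-1}(Z_T)\cdots)$ by backward recursion on a finite scenario tree:

```python
# Illustrative sketch (invented data): nested mean-upper-semideviation risk
# evaluated by backward recursion on a finite scenario tree.

def mean_upper_semidev(values, probs, lam):
    """rho(Z) = E[Z] + lam * E[(Z - E[Z])_+] for a discrete distribution."""
    mean = sum(p * v for p, v in zip(probs, values))
    upper = sum(p * max(v - mean, 0.0) for p, v in zip(probs, values))
    return mean + lam * upper

def nested_risk(tree, node, lam):
    """Backward recursion for Z_1 + rho_1(Z_2 + rho_2(Z_3 + ...)).

    tree[node] = (stage_cost, [(child, conditional_prob), ...]); leaves have
    an empty child list.  Calling this at the root returns varrho(Z_1,...,Z_T).
    """
    cost, children = tree[node]
    if not children:
        return cost
    child_values = [nested_risk(tree, child, lam) for child, _ in children]
    child_probs = [p for _, p in children]
    return cost + mean_upper_semidev(child_values, child_probs, lam)

# A three-stage toy tree: root -> {a, b} -> two leaves each, all equally likely.
tree = {
    "root": (10.0, [("a", 0.5), ("b", 0.5)]),
    "a": (4.0, [("a1", 0.5), ("a2", 0.5)]),
    "b": (6.0, [("b1", 0.5), ("b2", 0.5)]),
    "a1": (1.0, []), "a2": (3.0, []),
    "b1": (2.0, []), "b2": (8.0, []),
}

for lam in (0.0, 0.5, 1.0):
    # lam = 0 recovers the plain expectation (18.5 for this tree); larger lam
    # penalises outcomes above the conditional mean and increases the risk value.
    print(lam, nested_risk(tree, "root", lam))
```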
The two-stage model (TS) for the RA-UC problem is given as $$\begin{aligned} \text{min} \; &\rho\left[\sum_{t=1}^T f_t(\boldsymbol{u}_t, \widetilde{\boldsymbol{v}}_t(\widetilde{d}_{[t]}), \widetilde{\boldsymbol{w}}_t(\widetilde{d}_{[t]}))\right] \label{two-obj} \\ \text{s.t.} \; & \sum_{i \in \mathcal{I}} \widetilde{v}_{it}(\widetilde{d}_{[t]}) \geq \widetilde{d}_t, \; \forall t \in \mathcal{T} \label{two-dem} \\ & \underline{q}_i u_{it} \leq \widetilde{v}_{it}(\widetilde{d}_{[t]}) \leq \overline{q}_i u_{it}, \; \forall i \in \mathcal{I}, t \in \mathcal{T} \label{two-cap} \\ &(\boldsymbol{u}_{1},\boldsymbol{v}_{1},\boldsymbol{w}_{1}) \in \mathcal{X}_1 \label{two-set1} \\ & (\boldsymbol{u}_{t},\widetilde{\boldsymbol{v}}_{t}(\widetilde{d}_{[t]}),\widetilde{\boldsymbol{w}}_{t}(\widetilde{d}_{[t]})) \in \nonumber\\ & \mathcal{X}_t( \boldsymbol{u}_{t-1},\widetilde{\boldsymbol{v}}_{t-1}(\widetilde{d}_{[t-1]}),\widetilde{\boldsymbol{w}}_{t-1}(\widetilde{d}_{[t-1]}), \widetilde{d}_{[t]}), \; \forall t \in \mathcal{T} \setminus \{1\} \label{two-set} \\ & \boldsymbol{u}_t \in \{0,1\}^I, \widetilde{\boldsymbol{v}}_t(\widetilde{d}_{[t]}) \in \mathbb{R}_+^I, \widetilde{\boldsymbol{w}}_t(\widetilde{d}_{[t]}) \in \mathbb{R}^k, \; \forall t \in \mathcal{T} \label{two-dom} \end{aligned}$$ The objective (\[two-obj\]) of TS is the composite risk measure defined in (\[rhobar\]) applied to the total cost sequence. The inequalities (\[two-dem\]) and (\[two-cap\]) are analogous to the constraints (\[det-dem\]) and (\[det-cap\]), respectively. The set constraint (\[two-set1\]) is identical to (\[det-set1\]) since the net load in the first period is deterministic. In constraint (\[two-set\]), $\mathcal{X}_t$ is an $\mathcal{F}_t-$measurable feasibility set. The domain constraint (\[two-dom\]) states that only production and auxiliary decisions depend on the demand history and the status decisions are deterministic. However, in the multi-stage model of the RA-UC problem, all decisions are made based on the history. 
Hence, the multi-stage model (MS) can be written as $$\begin{aligned} \text{min} \; &\rho\left[\sum_{t=1}^T f_t(\widetilde{\boldsymbol{u}}_t(\widetilde{d}_{[t]}), \widetilde{\boldsymbol{v}}_t(\widetilde{d}_{[t]}), \widetilde{\boldsymbol{w}}_t(\widetilde{d}_{[t]}))\right] \label{mul-obj} \\ \text{s.t.} \; & \sum_{i \in \mathcal{I}} \widetilde{v}_{it}(\widetilde{d}_{[t]}) \geq \widetilde{d}_t, \; \forall t \in \mathcal{T} \label{mul-dem} \\ & \underline{q}_i \widetilde{u}_{it}(\widetilde{d}_{[t]}) \leq \widetilde{v}_{it}(\widetilde{d}_{[t]}) \leq \overline{q}_i \widetilde{u}_{it}(\widetilde{d}_{[t]}), \; \forall i \in \mathcal{I}, t \in \mathcal{T} \label{mul-cap} \\ &(\boldsymbol{u}_{1},\boldsymbol{v}_{1},\boldsymbol{w}_{1}) \in \mathcal{X}_1 \label{mul-set1} \\ & (\widetilde{\boldsymbol{u}}_{t}(\widetilde{d}_{[t]}),\widetilde{\boldsymbol{v}}_{t}(\widetilde{d}_{[t]}),\widetilde{\boldsymbol{w}}_{t}(\widetilde{d}_{[t]})) \in \nonumber\\ & \mathcal{X}_t( \widetilde{\boldsymbol{u}}_{t-1}(\widetilde{d}_{[t-1]}),\widetilde{\boldsymbol{v}}_{t-1}(\widetilde{d}_{[t-1]}),\widetilde{\boldsymbol{w}}_{t-1}(\widetilde{d}_{[t-1]}), \widetilde{d}_{[t]}), \nonumber \\ & \forall t \in \mathcal{T} \setminus \{1\} \label{mul-set} \\ & \widetilde{\boldsymbol{u}}_t(\widetilde{d}_{[t]}) \in \{0,1\}^I, \widetilde{\boldsymbol{v}}_t(\widetilde{d}_{[t]}) \in \mathbb{R}_+^I, \widetilde{\boldsymbol{w}}_t(\widetilde{d}_{[t]}) \in \mathbb{R}^k,\nonumber \\ & \forall t \in \mathcal{T} \label{mul-dom} \end{aligned}$$ Note that the multi-stage model MS is identical to TS except that the status decisions are fully adaptive to the random net load process. An optimal solution of either TS or MS is a policy that minimizes the value of the dynamic coherent risk measure. Both in TS and MS, the optimality of a policy should only be with respect to possible future realizations given the available information at the time when the decision is made. This principle is called *time consistency*. In [@shapiro2009time Example 2], it is shown that time consistency enables us to use the composite risk measure in minimization among all possible decisions instead of nested minimizations in a dynamic coherent measure of risk. Value of The Multi-stage Solution {#sec:vms} ================================= Although an optimal solution of MS provides a more flexible day-ahead schedule with respect to different realizations of the parameters, the number of binary variables in MS is proportional to $\mathcal{N} \times I$ where $\mathcal{N}$ is the number of possible demand realizations in all periods if $\Omega$ is finite. However, the number of binary variables in TS is proportional to $T \times I$. Since $\mathcal{N} \gg T$ for any non-trivial problem, the computational difficulty of MS is significantly greater than that of TS. Therefore, it is important to determine whether the additional effort to solve MS is worthwhile. We define the VMS in order to quantify the relative advantage of the multi-stage solution over its two-stage counterpart. The value of the multi-stage solution (VMS) is the difference between the optimal values of TS and MS, that is, $\text{VMS} = z^{TS}-z^{MS}$ where $z^{TS}$ and $z^{MS}$ are the optimal values of TS and MS, respectively. Since an optimal solution of MS provides more flexibility in status decisions with respect to uncertain net load realizations, we have $z^{TS} \geq z^{MS}$ and therefore $\text{VMS} \geq 0$.
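To make the definition concrete, the following self-contained sketch (invented two-generator, two-period, two-scenario data; risk-neutral case $\lambda = 0$; ramping and minimum up/down time constraints ignored) computes $z^{TS}$, $z^{MS}$ and the VMS by brute-force enumeration of the commitment decisions. It is only meant to illustrate that letting the second-period commitment adapt to the observed scenario can only reduce the optimal value, so that $\text{VMS} \geq 0$:

```python
# Toy example with made-up data (risk-neutral, no ramping or min-up/down
# constraints): brute-force z_TS and z_MS to illustrate VMS = z_TS - z_MS >= 0.
from itertools import product

a = [6.0, 1.0]          # fixed cost per ON period
b = [1.0, 5.0]          # linear production cost
qmin = [20.0, 5.0]      # minimum output when ON
qmax = [60.0, 40.0]     # maximum output when ON
d1 = 50.0                           # period-1 demand (deterministic)
d2 = {"low": 30.0, "high": 90.0}    # period-2 demand scenarios
prob = {"low": 0.5, "high": 0.5}

def dispatch_cost(u, d):
    """Cheapest dispatch for commitment u and demand d (inf if infeasible)."""
    if sum(qmax[i] for i in range(2) if u[i]) < d:
        return float("inf")
    cost = sum(a[i] + b[i] * qmin[i] for i in range(2) if u[i])
    residual = max(d - sum(qmin[i] for i in range(2) if u[i]), 0.0)
    for i in sorted((i for i in range(2) if u[i]), key=lambda i: b[i]):
        take = min(residual, qmax[i] - qmin[i])
        cost += b[i] * take
        residual -= take
    return cost

U = list(product([0, 1], repeat=2))
# Two-stage: the period-2 commitment u2 is fixed before the scenario is known.
z_ts = min(dispatch_cost(u1, d1)
           + min(sum(prob[s] * dispatch_cost(u2, d2[s]) for s in d2) for u2 in U)
           for u1 in U)
# Multi-stage: the period-2 commitment may depend on the realized scenario.
z_ms = min(dispatch_cost(u1, d1)
           + sum(prob[s] * min(dispatch_cost(u2, d2[s]) for u2 in U) for s in d2)
           for u1 in U)
print(z_ts, z_ms, "VMS =", z_ts - z_ms)   # 193.0 182.5 VMS = 10.5
```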
Next we provide theoretical bounds on the VMS under some assumptions. \[as:recourse\] There exists a generator $j^* \in \mathcal{I}$, with no minimum up or down time requirements, such that $\underline{q}_{j^*}\leq \widetilde{d}_t \leq \overline{q}_{j^*}$ with probability 1 for each $t \in \mathcal{T}$. \[as:boundeddemand\] There exists an upper bound $d_t^{\max} \in \mathbb{R}_+$ on the net load values such that $0 \leq \widetilde{d}_t \leq d_t^{\max}$ with probability 1 for each $t \in \mathcal{T}$. \[as:linearcost\] The production cost of each generator $i \in \mathcal{I}$ is linear and stationary, and there are no start-up and shut-down costs. In this case, the total cost function in each period is of the form $ f_t(\boldsymbol{u}_t, \boldsymbol{v}_t, \boldsymbol{w}_t) = \sum_{i \in \mathcal{I}} (a_{i} u_{it} + b_i v_{it})$ for some positive coefficients $a_i$ and $b_i$ for all $i \in \mathcal{I}$. Assumption \[as:recourse\] ensures that TS and MS always have at least one feasible solution and therefore both problems have *complete recourse*. Assumption \[as:boundeddemand\] states that the net load in each period is bounded. We also define $\widetilde{D} := \sum_{t =1}^T \widetilde{d}_t$ as the total net load and $D^{\max} := \sum_{t =1}^T d^{\max}_t$ as an upper bound on $\widetilde{D}$. The above assumptions are somewhat restrictive but necessary for the analytical result that follows. In Section \[sec:computation\], we will provide numerical results showing that the analytical results hold even without these assumptions. \[thepro\] Under Assumptions 1, 2 and 3 we have that $$\alpha_*D^{\max} - \alpha^* \rho( \widetilde{D}) \leq \text{ \emph{VMS} } \leq \alpha^* D^{\max} - \alpha_* \rho( \widetilde{D}),$$ where $$\begin{aligned} \alpha_{*} &:= \underset{i \in \mathcal{I}}{\min} \left\lbrace a_{i} +b_{i}\underline{q}_i\right\rbrace \Big / \underset{i \in \mathcal{I}}{\max} \left\lbrace\overline{q}_i\right\rbrace \text{ and} \nonumber \\ \alpha^{*} &:= \underset{i \in \mathcal{I}}{\max} \left\lbrace a_{i} +b_{i}\overline{q}_i\right\rbrace \Big / \underset{i \in \mathcal{I}}{\min} \left\lbrace\underline{q}_i\right\rbrace \nonumber \nonumber \end{aligned}$$ are cost-related problem parameters. Assumption \[as:recourse\] implies that both TS and MS are feasible. Since the net loads are bounded due to Assumption \[as:boundeddemand\], both models have at least one optimal solution. Let $\{\widetilde{\boldsymbol{u}}^*_t,\widetilde{\boldsymbol{v}}^*_t,\widetilde{\boldsymbol{w}}^*_t\}_{t \in \mathcal{T}}$ be an optimal policy obtained by solving the multi-stage model MS. By Assumption \[as:linearcost\], we have $\sum_{t \in \mathcal{T}} f_t(\widetilde{\boldsymbol{u}}^*_t(\widetilde{d}_{[t]}),\widetilde{\boldsymbol{v}}^*_t(\widetilde{d}_{[t]}),\widetilde{\boldsymbol{w}}^*_t(\widetilde{d}_{[t]})) = \sum_{t \in \mathcal{T}} \sum_{i \in \mathcal{I}} a_{i}\widetilde{u}^*_{it}(\widetilde{d}_{[t]}) + b_{i}\widetilde{v}^*_{it}(\widetilde{d}_{[t]})$. For a realization $d_1,d_2,\ldots,d_T$ of the random net load process $\widetilde{d}_1,\widetilde{d}_2,\ldots,\widetilde{d}_T$, let $[\boldsymbol{u}^*_t,\boldsymbol{v}^*_t] := [\widetilde{\boldsymbol{u}}^*_t,\widetilde{\boldsymbol{v}}^*_t](d_{[t]})$ be the optimal status and production decisions for $t \in \mathcal{T}$.
Then, we have $$\begin{aligned} & \sum_{t \in \mathcal{T}} \sum_{i \in \mathcal{I}} a_{i}u^*_{it} + b_{i}v^*_{it} \geq \sum_{t \in \mathcal{T}} \sum_{i \in \mathcal{I}} a_{i}u^*_{it} + b_{i}\underline{q}_i u^*_{it} \nonumber \\ & = \sum_{t \in \mathcal{T}} \sum_{i \in \mathcal{I}} (a_{i} + b_{i}\underline{q}_i)u^*_{it} \geq \underset{i \in \mathcal{I}}{\min} \{a_{i} + b_{i}\underline{q}_i\} \sum_{t \in \mathcal{T}} \sum_{i \in \mathcal{I}} u^*_{it} \nonumber \\ & \geq \underset{i \in \mathcal{I}}{\min} \{a_{i} + b_{i}\underline{q}_i\} \sum_{t \in \mathcal{T}} \sum_{i \in \mathcal{I}} \frac{v^*_{it}}{\overline{q}_i} \geq \frac{ \underset{i \in \mathcal{I}}{\min} \{a_{i} + b_{i}\underline{q}_i\}}{\underset{i \in \mathcal{I}}{\max} \{\overline{q}_i\}} \sum_{t \in \mathcal{T}} \sum_{i \in \mathcal{I}} v^*_{it} \nonumber \\ & = a_* \sum_{t \in \mathcal{T}} \sum_{i \in \mathcal{I}} v^*_{it} \geq a_* \sum_{t \in \mathcal{T}} d_t \nonumber \end{aligned}$$ where the first, third and fifth inequalities follow from feasibility. Since $\sum_{t \in \mathcal{T}} \sum_{i \in \mathcal{I}} a_{i}u^*_{it} + b_{i}v^*_{it} \geq a_* \sum_{t \in \mathcal{T}} d_t$ for any sample path $d_1,d_2,\ldots,d_T$, we have $\sum_{t \in \mathcal{T}} \sum_{i \in \mathcal{I}} a_{i}\widetilde{\boldsymbol{u}}^*_{it}(d_{[t]}) + b_{i}\widetilde{\boldsymbol{v}}^*_{it}(d_{[t]}) \succeq a_* \sum_{t \in \mathcal{T}} \widetilde{d}_t = \alpha_{*}\widetilde{D}$. Due to the monotonicity axiom (A2) and positive homogeneity axiom (A4), we get $$\begin{aligned} z^{MS} & = \rho\left(\sum_{t \in \mathcal{T}} \sum_{i \in \mathcal{I}} a_{i}\widetilde{u}^*_{ti}(d_{[t]}) + b_{i}\widetilde{v}^*_{ti}(d_{[t]}) \right) \nonumber \\ & \geq \rho(\alpha_{*}\widetilde{D}) = \alpha_{*} \rho(\widetilde{D}). \nonumber\end{aligned}$$ Next, we consider a feasible policy $\{\widehat{\boldsymbol{u}}^*_t,\widehat{\boldsymbol{v}}^*_t,\widehat{\boldsymbol{w}}^*_t\}_{t \in \mathcal{T}}$ to the multi-stage model where $\widehat{u}_{j^*t}(\widetilde{d}_{[t]})=1$, $\widehat{v}_{j^*t}(\widetilde{d}_{[t]})=\widetilde{d}_t$ and all other status and generation variables are set to zero for a sample path $d_1,d_2,\ldots,d_t$. The feasibility of the solution is guaranteed by Assumption \[as:recourse\]. Then, $$\begin{aligned} & z^{MS} \leq \rho \left( \sum_{t \in\mathcal{T}} \sum_{i \in\mathcal{I}} a_{i}\widehat{u}_{it}(\widetilde{d}_{[t]}) + b_{i}\widehat{v}_{it}(\widetilde{d}_{[t]}) \right) \nonumber \\ & = \rho \left( \sum_{t \in\mathcal{T}} a_{j^*}\widehat{u}_{j^*t}(\widetilde{d}_{[t]}) + b_{j^*}\widehat{v}_{j^*t}(\widetilde{d}_{[t]})\right) = \rho \left( \sum_{t \in\mathcal{T}} a_{j^*} + b_{j^*} \widetilde{d}_{t}\right) \nonumber \\ & = \rho \left( \sum_{t \in\mathcal{T}} \frac{a_{j^*} + b_{j^*} \widetilde{d}_{t}}{\widetilde{d}_{t}}\widetilde{d}_{t} \right) \leq \rho \left( \sum_{t \in\mathcal{T}} \frac{a_{j^*} + b_{j^*} \overline{q}_{j^*}}{\underline{q}_{j^*}}\widetilde{d}_{t} \right) \nonumber \\ & \leq \frac{ \underset{i \in \mathcal{I}}{\max} \{a_{i} + b_{i} \overline{q}_i\}}{\underset{i \in \mathcal{I}}{\min}\{\underline{q}_i\}} \rho \left( \sum_{t \in\mathcal{T}} \widetilde{d}_{t} \right) = \alpha^* \rho \left( \sum_{t \in\mathcal{T}} \widetilde{d}_{t} \right) \leq \alpha^* \rho (\widetilde{D}) \nonumber\end{aligned}$$ where the first inequality follows from feasibility, the second inequality follows from Assumption \[as:recourse\] and the third equality follows from axiom (A4) and the definition of $\alpha^{*}$. 
Thus, we get lower and upper bounds for the multi-stage problem, that is, $$\label{multibounds} \alpha_* \rho(\widetilde{D}) \leq z^{MS} \leq \alpha^* \rho(\widetilde{D}).$$ Note that in the two-stage model, the status decisions in period $t \in \mathcal{T}$ are identical for all realizations of the problem parameters in that period and satisfy $\max\{\widetilde{v}^*_{it}(\widetilde{d}_{[t]})\} \leq \overline{q}_i u^*_{it}$. Then, using this fact, a similar analysis can be used to obtain lower and upper bounds for the two-stage model and we get $$\label{twobounds} \alpha_* D^{\max} \leq z^{TS} \leq \alpha^* D^{\max}.$$ The claim of the theorem follows from (\[multibounds\]) and (\[twobounds\]). If the generators are almost identical and the lower and upper production limits are close enough, we have $\alpha_{*} \approx \alpha \approx \alpha^*$ for some common value $\alpha$. Then, we have $$\label{almost} \text{VMS} \approx \alpha(D^{\max} - \rho(\widetilde{D})).$$ Note that $ 0 \leq \rho(\widetilde{D}) \leq D^{\max}$ and the approximation (\[almost\]) implies that the VMS increases with $D^{\max}$ and therefore with the variability in the net load. However, for fixed variability, the VMS decreases with $\rho(\widetilde{D})$ and therefore with the degree of risk aversion. Assume that the net load in period $t \in \mathcal{T}$ is $\widetilde{d}_t = \overline{d}_{t} + \mathcal{U}[-\Delta,\Delta]$ where $\overline{d}_{t}$ is a deterministic value and $\mathcal{U}[-\Delta,\Delta]$ is an error term uniformly distributed between $-\Delta$ and $\Delta$ for some $\Delta \in \mathbb{R}_+$. Also assume that the composite risk measure $\rho(\cdot)$ is obtained using the conditional mean-upper semideviation as given in (\[musd\]). Then, $$\begin{aligned} \text{VMS} & \approx \alpha(D^{\max} - \rho(\widetilde{D})) \nonumber \\ & = \alpha \left( \sum_{t = 1}^T d^{\max}_t - \rho\left(\sum_{t = 1}^T \widetilde{d}_t\right)\right) \nonumber \\ & = \alpha T \left(1-\frac{\lambda}{4}\right) \Delta \label{final}\end{aligned}$$ where the second equality follows from the definitions of $d^{\max}_t$ and $\widetilde{d}_t$ and the evaluation of the mean-upper semideviation risk measure $\rho(\cdot)$. The approximation in (\[final\]) suggests that the VMS increases with the number of periods $T$ and the variability in the net load $\Delta$. However, the VMS decreases with the degree of risk aversion $\lambda$. Computational Experiments {#sec:computation} ========================= The analytical results of the previous section rely on restrictive assumptions to simplify the structure of the RA-UC problem. In order to see how the VMS behaves in the absence of these assumptions, we conduct a set of computational experiments next. We consider a power system with 10 generators in the computational experiments. We use the data set presented in [@kazarlis1996genetic] with some modifications. We also consider a random net load process with eight scenarios where the power demand at each hour is subject to uncertainty. The scenario tree depicting the random process is given in Figure \[stree\]. A similar scenario tree structure is used in [@shiina2004stochastic]. ![Scenario tree \[stree\]](scenariotree.jpg){width="35.00000%"} The test data is presented in Appendix \[app:data\]. We use the base demand values presented in Table \[tab:dem\] to generate random demands. A variability parameter $\epsilon$ is used to control the dispersion of demand across all scenarios. Demand values for each scenario are presented in Table \[tab:sce\]. All other parameters are set to the values presented in Table \[tab:gen\].
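The scenario construction just described can be summarised in a few lines. The sketch below (illustrative, not the authors' code) reproduces the structure of Tables \[tab:dem\] and \[tab:sce\]: hours 1-6 are common to all scenarios, and each of the three subsequent 6-hour blocks is scaled by either $(1-\epsilon)$ or $(1+\epsilon)$, giving $2^3 = 8$ equiprobable scenarios:

```python
# Sketch of the scenario construction of Table [tab:sce]; the base demand
# values are those of Table [tab:dem].
from itertools import product

base = [700, 750, 850, 950, 1000, 1100, 1150, 1200, 1300, 1400, 1450, 1500,
        1400, 1300, 1200, 1050, 1000, 1100, 1200, 1400, 1300, 1100, 900, 800]

def scenario_demands(eps):
    """Return a list of (24-hour demand profile, probability) pairs."""
    scenarios = []
    for signs in product((-1, +1), repeat=3):      # blocks 7-12, 13-18, 19-24
        factors = [1.0] * 6
        for s in signs:
            factors += [1.0 + s * eps] * 6
        scenarios.append(([f * d for f, d in zip(factors, base)], 1.0 / 8.0))
    return scenarios

demands = scenario_demands(eps=0.3)
print(len(demands))            # 8 scenarios, each with probability 0.125
```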
A PC with two 2.2 GHz processors and 6 GB of RAM is used in the computational experiments. The quadratic production cost functions $\{g_i(\cdot)\}_{i \in \mathcal{I}}$ are approximated by piecewise-linear cost functions with four pieces of equal length. We use the conditional mean-upper semideviation risk measure (\[musd\]) in each period. The conditional risk measures $\rho_{1}(\cdot),\rho_2(\cdot),\ldots,\rho_{T-1}(\cdot)$, the dynamic coherent risk measure $\varrho(\cdot)$ and the composite risk measure $\rho(\cdot)$ are defined accordingly. We model and solve the two-stage model TS and the multi-stage model MS for five different values of the variability parameter $\epsilon$ and six different values of the penalty parameter $\lambda$. For each $\epsilon$ and $\lambda$ pair, we calculate the VMS in terms of the difference of the optimal values, that is, $$\text{VMS (\$)} = z^{TS} - z^{MS},$$ and in terms of percentage, $$\text{VMS (\%)} = \frac{z^{TS} - z^{MS}}{z^{MS}}.$$ The results on the VMS are presented in . [0.485]{} ![image](vmsdollar.pdf){width="\textwidth"} [0.485]{} ![image](vmspecentage.pdf){width="\textwidth"} These results verify our analytical findings on the VMS. We observe an increase in the VMS with the uncertainty in the net load values. The VMS, and hence the importance of the multi-stage model, increases as the dispersion among the scenarios increases. As expected, the day-ahead schedule obtained by solving the multi-stage model is more adaptive and provides more flexibility in the case of high variability of the problem parameters. We also observe a decrease in the VMS with the level of risk aversion. In parallel with the analytical results in Theorem \[thepro\], higher risk aversion leads to a lower VMS. Hence, the importance of the multi-stage model decreases as risk aversion increases. We also consider a rolling horizon policy obtained by solving two-stage approximations to the multi-stage problem in each period and fixing the decisions at that stage with respect to the optimal solution of the two-stage model. In order to measure the quality of the rolling horizon policy, we calculate the gap between the value of the rolling horizon policy and the optimal value of MS. The gap value GAP is calculated in terms of the difference of the objective values, $$\text{GAP (\$)} = z^{RH} - z^{MS},$$ and in terms of percentage, $$\text{GAP (\%)} = \frac{z^{RH} - z^{MS}}{z^{MS}},$$ where $z^{RH}$ is the value of the rolling horizon policy. Note that since the rolling horizon procedure provides a feasible policy for the multi-stage problem that is at least as good as that of TS, we have that $0 \leq \text{GAP} \leq \text{VMS}$. The results are presented in . [0.485]{} ![image](gapdollar.pdf){width="\textwidth"} [0.485]{} ![image](gappercentage.pdf){width="\textwidth"} We present the solution times for each TS and MS instance in Table \[tab:timets\] and Table \[tab:timems\], respectively. The required time to obtain the rolling horizon policy is also presented in Table \[tab:timerh\].
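Returning to the cost model mentioned above: as an illustration of the piecewise-linear approximation of the quadratic production costs (a sketch of the standard secant construction, not the authors' implementation), the following code builds the four-piece approximation of $g_i(v) = b_i v + c_i v^2$ over $[\underline{q}_i, \overline{q}_i]$ using the generator-1 parameters of Table \[tab:gen\]. Because $g_i$ is convex, the approximation is exact at the breakpoints and overestimates slightly in between:

```python
# Four-piece secant (piecewise-linear) approximation of a quadratic production
# cost g(v) = b*v + c*v**2 on [qmin, qmax]; the parameters of generator 1 in
# Table [tab:gen] are used as an example.

def piecewise_linear(b, c, qmin, qmax, pieces=4):
    """Return breakpoints and per-piece slopes of the secant approximation."""
    g = lambda v: b * v + c * v * v
    width = (qmax - qmin) / pieces
    breakpoints = [qmin + k * width for k in range(pieces + 1)]
    slopes = [(g(breakpoints[k + 1]) - g(breakpoints[k])) / width
              for k in range(pieces)]
    return breakpoints, slopes

def approx_cost(v, b, c, qmin, qmax, pieces=4):
    """Evaluate the piecewise-linear approximation at output level v."""
    bp, slopes = piecewise_linear(b, c, qmin, qmax, pieces)
    cost = b * qmin + c * qmin * qmin        # exact cost at the lower limit
    for k in range(pieces):
        segment = min(max(v - bp[k], 0.0), bp[k + 1] - bp[k])
        cost += slopes[k] * segment
    return cost

b, c, qmin, qmax = 16.19, 0.00048, 225.0, 682.5   # generator 1 of Table [tab:gen]
for v in (225.0, 400.0, 682.5):
    print(v, b * v + c * v * v, approx_cost(v, b, c, qmin, qmax))
```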
  $\epsilon \backslash \lambda$      0      0.1     0.2     0.3     0.4     0.5
  ------------------------------- ------- ------- ------- ------- ------- -------
  0.1                                7.5    10.4     9.6     7.7     7.2     7.2
  0.2                                4.2     3.8     3.5     4.0     3.7     3.2
  0.3                               12.2    10.9     9.5     8.1     7.8     6.0
  0.4                                7.9     3.8     4.1     4.0     3.3     2.7
  0.5                                8.8     5.4     6.3     4.8     4.8     4.6

  : Solution times of TS (in seconds) \[tab:timets\]

  $\epsilon \backslash \lambda$      0        0.1      0.2      0.3      0.4      0.5
  ------------------------------- -------- -------- -------- -------- -------- --------
  0.1                              1004.2   1280.0   1255.2   1489.7   1789.6   2009.1
  0.2                               328.3    381.6    400.4    444.7    324.6    393.8
  0.3                               480.0   1042.4    435.8    780.0    453.8    358.5
  0.4                               192.9    674.5    529.4    323.0    328.6    279.8
  0.5                                85.7    147.5    116.6    119.0    118.5    113.1

  : Solution times of MS (in seconds) \[tab:timems\]

  $\epsilon \backslash \lambda$      0      0.1     0.2     0.3     0.4     0.5
  ------------------------------- ------- ------- ------- ------- ------- -------
  0.1                               16.6    15.1    14.7    13.6    14.9    12.8
  0.2                                8.0     9.0     9.0     8.7     8.0     8.5
  0.3                               15.1    17.3    15.1    15.2    14.6    11.4
  0.4                                9.0    10.4     8.3     9.1     7.7     7.8
  0.5                               10.2     9.6     9.0    12.3     9.7     9.5

  : Required time to obtain the rolling horizon policy (in seconds) \[tab:timerh\]

In all instances, the rolling horizon policy performs much better than the policy obtained by solving the two-stage problem, with only a small increase in computational effort. The $\text{GAP (\%)}$ of the rolling horizon policy is $0.12\%$ on average (with a maximum of $0.32\%$), whereas the $\text{VMS (\%)}$ is $1.42\%$ on average (with a maximum of $3.20\%$). Thus, the rolling horizon policy obtained by using two-stage approximations to the multi-stage solution can provide enough flexibility in the generation schedule to obtain a near-optimal schedule for RA-UC problems with reasonable computational effort. The computational effort to solve the MS model is much larger than that of the TS model and the rolling horizon policy in all instances. Higher demand variability leads to a higher VMS while, as an additional benefit, decreasing the solution times. Conclusion {#sec:conclusion} ========== Recent improvements in renewable power production technologies have motivated the study of stochastic unit commitment problems, since these models can explicitly address the variability in net load. Multi-stage models provide completely flexible schedules where all decisions are adapted to the uncertainty. However, these models require high computational effort, and therefore, their two-stage counterparts are used to obtain approximate policies. In order to justify the additional effort to solve the multi-stage model rather than its two-stage counterpart, we define the VMS and provide analytical and computational results on it. These results reveal that, for RA-UC problems, the VMS decreases with the degree of risk aversion, and increases with the level of uncertainty and the number of time periods. The performance of the rolling horizon policies obtained from two-stage approximations of the multi-stage model is promising. As a future research direction, it would be interesting to consider rolling horizon policies in instances with more complicated random net load processes. However, in that case, the number of two-stage models to be solved would be large and their solution would require significant computation time. Theoretical analysis of the value of rolling horizon policies is also an important future step.
Deterministic Unit Commitment Formulation {#app:model} ========================================= *Indexes and Sets* $$\begin{aligned} t: \; & \text{Period index, } & i: \; & \text{Generator index}, \nonumber\\ T: \; & \text{Number of periods, } & I: \; & \text{Number of generators}, \nonumber\\ \mathcal{T}: \; & \text{Set of periods, } & \mathcal{I}: \; & \text{Set of generators}, \nonumber\end{aligned}$$ *Parameters* $$\begin{aligned} a_{i}: \; & \text{Fixed cost of running generator } i \in \mathcal{I}, \nonumber \\ g_{i}(\cdot): \; & \text{Production cost function of running generator } i \in \mathcal{I}, \nonumber \\ & \text{ specifically, } g_{i}(v) = b_{i}v+c_{i}v^2 \text{ for } v \geq 0 \nonumber \\ & \text{ with parameters } b_i,c_i \in \mathbb{R}_+, \nonumber \\ SU_{i}: \; & \text{Start-up cost of generator } i \in \mathcal{I}, \nonumber \\ SD_{i}: \; & \text{Shut-down cost of generator } i \in \mathcal{I}, \nonumber \\ \underline{q}_{i}: \; & \text{Minimum production amount of generator } i \in \mathcal{I}, \nonumber \\ \overline{q}_{i}: \; & \text{Maximum production amount of generator } i \in \mathcal{I}, \nonumber \\ d_{t}: \; & \text{Net load in period } t \in \mathcal{T}, \nonumber \\ M_i: \; & \text{Minimum up time of generator } i \in \mathcal{I}, \nonumber \\ L_i: \; & \text{Minimum down time of generator } i \in \mathcal{I}, \nonumber \\ V'_i: \; & \text{Start up rate of generator } i \in \mathcal{I}, \nonumber \\ V_i: \; & \text{Ramp up rate of generator } i \in \mathcal{I}, \nonumber \\ B'_i: \; & \text{Shut down rate of generator } i \in \mathcal{I}, \nonumber \\ B_i: \; & \text{Ramp down production limit of generator } i \in \mathcal{I}. \nonumber \end{aligned}$$ *Variables* $$\begin{aligned} u_{it}: \; & \text{Status of generator } i \in \mathcal{I} \text{ in period } t \in \mathcal{T}, \nonumber \\ \; & (1 \text{ if generator $i$ is ON in period $t$; } 0 \text{ otherwise}), \nonumber \\ v_{it}: \; & \text{Production amount of generator } i \in \mathcal{I} \text{ in period } t \in \mathcal{T}, \nonumber \\ y_{it}: \; & \text{Start up decision of generator } i \in \mathcal{I} \text{ in period } t \in \mathcal{T}, \nonumber \\ & (1 \text{ if } u_{i(t-1)} = 0 \text{ and } u_{it} = 1 \text{; } 0 \text{ otherwise}), \nonumber\\ z_{it}: \; & \text{Shut down decision of generator } i \in \mathcal{I} \text{ in period } t \in \mathcal{T}, \nonumber \\ & (1 \text{ if } u_{i(t-1)} = 1 \text{ and } u_{it} = 0 \text{; } 0 \text{ otherwise}). 
\nonumber\end{aligned}$$ *Model*\ $$\begin{aligned} \underset{u, v, y ,z}{\text{min}} \; & \sum_{t = 1}^T \sum_{i = 1}^I a_{i}u_{it} + g_{t}(v_{it}) + SU_{i}y_{it} + SD_{i}z_{it}, \label{deter:obj} \\ \text{s.t.} \; & (\ref{det-dem}), (\ref{det-cap}) \nonumber\\ & u_{it}-u_{i(t-1)} \leq u_{i \tau}, \; \forall t \in \mathcal{T}, \forall i \in \mathcal{I}, \nonumber \\ & \quad \forall \tau \in \{t+1,\ldots,\min\{t+M_i,T\}\} \label{deter:upt} \\ & u_{i(t-1)} - u_{it} \leq 1 - u_{i\tau}, \; \forall t \in \mathcal{T}, \forall i \in \mathcal{I}, \nonumber \\ & \quad \forall \tau \in \{t+1,\ldots,\min\{t+L_i,T\}\} \label{deter:dot} \\ & u_{it}-u_{i(t-1)} \leq y_{it}, \; \forall t \in \mathcal{T}, \forall i \in \mathcal{I} \label{deter:sup} \\ & u_{i(t-1)}-u_{it} \leq z_{it} , \; \forall t \in \mathcal{T}, \forall i \in \mathcal{I} \label{deter:sdo} \\ & v_{it}-v_{i(t-1)} \leq V'_i y_{it} + V_i u_{i(t-1)}, \nonumber \\ & \quad \forall t \in \mathcal{T}, \forall i \in \mathcal{I} \label{deter:rup} \\ & v_{i(t-1)}-v_{it} \leq B'_i z_{it} + B_i u_{it}, \nonumber \\ & \quad \forall t \in \mathcal{T}, \forall i \in \mathcal{I} \label{deter:rdo} \\ & u_{it}, y_{it}, z_{it} \in \{0,1\}, v_{ti} \geq 0, \; \forall t \in \mathcal{T}, \forall i \in \mathcal{I}. \nonumber \end{aligned}$$ The objective (\[deter:obj\]) is total fixed, production, start up and shut down costs. Constraints (\[deter:upt\]), (\[deter:dot\]), (\[deter:sup\]) and (\[deter:sdo\]) are minimum up time, minimum down time, start up and shut down constraints, respectively. The rump/start up rate constraint is given in (\[deter:rup\]). Similarly, (\[deter:rdo\]) is the rump/shut down rate constraint. Computational Experiment Data {#app:data} ============================= $t$ 1 2 3 4 5 6 ----------------------- ------ ------ ------ ------ ------ ------ $\overline{d}_t$ (MW) 700 750 850 950 1000 1100 $t$ 7 8 9 10 11 12 $\overline{d}_t$ (MW) 1150 1200 1300 1400 1450 1500 $t$ 13 14 15 16 17 18 $\overline{d}_t$ (MW) 1400 1300 1200 1050 1000 1100 $t$ 19 20 21 22 23 24 $\overline{d}_t$ (MW) 1200 1400 1300 1100 900 800 : Demand Data (MW = megawatt) \[tab:dem\] --- ------- ------------------ ------------------------------ ------------------------------ ------------------------------ 1-6 7-12 13-18 19-24 1 0.125 $\overline{d}_t$ $(1-\epsilon)\overline{d}_t$ $(1-\epsilon)\overline{d}_t$ $(1-\epsilon)\overline{d}_t$ 2 0.125 $\overline{d}_t$ $(1-\epsilon)\overline{d}_t$ $(1-\epsilon)\overline{d}_t$ $(1+\epsilon)\overline{d}_t$ 3 0.125 $\overline{d}_t$ $(1-\epsilon)\overline{d}_t$ $(1+\epsilon)\overline{d}_t$ $(1-\epsilon)\overline{d}_t$ 4 0.125 $\overline{d}_t$ $(1-\epsilon)\overline{d}_t$ $(1+\epsilon)\overline{d}_t$ $(1+\epsilon)\overline{d}_t$ 5 0.125 $\overline{d}_t$ $(1+\epsilon)\overline{d}_t$ $(1-\epsilon)\overline{d}_t$ $(1-\epsilon)\overline{d}_t$ 6 0.125 $\overline{d}_t$ $(1+\epsilon)\overline{d}_t$ $(1-\epsilon)\overline{d}_t$ $(1+\epsilon)\overline{d}_t$ 7 0.125 $\overline{d}_t$ $(1+\epsilon)\overline{d}_t$ $(1+\epsilon)\overline{d}_t$ $(1-\epsilon)\overline{d}_t$ 8 0.125 $\overline{d}_t$ $(1+\epsilon)\overline{d}_t$ $(1+\epsilon)\overline{d}_t$ $(1+\epsilon)\overline{d}_t$ --- ------- ------------------ ------------------------------ ------------------------------ ------------------------------ : Scenario Data \[tab:sce\] $i$ 1 2 3 4 5 ------------------------ --------- --------- --------- --------- --------- $a_i$ (\$/h) 1000 970 700 680 450 $b_i$ (\$/MWh) 16.19 17.26 16.6 16.5 19.7 $c_i$ (\$/MW$^{2}$h) 0.00048 0.00031 0.002 0.00211 
0.00398 $\overline{q}_i$ (MW) 682.5 682.5 195 195 243 $\underline{q}_i$ (MW) 225 225 30 30 37.5 $V'_i$ (MW) 337.5 337.5 45 45 56.25 $V_i$ (MW) 405 405 54 54 67.5 $B'_i$ (MW) 337.5 337.5 45 45 56.25 $B_i$ (MW) 405 405 54 54 67.5 $M_i$ (h) 8 8 5 5 6 $L_i$ (h) 8 8 5 5 6 $SU_{i}$ (\$/h) 4500 5000 550 560 900 $SD_{i}$ (\$/h) 0 0 0 0 0 $i$ 6 7 8 9 10 $a_i$ (\$/h) 370 480 660 665 670 $b_i$ (\$/MWh) 22.26 27.74 25.92 27.27 27.79 $c_i$ (\$/MW$^{2}$h) 0.00712 0.00079 0.00413 0.00222 0.00173 $\overline{q}_i$ (MW) 120 127.5 82.5 82.5 82.5 $\underline{q}_i$ (MW) 30 37.5 15 15 15 $V'_i$ (MW) 45 56.25 22.5 22.5 22.5 $V_i$ (MW) 54 67.5 27 27 27 $B'_i$ (MW) 45 56.25 22.5 22.5 22.5 $B_i$ (MW) 54 67.5 27 27 27 $M_i$ (h) 3 3 1 1 1 $L_i$ (h) 3 3 1 1 1 $SU_i$ (\$/h) 170 260 30 30 30 $SD_i$ (\$/h) 0 0 0 0 0 : Generator Data (MW = megawatt) \[tab:gen\] [1]{} S. A. Kazarlis, A. G. Bakirtzis and V. Petridis, “A genetic algorithm solution to the unit commitment problem”, *IEEE Transactions on Power Systems*, vol. 11, no. 1, pp. 83-92, 1996. Y. Huang, P. M. Pardalos and Q. P. Zheng “Electrical Power Unit Commitment: Deterministic and Two-Stage Stochastic Programming Models and Algorithms”, Springer, 2017. N. P. Padhy, “Unit commitment - A bibliographical survey”, *IEEE Transactions on Power Systems*, vol. 19, no. 2, pp. 1196-1205, 2004. P. A. Ruiz, C. R. Philbrick, E. Zak, K. W. Cheung and P. W. Sauer “Uncertainty management in the unit commitment problem”, *IEEE Transactions on Power Systems*, vol. 24, no. 2, pp. 642-651, 2009. P. D. Brown, J. A. P. Lopes and M. A. Matos, “Optimization of pumped storage capacity in an isolated power system with large renewable penetration”, *IEEE Transactions on Power Systems*, vol. 23, no. 2, pp. 523-531, 2008. D. Bertsimas, E. Litvinov, X. A. Sun, J. Zhao and T. Zheng , “Adaptive robust optimization for the security constrained unit commitment problem”, *IEEE Transactions on Power Systems*, vol. 28, no. 1, pp. 52-63, 2013. A. Lorca and X. A. Sun , “Multistage robust unit commitment with dynamic uncertainty sets and energy storage”, *IEEE Transactions on Power Systems*, vol. 32, no. 3, pp. 1678-1688, 2017. L. Zhao and B. Zeng, “Robust unit commitment problem with demand response and wind energy”, *Power and Energy Society General Meeting*, pp. 1-8, IEEE, 2013. R. Jiang, J. Wang and Y. Guan, “Robust unit commitment with wind power and pumped storage hydro”, *IEEE Transactions on Power Systems*, vol. 27, no. 2, pp. 800-810, 2012. Q. Wang, J. P. Watson and Y. Guan, “Two-stage robust optimization for N-k contingency-constrained unit commitment”, *IEEE Transactions on Power Systems*, vol. 28, no. 3, pp. 2366-2375, 2013. K. Cheung, D. Gade, C. Silva-Monroy, S. M. Ryan, J. P. Watson, R. J. Wets and D. L. Woodruff , “Toward scalable stochastic unit commitment: Part 2: solver configuration and performance assessment”, *Energy Systems*, vol. 6, no. 3, pp. 417-438, 2015. M. Tahanan, W. van Ackooij, A. Frangioni, and F. Lacalandra ,“Large-scale Unit Commitment under uncertainty”, *4OR*, vol.13, no.2, pp. 115-171, 2015. A. Papavasiliou, S. S. Oren and B. Rountree , “Applying high performance computing to transmission-constrained stochastic unit commitment for renewable energy integration”, *IEEE Transactions on Power Systems*, vol. 30, no. 3, pp. 1109-1120, 2015. J. Wang, M. Shahidehpour and Z. Li , “Security-constrained unit commitment with volatile wind power generation”, *IEEE Transactions on Power Systems*, vol. 23, no. 3, pp. 1319-1327, 2008. S. Takriti, J. R. Birge and E. 
Long, “A stochastic model for the unit commitment problems”, *IEEE Transactions on Power Systems*, vol. 11, no. 3, pp. 1497-1508, 1996. C. C. Car[ø]{}e and R. Schultz “A two-stage stochastic program for unit commitment under uncertainty in a hydro-thermal power system”, ZIB, 1998. Q. Wang, Y. Guan and J. Wang, “A chance-constrained two-stage stochastic program for unit commitment with uncertain wind power output”, *IEEE Transactions on Power Systems*, vol. 27, no. 1, pp. 206-215, 2012. Q. P. Zheng, J. Wang, P. M. Pardalos and Y. Guan, “A decomposition approach to the two-stage stochastic unit commitment problem”, *Annals of Operations Research*, vol. 210, no. 1, pp. 387-410, 2013. S. Takayuki and J. R. Birge, “Stochastic unit commitment problem”, *International Transactions in Operational Research*, vol. 11, no. 1, pp. 19-32, 2004. R. Jiang, Y. Guan and J. P. Watson, “Cutting planes for the multistage stochastic unit commitment problem”, *Mathematical Programming*, vol. 157, no. 1, pp. 121-151, 2016. Q. P. Zheng, J. Wang and A. L. Liu, “Stochastic optimization for unit commitment - A review”,*IEEE Transactions on Power Systems*, vol. 30, no. 4, pp. 1913-1924, 2015. A. Lorca, X. A. Sun, E. Litvinov and T. Zheng, “Multistage adaptive robust optimization for the unit commitment problem”,*Operations Research*, vol. 64, no. 1, pp. 32-51, 2016. K. Huang and S. Ahmed, “The value of multistage stochastic programming in capacity planning under uncertainty”, *Operations Research*, vol. 57, no. 4, pp. 893-904, 2009. P. Artzner, F. Delbaen, J.M. Eber and D. Heath, “Coherent measures of risk”, *Mathematical Finance*, vol. 9, no. 3, pp. 203-228, 1999. R. A. Collado, D. Papp and A. Ruszczynski, “Scenario decomposition of risk-averse multistage stochastic programming problems”, *Annals of Operations Research*, vol. 200, no. 1, pp. 147-170, 2012. A. Shapiro, D. Dentcheva and A. Ruszczynski, “Lectures on stochastic programming: modeling and theory”, *Society for Industrial and Applied Mathematics*, 2009. A. Shapiro “On a time consistency concept in risk averse multistage stochastic programming”, *Operations Research Letters*, vol. 37, no. 3, pp. 143-147, 2009. T. Shiina and J. R. Birge, “Stochastic unit commitment problem” *International Transactions in Operational Research*, vol. 11, no. 1, pp. 19-32, 2004. M. Carrion and J. M. Arroyo, “A computationally efficient mixed-integer linear formulation for the thermal unit commitment problem”*IEEE Transactions on Power Systems*, vol. 21, no. 3, pp. 1371-1378, 2006. [Ali İrfan Mahmutoğullar[i]{}]{} is a Ph.D. candidate in Department of Industrial Engineering, Bilkent University, Ankara, Turkey. He holds B.S. and M.S. degrees from the same department in 2011 and 2013, respectively. His research focuses on developing efficient solution methods for multi-stage mixed-integer stochastic programming models. He is also interested in application of these models to the problems emerging from different areas of operations research. [Shabbir Ahmed]{} is the Anderson-Interface Chair and Professor in the H. Milton Stewart School of Industrial & Systems Engineering at the Georgia Institute of Technology. His research interests are in stochastic and discrete optimization. Dr. Ahmed is a past Chair of the Stochastic Programming Society and serves on the editorial board of several journals. His honors include the INFORMS Computing Society Prize, the National Science Foundation CAREER award, two IBM Faculty Awards, and the INFORMS Dantzig Dissertation award. 
He is a Fellow of INFORMS. [[Ö]{}zlem [Ç]{}avu[ş]{}]{} is currently an Assistant Professor of Industrial Engineering at Bilkent University. She received her B.S. and M.S. degrees in Industrial Engineering from Boğaziçi University in 2004 and 2007, respectively, and the Ph.D. degree in Operations Research from Rutgers Center for Operations Research (RUTCOR) at Rutgers University in 2012. Her research interests include stochastic optimization, risk-averse optimization and Markov decision processes. [M. Selim Akt[ü]{}rk]{} is a Professor and Chair of the Department of Industrial Engineering at Bilkent University. His recent research interests are discrete optimization, production scheduling and airline disruption management. [^1]: A. İ. Mahmutoğullar[i]{}, [Ö]{}. [Ç]{}avu[ş]{} and M. S. Akt[ü]{}rk are with Department of Industrial Engineering, Bilkent University, Ankara, 06800, Turkey. e-mail: [email protected], [email protected], [email protected] [^2]: S. Ahmed is with H. Milton Stewart School of Industrial and Systems Engineering, Georgia Institute of Technology, Atlanta, 30318, GA, USA. e-mail: [email protected] [^3]: The first author is supported by the Scientific and Technological Research Council of Turkey (TÜBİTAK) program number BİDEB-2214-A. The second author is supported by the National Science Foundation Grant 1633196. [^4]: Manuscript submitted .
--- abstract: | We present evidence for a small glitch in the spin evolution of the millisecond pulsar J0613$-$0200, using the EPTA Data Release 1.0, combined with Jodrell Bank analogue filterbank TOAs recorded with the Lovell telescope and Effelsberg Pulsar Observing System TOAs. A spin frequency step of 0.82(3)nHz and frequency derivative step of ${-1.6(39) \times 10^{-19}\,\text{Hz} \ \text{s}^{-1}}$ are measured at the epoch of MJD$50888(30)$. After PSRB1821$-$24A, this is only the second glitch ever observed in a millisecond pulsar, with a fractional size in frequency of ${\Delta \nu/\nu=2.5(1) \times 10^{-12}}$, which is several times smaller than the previous smallest glitch. PSRJ0613$-$0200 is used in gravitational wave searches with pulsar timing arrays, and is to date only the second such pulsar to have experienced a glitch in a combined 886 pulsar-years of observations. We find that accurately modelling the glitch does not impact the timing precision for pulsar timing array applications. We estimate that for the current set of millisecond pulsars included in the International Pulsar Timing Array, there is a probability of $\sim 50$% that another glitch will be observed in a timing array pulsar within 10 years. author: - | J.W.McKee,$^{1}$[^1] G.H.Janssen,$^{2}$ B.W.Stappers,$^{1}$ A.G.Lyne,$^{1}$ R.N.Caballero,$^{3}$ L.Lentati,$^{4}$ G.Desvignes,$^{3}$ A.Jessner,$^{3}$ C.A.Jordan,$^{1}$ R.Karuppusamy,$^{3}$ M.Kramer,$^{1,3}$ I.Cognard,$^{5,6}$ D.J.Champion,$^{3}$ E.Graikou,$^{3}$ P.Lazarus,$^{3}$ S.Osłowski,$^{7,3}$ D.Perrodin,$^{8}$ G.Shaifullah,$^{3,7}$ C.Tiburzi,$^{3,7}$ and J.P.W.Verbiest$^{7,3}$\ $^{1}$Jodrell Bank Centre for Astrophysics, School of Physics and Astronomy, The University of Manchester, Manchester M13 9PL, UK\ $^{2}$ASTRON, the Netherlands Institute for Radio Astronomy, Postbus 2, 7990 AA, Dwingeloo, The Netherlands\ $^{3}$Max Planck Institut f[ü]{}r Radioastronomie, Auf dem H[ü]{}gel 69, 53121, Bonn, Germany\ $^{4}$Institute of Astronomy / Battcock Centre for Astrophysics, University of Cambridge, Madingley Road, Cambridge CB3 0HA, United Kingdom\ $^{5}$Laboratoire de Physique et Chimie de l’Environnement et de l’Espace LPC2E CNRS-Universit[é]{} d’Orl[é]{}ans, F-45071 Orl[é]{}ans, France\ $^{6}$Station de radioastronomie de Nançay, Observatoire de Paris, CNRS/INSU F-18330 Nançay, France\ $^{7}$Fakult[ä]{}t f[ü]{}r Physik, Universit[ä]{}t Bielefeld, Postfach 100131, 33501 Bielefeld, Germany\ $^{8}$INAF - Osservatorio Astronomico di Cagliari, via della Scienza 5, I-09047 Selargius (CA), Italy\ bibliography: - 'mjs+16v2.bib' date: 'Accepted XXX. Received YYY; in original form ZZZ' title: 'A glitch in the millisecond pulsar J0613$-$0200' --- \[firstpage\] pulsars:general – pulsars:individual (PSRJ0613$-$0200) – stars:neutron – stars:rotation Introduction ============ Pulsars spin with remarkable stability, allowing pulse times of arrival (TOAs) to be accurately predicted with precisions, in the best cases, as high as fractions of microseconds over timescales of decades. Millisecond pulsars (MSPs) in particular have such highly stable rotation that they are used as extremely precise clocks in timing experiments, and the most stable are used as probes of space-time in pulsar timing array (PTA) experiments. The ultimate goal is a direct gravitational wave (GW) detection in the nano-Hertz regime (recent stochastic background limits are given in [@ltm+15], [@abb+15], [@srl+15]). 
Since the influence of GWs on pulse TOAs is extremely small, the accuracy of the timing model describing the spin evolution of a pulsar needs to be very high in order to make a GW detection. This also requires the precise measurement and removal of other influences on the TOAs, such as those caused by changes in the interstellar medium (ISM) or irregularities in the pulsar spin evolution. PSRJ0613$-$0200 was discovered by [@lnl+95] and is a MSP which is included in all currently-ongoing PTA experiments: the European Pulsar Timing Array (EPTA; [@dcl+16]), the North American Nanohertz Observatory for Gravitational Waves (NANOGrav; [@abb+15]), the Parkes Pulsar Timing Array (PPTA; [@rhc+16]), and the International Pulsar Timing Array (IPTA; [@vlh+16]). It has been timed to a precision of $1.2\,\mu$s over a time span of 13.7 years using the combined IPTA data set ([@vlh+16]). Although the spin evolution of pulsars is generally very stable and predictable, a small fraction of pulsars exhibit sudden changes in spin frequency and/or frequency derivative, known as timing glitches. Timing glitches are usually associated with non-recycled and low-characteristic-age pulsars, notably the Crab pulsar (PSRB0531+21) and the Vela pulsar (PSRB0833$-$45), which have been observed to glitch 25 and 19 times respectively in 45 years of observations ([@els+11]). Conversely, glitches in MSPs are exceedingly rare, with only one small glitch ever observed in the MSP B1821$-$24A [@cb04], which is near the core of the globular cluster M28, and which displays significant timing noise (Figure \[fig:ppdot\]). The mechanism which causes timing glitches is not fully understood, but is assumed to be linked to a sudden transfer of angular momentum from superfluid neutrons to the solid crust. The superfluid is thought to rotate independently from the rest of the neutron star and contains vortices. An ensemble of vortices becomes unpinned and a coupling to the solid component of the neutron star crust occurs, abruptly transferring angular momentum to it (a review of glitch models can be found in [@hm+15]). The transfer of angular momentum generally increases the rotational frequency of the pulsar, which is occasionally observed to relax back to the pre-glitch value, although [@akn+13] have reported evidence for ‘anti-glitches’, a sudden *decrease* in the spin frequency in X-ray observations of the magnetar 1E2259+586. The change in spin frequency and slowdown rate caused by a glitch is reflected in the deviation of the observed TOAs from the arrival times predicted by a pre-glitch timing model. The glitch observed in PSRB1821$-$24A was notable as it was the first glitch to be observed in a MSP, and had a size ${\Delta \nu/\nu=8(1) \times 10^{-12}}$, two orders of magnitude smaller than the next smallest glitch (at the time). The rarity of glitches in MSPs and the small size of the PSRB1821$-$24A glitch has led to speculation that MSPs have different structures to the rest of the population, or a different physical process could be responsible. Some proposed explanations are that PSRB1821$-$24A is a strange star, which experienced a crust-cracking event that would alter the angular momentum and cause the same effect on timing residuals as a small glitch [@msb+06], or that the small size of the glitch is evidence of influence on the pulsar-term by a GW burst with memory, which could be indistinguishable from a post-glitch frequency step [@cj12]. 
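For reference, the standard (non-recovering) glitch timing model, given here only as background rather than as a result of this work, adds a step in $\nu$ and $\dot{\nu}$ to the pulse phase, $$\phi(t) = \phi_{0} + \nu_{0}(t-t_{0}) + \frac{1}{2}\dot{\nu}_{0}(t-t_{0})^{2} + \left[\Delta\nu\,(t-t_{\mathrm{g}}) + \frac{1}{2}\Delta\dot{\nu}\,(t-t_{\mathrm{g}})^{2}\right]H(t-t_{\mathrm{g}}),$$ where $t_{\mathrm{g}}$ is the glitch epoch, $H$ is the Heaviside step function, and any exponentially decaying recovery term has been omitted. If the bracketed term is missing from the timing model, the predicted and observed phases diverge approximately linearly, giving a timing offset of order $\Delta\nu\,(t-t_{\mathrm{g}})/\nu$. As a rough illustration with the values reported in this paper, $\Delta\nu \approx 0.82\,$nHz accumulated over $\sim 5$ years ($\sim 1.6\times10^{8}\,$s) corresponds to $\sim 0.13$ rotations, or $\sim 0.4\,$ms for the 3.06-ms period of PSRJ0613$-$0200, which is far larger than the typical TOA uncertainties and explains why even such a small glitch is readily detectable.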
Timing noise is a phenomenon where the observed arrival times of pulses deviate systematically from the timing solution through a process similar to a random walk in the spin parameters (e.g. [@sc10]). This manifests as structure in the timing residuals. Timing noise is thought to arise through unmodelled small-scale instabilities in the rotation of the pulsar. It has been shown that in slow pulsars, timing noise can be modelled as: a series of microglitches [@js06], frequency derivative variations caused by magnetospheric switching [@lhk+10], or as post-glitch recovery stages [@hlk10]. The structure for this paper is as follows: we describe our observations and data in section 2, present our findings in section 3, discuss the implications of our results in section 4, and make closing conclusions in section 5. Observations ============ The data set is comprised of TOAs from a variety of pulsar backends used with the Lovell Telescope at Jodrell Bank in the UK, the Nançay Radio Telescope in France, the Effelsberg Radio Telescope in Germany, and the Westerbork Synthesis Radio Telescope in the Netherlands (Table\[tab:tels\]). We have used the EPTA Data Release 1.0 (DR1; [@dcl+16]) covering the time span MJD50931-56795 and combined this with TOAs recorded using the Lovell telescope’s analogue filterbank (AFB) backend for the epoch MJD49030-55333, as well as some pre-DR1 Effelsberg TOAs using the Effelsberg-Berkeley pulsar processor (EBPP) backend and the Effelsberg Pulsar Observing System (EPOS). The TOAs and ephemeris for this pulsar will be made available on the EPTA webpage [^2]. The AFB and EPOS data were aligned with the DR1 data set using the default procedure of fitting constant phase offsets between the data sets at each observing frequency, as described in [@dcl+16]. For a small subset of the AFB data at 1400MHz, known hardware configuration changes were corrected for by adding a phase offset to the corresponding TOAs. Jodrell Bank analogue filterbank data ------------------------------------- The AFB backend was used for pulsar observations with the Lovell telescope during the years 1982-2010. TOAs were derived from observations at centre frequencies of 400MHz, 600MHz, and 1400MHz, and a time resolution of $250\,\mu$s (see [@hlk+04]). Observations were hardware-dedispersed and average profiles were produced via pulse folding. Each TOA was generated through cross-correlation with an observing-frequency-specific template, and systematic offsets between different instruments and observing configurations were corrected for by fitting for constant offsets. The AFB data used a separate clock file for timing analysis (effectively treating the AFB as using a separate observatory to the digital filterbank used in more recent Lovell Telescope observations), as the AFB data had clock corrections already applied to the profiles, effectively absorbing the correction into the TOAs. Effelsberg Pulsar Observing System ---------------------------------- The EPOS backend ([@j96]) recorded observations using a 1390MHz centre frequency, with a 40MHz band split into sixty 666kHz channels, which were digitally delayed (incoherently dedispersed) to correct for the DM of the pulsar. Observations were recorded at a time resolution of 60$\mu$s, and folded using early Jodrell Bank timing models. The observations were timestamped using a local hydrogen maser which was corrected to GPS. For more information on the EPOS system, see e.g. [@kxl+98]. [c c c c c c]{} Telescope & Backend & Centre Freq. 
Table \[tab:tels\]:

Telescope & Backend & Centre Freq. (MHz) & $N_{\text{TOAs}}$ & MJD Range & RMS ($\mu$s)\\
Effelsberg & EPOS & 1390 & 239 & 49768-51894 & 86.6\\
 & EBPP & 1360 & 46 & 54483-56486 & 1.5\\
 & EBPP & 1410 & 253 & 50362-54924 & 1.7\\
 & EBPP & 2638 & 72 & 53952-56486 & 6.1\\
Lovell & AFB & 400 & 132 & 49030-50696 & 47.7\\
 & AFB & 600 & 142 & 49034-54632 & 22.3\\
 & AFB & 1400 & 586 & 49091-55333 & 18.3\\
 & DFB & 1400 & 24 & 54847-54987 & 5.4\\
 & DFB & 1520 & 191 & 55054-56760 & 2.0\\
NRT & BON & 1400 & 334 & 53373-55850 & 1.1\\
 & BON & 1600 & 84 & 54836-56795 & 1.3\\
 & BON & 2000 & 51 & 54063-56224 & 2.3\\
WSRT & PUMA1 & 328 & 34 & 51770-55375 & 10.5\\
 & PUMA1 & 382 & 27 & 51770-55375 & 8.0\\
 & PUMA1 & 1380 & 99 & 51389-55375 & 3.0\\
Total & - & 328-2638 & 2314 & 49030-56795 & 2.7\\

Table \[tab:glitch\]:

Parameter & Frequentist Value & Bayesian Model (inc. sys. noise) & Bayesian Model (no sys. noise)\\
Frequency epoch (MJD) & 55000 & - & -\\
Frequency (Hz) & 326.6005620227(2) & - & -\\
Frequency derivative (Hz/s) & $-1.0228(4) \times 10^{-15}$ & - & -\\
Glitch epoch (MJD) & 50888(30) & 50874(25) & 50922(14)\\
Glitch frequency step (Hz) & $8.2(3) \times 10^{-10}$ & $8.7(6) \times 10^{-10}$ & $7.6(3) \times 10^{-10}$\\
Glitch frequency derivative step (Hz/s) & $-1.6(39) \times 10^{-19}$ & $+1.1(65) \times 10^{-19}$ & $-1.2(4) \times 10^{-18}$\\

Results
=======

![image](residuals_plot)

![image](frequency_evolution_plot)

Combining the DR1 and earlier AFB TOAs revealed a sharp drift away from the DR1 timing solution, which was derived over the epoch MJD50931 to 56795 (Figure \[fig:toas\]). The TOAs of pulsars timed to similar or better precision over the same time span did not show any similar drift. This rules out the possibility of an instrumental effect or an error in the clock corrections as the cause of the drift seen in the early data for PSRJ0613$-$0200. TOAs recorded using the Effelsberg EBPP backend in the epoch MJD50362-50460 (i.e. preceding the start of DR1) and those from the EPOS backend were found to follow the same trend away from the predicted arrival time as the Lovell Telescope AFB data, excluding instrumental effects as the cause. ISM effects such as a steadily changing dispersion measure can also be ruled out, as the effect is present and identical in data from three widely-separated observing frequencies, without showing any frequency-dependent trend which would be expected if the cause was ISM related.

The observed quasi-linear trend in the residuals is strong evidence of a timing glitch, and can be removed completely by fitting for glitch parameters not previously included in the timing solution, using a glitch epoch MJD50888 ($16^{\text{th}}$ March 1998), which allows the pre-glitch and post-glitch frequency and frequency derivative to be derived (Table \[tab:glitch\]). Fitting for the spin parameters before and after the measured glitch epoch, we measure the fractional frequency step to be ${\Delta \nu/\nu=2.5(1) \times 10^{-12}}$, and the fractional frequency derivative step to be ${\Delta \dot{\nu}/\dot{\nu}=1.6(39) \times 10^{-4}}$, where here and elsewhere we use a $1\sigma$ uncertainty. The change in spin frequency over time was investigated by using a stride fit through our full data set, with a 500-day fitting window and a step size of 100 days; this was necessary for deriving precise values for the spin frequency from the relatively large uncertainties in the AFB and EPOS TOAs, while still allowing the sudden change in $\nu$ to be clearly identified (Figure \[fig:f0\]).
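For illustration, the stride-fit approach described above can be sketched as follows. This is a schematic Python illustration rather than the pipeline actually used: the synthetic arrays (`mjd`, `phase`), noise level, and window bookkeeping are all assumptions, and the local frequency offset is simply taken as the slope of the phase residuals in each window.

```python
import numpy as np

# Schematic stride fit: the local spin-frequency offset is the slope of the
# timing (phase) residuals within each window.  Synthetic data with a
# frequency step of 8.2e-10 Hz at MJD 50888 stand in for the real residuals.
rng = np.random.default_rng(1)
mjd = np.sort(rng.uniform(49030.0, 56795.0, 1000))
t_glitch, dnu_step = 50888.0, 8.2e-10                              # MJD, Hz
phase = dnu_step * np.clip(mjd - t_glitch, 0.0, None) * 86400.0    # turns
phase += rng.normal(0.0, 2e-3, mjd.size)                           # white noise (turns)

window, step = 500.0, 100.0                                        # days, as in the text
start = mjd.min()
while start + window <= mjd.max():
    sel = (mjd >= start) & (mjd < start + window)
    if sel.sum() > 10:
        # slope of phase (turns) versus time (s) = local frequency offset (Hz)
        t_sec = (mjd[sel] - mjd[sel].mean()) * 86400.0
        slope, _ = np.polyfit(t_sec, phase[sel], 1)
        print(f"MJD {start + window / 2:.0f}: delta_nu = {slope:+.2e} Hz")
    start += step
```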
The glitch epoch was estimated by fitting for all model parameters using <span style="font-variant:small-caps;">Tempo2</span> while varying the glitch epoch, and selecting the epoch corresponding to the minimum $\chi^{2}$. The uncertainty in glitch epoch was taken as the region over which varying the epoch results in ${\Delta \chi^{2}=1}$. This is the smallest glitch ever recorded, with the next smallest also occurring in a MSP, with fractional frequency step ${\Delta \nu/\nu=8(1) \times 10^{-12}}$ ([@cb04], [@els+11]) i.e. several times larger than the glitch we report. The possibility of a magnetospherically-induced change in pulse shape related to a change in frequency derivative [@lhk+10] was considered as an alternative to a change in spin frequency, but no significant change of the pulse profile associated with the glitch was observed. However, it should be noted that the relatively low time resolution of the AFB and EPOS observations is insufficient for small pulse-shape changes to be detected. This effect was also tested for by fitting for separate frequency derivatives only (i.e. no change in spin frequency) for the pre-glitch and post-glitch residuals, using a range of epochs for the change in frequency derivative while keeping the rest of the parameters constant. The glitch signature was not effectively removed by this approach, with significant structure introduced to the timing residuals. To remove the glitch signature, a model that includes a step in spin frequency is required, therefore we rule out magnetospheric effects as an explanation for this event. Following the EPTA timing and noise analysis in [@dcl+16] and [@cll+16], we use a Bayesian approach to confirm the findings of our frequentist analysis. We estimate the properties of the glitch simultaneously with different noise models using the Bayesian pulsar timing package <span style="font-variant:small-caps;">TempoNest</span> [@lah+14]. These noise models include parameters to modify the properties of the white noise, as well as time-correlated stochastic signals that describe DM variations, timing noise, and system-dependent noise. For this final term, we use the approach described in [@lsc+16]. For all noise models, we marginalise analytically over the full timing model while simultaneously searching for a glitch epoch, and changes in the spin frequency and frequency derivative at that epoch. We use priors that are uniform in the glitch parameters, where the glitch epoch is the full MJD range of the data set, and the glitch frequency and frequency derivative priors are uniform in amplitude. All Bayesian evidence comparisons thus do not assume *a priori* that a glitch is present in the data set. We confirm the presence of a glitch, and find a model that includes both DM variations and additional system noise in the AFB 1400MHz data set. We estimate a glitch epoch MJD50874(25) from the system noise model, and MJD50922(14) from the model without system noise. Using the system noise model, we estimate a spin-frequency step of 0.87(6)nHz and a spin-down rate step of ${1.1(65) \times 10^{-19}\,\text{Hz} \ \text{s}^{-1}}$, and using the model with no system noise, we measure these quantities as 0.76(3)nHz and $-1.2(4) \times 10^{-18}\,\text{Hz} \ \text{s}^{-1}$ respectively. In Figure \[fig:GlitchPost\], we plot the mean signal realisation with 1$\sigma$ confidence intervals for the DM variations (top panel) and system noise (bottom panel) models. 
We find no evidence for a timing noise term that is coherent across all observing systems and is independent of the observing frequency (‘spin noise’ in [@lsc+16]). In principle, as we have only added additional observations to this data set compared to the DR1, we would expect that the sensitivity to timing noise would either be the same or improve relative to that analysis. However, the presence of significant system noise in the early AFB 1400MHz data implies that the TOA estimates are affected by some time-correlated process that is potentially not well modelled by a stationary power-law noise process. If this early data were poorly modelled, then we would expect that including it in the data set would decrease our sensitivity to timing noise compared to DR1, as observed.

We test the stationarity of this system noise term by including two additional parameters that define the start time and duration of the noise process. We find that the evidence does not increase with the addition of these parameters, with the start time consistent with the beginning of the AFB 1400MHz data set, and the duration consistent with the full length of the AFB 1400MHz data, implying that this system noise is not the result of mismodelling the glitch or a temporary increase in the noise level of the data set. However, there is not a sufficient overlap of data to distinguish the system noise term in this data set, as explained in [@lsc+16].

In Figure \[fig:Signals\], we show the one- and two-dimensional posterior probability distributions from our analysis for two different models. The black lines are from the optimal model that includes system noise, DM variations, and white noise parameters. The grey lines are from an analysis that includes DM variations and white noise parameters only in the stochastic model. We find the increase in the log evidence for the model that includes system noise is 24.7, which definitively supports its inclusion in the model. We confirm the detection of the glitch and find that the parameter estimates for the glitch model change significantly when including, or not, this additional system-dependent term. In particular, the uncertainties in the change in frequency and spin-down rate increase by a factor of 1.8, and the mean of the change in spin-down rate is consistent with zero at the $\sim 0.2\sigma$ level compared to the greater than $3\sigma$ detection in the model without system noise, but the results are consistent with the results of the frequentist approach presented in Table \[tab:glitch\]. We stress that with these results we do not claim that the model for system noise used in the analysis is the optimal choice. However, it is significantly preferred by the data compared to a model that does not include it at all.

Discussion
==========

Pulsar Timing Array Relevance
-----------------------------

The detection prospects for GWs using a PTA rely on the timing of the pulsars included in the array being extremely stable. Therefore there may be reason for caution when a glitch is found in one of the most stable MSPs included in current PTA projects. However, our results show that the presence of a glitch in PSRJ0613$-$0200 does not affect timing stability for PTA analysis, as TOAs used by the EPTA and IPTA for this pulsar are all derived using post-glitch observations. The occurrence of a glitch before the PTA epoch has not limited our ability to precisely time this pulsar.
This is shown by statistical analyses of pulsar timing noise in PTAs, most recently by [@cll+16], in which red noise is only semi-defined for this pulsar. As the glitch is small and the red noise of the pulsar is not well-defined, it is likely that potential unmodelled glitches outside the timing baseline for other PTA pulsars have no significant effect on timing array sensitivity. Including this work, only two glitches have been reported in MSPs, and one in the recycled pulsar B1913+16 [@wnt10]. Although small, the PSRJ0613$-$0200 glitch was easy to detect with a data set covering a long baseline. We can therefore be confident that no other glitches with similar sizes have been missed in the spin-evolution of pulsars observed at Jodrell Bank Observatory (JBO). In this case, the effect of the glitch was easily removed without loss of timing precision, and so a glitch occurring in another PTA pulsar in the future may not be cause to remove the pulsar from future analysis. However, due to the unknown complexities of glitch models needed for MSPs, this is not completely certain. For future glitches in PTA pulsars, only the pre-glitch data would be usable until sufficient time had passed for the post-glitch spin parameters to be measured, or for any post-glitch pulse profile variation ([@wje11], [@ksj13]) to be recognised in the case of a magnetospheric variation.

![image](posteriors)

![image](DMSignal)

![image](SystemSignal)

MSP Glitch Rates
----------------

Following the discovery of the first MSP glitch, [@cb04] calculated an event rate of $\sim 1$ glitch per 500 pulsar-years of combined observations (or $\sim 0.2 \ \text{century}^{-1}$). A total of 105 MSPs (period $P\,<\,10\,$ms) are observed at JBO, with a combined total of 1118 pulsar-years observing time. This allows us to estimate an event rate of $\sim 1$ glitch every 559 pulsar-years (or $\sim 0.18 \ \text{century}^{-1}$), for glitches in MSPs of a size ${\Delta \nu/\nu \gtrsim 2 \times 10^{-12}}$, a rate consistent with Cognard & Backer. We can extend this calculation to fully-recycled pulsars by including PTA pulsars with $P\,>\,10\,$ms and some double neutron star binaries (where we choose an upper limit of $P \sim 59$ms). We use this definition due to the difficulties in precisely defining recycled pulsars, but it allows us to estimate the order of magnitude of the glitch rate for this population. Following this, the glitch rate for recycled pulsars observed at JBO is $\sim 0.22 \ \text{century}^{-1}$. We note that this rate is much lower than that for ‘normal’ pulsars of $\sim 1$ glitch every 78 pulsar-years of observation (or $\sim 1.3 \ \text{century}^{-1}$).

At JBO, 42 of the 49 IPTA pulsars ([@vlh+16]) have been observed for a combined total of 793 pulsar-years, while the other 7 have 93.5 pulsar-years of observations in the IPTA data release. This gives a combined total of 886 pulsar-years, in which time only two glitches have been observed (a rate of $\sim 0.23 \ \text{century}^{-1}$). If we assume this is a good approximation of the glitch rate for PTA pulsars, corresponding to one glitch across the array every $r$ years on average, then the probability of a glitch occurring in $t$ years is ${P=1-(1-r^{-1})^{t}}$. For observations of the 49 pulsars in the IPTA, this gives a probability of $\sim 70\%$ that another glitch will be observed in a PTA in the next 10 years, and $\sim 95\%$ that a glitch will be observed in the next 27 years.
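The quoted probabilities follow from the expression above. A minimal sketch, under the assumption (implicit in the text) that the two glitches observed in 886 pulsar-years define a mean waiting time of $r$ years between glitches for the 49-pulsar array:

```python
# Glitch rate and detection probability for the IPTA pulsar set.
# Assumption: 2 glitches in 886 pulsar-years, shared among 49 pulsars, so the
# array as a whole experiences roughly one glitch every r years.
n_pulsars, pulsar_years, n_glitches = 49, 886.0, 2

rate_per_century = 100.0 * n_glitches / pulsar_years       # ~0.23 century^-1 per pulsar
r = pulsar_years / n_glitches / n_pulsars                  # ~9 yr between glitches, array-wide

def p_glitch(t_years, r_years):
    """Probability of at least one glitch in the array within t_years."""
    return 1.0 - (1.0 - 1.0 / r_years) ** t_years

print(f"rate: {rate_per_century:.2f} per century per pulsar")
print(f"P(10 yr) = {p_glitch(10, r):.2f}")   # ~0.7
print(f"P(27 yr) = {p_glitch(27, r):.2f}")   # ~0.95
```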
If we exclude PSRB1821$-$24A from the analysis, due to its unusual timing noise and acceleration within the host globular cluster, the return rate is $\sim 0.12$ century$^{-1}$, with a $\sim 50\%$ probability of a glitch in the next 10 years, and $\sim 95\%$ in the next $\sim 50$ years. As discussed earlier, the clear detection of such a small glitch in relatively low-precision data suggests that no such glitches have avoided detection in similarly precisely-timed MSPs. It should be noted that the event rates calculated here assume that all MSPs and recycled pulsars are equally likely to experience a glitch, and that the probability remains the same following a glitch. This is probably not true, as the internal structure of the neutron stars is an important factor in the true glitch rate. The calculated glitch rates are therefore only estimates, but allow us to consider how likely it is that a glitch will occur in the $10+$ year data sets required for a GW detection. There are also biases in our calculations which would need to be addressed for the true rates to be obtained. For example, we are biased against glitches occurring very early or late in a data set, due to the difficulty in recognising their effect.

Neutron Star Structure
----------------------

The discovery of a glitch in PSRB1821$-$24A led to speculation on the nature of neutron star structure, due to the small size of the glitch, the high rotation frequency of the pulsar, and the relatively low magnetic field strength (${2.25 \times 10^{9}}$G) compared to other glitching pulsars ($\gtrsim 10^{11}$G). [@mkd+09] interpreted this as evidence for PSRB1821$-$24A being a strange star, due to the magnitude of the observed glitch being consistent with the modelled values arising from a cracking of the strange star crust. By comparison, we derive the inferred surface magnetic field strength of PSRJ0613$-$0200 from the period and spin-down rate to be ${1.7 \times 10^{8}}$G, an order of magnitude lower than that of PSRB1821$-$24A. [@mkd+09] also noted that the PSRB1821$-$24A glitch energy budget $\Delta E \sim 10^{40}$erg, given by ${\Delta E=\delta(I \nu^{2}) \sim I \nu^{2}(\frac{\delta \nu}{\nu}) \sim E_{\text{rot}}(\frac{\delta \nu}{\nu})}$, stood out from the rest of the population, which follow a line on a $\log \Delta E$ vs. $\log \Delta \nu/\nu$ plot, when assuming all neutron stars have the same moment of inertia ${I=10^{45}\,\text{g}\,\text{cm}^{2}}$. This implies that the large amount of energy required for such a change in angular momentum may not be readily available to millisecond pulsars. The PSRJ0613$-$0200 energy budget ${\Delta E \sim 2 \times 10^{39}}$erg also does not follow the same distribution. It is therefore apparent that the combination of small glitch sizes, greater characteristic ages (30Myr and 5Gyr for PSRB1821$-$24A and PSRJ0613$-$0200 respectively), lower magnetic field strengths, and lower energy budgets implies that while MSPs are most likely neutron stars (e.g. recent MSP mass measurements in [@ato+16]), they could potentially have a different interior structure to the rest of the population, which may cause the glitch mechanism or properties to be different. The uniqueness of the three glitching recycled pulsars can be seen in the $P$-$\dot{P}$ diagram (Figure \[fig:ppdot\]).
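For reference, the inferred field strength, characteristic age, and glitch energy budget of PSRJ0613$-$0200 can be reproduced from the tabulated spin parameters. The sketch below uses the standard magnetic-dipole expressions and $E_{\text{rot}}=\frac{1}{2}I(2\pi\nu)^{2}$; the resulting $\Delta E$ agrees with the value quoted above only to within a factor of a few, since the exact prefactor depends on the convention adopted.

```python
import numpy as np

# Order-of-magnitude checks for PSR J0613-0200 (spin values from Table [tab:glitch]).
nu = 326.6005620227            # Hz
nudot = -1.0228e-15            # Hz/s
dnu_over_nu = 2.5e-12
I = 1e45                       # g cm^2, canonical moment of inertia

P = 1.0 / nu                   # s
Pdot = -nudot / nu**2

B = 3.2e19 * np.sqrt(P * Pdot)                    # G   (~1.7e8, as quoted)
tau_c = P / (2.0 * Pdot) / (3.156e7 * 1e9)        # Gyr (~5, as quoted)
E_rot = 0.5 * I * (2.0 * np.pi * nu) ** 2         # erg
dE = E_rot * dnu_over_nu                          # erg (order of magnitude only)

print(f"B     ~ {B:.1e} G")
print(f"tau_c ~ {tau_c:.1f} Gyr")
print(f"dE    ~ {dE:.1e} erg")
```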
Gravitational Wave Memory
-------------------------

One of the proposed causes of a GW signal in PTA data is a burst with memory (BWM), caused by the merger of a supermassive black hole binary (SMBHB), which will leave a lasting change (offset) in space-time [@bt87]. The main signature of such a burst in pulsar TOAs is a step in frequency, without a step in frequency derivative. When a BWM passes over the Earth, a step will be seen at the same time in all pulsars that are observed (Earth term). However, since the signal travels at the speed of light, when a BWM passes over a pulsar, it will not be seen in other pulsars in the PTA at the same time, due to the large light travel time between pulsars. If a BWM affects only the pulsar term, this could be difficult to distinguish from a glitch, as only a single pulsar is affected. There will also be no exponential recovery, as seen for some glitches [@cj12]. This would be difficult to identify in PSRJ0613$-$0200, as [@lps95] noted that the percentage glitch recovery decreases with the characteristic age of the pulsar, making it effectively zero for a 5Gyr characteristic age.

[@mcc14] compare the BWM effect with the size of the glitch in PSRB1821$-$24A. They conclude that if the frequency change in that pulsar had been caused by a BWM instead of a glitch, it would have required an impossible scenario: a merger of a $\sim 10^{10} M_{\odot}$ edge-on SMBHB only 10Mpc from the Milky Way. Such a system is excluded by single-source GW limits, e.g. [@bps+16], [@dec+15], [@yss+14]. We use the same argument to rule out a BWM scenario for the signature in our data on PSRJ0613$-$0200. Although the change in spin-down rate is consistent with zero, the glitch size is too large to make a BWM a realistic scenario for our measurements.

Conclusions
===========

We have measured a spin frequency step in PSRJ0613$-$0200 that we attribute to a small glitch, making this only the second detection of a glitch in an MSP, and the smallest glitch size recorded to date. We rule out other possibilities, such as magnetospherically-induced variations in rotation and pulse shape, and a gravitational wave BWM, due to the absence of effects associated with these causes. We interpret the difference between glitches in MSPs and the general pulsar population as potential indications of differences in MSP interior structure, and find that the glitch rate for MSPs is significantly different to that of the general population. We demonstrate that glitch events are rare in PTA pulsars. Although their effect on the TOAs is significant, they can be accounted for without any further consequences for GW experiments when sufficient post-glitch timing data are available to correct for the glitch signature.

Acknowledgements {#acknowledgements .unnumbered}
================

We thank L. Levin for useful discussions. The authors acknowledge the support of colleagues in the European Pulsar Timing Array (EPTA). The EPTA is a collaboration between European institutes, namely ASTRON (NL), INAF/Osservatorio di Cagliari (IT), Max Planck Institut für Radioastronomie (GER), Nançay/Paris Observatory (FRA), University of Leiden (NL) and the University of Manchester (UK), with the aim of providing high-precision pulsar timing to work towards the direct detection of low-frequency gravitational waves. An Advanced Grant of the European Research Council to implement the Large European Array for Pulsars (LEAP) also provides funding.
Part of this work is based on observations with the 100-m telescope of the Max-Planck-Institut für Radioastronomie (MPIfR) at Effelsberg. Access to the Lovell Telescope is supported through an STFC consolidated grant. The Nançay radio telescope is part of the Paris Observatory, associated with the Centre National de la Recherche Scientifique (CNRS), and partially supported by the Région Centre in France. The Westerbork Synthesis Radio Telescope is operated by the Netherlands Institute for Radio Astronomy (ASTRON) with support from The Netherlands Foundation for Scientific Research (NWO). S.O. is supported by the Alexander von Humboldt Foundation. P.L. gratefully acknowledges financial support by the European Research Council for the ERC Starting Grant BEACON under contract no. 279702. This work was supported by the UK Science and Technology Facilities Council (STFC), under grant number ST/L000768/1.

**$P$-$\dot{P}$ Diagram**
=========================

![image](ppdot)

[^1]: E-mail: [email protected]

[^2]: <http://www.epta.eu.org/aom/>
--- author: - 'Humire, Pedro K., Nagar, Neil M., Finlez, Carolina, Firpo, Verónica, Slater, Roy, Lena, Davide, Soto, Pamela R., Muñoz, Dania, Riffel, Rogemar A., Schmitt, H.R., Kraemer, S.B., Schnorr-Müller, Allan, Fischer, T.C., Robinson, Andrew, Storchi-Bergmann, Thaisa, Crenshaw, Mike' - 'Elvis, Martin S.' bibliography: - 'aanda.bib' date: 'Received xxx, 2017; accepted xxx, 2017' title: 'An outflow in the Seyfert ESO 362-G18 revealed by Gemini-GMOS/IFU observations' --- Introduction ============ It is now widely accepted that the intense radiation emitted by an active galactic nucleus (AGN) is due to accretion onto a supermassive black hole (SMBH) [@Lynden-Bell1969; @Begelman1984] in the mass range $\sim$ 10$^{6}$-10$^{9}\,{\rm M_\odot}$. However, the mechanisms responsible for transferring the mass from galactic (kpc) scales down to nuclear scales (sub-parsec) to feed the SMBH are still under debate. This has been the subject of many theoretical and observational studies [@Shlosman1990; @Maciejewski2004a; @Maciejewski2004b; @Knapen2005; @Emsellem2006; @SchnorrMuller2014a; @SchnorrMuller2014b; @SchnorrMuller2017]. Theoretical studies and simulations have shown that non-axisymmetric potentials efficiently promote gas inflow towards the inner regions of galaxies [@Englmaier2004]. Close encounters and galactic mergers have been identified as a mechanism capable of driving gas from tens of kiloparsecs down to a few kiloparsecs [@Hernquist1989; @DiMatteo2005]. Major mergers are apparently a requirement for triggering the most luminous AGNs [@Treister2012]. Simulations by @Hopkins2010 suggest that in gas-rich systems, at scales of 10 to 100 pc, inflows are achieved through a system of gravitational instabilities over a wide range of morphologies such as nuclear spirals, bars, rings, barred rings, clumpy disks, and streams. Indeed, several observations support the hypothesis that large-scale bars channel the gas to the centres of galaxies [@Crenshaw2003a]. Recent studies have concluded that there is an excess of bars among Seyfert galaxies as compared to non-active galaxies of about 75% versus 57%, respectively [@Knapen2000; @Laine2002; @Laurikainen2004]. Further, structures such as disks or small-scale nuclear bars and the associated spiral arms are often found in the inner kiloparsec of active galaxies [@Erwin1999; @Pogge2002; @Laine2003; @Combes2014]. In general, the most common nuclear structures are dusty spirals, estimated to reside in more than half of active and inactive galaxies [71% and 61%, respectively; @Martini2003]. @Simoes2007 reported a marked difference in the dust and gas content of early-type active and non-active galaxies: the former always have dusty structures and only 25% of the latter have such structures. Thus, a reservoir of gas and dust is required for the nuclear activity suggesting that the dusty structures are tracers of feeding channels to the AGN. This fact, along with the enhanced frequency of dusty spirals, supports the hypothesis that nuclear spirals are a mechanism for fueling the SMBH, transporting the gas from kiloparsec scales down to a few tens of parsecs of the nucleus. Accretion onto the SMBH requires the removal of angular momentum, which can be achieved not only through gravitational torques, but also via outflows or winds [@Bridle1984]. 
The most powerful of these outflows are produced by the interaction between the ionized gas and magnetic field [@BisnovatyiKogan2001] reaching velocities of up to 1000 km s$^{-1}$ [@Rupke2011; @Greene2012] and outflow rates several times larger than host galaxy star formation rates (hereafter SFR) [@Sturm2011]. Massive AGN-driven outflows have been observed in many AGN, from Seyfert galaxies to quasars at low [@Morganti2007] and high redshifts [@Nesvadba2011], and could dramatically affect the evolution of galaxies due to the large amounts of energy they feed back into the interstellar medium [@DiMatteo2005]. At the less powerful end, studies of nearby Seyferts show that compact outflows ($\sim$100 pc in extent) with velocities of $\sim$100 km s$^{-1}$ and mass outflow rates of a few solar masses per year are common even in low-luminosity AGNs [e.g. @MullerSanchez2011; @Davies2014]. At low outflow velocities, it can be difficult to identify if AGNs or host galaxy starbursts are responsible for the outflow: a cut-off of 500 km s$^{-1}$ is often used [@Fabian2012; @Cicone2014] to differentiate the two. Identifying low-velocity outflows requires relatively high spectral resolutions and two-dimensional spectroscopy (integral-field spectrographs) to disentangle the different velocity components present: from the galactic disk and from outflow(s) and/or inflow(s) [@Storchi-Bergmann2010; @SchnorrMuller2014a]. Moreover, in some special cases these outflows are detected more frequently as redshifted, rather than blueshifted, winds since the light from the ionized regions reaches us preferentially from the receding side of the outflow which, for our LOS, is more illuminated by the AGN [e.g. @Lena2015], and the kinematics can often be modelled as a combination of a biconical outflow and a rotating disk coincident with the molecular gas [@MullerSanchez2011; @Fischer2013]. A better understanding of low-velocity outflows in nearby Seyferts is important to understand the kinematics of Seyfert galaxies at higher redshift, which could share the same model. Outflows could be very important for the evolution of galaxies because they can be the most efficient way for the interaction between the AGN and its host galaxy, a process called AGN feedback, affecting the interstellar medium and star formation. Several works explored whether this process triggers (positive feedback) or extinguishes (negative feedback) the host galaxy star formation [@Fabian2012; @Karouzos2014]. Empirical scaling relations between the masses of the SMBH and the host-galaxy bulge [e.g. @Gultekin2009], and between the AGN luminosity and the molecular outflow velocity [@Sturm2011] or dynamical mass [@Lin2016], have motivated a more intensive study of outflows. In this work, we present results obtained from integral field spectroscopy observations of the nuclear region of ESO 362-G18 (a.k.a. MCG 05-13-17), a nearby galaxy of morphological type Sa [@Malkan1998 hereafter MGT] or S0/a [RC3; @deVaucouleurs1991] harbouring a Seyfert 1.5 nucleus [@Bennert2006]. ESO 362-G18 has a redshift of 0.012445 and a systemic velocity of 3731 km s$^{-1}$ [@Paturel2003] or 3707 km s$^{-1}$ [@Makarov2014]; we consider the former estimate since it represents our data very well. Assuming H$_{0}$=73.0${{\rm ~km~s^{-1}~Mpc^{-1}}}$, this corresponds to a distance of 50.8 Mpc and a linear scale of 246 pc arcsec$^{-1}$. 
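As a cross-check of the adopted scale (a minimal sketch; note that the quoted 50.8 Mpc is recovered for an input velocity close to the @Makarov2014 value, while 3731 km s$^{-1}$ gives $\approx$51 Mpc):

```python
import numpy as np

# Distance and linear scale for a pure Hubble flow with H0 = 73 km/s/Mpc.
H0 = 73.0
arcsec_in_rad = np.pi / (180.0 * 3600.0)

for v in (3707.0, 3731.0):                      # km/s, the two literature values
    d_mpc = v / H0
    scale_pc = d_mpc * 1e6 * arcsec_in_rad      # pc per arcsec
    print(f"v = {v:.0f} km/s -> D = {d_mpc:.1f} Mpc, {scale_pc:.0f} pc/arcsec")
```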
Previous studies estimated morphological position angles (PA) between 110$^{\circ}$ and 160$^{\circ}$ [RC3, @Fraquelli2000 and references therein] and a disk inclination ($i$) ranging from 37$^{\circ}$ to 54$^{\circ}$ [@Fraquelli2000 RC3, respectively]. ESO 362-G18 has been studied in the radio, near-infrared (NIR), optical, UV, and X-ray; its nucleus has typically been classified as Seyfert 1 [MGT, @Mulchaey1996; @RodriguezArdila2000; @Fraquelli2000; @AgisGonzalez2014]. Previous studies indicate that ESO 362-G18 is a highly disturbed galaxy [@Mulchaey1996] with a “long faint plume” to the NE [@Corwin1985]; this plume is likely an infalling, less massive galaxy $\sim$10$''$ to the NE, i.e. a minor merger. The emission-line maps of @Mulchaey1996 revealed strong \[O[iii]{}\] emission centred near the continuum peak with a fan-shaped emission of $\sim$10$''$ in the SE direction, roughly along the host galaxy major axis and coincident with the strongest H$\alpha$ emission, which is more symmetrically distributed about the nucleus. @Mulchaey1996 estimated that the highest excitation gas is located $\sim$7$''$ SE from the nucleus on one edge of the ionization cone, but @Bennert2006 found that only the central $\pm$3$''$ show line ratios typical of AGN ionized gas, and confirmed the suggestion of @Fraquelli2000 that the ionization parameter is peaked in the nucleus and rapidly decreases within the narrow line region (NLR), based on the increased \[OII\]/\[O[iii]{}\] ratio. @Fraquelli2000 also suggested that the nuclear continuum ionizes the gas in the disk along PA = 158$^{\circ}$, giving rise to the fan-shaped region observed in \[O[iii]{}\]. Arcsecond resolution centimeter radio maps of ESO 362-G18 do not show any obvious extensions [@Nagar1999]. @Bennert2006 find that the spectra out to r $\sim$ 11$''$ NW and out to r $\sim$ 6$''$ SE have line ratios that fall in the regime of H[ii]{} regions. Indeed, @Tsvetanov1995 identified 38 H[ii]{} regions in their ground-based H$\alpha$+\[N[ii]{}\] image of ESO 362-G18, distributed in a cloud around the nucleus at distances between 3$''$ and 18$''$.

The nuclear optical spectrum is dominated by broad permitted lines and narrow permitted and forbidden lines [@RodriguezArdila2000] and shows a featureless nuclear continuum due to the AGN [@Fraquelli2000], as well as the main stellar features of Ca II K, the G band, Mg I b and Na I D, and high-order Balmer absorption lines outside the nucleus. A broad Balmer decrement H$\alpha$$_{broad}$/H$\beta$$_{broad}$ of 5.7 indicates a slightly higher reddening of the broad line region (BLR) with respect to the central NLR [@Bennert2006]. Near-infrared spectroscopy [@Riffel2006] shows strong, broad H I and He I lines with full widths at half maximum (FWHM) of $\approx$4500 km s$^{-1}$ and $\approx$5400 km s$^{-1}$, respectively. In addition, numerous forbidden lines are seen, including the high-ionization lines \[S[ix]{}\], \[Si[x]{}\], and \[Si[vi]{}\]. The Br$\gamma$ and $H_{2}$ molecular emission lines are observed as well, although they are intrinsically weak. The NIR continuum emission presents stellar absorption features of Ca II, CO in the $H$ band, and the 2.3 $\mu$m CO band heads in the $K$ band, on top of a steep power-law-like continuum. The most recent detailed study of this galaxy has been conducted by @AgisGonzalez2014.
Their most important result was to detect a large variability in X-ray absorption, which they explain as due to a clumpy, dusty torus lying in a compact region within $\sim$ 50r$_{g}$ (probably within 7 r$_{g}$; 1r$_{g}$ = GM$_{BH}$/c$^{2}$) from the central black hole. They also estimated an inner accretion disk inclination of i=53$^{\circ}$ $\pm$ 5$^{\circ}$, i.e. aligned with the large-scale galaxy disk (RC3; 54$^{\circ}$). This paper is organized as follows. Section 2 describes the observations, data processing, and analysis; Section 3 presents our results; Section 4 discusses the results and presents estimates of the mass outflow and inflow rates; and Section 5 presents our conclusions.

Observations, data processing and analysis software
====================================================

The observations were obtained with the integral field unit of the Gemini Multi-Object Spectrograph [GMOS-IFU, @Gemini_South] at the Gemini South telescope on the night of December 23, 2014 (Gemini project GS-2014B-Q-20). The observations were made in the one-slit mode of GMOS-IFU, in which the science IFU has a field of view (hereafter FOV) of 3.5$''$ $\times$ 5$''$. Two pointings, shifted by 0.5$''$, were observed so that the total sky coverage was 4$''$ $\times$ 5$''$, centred on the nucleus. Two exposures of 900 seconds were made at each pointing with a shift in the central wavelength of 50Å between the two. The seeing at the time of the science observations was 0.7$''$, as listed in the Gemini observations log, and we confirmed this value by fitting the luminosity profile of the broad line component of H$\alpha$ (see Sect. 3.5). This corresponds to a linear resolution of 172 pc at the distance of the galaxy. The spectroscopic standard star LTT1788 (V=13.16) was observed in a 360 s exposure $\sim$1 hr before observing ESO 362-G18, under similar atmospheric conditions and with the same instrument set-up. The selected wavelength range was 4092-7338 Å to cover the H$\beta\,\lambda$4861, \[O[iii]{}\]$\,\lambda \lambda$4959,5007, H$\alpha$+\[N[ii]{}\]$\lambda \lambda$6548,6583 and \[S[ii]{}\]$\,\lambda \lambda$6716,6731 emission lines, observed with the grating GMOS B600-G5323 (set to a central wavelength of either $\lambda$5700Å or $\lambda$5750Å) at a spectral resolution of R$\approx$3534 at $\lambda$6440Å, corresponding to an instrumental dispersion ($\sigma_{inst}$) of $\approx$ 36kms$^{-1}$. Wavelength calibration is expected to be accurate to the order of 8kms$^{-1}$.

The data reduction was performed using specific tasks developed for GMOS data in the GEMINI.GMOS version 1.13 package and generic tasks in IRAF[^1]. The reduction process [see @Lena2014] comprised bias subtraction, flat-fielding, trimming, wavelength calibration, sky subtraction, relative flux calibration, building of the data cubes at a sampling of 0.08$''$ $\times$ 0.08$''$, and finally the alignment and combination of the four data cubes. Owing to signal-to-noise (S/N) limitations, we used only the overlapping area of the two spatial pointings and also eliminated spaxels at the edge of the IFU. The final science FOV was thus 2.8$''$ $\times$ 4.8$''$. Sky subtraction was performed using spectra from the sky IFU. In Fig. 1, we note a residual telluric absorption at $\sim$6870Å, but the \[S[ii]{}\]$\,\lambda \lambda$6716,6731 emission lines are not affected by it. Flux calibration was performed using the spectroscopic standard star LTT1788 (V=13.16), for which fluxes are tabulated every 50Å.
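The quoted instrumental dispersion follows directly from the spectral resolution, $\sigma_{inst} = (c/R)/(2\sqrt{2\ln 2})$; a one-line check:

```python
import numpy as np

c_kms = 299792.458
R = 3534.0                                        # at 6440 A
sigma_inst = (c_kms / R) / (2.0 * np.sqrt(2.0 * np.log(2.0)))
print(f"sigma_inst ~ {sigma_inst:.0f} km/s")      # ~36 km/s
```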
In order to measure the stellar kinematics and create an emission-line-only cube, we employed the penalized pixel fitting technique (pPXF) [@Capellari2004], using single stellar population (SSP) templates derived within the MILES Stellar Library [@Sanchez-Blazquez2006]. These templates have a spectral resolution of 2.51Å (FWHM), or $\sigma_{inst}$ $\sim$58kms$^{-1}$, and cover a spectral range of 3525 to 7500Å. Although the spectral resolution of the MILES templates is lower than that of our science data (36kms$^{-1}$), the MILES Stellar Library gives better results than, for example, the Indo-U.S. Library [$\sigma_{inst}$ $\approx$ 30kms$^{-1}$, @Valdes2004], since the latter does not give optimal fits, especially within the inner seeing disk. Comparing the pPXF results obtained using the two libraries individually, we observe that the differences in the fits are within the errors for the most part. When running pPXF with the MILES templates we did not convolve either to a lower spectral resolution. This is valid as the intrinsic stellar velocity dispersion in each spaxel is almost always above 60kms$^{-1}$ (as confirmed when using the INDO-US template library). The resulting stellar velocity dispersion map was corrected for the instrumental resolution of the science data (36kms$^{-1}$).

Spatial averages (over large and small apertures) of spectra over various regions of the cube were first used to identify the 20 template spectra most used in the fits. These 20 spectra were then used to fit all individual spaxels in the cube. Before running pPXF, we masked spectral regions covering all broad and narrow emission lines; note that the former are present mainly within the inner seeing disk of 0.7$''$ (172 pc). We used a tenth-order additive polynomial in pPXF to take away the effects of the continuum shape of the stellar templates, host galaxy, and any AGN power-law continuum. The resulting best-fit templates were used to create an emission-line-only spectrum for each spaxel. Examples of this process are shown in Fig. 2.

The centroid velocities, velocity dispersions and the emission-line fluxes of the gas were initially obtained from the emission-line-only cube by fitting a single Gaussian to the H${\alpha}$, H${\beta}$, \[N[ii]{}\], \[O[i]{}\], \[O[iii]{}\], \[S[ii]{}\]$\,\lambda$6716 and \[S[ii]{}\]$\,\lambda$6731 emission lines using FLUXER[^2], which allows us to determine the residual continuum level around the emission lines in an interactive way; this is necessary for the \[S[ii]{}\] lines since they are very close to the broad component of H${\alpha}$. The resulting gaseous velocities are similar to those obtained from the Gas AND Absorption Line Fitting code [GANDALF; @Sarzi2006] and PROFIT [@Riffel2010]. To obtain the final velocity and dispersion maps we performed a 3$\sigma$ sigma clip for all radial velocity and velocity dispersion maps, except in the nuclear regions of \[O[iii]{}\] (because of their high S/N ratio). We also performed a double-Gaussian fit to the \[O[iii]{}\] and H${\alpha}$ emission lines using a series of Python codes. To decide whether the observed line profile is better fit with a single or double Gaussian, we used the corrected Akaike information criterion [@Akaike1974], with the additional constraint that all Gaussian amplitudes are positive. All emission-line velocity dispersion maps were corrected for the instrumental resolution (36kms$^{-1}$).
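FLUXER is an interactive IDL tool; the sketch below is a minimal, non-interactive Python stand-in for the same single-Gaussian measurement, including the quadrature correction for the 36 km s$^{-1}$ instrumental dispersion. The wavelength grid, noise level, and the synthetic \[N[ii]{}\] profile are assumptions made purely for illustration.

```python
import numpy as np
from scipy.optimize import curve_fit

C_KMS = 299792.458
SIGMA_INST = 36.0                                  # km/s

def gauss_plus_const(lam, amp, cen, sig, cont):
    return cont + amp * np.exp(-0.5 * ((lam - cen) / sig) ** 2)

def measure_line(lam, flux, lam_rest, z_guess=0.012445):
    """Single-Gaussian fit of one emission line: returns the integrated flux,
    centroid velocity, and instrument-corrected velocity dispersion."""
    p0 = [flux.max() - np.median(flux), lam_rest * (1 + z_guess), 2.0, np.median(flux)]
    (amp, cen, sig, cont), _ = curve_fit(gauss_plus_const, lam, flux, p0=p0)
    line_flux = amp * abs(sig) * np.sqrt(2.0 * np.pi)
    v = C_KMS * (cen / lam_rest - 1.0)
    sig_kms = C_KMS * abs(sig) / cen
    sig_corr = np.sqrt(max(sig_kms**2 - SIGMA_INST**2, 0.0))
    return line_flux, v, sig_corr

# Synthetic [N II] 6583 profile at the systemic redshift, for illustration only.
lam = np.linspace(6640.0, 6690.0, 200)
flux = gauss_plus_const(lam, 5.0, 6583.0 * 1.012445, 3.0, 1.0)
flux += np.random.default_rng(2).normal(0.0, 0.05, lam.size)
print(measure_line(lam, flux, 6583.0))
```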
The kinematic PAs for both stars and gas were estimated using the code Kinemetry: a generalization of photometry to the higher moments of the LOS velocity distribution [@Krajnovic2006]. The systemic velocity (hereafter $V_{sys}$) of 3731kms$^{-1}$ was taken from @Paturel2003.

![image](graf_like_emsellem.pdf){width="70.00000%"}

Results
=======

The top left panel of Fig. 1 presents the Gemini acquisition image (r filter) of ESO 362-G18, where the posited minor merger approaching from the NE direction is also clearly seen; the rectangle shows the FOV of the IFU. The top right panel shows the stellar continuum image obtained from our IFU data cube by integrating the flux within a spectral window from $\lambda$5345 to $\lambda$5455. We assume a nuclear position that coincides with the location of the continuum peak in this image. In the bottom panel, we present four spectra from the locations indicated as 1 (nucleus), 2 and 3 (intermediate regions) and 4 (boundary region) in the IFU image, extracted within apertures of 0.2$''$ $\times$ 0.2$''$. The nuclear spectrum (identified as 1 in Fig. 1) shows broad H$\alpha$ and H${\beta}$ components, which led to the classification of ESO 362-G18 as a Seyfert 1 galaxy [@Fraquelli2000], and also narrow \[O[iii]{}\]$\lambda \lambda$4959,5007, \[O[i]{}\]$\lambda \lambda$6300,6363, \[N[ii]{}\]$\lambda \lambda$6548,6583 and \[S[ii]{}\]$\lambda \lambda$6717,6731 emission lines. Large variations in the broad H${\beta}$ emission line of ESO 362-G18 have occasionally led to its classification as a Seyfert 1.5; such variations are not uncommon in Seyfert galaxies.

We created a structure map of ESO 362-G18 (right panel of Fig. 5) by running the IDL routine unsharp\_mask.pro on an image obtained through the F547M filter of WFC3 (Wide Field Camera 3) aboard the Hubble Space Telescope (hereafter HST; Program ID 13816). Inspection of the dust structure shows signs of spiral arms together with stronger obscuration to the SW than to the NE. We thus conclude that the SW is the near side of the galaxy. This is also consistent with flux asymmetries (Section 3.1) and a trailing spiral pattern (Section 3.2).

**Morphology and excitation of the emitting gas**
-------------------------------------------------

In Fig. 3 we present the flux distributions derived from single Gaussian fits to the H$\alpha$, H$\beta$, \[N[ii]{}\]$\,\lambda$6583, \[O[i]{}\]$\lambda$6300, \[O[iii]{}\]$\lambda$5007 and \[S[ii]{}\]$\,\lambda$6716 emission lines. The flux distributions show a relatively smooth and symmetric pattern for all the emission lines, slightly elongated along the kinematic major axis (see Figures 6 and 7), as expected given the inclination. The highest fluxes are within the inner 1$''$ (246 pc). If the best-fitting two-dimensional Gaussian is subtracted from these flux distributions, the greatest asymmetries are found in the SW, implying a stronger presence of dust here. This supports our previous interpretation of the SW as the near side of the galaxy. Maps of the estimated electron density and the \[N[ii]{}\]/H$\alpha$, \[O[i]{}\]/H$\alpha$, H$\alpha$/H$\beta$, \[O[iii]{}\]/H$\beta$ line ratios are presented in Fig. 4. The electron density was obtained from the \[S[ii]{}\]$\,\lambda\lambda$6716/6731 line ratio assuming an electron temperature of 10000 K [@Osterbrock2006]. The electron density reaches a peak value of $\sim$ 2900cm$^{-3}$ at the nucleus, decreasing to 1000 cm$^{-3}$ at 1$''$ and 800cm$^{-3}$ at 1.5$''$ from the nucleus.
These values are in agreement with those obtained by @Bennert2006, who estimated values from 1000cm$^{-3}$ up to 2500cm$^{-3}$ in that region. The \[N[ii]{}\]/H$\alpha$ line ratio shows values of 0.55-0.78 within the inner 1$''$ and reaches its highest values, close to 0.9, in a nuclear-centred ring of radius 1.5$''$. The \[O[iii]{}\]/H$\beta$ ratio varies between 5.6 and 9 in the inner 0.5$''$, with a depression at the nucleus, and increases to 11 at 1$''$. Taking both line ratios into account, the values can be considered typical of Seyfert galaxies [@CidFernandes2010].

**Stellar kinematics**
----------------------

The stellar velocity (V$_{\star}$) field, obtained from pPXF, is shown in the left panel of Fig. 5. This field displays a rotation pattern reaching amplitudes of $\approx$ 75kms$^{-1}$ within our FOV; the line of nodes is orientated approximately along the NW–SE direction, with the SE side approaching and the NW side receding. With our adopted orientation, this implies trailing spiral arms, as expected. The stellar velocity dispersion (fourth panel of Fig. 5) reaches values of 120kms$^{-1}$ at the nucleus, staying as high as 100kms$^{-1}$ to the NW and decreasing to 75kms$^{-1}$ to the SE and towards the edges of the FOV. Median radial velocity errors reported by pPXF are 32.6kms$^{-1}$ in the inner 0.75$''$ and 33.2kms$^{-1}$ in the inner 1.25$''$. Median errors in the velocity dispersion (also from pPXF) are 30.0kms$^{-1}$ in the inner 0.75$''$, and 40.2kms$^{-1}$ in the inner 1.25$''$.

We employed Kinemetry [@Krajnovic2006], in which the kinematic centre is fixed to the continuum peak, to obtain the PA of the stellar kinematics at various radii from the nucleus. The resulting values range from 130$^{\circ}$ to 139$^{\circ}$, thus we chose the median value of 137$^{\circ}$ as the ‘global kinematic’ PA, as suggested by @Krajnovic2006 [Appendix C]. This value is consistent with the morphological major axes from the literature of between 110$^{\circ}$ and 160$^{\circ}$ [RC3, @Fraquelli2000 and references therein]. We model the stellar velocity field by assuming circular orbits in a plane and a spherical potential [@Bertola1991], where the observed radial velocity at a position ($R,\psi$) in the plane of the sky is given by $$V=V_{s} + \frac{A R \cos(\psi-\psi_{0})}{ \{R^{2}[\sin^{2}(\psi-\psi_{0})+\cos^{2}\theta \cos^{2}(\psi-\psi_{0})]+c^{2}\cos^{2}\theta \}^{p/2}}$$ where $\theta$ is the inclination of the disk (with $\theta$ = 0 for a face-on disk), $\psi_{0}$ is the PA of the line of nodes measured with respect to the x-axis within the field shown, $V_{s}$ is the systemic velocity, $R$ is the radius and $A$, $c$, and $p$ are parameters of the model. We assumed the kinematical centre to be cospatial with the peak of the continuum emission, a PA of 137$^{\circ}$ as derived via Kinemetry above, a disk inclination of 37$^{\circ}$, obtained from the apparent axial ratio [@Winkler1997; @Fraquelli2000] under the assumption of a thin disk geometry, and an initial guess of 3731kms$^{-1}$ for $V_{sys}$ [@Paturel2003]. We used the Levenberg–Marquardt least-squares algorithm to fit the rotation model to the velocity map. The resulting parameters $A$, $c$ and $p$ are 203kms$^{-1}$, 1.09$''$ and 1.93, respectively. The fitted V$_{sys}$, corrected to the heliocentric reference frame, is very similar to our initial guess, so we continue to use the latter. The model stellar velocity field and velocity residuals are shown in Fig. 5.
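For reproducibility, Eq. (1) and the fitting procedure can be sketched as follows. The function implements the expression exactly as written above (note that the full expression of @Bertola1991 carries an additional $\sin\theta\cos^{p}\theta$ factor in the numerator, which can be absorbed into $A$); the velocity-map arrays `x`, `y`, `vobs` are hypothetical placeholders, and $c$ is assumed to be expressed in arcsec.

```python
import numpy as np
from scipy.optimize import least_squares

def bertola_velocity(x, y, vsys, A, c, p, psi0_deg, theta_deg):
    """Observed radial velocity of Eq. (1) at sky offsets (x, y) in arcsec."""
    theta = np.radians(theta_deg)
    dpsi = np.arctan2(y, x) - np.radians(psi0_deg)
    R = np.hypot(x, y)
    den = (R**2 * (np.sin(dpsi)**2 + np.cos(theta)**2 * np.cos(dpsi)**2)
           + c**2 * np.cos(theta)**2) ** (p / 2.0)
    return vsys + A * R * np.cos(dpsi) / np.where(den > 0.0, den, np.inf)

def fit_rotation_model(x, y, vobs, psi0_deg=137.0, theta_deg=37.0, vsys=3731.0):
    """Levenberg-Marquardt fit of (A, c, p) to a flattened velocity map,
    with the PA, inclination, centre, and systemic velocity held fixed."""
    resid = lambda prm: bertola_velocity(x, y, vsys, *prm, psi0_deg, theta_deg) - vobs
    return least_squares(resid, x0=[200.0, 1.0, 1.5], method="lm").x   # A, c, p
```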
**Gas kinematics**
------------------

The velocity fields of all strongly detected emission lines, that is H$\alpha$, H$\beta$, \[N[ii]{}\]$\lambda$6583, \[O[i]{}\]$\lambda$6300, \[O[iii]{}\]$\lambda$5007 and \[S[ii]{}\]$\lambda$6716, show clear signatures of rotation; the projected peak rotation velocities range from 66 km s$^{-1}$ up to 82 km s$^{-1}$, although non-rotation signatures and offset kinematical centres are also present in most lines. The velocity maps from a single Gaussian fit of the \[N[ii]{}\]$\lambda$6583, H$\alpha$, and \[O[iii]{}\]$\lambda$5007 emission lines are shown in the left column of Fig. 6 and the first and third panels of Fig. 7. Inspecting the velocity maps of these emission lines, we find an offset of $\approx$0.5$''$ (128 pc) between the continuum peak and the kinematic centre for H$\alpha$, H$\beta$, \[O[i]{}\], and \[O[iii]{}\], while no significant offset is present in \[N[ii]{}\]$\lambda$6583 or \[S[ii]{}\]$\lambda$6716. To more clearly visualize these offsets, we plot the rotation curves of the stars and the stronger emission lines along their respective kinematic PAs in Fig. 8. The offsets cause an apparent asymmetry in the velocity fields within our FOV, reaching greater blueshifts than redshifts in the majority of cases. However, comparing this feature with previous long-slit spectroscopy of the inner 10$''$ and 30$''$ [@Bennert2006; @Fraquelli2000 respectively], we can infer that this asymmetry is exclusively due to the offset in the kinematic centre.

We used Kinemetry to fit the velocity maps of all emission lines. Given that the rotation curves of the emission lines are offset from that of the stars and from each other, we obtained reasonable results from Kinemetry only if the kinematic centre was set to a position 0.5$''$ to the SE of the continuum peak for H$\alpha$, H$\beta$, \[O[i]{}\]$\lambda$6300, and \[O[iii]{}\]$\lambda$5007; for \[N[ii]{}\]$\lambda$6583 and \[S[ii]{}\]$\lambda$6716, setting the kinematic centre to the continuum peak gave meaningful results. Fitting the individual emission line velocity fields with Kinemetry resulted in global kinematic PAs ranging between 121$^{\circ}$ and 139$^{\circ}$. For each given emission line, the radial variations of the PA do not exceed 20$^{\circ}$, and all emission line global kinematic PAs are in rough agreement (within 16$^{\circ}$) with the stellar kinematic PA, except for \[S[ii]{}\] for which the difference is 24$^{\circ}$. While we expected similar kinematics in H${\alpha}$ and H$\beta$, Kinemetry gives global PAs of 139$^{\circ}$ for H${\alpha}$ and 130$^{\circ}$ for H${\beta}$, and indeed at most radii the fitted PA for H${\alpha}$ is $\sim$9$^{\circ}$ larger than that of H${\beta}$. We thus use a global PA of 134.5$^{\circ}$ for both H${\alpha}$ and H${\beta}$.

![image](rotation_curves.pdf){width="90.00000%"}

Since the \[N[ii]{}\] emission line is both strong and without a significant kinematic offset from the stellar continuum peak, we fit a gas-kinematic Bertola model to its velocity field, following the procedure outlined in Sect. 3.2. Once more we fix the disk inclination to 37$^{\circ}$, V$_{sys}$ to 3731 km s$^{-1}$, and a kinematic centre cospatial with the stellar continuum peak. The resulting values are 125 km s$^{-1}$, 1.06$''$, 1.04, and 125$^{\circ}$ for A, c, p and the PA, respectively. This gas-kinematics model is shown in the second column of Fig. 6.
The third and fourth columns of Fig. 6 show the residual velocity fields of the emission-line gas after subtraction of the gas-kinematics model and the stellar velocity model, respectively. For all lines except \[N[ii]{}\] and \[S[ii]{}\] we see the above-mentioned excess redshift SE of the nucleus. Maps of the velocity dispersion (henceforth referred to as $\sigma$ before the name of the corresponding emission line) of the corresponding emission lines, derived with a single Gaussian fit, are shown in Fig. 9. The uncertainty here is $\sim$36 km s$^{-1}$ (instrumental dispersion, $\sigma_{inst}$). The highest nuclear dispersions, $\gtrsim$200 km s$^{-1}$, are seen in $\sigma_{H\alpha}$ and $\sigma_{[N\,{\sc II}]}$; as we leave the nucleus, their dispersions decrease faster along the kinematic major axis than along the kinematic minor axis. On the other hand, $\sigma_{[O\,{\sc III}]}$ is predominantly homogeneous in the non-nuclear regions, with its nuclear value rapidly increasing to 170 km s$^{-1}$. In the following sections, we interpret this large nuclear dispersion as due to the presence of an additional, offset velocity component that is most prominent in the nuclear region, causing blended profiles and thus unreliable single-Gaussian fits. The dispersion of the H$\beta$ line is similar to that of H$\alpha$, while both $\sigma_{[O\,{\sc I}]}$ and $\sigma_{[S\,{\sc II}]}$ do not present centrally peaked distributions.

Radial velocity errors were taken directly from FLUXER (Sect. 2). For the H$\alpha$, \[O[iii]{}\] and \[N[ii]{}\] emission lines, these errors vary between 1 and 24 km s$^{-1}$ in the inner 1.25$''$. As the remaining emission lines observed are not present throughout our FOV, we obtained the errors within the inner 0.75$''$, where they vary between 0.8 and 27 km s$^{-1}$. Errors in the velocity dispersion (also from FLUXER) for the H$\alpha$, \[O[iii]{}\] and \[N[ii]{}\] emission lines vary between 0.7 and 16 km s$^{-1}$ in the inner 1.25$''$, and between 1.6 and 29 km s$^{-1}$ in the inner 0.75$''$ for the remaining emission lines.

**Position–velocity diagrams**
------------------------------

To better constrain the emission-line kinematics we built position-velocity (PV) diagrams (Fig. 10) for the three strongest emission lines, \[O[iii]{}\], H${\alpha}$, and \[N[ii]{}\]. We centred these PV diagrams on the continuum peak and along PA 130$^{\circ}$, since this is the kinematic major axis found in the single Gaussian fit. The pseudo-slit is 0.8$''$ wide. The velocity prediction from the single Gaussian fit is superposed for easy direct comparison. While the PV diagram of \[N[ii]{}\] shows good agreement with the single Gaussian fit, the diagrams of both \[O[iii]{}\] and H${\alpha}$ show a second velocity component redshifted by $\sim$150 km s$^{-1}$ in the nuclear region. When the H${\alpha}$ and \[O[iii]{}\] emission is examined over a larger velocity range, most of the emission is found at velocities below $\pm$ 500 km s$^{-1}$ for both lines.

![image](Ha310.pdf){width="\textwidth"}

![image](OIII310.pdf){width="\textwidth"}

![image](NII310.pdf){width="\textwidth"}

**Double Gaussian fit**
-----------------------

Given the clear evidence for a second velocity component in some of the emission lines, we fit the \[O[iii]{}\] and H${\alpha}$ emission lines with a double Gaussian; these two lines were chosen as they have the highest S/N ratio among the double-peaked lines.
Given the clear velocity separation seen between the two components in the PV diagrams, we discriminated these components by their radial velocity (rather than, e.g. width), and they are henceforth referred to as the low-velocity component and high-velocity component. The line profile at each spaxel in the data cube was fit with a double Gaussian using a series of Python codes, mainly within the lmfit package[^3]. To decide whether the observed line profile is a better fit with a single or double Gaussian, we use the corrected Akaike information criterion [AIC$_{c}$, @Akaike1974] with the additional caveats that all Gaussian amplitudes are positive. For the H${\alpha}$ emission line, the double Gaussian fit was performed after subtraction of the broad component (Fig. 1), which can be detected as far as $\sim$1 from the nucleus, even though the seeing was $\sim$07. This broad component is also present in H${\beta}$ (Fig. 1). The top three panels of Fig. 11 show a detailed example of the fitting process to the H$\alpha$ emission line in a nuclear spaxel. These panels show the multi-component Gaussian fit to the H$\alpha$ and \[N[ii]{}\] $\lambda \lambda$6548,6583 emission lines, before (left) and after (middle) the subtraction of the broad H$\alpha$ component, and the subsequent double Gaussian fit to the narrow H$\alpha$ emission. In the bottom panels of the same figure, we present examples of single/double (as decided by the AIC$_{c}$) Gaussian fits for \[O[iii]{}\] in different spaxels (($x$,$y$) axes) of our data cube. To estimate the errors in the velocities of the narrow components of H$\alpha$ produced by an erroneous broad component subtraction, we use the following iterative process. For each spaxel, we vary the central velocity of the broad line over the range $\pm3$ times the $1\sigma$ velocity error reported by $lmfit$. The double narrow components are then fit after subtraction of this broad line, and the results compared to those of the best fit (Fig. 12, bottom panels). We find that the narrow component velocities vary by less than 1.8 km s$^{-1}$ for the low-velocity component and typically less than 6 km s$^{-1}$ for the high-velocity component (in this latter two spaxels show velocity differences of up to 38 km s$^{-1}$). The errors recorded by lmfit for the narrow H$\alpha$ component velocities are negligible. Less than 3% of the spaxels which were originally better fitted with a double (instead of single) Gaussian are occasionally better fitted with a single Gaussian during our iterative process. Thus, overall, the two narrow component velocities are robust w.r.t. the seeing-smeared contamination of the (unresolved) BLR. Does the velocity profile of the unresolved (but seeing-smeared) nuclear NLR also affect the velocities of the two-component Gaussian fit? It is difficult to quantify this effect, but we note that the PV diagrams of \[O[iii]{}\] and H${\alpha}$ (see Figs. 10 and 12 for those along the major axis) clearly show that the velocity profiles are not symmetric about the nucleus even in the nuclear seeing disk. The velocities from the double component Gaussian fit appear to relatively well trace features seen in the PV diagrams, so any unresolved nuclear component would smooth out but not hide variations. We note however that to the NW of the nucleus, the two-component fit to the H${\alpha}$ line does not appear to follow the PV diagram. Instead the PV diagram is more consistent with a single Gaussian at a slightly larger redshift. 
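A minimal sketch of the single-versus-double Gaussian selection used for the narrow lines, written with the lmfit package mentioned above; the initial guesses and parameter prefixes are illustrative assumptions, and lmfit's uncorrected AIC is converted to AIC$_{c}$ by hand.

```python
import numpy as np
from lmfit.models import ConstantModel, GaussianModel

def aicc(result):
    """Corrected Akaike information criterion for an lmfit fit result."""
    n, k = result.ndata, result.nvarys
    return result.aic + 2.0 * k * (k + 1.0) / (n - k - 1.0)

def fit_one_or_two(x, y, cen_guess):
    """Fit a line profile with one and with two Gaussians (plus a constant
    continuum), require non-negative amplitudes, and keep the lower AICc."""
    single = ConstantModel(prefix="c_") + GaussianModel(prefix="g1_")
    double = single + GaussianModel(prefix="g2_")

    p1 = single.make_params(c_c=np.median(y), g1_amplitude=np.ptp(y),
                            g1_center=cen_guess, g1_sigma=2.0)
    p2 = double.make_params(c_c=np.median(y), g1_amplitude=np.ptp(y),
                            g1_center=cen_guess, g1_sigma=2.0,
                            g2_amplitude=0.3 * np.ptp(y),
                            g2_center=cen_guess + 3.0, g2_sigma=2.0)
    for pars in (p1, p2):
        for name in pars:
            if name.endswith("amplitude"):
                pars[name].set(min=0.0)        # caveat from the text: positive amplitudes

    r1, r2 = single.fit(y, p1, x=x), double.fit(y, p2, x=x)
    return (r1, "single") if aicc(r1) <= aicc(r2) else (r2, "double")
```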
The results of the two-component fits to \[O[iii]{}\] and H${\alpha}$ are shown in Fig. 12. The left four panels show the velocity fields of the two components of H$\alpha$ and \[O[iii]{}\], and the two rightmost panels of Fig. 12 show the same major axis PV diagrams as Fig. 10 (along PA 130$^{\circ}$). This time we overlay the velocities of each of the two velocity components (high-velocity component in red and low-velocity component in blue) along with the velocities obtained from the single Gaussian fit to the respective line (green), the velocities obtained by a single Gaussian fit to the \[N[ii]{}\] line (brown) and the Bertola model fit to the stellar kinematics (black). According to the AIC$_{c}$, the two-component fit is required only to the SE. To the NW, a single component fit gives better results. The lower \[O[iii]{}\] velocity component has velocities ranging from -30 km s$^{-1}$ to -135 km s$^{-1}$ and a kinematic PA of $\sim$ 123$^{\circ}$, while the higher \[O[iii]{}\] velocity component shows values $\approx$ 200 km s$^{-1}$ higher. The lower H${\alpha}$ velocity component shows velocities of -80 km s$^{-1}$ to 70 km s$^{-1}$, with a PA of $\sim$ 123$^{\circ}$, while the higher H${\alpha}$ velocity component shows values of 100 km s$^{-1}$ up to 255 km s$^{-1}$. The corresponding velocity dispersions are shown in Fig. 13. The low-velocity component of H${\alpha}$ shows a centrally peaked velocity dispersion map, while the high-velocity component shows high dispersions only in disjoint regions $\gtrsim$0.6$''$ from the nucleus. The low-velocity component of \[O[iii]{}\] has systematically higher dispersions (except in the nucleus) in comparison to the high-velocity component.

**Black hole mass**
-------------------

Black hole mass estimations for ESO 362-G18 were rigorously explored by @AgisGonzalez2014; our new observations allow us to better constrain the FWHM of H${\beta}$, since we have better spectral resolution and two-dimensional data for this line. We fit the H${\beta}$ profile with a double Gaussian using the lmfit package as described in Sec. 3.5, and use the FWHM of the broad component (hereafter FWHM$_{H{\beta}}$) to estimate the black hole mass. Assuming a disk-like BLR geometry to avoid assuming a virial coefficient $f$, which can vary widely, @AgisGonzalez2014 used the following expression: $$M_{BH} = R_{BLR}\, \mathrm{FWHM}_{H{\beta}}^{2}\, (4G \sin^{2} i)^{-1}$$ where $i$ is the angle between the LOS and the angular momentum vector of the disk-like BLR (53$^{\circ}$ $\pm$ 5$^{\circ}$), R$_{BLR}$ is the radius of the BLR ($\sim$ 5.2 $\times$ 10$^{16}$ cm), and $G$ is the gravitational constant. Using our new value of FWHM$_{H{\beta}}$, 5689$^{+398}_{-723}$ km s$^{-1}$, calculated in the inner $\sim$0.35$''$ around the continuum peak (i.e. within the seeing disk), we obtain a black hole mass $M_{BH}$ of 4.97$^{+1.60}_{-1.61}$ $\times$ 10$^{7}$ M$_{\odot}$, which is consistent with the value (4.5 $\pm$ 1.5 $\times$ 10$^{7}$ M$_{\odot}$) obtained by @AgisGonzalez2014, and in a range typical of both narrow-line and broad-line Seyfert 1 galaxies [@Greene2005 Fig. 4].

Discussion
==========

The structure map (rightmost panel of Fig. 5) shows spiral arms that get increasingly fainter as they approach the nucleus.
Given the scale of the FOV and the asymmetries we find after subtracting two-dimensional Gaussians from the flux maps of each emission line, we interpret the unusual spiral arm structure as a result of instabilities produced by the inner Lindblad resonance, which is expected to lie in the inner 1.6 kpc [@Laine2002]. We thus argue that the unusually high value of the parameter p ($>$1.5) in the Bertola stellar model is a direct consequence of this resonance. The abundant dust seen in the structure map also supports the hypothesis of [@Simoes2007] that the presence of dust is a necessary condition for accretion onto the nuclear SMBH. Previous studies of ESO 362-G18 have found an asymmetric \[O[iii]{}\] emission-line morphology with a fan-shaped structure extending 10 arcsec from the nucleus to the SE, and an asymmetric morphology in the stellar continuum that is reminiscent of a minor merger system [@Mulchaey1996 Fig. 29]. @Fraquelli2000, using long-slit spectra, noted that the kinematics of this extended emission-line region is similar to that of the stars. They posited, primarily based on the morphological appearance, the presence of an AGN outflow with a collimating axis orientated at an angle $\leq$30$^{\circ}$ with respect to the galactic plane; this small angle is required to allow the nuclear radiation to intercept the gas in the disk and to allow a direct LOS to the BLR. They proposed an opening angle larger than 60$^{\circ}$ for the ionizing radiation cone. We note that the spectral resolution of the long-slit spectra used by @Fraquelli2000 was not high enough to resolve the nuclear outflow that we posit. Our results are consistent with the outflow scenario posited by @Fraquelli2000: both \[N[ii]{}\] and \[S[ii]{}\] share the same kinematic PA ($\approx$120$^{\circ}$), $\approx$10$^{\circ}$ less than the kinematic PA of H$\alpha$, H$\beta$, \[O[i]{}\], and \[O[iii]{}\] ($\approx$130$^{\circ}$). The offsets in the kinematic centres of H$\alpha$, H$\beta$, \[O[i]{}\] and \[O[iii]{}\] are closest to the direction of the ionization cone [158$^{\circ}$, @Fraquelli2000]. Both facts allow us to infer that the high-ionization emission lines are more affected by this cone, while \[N[ii]{}\] and \[S[ii]{}\] appear to arise predominantly from gas rotating in the galactic plane, following a rotation curve similar to that of the stars (Fig. 8). The PV diagrams (Fig. 10) of \[O[iii]{}\] and H${\alpha}$ clearly show a second velocity component $\sim$150 km s$^{-1}$ to the red. Its contribution is most significant in the inner arcsecond and is the reason why the single Gaussian fit gives velocities redder than the expectation of pure rotation in the nucleus (see Fig. 7). The equivalent PV diagrams for \[O[i]{}\] and H${\beta}$ (not shown) are also consistent with the presence of the second higher velocity component, but we could not rigorously fit double Gaussians to these profiles owing to their relative faintness, especially at distances $\gtrsim$1.5 arcsec from the nucleus (see Fig. 1). Further, both \[O[i]{}\] and H${\beta}$ (not shown) show the same asymmetries as \[O[iii]{}\] and H${\alpha}$ in their velocity maps derived from a single Gaussian fit (Fig. 7). The appearance of the velocity fields (PAs, velocity ranges, and rotation curves) of the low-velocity component of \[O[iii]{}\] and H${\alpha}$ (middle panels of Fig. 12) is very similar to those derived from \[N[ii]{}\] in the same region.
Therefore we interpret this component as emission from gas in the galactic disk that is rotating in the same manner as the \[N[ii]{}\]-emitting gas and the stars. Only the negative velocities in \[O[iii]{}\], reached very close to the nucleus (from -75 km s$^{-1}$ to -120 km s$^{-1}$), can be attributed to an outflow approaching the observer. The high-velocity component of both these lines shows a very different velocity field, with values that exceed 200 km s$^{-1}$ ($>$ 330 km s$^{-1}$ deprojected). We thus conclude that the high-velocity component corresponds to the bright gas within the AGN ionization cone. Given that (deprojected) velocities larger than 150 km s$^{-1}$ are typically observed in outflows instead of inflows in nearby galaxies (see @Barbosa2009, @Storchi-Bergmann2010 and @Riffel2011 for examples of outflow velocities, and @Fathi2006, @SchnorrMuller2011 and @SchnorrMuller2014b for examples of inflow velocities), the most plausible explanation is that the high-velocity component is gas entrained by the AGN outflow at an angle *i* greater than zero, located behind the plane of the sky from our LOS, and thus redshifted to the observer. Why would we preferentially see gas on the far side of the ionization cone (redshifted to us) rather than on the near side (which would be blueshifted)? The explanation lies in the illumination of the NLR clouds by the AGN: on the far side of the cone in ESO 362-G18 we see the side of the gas cloud that is illuminated by the AGN, while on the near side we see primarily the dark side of the NLR clouds [see e.g. @Lena2015]. The low- and high-velocity components in ESO 362-G18 are reminiscent of the case in NGC 4151 [@Storchi-Bergmann2010]. The difference is that in NGC 4151 the high-velocity component corresponds to gas illuminated by a symmetric bicone, extending both in front and behind the galactic plane, while in ESO 362-G18 only the gas in front the galactic disk, illuminated by a single ionization cone, is seen. @AgisGonzalez2014 have estimated an inner accretion disk inclination of 53$^{\circ}$ $\pm$ 5$^{\circ}$; on the other hand, using the ratio of the minor to major photometric axis from @Winkler1997, @Fraquelli2000 derived a galactic disk inclination of $\approx$ 37$^{\circ}$. Therefore, we suggest a picture for ESO 362-G18 in which the ionization cone has an inclination angle *i* $\sim$ 8$^{\circ}$ $\pm$ 5$^{\circ}$ with respect to the plane of the sky with a half-opening angle of 45$^{\circ}$ in such a way that the cone intersects with the galactic disk in the SE direction, illuminating gas receding from our LOS due to the outflow; this value of the half-opening angle of the ionization cone was also suggested by @AgisGonzalez2014. We only see blueshifted gas very close to the nucleus (see Fig. 12, middle top panel), where the \[O[iii]{}\] gas is entrained by the approaching side of the cone in a small region corresponding to the thickness of the disk or bulge of the galaxy. This proposed configuration for the nuclear region in ESO 362-G18 is shown schematically in Fig. 14. For both \[O[iii]{}\] and H${\alpha}$, the highest dispersion values are predominantly reached in the low-velocity component (Fig. 13), which we interpret as gas rotation in the galactic disk. Only the central region of the high-velocity component (the outflow component) in \[O[iii]{}\] shows dispersions higher than 105 km s$^{-1}$, which we can interpret as coming from the approaching side of the outflow where the outflow is still within the galactic disk. 
The value of $\sigma_{H{\alpha}}$ of the outflow component is very sensitive to the subtraction of the broad emission in H$\alpha$. This process was carried out before the two Gaussian fit, so the large dispersion values ($\sim$0.8 arcsec SE of the nucleus) should be taken with some reserve since they are potentially due to confusion with the broad emission. We can only deduce that the decrease of $\sigma_{[O\,{\sc iii}]}$ and $\sigma_{H{\alpha}}$ in the outflow component is due to its partial occultation by the galactic disk. Since the outflow velocities posited above are not as high as the dividing line commonly adopted to discern immediately between starburst-driven superwinds and AGN-driven outflows [$>$500 km s$^{-1}$, @Fabian2012; @Cicone2014], we must test whether the kinetic power injected by supernovae in the inner 100 pc is sufficient to drive the outflow. The outflow, while it extends over at least $\sim$0.5 kpc to the SE, is already seen at high velocities in the inner seeing disk ($\sim$0.1 kpc). The global SFR of ESO 362-G18 is relatively low: the H$\alpha$-derived global SFR is $\sim$ 5.5 $\times$ 10$^{-3}$ M$_{\odot}$ yr$^{-1}$ [@Galbany2016] and we derive a similar value from the far-infrared luminosity (based on IRAS fluxes). We note that @Melendez2008 quoted a relatively high instantaneous SFR of 0.85 M$_{\odot}$ yr$^{-1}$ in the inner kpc, based on Spitzer spectroscopy of the \[Ne[ii]{}\] line; these authors, however, made many estimations and assumptions beyond those previously used [e.g. @Genzel1998; @Ho2007] in disentangling the AGN versus star formation contributions to the \[Ne[ii]{}\] line and in its subsequent conversion to a SFR. Given that the outflow is seen in the inner kpc and starts within the inner $\sim$100 pc, the SFR in this corresponding nuclear region is expected to be significantly lower than 10$^{-2}$ M$_{\odot}$ yr$^{-1}$. Using the relationship of @Veilleux2005, $P_{kin,SF}$(erg s$^{-1}$) = 7$\times$ 10$^{41}$ SFR(M$_{\odot}$ yr$^{-1}$), the galaxy-wide kinetic power injected by supernovae, $P_{kin,SF}$, is $\sim$ 10$^{40}$ erg s$^{-1}$, which is similar to the kinetic power of the outflow (see Sect. 4.2). However, the typical SFRs of galaxies with candidate starburst-driven outflows are in the range 1 M$_{\odot}$ yr$^{-1}$ to hundreds of M$_{\odot}$ yr$^{-1}$ [see e.g. @Cicone2014]. Attributing the outflow to the starburst thus requires a concentration of the galaxy-wide SFR in the inner 100 pc and/or an instantaneous SFR significantly larger than the long-term average SFR and a 100% coupling of the supernova kinetic energy to the outflow. We thus conclude that starburst superwinds as the origin of the outflow in ESO 362-G18 are possible, but very unlikely. **Feeding versus feedback** --------------------------- We can estimate the mass outflow rate as the ratio of the mass of the outflowing gas to the dynamical time at the nucleus, $M_{g}/t_{d}$. The gas mass is given by $$M_{g} = N_{e}m_{p}Vf,$$ where $N_{e}$ is the electron density, $m_{p}$ is the mass of the proton, $V$ is the volume of the region where the outflow is detected; we fix this region to a radius of 0.35 arcsec around the nucleus (i.e. within the seeing) and $f$ is the filling factor. The filling factor can be estimated from $$L_{H\alpha} \approx f N_{e}^{2} j_{H\alpha}(T)V,$$ where $j_{H\alpha}$(T)=3.3534$\times$10$^{-25}$ erg cm$^{3}$ s$^{-1}$ [@Osterbrock1989] and $L_{H\alpha}$ is the H$\alpha$ luminosity emitted within the volume $V$.
Substituting equation (2) into equation (1) we have $$M_{g} = \frac{m_{p}L_{H\alpha}}{N_{e}j_{H\alpha}(T)}.$$ In Section 4, we concluded that the high-velocity component detected in both \[O[iii]{}\] and H${\alpha}$ is produced by the emission of clouds in front of the galactic disk, which are entrained by the AGN outflow. We therefore estimate the H${\alpha}$ luminosity from the high-velocity component, yielding $L_{H\alpha}$ = 1.5 $\times$ 10$^{40}$ erg s$^{-1}$, considering a luminosity distance to ESO 362-G18 of 52.1 Mpc (from the NASA/IPAC Extragalactic Database). For $N_{e}$ we adopted the mean value of 2453 cm$^{-3}$ within the inner 0.35 arcsec from the nucleus. Taking into account all these values, we estimate an ionized gas mass of 1.5 $\times$ 10$^{4}$ M$_{\odot}$ for the outflow component. The dynamical time $t_{d}$ can be estimated as the ratio of the radius where we are considering the outflow (0.35 arcsec $\approx$ 86 pc) to the mean deprojected velocity of the outflow ($\sim$394 km s$^{-1}$). This gives a $t_{d}$ of $\sim$ 2 $\times$ 10$^{5}$ years. Finally, the mass outflow rate $\dot{M}$ is 0.074 M$_{\odot}$ yr$^{-1}$. We point out that this $\dot{M}$ is only a lower limit since it corresponds only to the outflowing mass associated with the ionized side of the clouds. Additionally, if we consider a biconical outflow, assuming most of the far side of the bicone is hidden by the galactic disk, $\dot{M}$ can be twice the calculated value, that is, $\dot{M}$ $\approx$ 0.15 M$_{\odot}$ yr$^{-1}$, in agreement with other values of $\dot{M}$ observed in nearby galaxies [@Barbosa2009; @MullerSanchez2011; @Lena2015]. We can now compare the mass outflow rate with the mass accretion rate required to feed the SMBH, $\dot{M}_{acc}$, which can be estimated as $$\dot{M}_{acc} = \frac{L_{bol}}{\eta c^{2}},$$ where $c$ is the speed of light and $L_{bol}$ is the bolometric luminosity. Using the $L_{bol}$ estimated by [@AgisGonzalez2014] of 1.3 $\times$ 10$^{44}$ erg s$^{-1}$, we derive a mass accretion rate of 2.2 $\times$ 10$^{-2}$ M$_{\odot}$ yr$^{-1}$, where we assume a radiative efficiency $\eta$ of 0.1, the typical value derived from Shakura-Sunyaev accretion models onto a non-rotating black hole [@Shakura1973]. Therefore, $\dot{M}_{acc}$ is $\sim$ 7 times lower than the mass outflow rate. This is consistent with previous works on nearby galaxies [@Barbosa2009; @MullerSanchez2011] and indicates that most of the observed outflowing gas is mass entrained from the surrounding interstellar medium [@Veilleux2005; @Storchi-Bergmann_review_2014]. In a sample of 15 pairs of Seyfert plus inactive galaxies, @Dumas2007 found that SMBHs with accretion rates larger than 10$^{-4.5}$ M$_{\odot}$ yr$^{-1}$ tend to lie in galaxies with disturbed kinematics. However, in the case of ESO 362-G18, although the value of $\dot{M}_{acc}$ is almost a thousand times greater than 10$^{-4.5}$ M$_{\odot}$ yr$^{-1}$, large twists in the gas kinematics or significant misalignments between the gas and stellar kinematics are not observed. Thus, the nuclear activity in ESO 362-G18 may be related to major mergers in the past [@Hopkins_mergers2010] that do not leave current disturbances in its kinematics, rather than to perturbations in the ionized gas or misalignments between the stellar and gas rotations [@Dumas2007]. The posited minor merger observed in the acquisition image (Fig. 1) produces no discernible disturbances in the gas or stellar kinematics within our nuclear FOV.
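The order-of-magnitude budget above can be reproduced with a short script; the constants are standard CGS values and all inputs ($L_{H\alpha}$, $N_{e}$, the 86 pc radius, the 394 km s$^{-1}$ deprojected velocity, $L_{bol}$ and $\eta$) are the values quoted in the text.

```python
import numpy as np

M_SUN, M_P, PC, YR, C = 1.989e33, 1.6726e-24, 3.086e18, 3.156e7, 2.998e10   # cgs

L_HA  = 1.5e40        # erg/s, H-alpha luminosity of the high-velocity component
N_E   = 2453.0        # cm^-3, mean electron density within 0.35 arcsec
J_HA  = 3.3534e-25    # erg cm^3 s^-1, H-alpha emission coefficient
R_OUT = 86.0 * PC     # 0.35 arcsec at 52.1 Mpc
V_OUT = 394.0e5       # cm/s, mean deprojected outflow velocity
L_BOL = 1.3e44        # erg/s, bolometric luminosity
ETA   = 0.1           # radiative efficiency

m_gas    = M_P * L_HA / (N_E * J_HA)   # ionized gas mass, M_g = m_p L_Ha / (N_e j_Ha)
t_dyn    = R_OUT / V_OUT               # dynamical time
mdot_out = m_gas / t_dyn               # one-sided mass outflow rate
mdot_acc = L_BOL / (ETA * C**2)        # accretion rate, L_bol / (eta c^2)

print(f"M_gas    = {m_gas/M_SUN:.1e} Msun")                               # ~1.5e4
print(f"t_dyn    = {t_dyn/YR:.1e} yr")                                    # ~2e5
print(f"Mdot_out = {mdot_out*YR/M_SUN:.3f} Msun/yr (x2 for a bicone)")    # ~0.07
print(f"Mdot_acc = {mdot_acc*YR/M_SUN:.3f} Msun/yr")                      # ~0.022
```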
Considering the galactic disk as well as the BLR from the broad component of H${\alpha}$, the total ionized gas mass within $\sim$ 84 pc of the nucleus is $\sim$ 3.3 $\times$ 10$^{5}$ M$_{\odot}$. Assuming that this gas lies in the disk and that a small fraction ($\sim$ 10%) undergoes a radial inflow within the disk, infall velocities of $\sim$ 34 km s$^{-1}$ would be required to feed the outflow and the SMBH accretion, and to maintain a SFR of 5.5 $\times$ 10$^{-3}$ M$_{\odot}$ yr$^{-1}$ [@Galbany2016] in ESO 362-G18. An inflow velocity of this magnitude (i.e. $\sim$20 km s$^{-1}$ in projection if it lies in the plane of the disk) is at the limit of detectability in our observations and analysis. The residual (observed - stellar model) \[N[ii]{}\] velocity map (rightmost panel of Fig. 6) shows blue (red) residual velocities on the far (near) side of the galaxy disk, which could be interpreted as a signature of inflow to the nucleus: the inflow velocities would then be of this order of magnitude. However, this residual pattern is primarily attributable to the mismatch between the PAs of the stellar and ionized gas kinematics (e.g. Fig. 1 of @vanderKruit1978); the residual (observed \[N[ii]{}\] - model \[N[ii]{}\]) \[N[ii]{}\] velocity map does not show this pattern. One alternative to explain the relatively high accretion rate and the relatively low inflow rate is that the AGN is now passing through a period of maximum activity, which would be transient but cause an overestimation of the accretion rate and a greater required infall velocity to maintain it. Another possibility is that the total ionized gas mass is only the tracer of the true (dominated by molecular gas) amount of available gas. **Kinetic power** ----------------- Considering an outflow bicone, with a mass outflow rate ($\dot{M}$) of 0.148 M$_{\odot}$ yr$^{-1}$, we can obtain the kinetic power ($\dot{E}_{out}$) using the following expression $$\dot{E}_{out} = \frac{1}{2}\dot{M}(v^{2} + 3\sigma^{2}),$$ where *$v$* and *$\sigma$* are the average velocity and velocity dispersion of the outflowing gas, respectively. Taking these values from the nuclear region ($\leqslant$ 0.35 arcsec) of the outflow component in H$\alpha$, we have *v* = 394 km s$^{-1}$ and *$\sigma$* = 74 km s$^{-1}$; then we obtain a kinetic power of $\dot{E}_{out}$ = 8 $\times$ 10$^{39}$ erg s$^{-1}$. With the aim of measuring the effect (feedback) of the ionized gas outflow on the galactic bulge, we compare the kinetic power with the accretion luminosity ($L_{bol}$ = 1.3 $\times$ 10$^{44}$ erg s$^{-1}$), obtaining a value of $\dot{E}_{out}/L_{bol}$ = 6.1 $\times$ 10$^{-5}$. This is at the lower end of the range (10$^{-4}$ – 5 $\times$ 10$^{-2}$) found by @MullerSanchez2011[^4]. Our lower ratio could be attributed to large uncertainties in the velocity dispersion ($\sigma_{inst}$ = 36 km s$^{-1}$) and the bolometric correction used to calculate L$_{bol}$ from $L_{X}$(2-10 keV) in @AgisGonzalez2014. This correction can vary widely (between 4 and 110) [e.g. @Lusso2012 Figs. 7 and 8]. Summary and conclusions ======================= We observed the gaseous and stellar kinematics of the inner 0.7 $\times$ 1.2 kpc$^{2}$ of the nearby Seyfert 1.5 galaxy ESO 362-G18 using optical spectra (4092-7338 Å) from the GMOS integral field spectrograph on the Gemini South telescope, which allows the detection of a number of prominent emission lines, i.e.
H$\beta \lambda$4861, \[O[iii]{}\]$\lambda \lambda$4959,5007, H$\alpha$+\[N[ii]{}\] $\lambda \lambda$6548,6583 and \[S[ii]{}\]$\lambda \lambda$6716,6731. We employed a variety of IDL and Python programmes to analyze these lines and obtain spatially resolved radial velocities, velocity dispersions, and fluxes at a spatial resolution of $\sim$170 pc and a spectral resolution of 36 km s$^{-1}$. The main results of this paper are as follows. - The H$\alpha$ and \[O[iii]{}\] lines clearly show double-peaked emission lines near to the nucleus and to the SE. We used a two Gaussian fit to separate these profiles into two kinematic components: a low-velocity component and high-velocity component. - The stars, \[N[ii]{}\] and \[S[ii]{}\] emission lines, and low-velocity component of H$\alpha$ and \[O[iii]{}\] lines typically have radial velocities between -80 km s$^{-1}$ and 70 km s$^{-1}$, and have very similar rotation patterns, so we interpret all of these to originate in the rotating galactic disk. - The high-velocity component of H$\alpha$ and \[O[iii]{}\] reach values in excess of 200 km s$^{-1}$ with respect to the systemic velocity, and we argue that these spectral components originate from gas outflowing within the AGN radiation cone. We present a toy model to explain why this gas is preferentially redshifted to our LOS, except at the nucleus where blueshifted \[O[iii]{}\] emission from the outflow traces the region where the outflow is still breaking out of the galactic disk. The effects of AGN ionization has been previously observed, showing a fan-shaped morphology with an extension of $\approx$ 10 to the SE in emission-line and excitation maps. - The assumption that the outflow component is behind the plane of the sky is also motivated by the velocity dispersions observed in \[O[iii]{}\]: while the disk component presents the highest dispersions in most of our FOV, the outflow component exceeds it in the nuclear region, where the highest blueshift velocities are reached. This difference is consistent with attenuation from the galactic disk except very close to the nucleus, where the approaching side of the cone can be seen. - The structure of the nuclear region of ESO 362-G18 presents spiral arms in a trailing pattern, which are increasingly fainter as we approach the nucleus. Considering the linear scale of our observations, we posit that the unusual dust morphology is a result of instabilities produced near to the inner Lindblad resonance, which is expected within the inner 1.6kpc. The presence of the dust structures also supports the hypothesis of [@Simoes2007] that the presence of dust is a necessary condition for the nuclear activity in AGNs. - While morphologically there is evidence that ESO 362-G18 is participating in a minor merger, we do not find any effect of this in the stellar or gas kinematics within our relatively small FOV. - Using the H${\alpha}$ luminosity, we estimate a lower limit for the mass outflow rate $\dot{M}$ of 0.074 M$_{\odot}$ yr$^{-1}$. This value will double if a biconical outflow is assumed. Further, the value we calculate is likely a lower limit to the outflow mass and rate, as we have argued that our H$_\alpha$ luminosity used to calculate the outflow gas mass only represents the fraction of the NLR gas clouds illuminated by the AGN, rather than all of the outflowing NLR gas. 
In any case, our estimated outflow rate is significantly higher than the accretion rate necessary to sustain the AGN bolometric luminosity, that is, $\dot{M}_{acc}$ $\sim$ 2.2 $\times$ 10$^{-2}$ M$_{\odot}$ yr$^{-1}$. This work is based on observations obtained at the Gemini Observatory, which is operated by the Association of Universities for Research in Astronomy, Inc., under a cooperative agreement with the NSF on behalf of the Gemini partnership: the National Science Foundation (United States), the Science and Technology Facilities Council (United Kingdom), the National Research Council (Canada), CONICYT (Chile), the Australian Research Council (Australia), Ministério da Ciência e Tecnologia (Brazil) and SECYT (Argentina). NN gratefully acknowledges support from the Chilean BASAL Centro de Excelencia en Astrofísica y Tecnologías Afines (CATA) grant PFB-06/2007. PH, NN, PS and DM acknowledge support from Fondecyt 1171506. VF acknowledges support from CONICYT Astronomy Program-2015 Research Fellow GEMINI-CONICYT (32RF0002). R.A.R. acknowledges support from FAPERGS (project No. 16/2551-0000251-7) and CNPq (project No. 303373/2016-4). This research has made use of the NASA/IPAC Extragalactic Database (NED) which is operated by the Jet Propulsion Laboratory, California Institute of Technology, under contract with the National Aeronautics and Space Administration. [^1]: IRAF is distributed by the National Optical Astronomy Observatories, which are operated by the Association of Universities for Research in Astronomy, Inc., under cooperative agreement with the National Science Foundation. [^2]: Interactive IDL routine written by Christof Iserlohe. http://www.ciserlohe.de/fluxer/fluxer.html [^3]: https://lmfit.github.io/lmfit-py/intro.html [^4]: With the caveat that they used the following equation for the kinetic power: $\dot{E}_{out} = \frac{1}{2}\dot{M}(v_{max}^{2} + \sigma^{2})$ which, for ESO 362-G18, produces $\dot{E}_{out}/L_{bol}$ = 8.9 $\times$ 10$^{-5}$.
{ "pile_set_name": "ArXiv" }
--- abstract: 'In this paper we provide an analytical procedure which leads to a system of $(n-2)^2$ polynomial equations whose solutions give the parameterisation of the complex $n\times n$ Hadamard matrices. It is shown that in general the Hadamard matrices depend on a number of arbitrary phases and a lower bound for this number is given. The moduli equations define interesting geometrical objects whose study will shed light on the parameterisation of Hadamard matrices, as well as on some interesting geometrical varieties defined by them.' author: - | P Diţă\ Institute of Physics and Nuclear Engineering,\ P.O. Box MG6, Bucharest, Romania\ email: [email protected] title: New Results on the Parameterisation of Complex Hadamard Matrices --- Introduction ============ Quantum information theory whose main source comes of a few astonishing features in the foundations of quantum mechanics is the theory of that kind of information which is carried by quantum systems from the preparation device to the measuring apparatus in a quantum mechanical experiment, see e.g. [@We]. Defining new concepts like entangled states, teleportation or dense coding one hopes to be able to design and construct new devices, like quantum computers, which will be useful in solving many “unresolvable” problems by the classical methods. Recently the mathematical structure which is behind such miracle machines was better understood by establishing a one-to-one correspondence between quantum teleportation schemes, dense coding schemes, orthogonal bases of maximally entangled vectors, bases of unitary operators and unitary depolarizers by showing that given any object of any one of the above types one can construct any object of each of these types by using a precise procedure. See Vollbrecht and Werner [@VW] and Werner [@We1] for details. The construction procedure will be efficient to the extent that the unitary bases can be generated, and the construction of these bases makes explicit use of the complex Hadamard matrices and Latin squares. The aim of this paper is to provide a procedure for the parametrisation of the complex Hadamard matrices for an arbitrary integer $n$. More precisely we will obtain a set of $(n-2)^2$ equations whose solutions will give all the complex Hadamard matrices of size $n$. Complex $n$-dimensional Hadamard matrices are unitary $n\times n$ matrices whose entries have modulus $1/\sqrt{n}$. The term [*Hadamard matrix*]{} has its root in the Hadamard’s paper [@Ha], where he gave the solution to the question of the maximum possible absolute value of the determinant of a complex $n\times n$ matrix whose entries are bounded by some constant, which, without loss of generality, can be taken equal to unity. Hadamard has shown that the maximum is attained by complex unitary matrices whose entries have the same modulus and he asked the question if the maximum can also be attained by orthogonal matrices. These last matrices have come to be known as [*Hadamard matrices*]{} in his honor, and have many applications in combinatorics, coding theory, orthogonal designs, quantum information theory, etc., and a good reference about the obtained results is Agaian [@Ag]. However the first complex Hadamard matrices were found by Sylvester [@Sy]. 
He observed that if $a_i,\,\, i=0,1,\dots,n-1$ denote the solutions of the equation $x^n-1=0$ for a prime $n$ then the Vandermonde matrix $${1\over\sqrt{n}}\left(\begin{array}{ccccc} 1&1&1&\cdots&1\\ 1&a_1&a_1^2&\cdots&a_1^{n-1}\\ \cdots&\cdots&\cdots&\cdots&\cdots\\ 1&a_{n-1}&a_{n-1}^2&\cdots&a_{n-1}^{n-1} \end{array} \right)$$ is unitary and Hadamard. In the same paper Sylvester found a method to obtain a Hadamard matrix of size $m n$ if one knows two Hadamard matrices of order $m$ and respectively $n$ by taking their Kronecker product. Soon after the publication of the paper by Hadamard the interest was mainly on the [*real*]{} Hadamard matrices such that the Sylvester contribution fell into oblivion and the [*complex Hadamard matrices*]{} have been much later reinvented in a particular case: only those matrices whose entries are $\pm\, 1,\pm\, i$ where $i=\sqrt{-1}$. Nevertheless a few other problems apparently unrelated to complex Hadamard matrices were those connected with bounds on polynomial coefficients when the indeterminate runs on the unit circle. They are better expressed in terms of the discrete Fourier transform. For any finite sequence $x=(x_0,x_1,\dots,x_{n-1})$ of $n$ complex numbers, its (discrete) Fourier transform is defined by $$y_j=n^{-1/2}\sum_{k=0}^{n-1}\,x_k\,e^{2\,i\,\pi\,kj/n}\quad j=0,1,\dots,n-1$$If the components $x_k,y_k$ are such that $|x_k|=|y_k|=1$ for $k=0,1,\dots,n-1$ the sequence $x$ is called bi-unimodular. The existence of a bi-unimodular sequence of side $n$ is equivalent to the existence of a complex circulant Hadamard matrix of side $n$; a circulant matrix is obtained by circulating its first row, in our case the components of the vector $x/\sqrt{n}$. Now the Gauss sequence $$x_k=\left\{\begin{array}{ll}e^{2\,i\,\pi(ak^2+bk)/n},\,\, a,\,b\in {\mathbf{Z}},\, a\,{\rm coprime\,to}\,\, n,\, k=0,1,\dots,n-1 \, & \mbox{for $n$ odd}\\ e^{k^2\,i\,\pi/n},\,\qquad k=0,1,\dots,n-1& \mbox{for $n$ even} \end{array}\right.$$ is a bi-unimodular sequence [@BS]. The problem of the complete determination of all bi-unimodular sequences is still open, despite the problem is simpler than the parameterisation of arbitrary complex Hadamard matrices. However this approach gave the first non-trivial examples of complex Hadamard matrices for $n\ge 6$. A step towards its solution was the reduction of the bi-unimodular problem to the problem of finding all cyclic $n$-roots [@Bj], that are given by the following system of equations over $\mathbf{C}$ $$\begin{aligned} \left\{\begin{array}{r} z_0+z_1+\cdots +z_{n-1}=0,\\ z_0z_1+z_1z_2+\cdots +z_{n-1}z_0=0,\\ z_0z_1z_2+z_1z_2z_3+ \cdots + z_{n-1}z_0z_1=0,\\ \cdots\cdots\cdots\\ z_0z_1\cdots z_{n-1}=1 \end{array} \right.\label{sys}\end{aligned}$$ Note that the sums are cyclic and contain just $n$ terms and are not the elementary symmetric functions for $n\ge 4$. The relation between $x$ and $z$ is $z_j=x_{j+1}/x_j$. All cyclic $n$-roots have been found for $2\le n \le 8$; see Björck and Fröberg [@BF; @BF1]. The formalism we will develop in the paper is more general showing that the parameterisation of complex Hadamard matrices is more complicated than the finding of all cyclic $n$-roots of the sytem (\[sys\]). Using our approach we find, e.g. 
when $n=6$, the following matrix, which is not contained in the above solutions, $${1\over\sqrt{6}}\left( \begin{array}{rrcccc} 1&1&1&1&1&1\\ 1&-1&i&-i&-i&i\\ 1&i&-1&e^{it}&-e^{it}&-i\\ 1&-i&-e^{-it}&-1&i&e^{-it}\\ 1&-i&e^{-it}&i&-1&-e^{-it}\\ 1&i&-i&-e^{it}&e^{it}&-1 \end{array}\right)$$ a matrix that depends on an arbitrary phase. The parameterisation of complex Hadamard matrices is a special case of a more general problem: that of reconstructing the phases of a unitary matrix from the knowledge of the moduli of its entries, a problem which was fashionable at the end of the eighties of the last century in the high energy physics community, see Auberson[@Au], Björken and Dunietz [@BD], Branco and Lavoura [@BL], Auberson [*et al.*]{} [@AMM]. An existence theorem as well as an estimation for the number of solutions was obtained by us [@Di2]. The particle physicists abandoned the problem when they realised that for $n \ge 4$ there exists a continuum of solutions, i.e. solutions depending on arbitrary phases, a result that was considered uninteresting from the physical point of view. In our opinion, the reason was the difficulty of the problem; since the experiments provide only the squares of the moduli, the first problem is to decide whether, from the experimental results, which in the best case generate a doubly stochastic matrix, one can reconstruct a unitary matrix, i.e. whether the doubly stochastic matrix is unistochastic. Only for $n=3$ does there exist an unambiguous procedure. For $n\ge 4$ there are no known necessary and sufficient conditions to separate the unistochastic matrices from the doubly stochastic ones [@Zy]. At almost the same time the complex Hadamard matrices came out in the construction of some $*$-subalgebras in finite von Neumann algebras, see Popa [@Po], de la Harpe and Jones [@HJ] and Munemasa and Watatani [@MW]. In the last two papers complex Hadamard matrices not of Sylvester type are constructed when $n$ is a prime number such that $n\equiv\pm 1$ (mod 4). A little later Haagerup [@Haa] obtained the first example of a 6-dimensional matrix which is not a solution of the system of equations (\[sys\]). In this paper we make use of a few analytic techniques from the theory of contraction operators and the factorization of unitary matrices to obtain a convenient representation of unitary matrices of arbitrary order $n$ that leads us easily to a system of $(n-2)^2$ trigonometric (or equivalently polynomial) equations whose solutions give all the complex Hadamard matrices of order $n$. Our approach is also useful for finding [*real*]{} Hadamard matrices, being complementary to the combinatorial approach almost exclusively used until now. The paper is organized as follows: in Section 2 the equivalence of the complex Hadamard matrices is reviewed. In Section 3 a theorem showing the existence of the complex Hadamard matrices for every integer $n$ is stated and an upper bound on the number of continuum solutions is obtained. Section 4 contains a one-to-one parametrisation of unitary matrices written as block matrices and in the next Section an application of the obtained formulae is given. In Section 6 another parameterisation of unitary matrices is given under the form of a product of $n$ diagonal phase matrices interlaced with $n-1$ orthogonal matrices, each one generated by a real vector from ${\bf R}^n$. This form is convenient because it leads to a simpler form for the moduli equations and at the same time we consider it more appropriate for designing software packages for solving these equations.
In Section 7 we show how to derive the moduli equations as trigonometric equations and give a few particular solutions for $n=6$. In Section 8 the problem is reformulated as an algebraic geometry problem and we show that the parameterisation of Hadamard matrices can produce interesting examples for many problems currently under study in this field. The paper ends with Conclusions. Equivalence of complex Hadamard matrices ======================================== Complex $n$-dimensional Hadamard matrices being unitary matrices whose entries have modulus $1/\sqrt{n}$, the natural class of looking for complex Hadamard matrices is the unitary group ${U}(n)$. The unitary group ${U}(n)$ is the group of automorphisms of the Hilbert space $({\mathbf C}^{n}, (\cdot,\cdot))$ where $(\cdot,\cdot)$ denotes the Hermitian scalar product $(x,y)=\sum_{i=1}^{i=n}\,\overline{x_i}\,y_i$ and the bar denotes the complex conjugation. If $A_n\in {U}(n)$ by $A_n^*$ we denote the adjoint matrix and unitarity implies $A^*_n\,A_n=A_n\,A^*_n=I_n$. It follows that $det\, A_n= e^{i\,\varphi}$, where $\varphi$ is a phase, and $dim_{\bf R}\,{U}(n)=n^2$. Because in any group the product of two arbitrary elements is again an element of the group there is a freedom in choosing the “building” blocks to be used in a definite application. In the case of a complex Hadamard matrix the multiplication of a row and/or a column by an arbitrary phase factor does not change its properties and consequently we can remove the phases of a row and column taken arbitrarily. Taking into account that property we can write $$\begin{aligned} A_n=d_n\,\tilde{A_n}\, d_{n-1}\end{aligned}$$ where $\tilde{A_n}$ is a matrix with all the elements of the first row and of the first column positive numbers and $d_n=(e^{i\varphi_1},\dots,e^{i\varphi_n})$ and $d_{n-1}=(1,e^{i\varphi_{n+1}},\dots,e^{i\varphi_{2n-1}})$ are two diagonal phase matrices. In the following we will consider that $A_n \equiv \tilde{A_n}$, i.e. $A_n$ will be a matrix with positive entries in the first row and the first column. Since a unitary matrix is parameterised by $n(n-1)/2$ angles and $n(n+1)/2$ phases [@Di] the above equivalence relation tell us that the number of remaining phases is $n(n+1)/2-(2n-1)=(n-1)(n-2)/2$, and so the number of free real parameters entering a unitary matrix is reduced from $n^2$ to $n^2-(2n-1)=(n-1)^2$. Secondly we can permute any rows and/or columns and get an equivalent unitary matrix. This procedure can be seen as a multiplication of $A_n$ at left and/or right by an arbitrary finite number of the simplest permutation unitary matrices $P_{ij},\,\, i\neq j,\,\, i,j=1,\dots,n$, whose all diagonal entries but $a_{ii}$ and $a_{jj}$ are equal to unity, $a_{ii}=a_{jj}=0,\,\, a_{ij}=a_{ji}=1,\,\, i\neq j$ and all the other entries vanish. Both the diagonal phase and permutation matrices generate subgroups of the unitary ${U}(n)$ group; so we may consider them as gauge subgroups, i.e. any element of ${U}(n)$ is defined modulo the action of a finite number of the above transformation, which has as consequence a standard representation for unitary matrices. We consider that the group generated by the above two subgroups deseves to be independently studied since its orbit structure could shed light on many important issues from information theory and stochastic matrices. 
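The dephasing just described is immediate to carry out numerically: the sketch below strips row and column phases from an arbitrary unitary matrix so that its first row and first column become positive, recovering the standard form $A_n=d_n\,\tilde{A_n}\, d_{n-1}$. This is a minimal numpy illustration, with a random unitary matrix generated by a QR decomposition.

```python
import numpy as np

def dephase(u):
    """Return (d_left, a_tilde, d_right) with u = d_left @ a_tilde @ d_right,
    where a_tilde has a positive first row and first column."""
    d_left = np.exp(-1j*np.angle(u[:, 0]))          # removes the first-column phases
    a = u * d_left[:, None]
    d_right = np.exp(-1j*np.angle(a[0, :]))         # removes the first-row phases
    d_right[0] = 1.0                                # first entry is already positive
    a = a * d_right[None, :]
    return np.diag(np.conj(d_left)), a, np.diag(np.conj(d_right))

rng = np.random.default_rng(1)
z = rng.normal(size=(4, 4)) + 1j*rng.normal(size=(4, 4))
u, _ = np.linalg.qr(z)                              # a random 4x4 unitary matrix
dl, at, dr = dephase(u)
assert np.allclose(dl @ at @ dr, u)
assert np.allclose(np.imag(at[0, :]), 0) and np.all(np.real(at[0, :]) >= 0)
assert np.allclose(np.imag(at[:, 0]), 0) and np.all(np.real(at[:, 0]) >= 0)
```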
The above two equivalence conditions are those found by Sylvester [@Sy] for the Hadamard matrices, but in fact they are valid for ${U}(n)$ which is invariant with respect to the product of an arbitrary number of the above transformations. Besides for Hadamard matrices we will not distinguish between $A_n$ and its complex conjugated matrix $\bar{A}_n$, the complex conjugation being equivalent to the sign change of all phases $\varphi_i \rightarrow - \varphi_i$ entering the parametrisation. More generally we shall consider equivalent two matrices whose phases can be obtained each other by an arbitrary non-singular linear transformation with constant rational coefficients. As we will see later the complex Hadamard matrices depend in general on a number of arbitrary phases and the above condition says that we will consider only the most general form of the solution and not those particular forms obtained by prescribing definite values to the (arbitrary) phases entering the parameterisation. In this sense we can say that there is only one complex Hadamard matrix of order $4$, that found by Hadamard [@Ha], all the others, including those with all entries real numbers, being particular cases of the complex one. Other authors speak in this case of non-equivalent or a continuum of solutions [@Haa]. We consider that the above conditions are the only a priori equivalence criteria we can impose on Hadamard matrices, i.e. will consider equivalent any two matrices that can be made equal by applying them a finite number of the above transformations. Existence of complex Hadamard matrices ====================================== The parameterisation of a unitary matrix by the moduli of its entries is very appealing, and in the case of Hadamard matrices compulsory, although it is not a natural one in the general case. A natural parameterisation would be one whose parameters are free, i.e. there are no supplementary restrictions upon them to enforce unitarity. In this sense natural parameterizations are the Euler-type parameterisation by Murnagham [@Mu], or that found by us [@Di]. The problem we rose in [@Di2] was to what extent the knowledge of the moduli $|a_{ij}|$ of an $n\times n$ unitary matrix $A_n=(a_{ij})$ determines $A_n$. Implicitly we supposed that $A_n$ is parameterized by $n^2$ independent parameters. But from what we said before we know that we may ignore $2n-1$ phases entering the first row and the first column and consequently the number of independent parameters reduces to $(n-1)^2$, that coincides with the number of independent moduli implied by unitarity. If we identify the parameters to the moduli they will be lying within the simple domain $$D=(0,1)\times\dots\times (0,1)\equiv (0,1)^{(n-1)^2}$$ where the above notation means that the number of factors entering the topological product is $(n-1)^2$. We excluded only the extremities of each interval, i.e. the points $0$ and $1$ that is a zero measure set whitin ${U}(n)$ and has no relevance to the parameterisation of complex Hadamard matrices. Thus, in principle, we can parameterise an $n\times n$ unitary rephasing invariant matrix by the upper left corner moduli; we exclude the moduli of the last row and of the last column since they follow from unitarity. Nothing remains but to check if the new parameterisation is one-to-one. 
A solution to the last problem is the following: start with a one-to-one parameterisation of ${U}(n)$ and then change the coordinates taking as new coordinates the moduli of the $(n-1)^2$ upper left corner entries (and $2n-1$ ignorable phases). Afterwards use the implicit function theorem to find the points where the new parameterisation fails to be one-to-one. The corresponding variety, on which the map fails to be bijective, is given by setting to zero the Jacobian of the transformation. One gets that generically for $n\ge 4$ the unitary group ${U}(n)$ cannot be fully parametrised by the moduli of its entries, i.e. for a given set of moduli there could exist a continuum of solutions, but this negative result is actually helpful for the parameterisation of Hadamard matrices, since the equivalence conditions discussed in the previous section decrease the number of independent solutions. If the moduli are outside of the above variety an upper bound for the multiplicity is $2^{n(n-3)\over2}$. However in the case of Hadamard matrices the equivalence constraints reduce this number to lower values than the above upper bound. The bound is saturated for $n=3$ when there is essentially only one complex matrix, i.e. for given moduli values for the first row and column entries compatible with unitarity, the sole freedom is an arbitrary phase. If we denote the relevant squared moduli by $m_1, m_2, m_3, m_4$ and the phase by $\varphi$ then the compatibility condition has the form $$-1\leq \cos\varphi=\frac{-1+2m_1-m_1^2+m_2+m_3+m_4-m_1m_2-m_1 m_3-m_2 m_3 -2 m_1 m_4 - m_1 m_2 m_3+m_1^2 m_4}{2\sqrt{m_1 m_2 m_3 (1-m_1-m_2)(1-m_1-m_3)}} \leq 1$$ This is also the necessary and sufficient condition which the squared moduli $m_i,\,\,i=1,\dots,4$, have to satisfy in order to obtain a unistochastic matrix from a general doubly stochastic matrix. Because unitary matrices of arbitrary dimension do exist and, on the other hand, the number of independent essential parameters of a ${U}(n)$ matrix is $(n-1)^2$, the following is true: Suppose $(x_1,\dots,x_{n^2})$ is a co-ordinate system on the unitary group ${U}(n)$ consisting of $n(n-1)/2$ angles each one taking values in $[0,\pi/2]$ and $n(n+1)/2$ phases taking values in $[0,2\pi)$. By discarding $2n-1$ non-essential phases the number of co-ordinates reduces to $(n-1)^2$, $(x_1,\dots,x_{(n-1)^2})$, which coincides with the number of independent moduli $(m_1,\dots,m_{(n-1)^2})$ implied by unitarity. Taking as new co-ordinates the moduli $m_i,\,\, i=1,\dots,(n-1)^2,$ the new parameterisation is generically not one-to-one for $n \geq 4$, the non-uniqueness variety being obtained by setting to zero the Jacobian of the transformation $$\begin{aligned} {\partial(m_1,\dots,m_{(n-1)^2})\over\partial(x_1,\dots,x_{(n-1)^2})}=0\label{jac}\end{aligned}$$ Outside this variety the number of discrete solutions $N_s$ satisfies $1\leq N_s\leq 2^{{n(n-3)\over 2}}$ and on the variety described by (\[jac\]) there is a continuum of solutions.
In the special case of complex Hadamard matrices all the solutions are given by the system of trigonometric equations $$\begin{aligned} m_i^2(x_1,\dots,x_{(n-1)^2})={1\over{n}}\,,\qquad i=1,\dots,(n-1)^2\label{mod}\end{aligned}$$ Suppose we know the irreducible components of the variety (\[jac\]) and let $r(n)$ be the rank of the system (\[mod\]) in every irreducible component, then every solution of (\[mod\]) in such an irreducible component will depend upon $(n-1)^2-r(n)$ arbitrary parameters and the number of (continuum) solutions satisfies $1\leq N_s\leq 2^{r(n)-1-n(n-1)/2}$. [*Proof.*]{} In the general case Eqs.(\[mod\]) have the form $$\begin{aligned} m_i^2(x_1,\dots,x_{(n-1)^2})=a_i,\,\,\,\,\,{\rm where}\,\,a_i\in (0,1)\,,\quad i=1,\dots,(n-1)^2\label{mod1}\end{aligned}$$ The parameters $a_i$ generate a doubly stochastic matrix. The Eqs.(\[mod1\]), as we will see later, are trigonometric equations in our parameterisation, and consequently the multiplicity of the solutions may arise from the two possible phase solutions for all values of sine or cosine functions that satisfy (\[mod1\]). The number of independent phases is $(n-1)(n-2)/2$ and, taking into account that we consider $A_n$ and $\bar{A_n}$ as equivalent matrices, a condition which halves the number of solutions, the above bound for $N_s$ follows. A similar argument establishes the upper bound for the number of continuum solutions. For $n=3$ the Jacobian is positive and $1\leq N_s\leq 1$, which implies the existence of one complex matrix irrespective of the values $a_i$ compatible with unitarity. It is easily seen that the equations which correspond to the first row and the first column entries have a unique solution and the number of equations reduces to $(n-2)^2$. Indeed, because these entries are positive we can take the following parameterisation in terms of $2\,n-3$ angles, e.g. for the first row $$(a_{11},\dots,a_{1n})=(cos\,\chi_1,sin\,\chi_1\,cos\,\chi_2,\dots,sin\,\chi_1\dots sin\,\chi_{n-1})$$ and similarly for the first column. The Eqs.(\[mod1\]) give the unique solution $$cos^2\,\chi_k=\frac{a_k}{1-\sum_{i=1}^{k-1}a_i},\quad k=1,2,\dots,n-1$$ where $a_k=|a_{1k}|^2, \, k=1,2,\dots,n-1$. In the case of Hadamard matrices one gets $$cos\,\chi_k=\frac{1}{\sqrt{n+1-k}},\quad k=1,2,\dots,n-1$$ and the same solution for the angles parameterising the first column. In this way the number of equations reduces to $(n-1)^2-(2\,n-3)=(n-2)^2$ and the upper bound for the continuous solutions may be written as $1\leq N_s\leq 2^{r(n)-1-(n-2)(n-3)/2}$, where $r(n)$ is the rank of the reduced system. Even so the number of equations grows quadratically with $n$, which shows that even for moderate values of $n$ the problem is not easy to solve. In conclusion we have a system of trigonometric equations whose solutions will give all the complex Hadamard matrices, but to make this effective we have to start with a one-to-one parameterisation of unitary matrices in order to find the explicit form of the $(n-2)^2$ equations and try to solve them. In the following Section we will provide one of the two parameterisations of unitary matrices that we will use in the paper. Parameterisation of unitary matrices ==================================== The aim of this section is to provide a one-to-one parameterisation of unitary matrices that will be useful in describing the complex Hadamard matrices.
We shall present two such parameterisations and for the the first one we follow closely our paper [@Di] showing here only the points which are important in the following. The algorithm we provide is a recursive one, allowing the parameterisation of $n\times n$ unitary matrices through the parameterisation of lower dimensional ones. The parameterisation will be one-to-one and given in terms of $a(n)$ angles taking values in $[0,\pi/2]$ and $\varphi(n)$ phases taking values in $[0,2\pi)$ such that the application $$A_n(A_n\in {{U}}(n), A_nA_n^*=I_n)\rightarrow E=(0,\pi/2)^{a(n)}[0,2\pi)^{\varphi(n)}\subset {\mathbf{ R}}^{n^2}$$ is bijective. Always in the following the ends of the interval $[0,\pi/2]$ will be obtained by continuation in the relevant parameters, if necessary. The starting point is the partitioning of the matrix $A_n\in{ U}(n)$ in blocks $$\begin{aligned} A_n=\left( \begin{array}{cc}A &B\\ C&D\end{array}\right) \label{block}\end{aligned}$$ For definiteness we suppose the order of $A$ is equal to $m$ with $m\leq n/2$. The blocks entering (\[block\]) are contractions as follows from unitarity $$\begin{aligned} A\,A^*+B\,B^*=I_m,\quad A^*\,A+C^*\,C=I_m,\quad C\,C^*+D\,D^*=I_{n-m}\label{block1}\end{aligned}$$ where in the following $I_k$ denotes the $k\times k$ unit matrix. Suppose we know the contraction $A$, then the problem reduces to finding the $B$, $C$ and $D$ blocks such that $ A_n$ should be unitary. In other words the problem is: knowing a contraction $A$ of side $m$ how we can border it for getting a unitary $n\times n$ matrix $A_n$? For solving this problem we shall make use of the theory of contraction operators. An operator $T$ applying the Hilbert space $\cal{H}$ in the Hilbert space $\cal{H}'$ is a contraction if for any $v\in {\cal{H}}$,  $||T\,v||_{{\cal{H'}}}\leq ||v||_{{\cal{H}}}$, i.e. $||T||\leq 1$, [@FN]. For any contraction we have $T^*\,T\leq I_{{\cal{H'}}}$ and $T\,T^*\leq I_{{\cal{H}}}$ and the defect operators $$D_T=(I_{{\cal{H}}}-T^*\,T)^{1/2},\quad D_{T^*}=(I_{{\cal{H'}}}-T\,T^*)^{1/2}$$ are Hermitean operators in ${{\cal{H}}}$ and ${{\cal{H}'}}$ respectively. They have the property $$\begin{aligned} T\,D_{T}=D_{T^*}\,T,\quad T^*\,D_{T^*}=D_{T}\,T^*\label{def}\end{aligned}$$ Here we consider only finite-dimensional contractions, i.e. $T$ will have in general $n_1$ rows and $n_2$ columns. The unitarity relations (\[block1\]) can be written as $$BB^*=D_{A^*}^2,\qquad C^*C=D_A^2$$ According to Douglas lemma [@Do] there exist two contractions $U$ and $V$ such that $$B=D_{A^*}U,\quad {\rm and} \quad C=D_AV$$ Since we are looking for a parameterisation of unitary matrices, $U^*$ and $V$ are isometries, i.e. they satisfy the relations $$UU^*=I_{n-m}, \qquad V^*V=I_{m}$$ If $n$ is even and $m=n/2$, then $U$ and $V$ are unitary operators. Thus $B$ and $C$ blocks are given by the defect operators $D_{A^*}$, $D_A$ and two arbitrary isometries whose dimensions are $m\times (n-m)$ and $(n-m)\times m$ respectively. The last block of $A_n$ is given by the lemma The formula $$D=-VA^*U+D_{V^*}K D_U$$ establishes a one-to-one correspondence between all the bounded operators $D$ such that $$A_n=\left( \begin{array}{cc}A &D_{A^*}U\\ VD_A&D\end{array}\right)$$ is a contraction and all the bounded contractions $K$. See Arsene and Gheondea [@AG] for a proof of the general result when $U$, $V$ and $K$ are contractions, and further details. 
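In the halved case $n=2m$ the isometries $U$ and $V$ are unitary and the block involving $K$ drops out; making the simplest choice $U=V=I_m$ (purely for illustration), the lemma gives the unitary matrix with blocks $A$, $D_{A^*}$, $D_A$, $-A^*$, which the following sketch checks numerically for a random contraction.

```python
import numpy as np

def herm_sqrt(h):
    """Square root of a positive semidefinite Hermitian matrix."""
    w, v = np.linalg.eigh(h)
    return (v * np.sqrt(np.clip(w, 0, None))) @ v.conj().T

rng = np.random.default_rng(2)
m = 3
a = rng.normal(size=(m, m)) + 1j*rng.normal(size=(m, m))
a /= np.linalg.norm(a, 2) + 0.1                     # enforce ||A|| < 1, i.e. A is a contraction

d_a     = herm_sqrt(np.eye(m) - a.conj().T @ a)     # defect operator D_A
d_astar = herm_sqrt(np.eye(m) - a @ a.conj().T)     # defect operator D_{A*}

u = np.block([[a, d_astar],
              [d_a, -a.conj().T]])
assert np.allclose(u @ u.conj().T, np.eye(2*m))     # the bordered matrix is unitary
assert np.allclose(a @ d_a, d_astar @ a)            # intertwining relation A D_A = D_{A*} A
```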
In our case $U$ and $V$ being isometries $D$ is given by $$\begin{aligned} D=-VA^*U + XMY\label{gheo}\end{aligned}$$ where $X$ and $Y$ are those unitary matrices that diagonalise the Hermitean defect operators $D_{V^*}$ and $D_U$ respectively, i.e. $$X^*D_{V^*}X=P,\qquad Y^*D_UY=P$$ $P$ is the projection $$P=\left( \begin{array}{cc}0 &0\\ 0&I_{n-2m}\end{array}\right)$$ and the matrix $M$ entering (\[gheo\]) has the form $$M=\left( \begin{array}{cc}0 &0\\ 0&A_{n-2m}\end{array}\right)$$ where $A_{n-2m}$ denotes an arbitrary $(n-2m)\times(n-2m)$ unitary matrix. See [@Di] for details. In the above formulae we supposed that the eigenvectors of the $D_U$ and $D_{V^*}$ operators entering the matrices $X$ and $Y$ are ordered in the increasing order of the eigenvalues. Therefore the parameterisation of an $n\times n$ unitary matrix is equivalent to the parameterisations of four matrix blocks with lower dimensions than those of the original matrix, and consequently our task is considerably simplified. On the other hand the formulae (\[gheo\]) and subsequent show that this procedure is recursive allowing the parameterisation of any finite dimensional unitary matrix starting with the parameterisation of one- or two-dimensional unitary matrices. Moreover the parameterisation of $A_n$ requires the parameterisation of an $m\times m$ contraction, of two isometries $U$ and $V$ and of an $(n-2m)\times(n-2m)$ unitary matrix. In our papers [@Di; @Di2] we considered only the case $m=1$ as the simplest one, however the case $m >1$ may be useful in the study of complex Hadamard matrices. For what follows we treat again the case $m=1$, i.e. $A$ is the simplest contraction, a complex number whose modulus is less than one, because we found the form of the matrices $X$ and $Y$ for arbitrary $n$. Since $V$ is a $(n-1)$-dimensional vector the isometry property allows us to parametrise it as $V=(cos\,\chi_1,sin\,\chi_1\, cos\,\chi_2,$ $\dots,sin\,\chi_1\dots sin\,\chi_{n-2})^t$ where $t$ denotes transpose. $V$ is the eigenvector of $D_{V^*}$ corresponding to the zero eigenvalue. Indeed from the relations (\[def\]) we have $$D_{V^*}\,V=V\,D_{V}=0$$ showing that $V$ is the eigenvector of $D_{V^*}$ corresponding to the zero eigenvalue. Thus the problem is: how to complete an orthogonal matrix $X$ knowing its first column (row) such that no suplementary parameters enter. The other columns of this matrix we are looking for will be given by the other eigenvectors of $D_{V^*}$. One easily verifies that $D_{V^*}$ is a projection operator such that the other eigenvalues equal unity. Indeed the folowing holds The orthonormalised eigenvectors of the eigenvalue problem $$D_{V^*}\,v_k=\lambda_k\,v_k,\,\,\, k=1,\dots,n-1$$ are the columns of the orthogonal matrix $X\in SO(n-1)$ and are generated by the vector $V$ as $$v_1=\left(\begin{array}{c} cos\,\chi_1\\ sin\,\chi_1\, cos\,\chi_2\\ \cdot\\ \cdot\\ \cdot\\ sin\,\chi_1\dots sin\,\chi_{n-2} \end{array}\right)$$ and $$v_{k+1}={d\over d\,\chi_k}\,v_1(\chi_1=\dots=\chi_{k-1}={\pi\over 2}), \,\,k=1,\dots,n-2$$ where in the above formula one calculates first the derivative and afterwards the restriction to $\pi/2$. In a similar way one finds $Y$; see [@Di3] for a proof. 
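The construction of the lemma can be transcribed literally with symbolic differentiation; the sketch below builds the columns $v_1,\dots,v_{n-1}$ from the first column, restricts the angles as prescribed, and then checks orthonormality numerically for random angle values (here for $n=6$, i.e. a $5\times 5$ orthogonal matrix $X$).

```python
import numpy as np
import sympy as sp

def build_x(chis):
    """Columns v_1,...,v_m (m = n-1): v_1 is the spherical-coordinate vector V and
    v_{k+1} = d v_1 / d chi_k evaluated at chi_1 = ... = chi_{k-1} = pi/2."""
    m = len(chis) + 1
    v1 = []
    for j in range(m):
        comp = sp.Integer(1)
        for i in range(j):
            comp *= sp.sin(chis[i])
        if j < m - 1:
            comp *= sp.cos(chis[j])
        v1.append(comp)
    v1 = sp.Matrix(v1)
    cols = [v1]
    for k in range(m - 1):
        col = sp.diff(v1, chis[k]).subs({chis[i]: sp.pi/2 for i in range(k)})
        cols.append(col)
    return sp.Matrix.hstack(*cols)

chis = list(sp.symbols('chi1:5'))                    # chi_1 ... chi_4, so m = 5
x_sym = build_x(chis)
vals = dict(zip(chis, np.random.uniform(0.1, 1.4, len(chis))))
x_num = np.array(x_sym.subs(vals).evalf().tolist(), dtype=float)
assert np.allclose(x_num.T @ x_num, np.eye(len(chis) + 1), atol=1e-9)
```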
In the case of $n \times n$ Hadamard matrices whose elements of the first row and of the first column are positive numbers $a_{1j}=a_{j1}={1\over\sqrt{n}}$, $j=1,\dots,n$, $X$ has the form $$\left(\begin{array}{cccccccc} {1\over \sqrt{n-1}}&-\sqrt{n-2\over n-1}&0&0&\dots&\dots&0&0\\ {1\over \sqrt{n-1}}&{1\over \sqrt{(n-1)(n-2)}}&-\sqrt{n-3 \over n-2}&0&\dots&\dots&0&0\\ {1\over \sqrt{n-1}}&{1\over \sqrt{(n-1)(n-2)}}&{1\over\sqrt{(n-2)(n-3)}}&-\sqrt{n-4\over n-3}&\dots&\dots&0&0\\ \cdot&\cdot&\cdot&\cdot&\cdot&\cdot&\cdot&\cdot\\ \cdot&\cdot&\cdot&\cdot&\cdot&\cdot&\cdot&\cdot\\ \cdot&\cdot&\cdot&\cdot&\cdot&\cdot&\cdot&\cdot\\ {1\over\sqrt{n-1}}&{1\over \sqrt{(n-1)(n-2)}}&{1\over\sqrt{(n-2)(n-3)}}&{1\over\sqrt{(n-3)(n-4)}}&\dots&\dots& {1\over\sqrt{6}}&-{1\over\sqrt{2}}\\ {1\over\sqrt{n-1}}&{1\over \sqrt{(n-1)(n-2)}}&{1\over\sqrt{(n-2)(n-3)}}&{1\over\sqrt{(n-3)(n-4)}}&\dots&\dots& {1\over\sqrt{6}}&{1\over\sqrt{2}}\\ \end{array}\right)$$ and $Y=X^t$, where $t$ denotes the transposed matrix. In this way all the quantities entering formula (\[gheo\]) are known and the parameterisation of $A_n$ can be obtained recursively starting with the known parameterisation of $2\times 2$ unitary matrices. When the block $A$ is one-dimensional, i.e. a simple number equal to $1/\sqrt{n}$, the term $V\,A^*\,U$ entering Eq.(\[gheo\]) has the form $\frac{1}{(n-1)\sqrt{n}}\,J$ where $J$ is the $(n-1)\times(n-1)$ matrix whose each of entries is $+1$, which appears in many constructions of [*real*]{} Hadamard matrices; see Agaian [@Ag]. Application ============ In the following we will use Eq.(\[gheo\]) to generalize to the case of complex Hadamard matrices the trics used by Sylvester [@Sy] and Hadamard [@Ha] for constructing complex Hadamard matrices. We take $n$ an even number, $n=2\, m$, and we suppose that we know a parameterisation of the $A$ block which is unitary and whose order is $m$. In that case $B$ and $C$ blocks are also unitary matrices of order $m$ and we consider them normalized as $A\,A^*=B\,B^*=C\,C^*= I_m$. From (\[gheo\]) we have $D=-C\,A^*\,B$ and the following matrix $${1\over\sqrt{2}} \left( \begin{array}{cc} A&B\\ C&-C\,A^*\,B \end{array} \right)$$ will be unitary by construction. In general the above matrix will not be Hadamard even when $A,\,\,B$ and $C$ are, as the simplest example shows; this happens only when either $C=A$ or $B=A$. Since the second case is obtained by transposing the matrix of the first one, as long as $B$ and $C$ are arbitrary, we will consider only the matrix $$\begin{aligned} {1\over\sqrt{2}} \left( \begin{array}{cc} A&B\\ A&-B \end{array} \right)\label{arr}\end{aligned}$$ which is the elementary two-dimensional array that will be used in the construction of more complicated arrays of Hadamard matrices. In the following we suppose that $A$ and $B$ are complex Hadamard matrices of size $m$ each one depending on $p\ge 0$, respectively, $q\ge 0$ free phases, i.e. (\[arr\]) is a complex Hadamard matrix of size $2\,m$. Now we make use of Hadamard’s trick to get a Hadamard matrix depending on $p+q+m-1$ arbitrary phases. Indeed we can multiply $B$ at left by the diagonal matrix $d=(1,e^{i\,\varphi_1},\dots,e^{i\,\varphi_{m-1}})$ without modifying the Hadamard property. In this way Hadamard obtained a continuum of solutions for the case $n=4$. We denote $B_1=d\cdot B$ and then the matrix $$\begin{aligned} \frac{1}{\sqrt{2}} \left( \begin{array}{cc} A&B_1\\ A&-B_1 \end{array} \right)\label{arr1}\end{aligned}$$ will be unitary and Hadamard depending on $p+q+m-1$ parameters. 
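A numerical check of the elementary array (\[arr\]) and of the phase trick leading to (\[arr1\]), with a particular pair of $2\times 2$ Hadamard matrices and numerical phase values chosen purely for illustration:

```python
import numpy as np

def is_hadamard(h, tol=1e-12):
    n = h.shape[0]
    return (np.allclose(h @ h.conj().T, np.eye(n), atol=tol)
            and np.allclose(np.abs(h), 1/np.sqrt(n), atol=tol))

a = np.array([[1, 1], [1, -1]]) / np.sqrt(2)                        # A: real 2x2 Hadamard
s = 0.9                                                             # an arbitrary phase
b = np.array([[1, 1], [np.exp(1j*s), -np.exp(1j*s)]]) / np.sqrt(2)  # B: another 2x2 Hadamard
d = np.diag([1.0, np.exp(1j*1.7)])                                  # the diagonal phase matrix d
b1 = d @ b                                                          # B_1 = d B

h4 = np.vstack([np.hstack([a, b1]), np.hstack([a, -b1])]) / np.sqrt(2)   # the array (arr1)
assert is_hadamard(h4)
```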
From (\[arr1\]) we obtain in general two non-equivalent $2\,m\times 2\, m$ Hadamard matrices when $B\neq B^*$. In this case Eq.(\[arr1\]) is a realization and the second one is given by $B_1 \rightarrow B_2 =d\cdot B^*$. The above procedure can be iterated by taking the matrix (\[arr\]) as a new $A$ block, obtaining a Hadamard matrix of the form $$\begin{aligned} \frac{1}{2} \left( \begin{array}{crrr} A&B&C&D\\ A&-B&C&-D\\ A&B&-C&-D\\ A&-B&-C&D \end{array} \right)\label{arr2}\end{aligned}$$ which is a $4\,m$-dimensional array similar to the Williamson array [@Wi], and so on. In contradistinction to the Williamson array the $A,\, B,\, C,\, D$ blocks satisfy no supplementary conditions, except for their unitarity. Thus the following holds: If the $m\times m$ complex Hadamard matrices $A, B, C, D$ depend on $p, q, r, s$ arbitrary phases then there exists a complex Hadamard matrix of the form (\[arr2\]) which depends on $p+q+r+s+3(m-1)$ arbitrary phases. We notice that the elementary array (\[arr\]) is different from the Goethals-Seidel one [@GS] that appears in the construction of [*real*]{} Hadamard matrices and which has the form $$\frac{1}{\sqrt{2}} \left( \begin{array}{cc} A&B\\ B&-A \end{array} \right)$$ The above array is not unitary even when $A$ and $B$ are, the supplementary condition for unitarity being the relation $A\,B^*=B\,A^*$. We consider that the form (\[arr\]) could also be useful for the study of orthogonal designs and [*real*]{} Hadamard matrices, as it is in some sense complementary to the above form. As an application of the formula (\[arr2\]) we consider the following case: $a_{11}=a_{12}=a_{21}=-a_{22}=b_{11}=b_{12}=c_{11}=c_{12}=d_{11}=d_{12}=1/\sqrt{2}$ and $b_{21}=-b_{22}=e^{is}/\sqrt{2}$, $c_{21}=-c_{22}=e^{it}/\sqrt{2}$, $d_{21}=-d_{22}=e^{iu}/\sqrt{2}$ where the notation is self-explanatory, and we obtain an eight-dimensional Hadamard matrix depending on three arbitrary phases $s,\,t,\,u$. When $A=B$, Eq.(\[arr\]) can be written as $$\begin{aligned} \frac{1}{\sqrt{2}} \left( \begin{array}{cc} A&A\\ A&-A \end{array} \right)={1\over\sqrt{2}} \left( \begin{array}{cc} 1&1\\ 1&\epsilon \end{array} \right)\otimes A \label{syl}\end{aligned}$$ where $\epsilon =-1$, i.e. the first factor is the Sylvester Vandermonde matrix of the second roots of unity, and $\otimes$ is the ordinary Kronecker product, $A\otimes B=[a_{ij}B]$; of course the first factor can be any complex Hadamard matrix of order $m$. Now we want to define a new product, the aim being a more general construction of Hadamard matrices. Let $M$ and $N$ be two matrices of the same order $m$ whose elements are matrices $M_{ij}$ of order $n$ and, respectively, $N_{kl}$ of order $p$. The new product denoted by $\tilde\otimes$ is given as $$Q=M\tilde{\otimes}N$$ which is a matrix of order $mnp$, where $$Q_{ij}=\sum_{k=1}^{k=m}\,M_{ik}\otimes N_{kj}$$ We will use here the above formula only in the case $M=(m_{ij})$, where $m_{ij}$ are complex scalars, not matrices, and $N$ is an arbitrary block-diagonal matrix $N=(N_{11},\cdots, N_{mm})$, where $N_{ii}$ are matrices of order $p$, obtaining $$\begin{aligned} Q= \left( \begin{array}{cccc} m_{11}N_{11}&\cdot&\cdot&m_{1m}N_{mm}\\ \cdot&\cdot&\cdot&\cdot\\ \cdot&\cdot&\cdot&\cdot\\ m_{m1}N_{11}&\cdot&\cdot&m_{mm}N_{mm} \end{array} \right)\label{arr3}\end{aligned}$$ Thus the following is true.
This form is the most general array we have obtained and in some sense (\[arr3\]) is the natural generalization of Williamson arrays to the case of complex Hadamard matrices. If in the above relation we take $m_{11}=m_{12}=m_{21}=-m_{22}={1/ \sqrt{2}}$ and $N_{11}=A$ and $N_{22}=B$, then Eq.(\[arr3\]) reduces to Eq.(\[arr\]). [ex]{}[Example]{} If now $m_{ij}$ are the same as above and $$N_{11}={1\over 2}\left( \begin{array}{rrrr} 1&1&1&1\\ 1&1&-1&-1\\ 1&-1&-e^{is}&e^{is}\\ 1&-1&e^{is}&-e^{is} \end{array}\right)$$ is the complex four-dimensional Hadamard matrix and $$N_{22}={1\over 2}\left( \begin{array}{rrrr} 1&0&0&0\\ 0&e^{it}&0&0\\ 0&0&e^{iu}&0\\ 0&0&0&e^{iv} \end{array}\right) \left( \begin{array}{rrrr} 1&1&1&1\\ 1&1&-1&-1\\ 1&-1&-e^{iy}&e^{iy}\\ 1&-1&e^{iy}&-e^{iy} \end{array}\right)$$ we obtain an eight-dimensional matrix depending now on five arbitrary phases $s,t,u,v,y$ instead of three as in the preceding example obtained by using the Williamson-type array (12). Thus the following holds. If $M,N_i,\,i=1,\dots,m$ are $m\times m$ and respectively, $ n\times n$-dimensional complex Hadamard matrices depending on $m$, respectively, $n_i$, arbitrary phases, then there is an array of the form (14) that depends on $$m+n_1+(m-1)\sum_{i=2}^m m_i$$ free phases. The above example shows the necessity for getting upper and lower bounds on the number of arbitrary phases entering a Hadamard matrix of size $N$. Taking into account the standard decomposition of any integer under the form $N=p_1^{q_1}\dots p_m^{q_m}$, where $p_1 <\dots <p_m$ are primes and $q_1 \dots q_m$ their respective powers, we may use the above [*Proposition 3*]{} for obtaining lower bounds on the number of free phases, that we shall denote it by $\varphi(N)$. Since until now does not exist an example of a Hadamard matrix of size $N$ with $N$ prime which depends on free phases, in the following we will consider the normalization $\varphi(N)=0$, for $N$ prime. Thus the following holds. Let $N=p_1^{q_1}$ be the power of a prime $p_1$, with $q\ge 2$. Then a lower bound for $\varphi(p_1^{q_1})$, the number of free phases entering the parameterization of the $N\times N$ complex Hadamard matrix, is given by $$\varphi(p_1^{q_1})= 1+[(p_1-1)(q_1-1) -1]p_1^{q_1 -1}$$ If $N=p_1^{q_1}\dots p_m^{q_m}=p_1^{q_1} N_1 $ then $\varphi(p_1^{q_1}N_1)$ is given by $$\varphi(p_1^{q_1}N_1)=1+[(p_1 -1)q_1N_1-p_1]p_1^{q_1 -1} + \varphi(N_1)p_1^{q_1}$$ [*Proof.*]{} Making use of [*Proposition 3*]{} we find the recurrence relation $$\varphi(p_1^{q_1})= p_1 \varphi(p_1^{q_1 -1})+(p_1-1)(p_1^{q_1 -1}-1)$$ with the initial condition $\varphi(p_1)=0$ and the solution follows. In the second case the recurrence relation writes $$\varphi(p_1^{q_1}N_1)=p_1\varphi(p_1^{q_1 -1} N_1)+(p_1 -1)(p_1^{q_1 -1} N_1 -1)$$ and the initial condition can be taken as $$\varphi(p_1 N_1)= p_1\varphi(N_1)+ (p_1 -1)(N_1 -1)$$ and the solution follows. The above recurrence relation allows us to obtain lower bounds for any integer $N$ under the form $$\varphi(p_1^{q_1}\dots p_m^{q_m})=1+ [(p_1 -1)q_1p_2^{q_2}\dots p_m^{q_m} -p_1]p_1^{q_1 -1}+$$ $$p_1^{q_1}\{1+[(p_2 -1)q_2 p_3^{q_3}\dots p_m^{q_m} -p_2]p_2^{q_2 -1}+$$ $$p_2^{q_2}\{1+[(p_3 -1)q_3 p_4^{q_4}\dots p_m^{q_m} -p_3]p_3^{q_3 -1}+$$ $$p_3^{q_3}\{1+\dots +p_{m-1}^{q_{m-1}}\{1+[(p_m -1)q_m -p_m]p_m^{q_m -1}\}+$$ $$p_{m-1}^{q_{m-1}}\{1+[(p_m -1)(q_m -1) -1]p_m^{q_m -1}\}\dots\}$$ We give now a few examples. 
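The recurrences are straightforward to evaluate; the short routine below (our own implementation, which peels off the smallest prime factor first, as in the recurrences above, with the normalisation $\varphi(p)=0$ for $p$ prime) computes the lower bound $\varphi(N)$ for any $N$ and reproduces the numerical values quoted next.

```python
def smallest_prime_factor(N):
    d = 2
    while d*d <= N:
        if N % d == 0:
            return d
        d += 1
    return N

def phi_lower_bound(N):
    """Lower bound on the number of free phases of an N x N complex Hadamard matrix,
    from phi(p^q N1) = p*phi(p^{q-1} N1) + (p-1)*(p^{q-1} N1 - 1), with phi(prime) = phi(1) = 0."""
    if N == 1:
        return 0
    p = smallest_prime_factor(N)
    m = N // p
    return p*phi_lower_bound(m) + (p - 1)*(m - 1)

print([(N, phi_lower_bound(N)) for N in (4, 8, 16, 6, 9, 36)])
# [(4, 1), (8, 5), (16, 17), (6, 2), (9, 4), (36, 49)]
```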
If $N=p_1^{q_1}p_2^{q_2}$ then the lower bound for $\varphi(p_1^{q_1}p_2^{q_2})$, the number of free phases entering the parameterization of the $N\times N$ complex Hadamard matrix, is given by $$\begin{aligned} \varphi(p_1^{q_1}p_2^{q_2})= 1+ (p_1 -1)q_1p_1^{q_1 -1}p_2^{q_2}+ [(p_2 -1)(q_2 -1)- 1]p_1^{q_1}p_2^{q_2 -1}\label{bound1}\end{aligned}$$ Numerical examples: $\varphi(2^3)=5,\, \varphi(2^4)=17,\,\varphi(6)= 2,\, \varphi(3^2)=4,\,\varphi(2^2 3^2)=49,$ etc. Another parameterisation of unitary matrices ============================================= In the following we will briefly present another parameterisation of unitary matrices [@Di3] under the form of a product of $n$ diagonal matrices containing phases interlaced with $n-1$ orthogonal matrices, each one generated by a real vector $v\in {\bf R}^n$. This new form will be more appropriate for the design and implementation of the software packages necessary for solving the equations (\[mod\]) for arbitrary $n$. We have seen in Section 2 that we can write any unitary matrix as a product of two diagonal matrices of the form $d_n=(e^{i\varphi_1},\dots,e^{i\varphi_n})$ with $\varphi_j \in [0,2\,\pi)$, $j=1,\dots,n$ arbitrary phases and a unitary matrix with positive elements in the first row and the first column. We also introduce the notation $d_k^{n-k}=(1_{n-k}, e^{i\psi_1},\dots,e^{i\psi_k})$, $k<n$, where $1_{n-k}$ means that the first $(n-k)$ diagonal entries equal unity, i.e. it can be obtained from $d_n$ by making the first $n-k$ phases equal to zero. These diagonal phase matrices are the first building blocks in our construction. Other building blocks that will appear in the factorization of unitary matrices $A_n$ are the two-dimensional rotations which operate in the $(i,i+1)$-plane and have the form $$\begin{aligned} J_{i,i+1}=\left( \begin{array}{ccc} I_{i-1}&0&0\\ 0& \begin{array}{cc} \cos\,{\theta_i}&-\sin\,{\theta_i}\\ \sin\,{\theta_i}& \cos\,{\theta_i} \end{array} &0\\ 0&0&I_{n-i-1} \end{array}\right),\quad i=1,\dots,n-1\label{rot} \end{aligned}$$ The factorization idea comes from the well known fact that $U(n)$ acts transitively on the $n$-dimensional complex sphere ${\bf S}_{2n-1}\subset{\bf C}^{n}$, and explicitly from the coset relation $${\bf S}_{2n-1}={\it coset~space }\,\,{ U}(n)/{ U}(n-1)$$ A direct consequence of the last relation is that we expect any element of ${ U}(n)$ to be uniquely specified by a pair consisting of a vector $v\in {\bf S}_{2n-1}$ and an arbitrary element of ${ U}(n-1)$. Thus we are looking for a factorization of an arbitrary element $A_n\in { U}(n)$ in the form $$A_n=B_n\cdot\left(\begin{array}{cc} 1&0\\0&A_{n-1}\end{array}\right)$$ where $B_n\in{ U}(n)$ is a unitary matrix whose first column is uniquely defined by a vector $v\in {\bf S}_{2n-1}$, but otherwise arbitrary, and $A_{n-1}$ is an arbitrary element of ${ U}(n-1)$. Iterating the previous equation we arrive at the conclusion that an element of ${ U}(n)$ can be written as a product of $n$ unitary matrices $$A_n=B_{n}\cdot B_{n-1}^1\dots B_1^{n-1}$$ where $$B_{n-k}^k=\left(\begin{array}{cc} I_k&0\\ 0&B_{n-k} \end{array}\right)$$ $B_k,\,\, k=1,\dots,n-1$, are $k\times k$ unitary matrices whose first columns are generated by vectors $b_k\in {\bf S}_{2k-1}$; for example $B_1^{n-1}$ is the diagonal matrix $(1,\dots,1,e^{i\varphi_{n(n+1)}})$.
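The coset argument is easy to illustrate numerically. The sketch below (ours; the construction of $B_n$ via a QR decomposition is just one convenient choice) takes a random unitary $A_n$, builds a unitary $B_n$ whose first column coincides with the first column of $A_n$, and checks that $B_n^{\dagger}A_n$ indeed has the block form $\mathrm{diag}(1,A_{n-1})$.

```python
import numpy as np

rng = np.random.default_rng(1)
n = 4

# a random unitary A_n from the QR decomposition of a complex Gaussian matrix
A, _ = np.linalg.qr(rng.normal(size=(n, n)) + 1j*rng.normal(size=(n, n)))

# B_n: any unitary whose first column equals the first column of A_n
M0 = np.eye(n, dtype=complex)
M0[:, 0] = A[:, 0]
B, _ = np.linalg.qr(M0)
B[:, 0] = A[:, 0]        # QR gives this column only up to a phase; reset it exactly

C = B.conj().T @ A       # should be block diagonal: 1, then an (n-1)x(n-1) unitary
print(np.isclose(C[0, 0], 1), np.allclose(C[0, 1:], 0), np.allclose(C[1:, 0], 0))
print(np.allclose(C[1:, 1:] @ C[1:, 1:].conj().T, np.eye(n - 1)))
# expected: True True True / True
```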
The still arbitrary columns of $B_k$ will be chosen in such a way that we should obtain a simple form for the matrices $B^{n-k}_k$, and we require that $B_k$ should be completely specified by the parameters entering the vector $b_k$ and nothing else. If we take into account the equivalence considerations of Section 2 then $B_n\,(B_{n-k})$ can be written as $$B_n=d_n\,\tilde{B_n}$$ where the first column of $\tilde{B_n}$ has non-negative entries. Denoting this column by $v_1$ we will use the parameterization $$v_1=(\cos\,{\theta_1},\cos\,{\theta_2}\,\sin\,{\theta_1},\dots,\sin\,{\theta_1}\dots \sin\,{\theta_{n-1}})^t$$ where $\theta_i\in[0,\pi/2],\, i=1,\dots,n-1$. Thus $B_n$ will be parameterized by $n$ phases and $n-1$ angles. According to the above factorization $\tilde{B}_n$ is nothing else than the orthogonal matrix generated by the vector $v_1$ and its form is given by [*Lemma 2*]{} with $n\rightarrow n+1$. Thus without loss of generality $B_n=d_n\,{\cal{O}}_n$ with ${\cal{O}}_n\in SO(n)$. In this way the factorization of $A_n$ will be $$\begin{aligned} A_n=d_n\,{\cal{O}}_n\,d_{n-1}^1\,{\cal{O}}_{n-1}^1\dots d_2^{n-2}{\cal{O}}_{2}^{n-2}d_1^{n-1}I_n \label{fac} \end{aligned}$$ where ${\cal{O}}_{n-k}^k$ has the same structure as $B_{n-k}^k$, i.e. $${\cal{O}}_{n-k}^k=\left(\begin{array}{cc} I_k&0\\ 0&{\cal{O}}_{n-k} \end{array}\right)$$ and $d^k_{n-k}=(1,\dots,1,e^{i\phi_1},\dots,e^{i\phi_{n-k}})$. The orthogonal matrices ${\cal O}_n$ (${\cal O}_{n-k}^k$) can in turn be factored into a product of $n-1$ ($n-k-1$) matrices of the form $J_{i,i+1}$; e.g. we have $$\begin{aligned} {\cal O}_n=J_{n-1,n}\,J_{n-2,n-1}\dots J_{1,2}\nonumber\end{aligned}$$ where $J_{i,i+1}$ are the $n\times n$ rotations introduced by Eq.(\[rot\]). In this way the parameterisation of unitary matrices reduces to a product of simpler matrices: diagonal phase matrices and two-dimensional rotation matrices. For more details see our paper [@Di3]. Now we propose a disentanglement of the angles and phases entering each “generation” and denote the angles by Latin letters, e.g. those that parameterize ${\cal O}_n$ will be denoted by $a_1,\dots,a_{n-1}$, the angles that parameterize ${\cal O}_{n-1}^1$ by $b_1,\dots,b_{n-2}$, etc., the last angle entering ${\cal O}^{n-2}_2$ by $z_{1}$. The phases will be denoted by Greek letters; e.g. the phases entering $d_1$ will be denoted by $\alpha_1,\dots,\alpha_{n}$, those entering $d_{n-1}^1$ by $\beta_1,\dots,\beta_{n-1}$, etc. The above factorization will be used in the next section for obtaining the equations for the moduli of the matrix elements. Explicit equations of the moduli ================================ Our choice for the orthogonal vectors in [*Lemma 2*]{} was such that the resulting matrix should have as many zero entries as possible. Thus ${\cal{O}}_n$ has $(n-1)(n-2)/2$ zeros in the right upper corner and the entries of the Hadamard matrix will get more and more complicated when going from left to right and from top to bottom. We will start from the form (\[fac\]) of the unitary matrix and then set $d_n\equiv I_n$.
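Before working out the moduli equations, the factorization (\[fac\]) itself is easy to check numerically. The following self-contained Python/NumPy sketch (ours; the helper names and the random choice of angles and phases are illustrative only) assembles the alternating product of diagonal phase matrices and orthogonal factors built from the rotations $J_{i,i+1}$, and verifies that the result is unitary with $n^2$ real parameters in total.

```python
import numpy as np

rng = np.random.default_rng(0)

def J(n, i, theta):
    """Plane rotation J_{i,i+1} of Eq. (rot), with 1-based index i."""
    R = np.eye(n)
    c, s = np.cos(theta), np.sin(theta)
    R[i-1, i-1], R[i-1, i] = c, -s
    R[i, i-1], R[i, i] = s, c
    return R

def O(n, k, thetas):
    """O^k_{n-k}: identity on the first k slots, then J_{n-1,n} ... J_{k+1,k+2}."""
    M = np.eye(n)
    for i, th in zip(range(n-1, k, -1), thetas):
        M = M @ J(n, i, th)
    return M

def d(n, k, phis):
    """d^k_{n-k}: diagonal phase matrix whose first k entries equal 1."""
    return np.diag(np.concatenate([np.ones(k), np.exp(1j*np.asarray(phis))]))

n = 5
print(n*(n+1)//2 + n*(n-1)//2)   # phases + angles = n^2 = 25 real parameters

A = np.eye(n, dtype=complex)
for k in range(n):               # k = 0 gives d_n O_n, ..., k = n-1 gives d^{n-1}_1
    thetas = rng.uniform(0, np.pi/2, size=n-1-k)
    phis   = rng.uniform(0, 2*np.pi, size=n-k)
    A = A @ d(n, k, phis) @ O(n, k, thetas)

print(np.allclose(A @ A.conj().T, np.eye(n)))   # True: A is unitary
```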
Since the first column has the form $a_{i1}=1/\sqrt{n},\,\, i=1,\dots,n$ and $d_{n-1}^1=(1,e^{i\alpha},e^{i\alpha_1},\dots,e^{i\alpha_{n-2}})$ the product ${\cal{O}}_n\,d_{n-1}^1$ is $$\left(\begin{array}{ccccccc} {1\over \sqrt{n}}&-\sqrt{{n-1\over n}}\,e^{i\alpha}&0&0&\dots&0&0\\ {1\over \sqrt{n}}&{e^{i\alpha}\over \sqrt{n(n-1)}}&-\sqrt{{n-2 \over n-1}}\,e^{i\alpha_1}&0&\dots&0&0\\ {1\over \sqrt{n}}&{e^{i\alpha}\over \sqrt{n(n-1)}} &{e^{i\alpha_1}\over\sqrt{(n-1)(n-2)}}&-\sqrt{{n-3\over n-2}} e^{{i\alpha_2}}&\dots&0&0\\ \cdot&\cdot&\cdot&\cdot&\cdot&\cdot&\cdot\\ \cdot&\cdot&\cdot&\cdot&\cdot&\cdot&\cdot\\ \cdot&\cdot&\cdot&\cdot&\cdot&\cdot&\cdot\\ {1\over \sqrt{n}}&{ e^{i\alpha}\over \sqrt{n(n-1)}}& e^{i\alpha_1}\over\sqrt{(n-1)(n-2)}&e^{i\alpha_2}\over\sqrt{(n-2)(n-3)} &\dots& e^{i\alpha_{n-3}} \over\sqrt{6}&-e^{i\alpha_{n-2}} \over\sqrt{2}\\ {1\over \sqrt{n}}&{e^{i\alpha}\over \sqrt{n(n-1)}}&{e^{i\alpha_1} \over\sqrt{(n-1)(n-2)}}&{e^{i\alpha_2} \over\sqrt{(n-2)(n-3)}}&\dots&{ e^{i\alpha_{n-3}} \over\sqrt{6}}&{e^{i\alpha_{n-2}} \over\sqrt{2}}\\ \end{array}\right) \eqno(18)$$ where $\alpha, \alpha_i,\, i=1,\dots,n-2$ are $n-1$ arbitrary phases. The next building block ${\cal{O}}_{n-1}^1\,d_{n-2}^2$ will have the form $$\left(\begin{array}{ccccc} 1&0&0&\cdot&0\\ 0&\cos\,a&-\sin\,a\,e^{i\beta}&\cdot&0\\ 0&\sin\,a\, \cos\,a_1&\cos\,a \,\cos\,a_1\,e^{i\beta}&\cdot&0\\ \cdot&\cdot&\cdot&\cdot&\cdot\\ \cdot&\cdot&\cdot&\cdot&\cdot\\ 0&\sin\,a...\sin\,a_{n-3}&\cos\,a\,\sin\,a_1\cdots \sin\,a_{n-3}\,e^{i\beta}&\cdot& \cos\,a_{n-3}\,e^{i\beta_{n-3}}\\ \end{array} \right)\eqno(19)$$ in terms of $n-2$ phases $\beta,\beta_1,\dots,\beta_{n-3}$ and $n-2$ angles $a,a_1,\dots, a_{n-3}$, and so on. It is easy to see that the first two columns of the product of matrices (18) and (19) does not change when multiplied by ${\cal O}_{n-2}^2\,d_{n-3}^3$; however the first row does. If the angles entering ${\cal O}_{n-2}^2$ are denoted by $b, b_1,\dots,b_{n-4}$ and the phases are $\gamma,\gamma_1,\dots, \gamma_{n-4}$, etc., then the entries of the first row are $$a_{12}=-\sqrt{ n-1\over n}\cos\,a\,e^{i\alpha},\quad a_{13}=\sqrt{ n-1\over n}\sin\,a \,\cos\,b\,e^{i(\alpha+\beta)}, \dots,$$ $$a_{1 n-1}=(-1)^{n-1}\sqrt{ n-1\over n}\sin\,a \,\sin\,b\, \dots \cos\,z\, e^{i(\alpha+\beta+ \dots \omega)}$$ where $z\,\, {\rm and}\,\, \omega$ are the last angle and phase respectively. Since we use the standard form of Hadamard matrices, i.e. the entries of the first row and of the first column are positive and equal $1/\sqrt n$, the above equations imply $$\alpha=\beta=\dots =\omega=\pi; \,\, \cos\, a={1\over\sqrt{n-1}},\, \cos\, b={1\over\sqrt{n-2}},\dots, \cos\, z={1\over\sqrt{2}}$$ We substitute the above values in Eq.(\[fac\]) and find a complex $n\times n$ matrix depending on $(n-1)(n-2)/2$ phases $\alpha_1,\dots,\alpha_{n-2},\beta_1,\dots,\psi_1$ and $(n-2)(n-3)/2$ angles $a_1,\dots,a_{n-3},b_1,\dots,y_1$, i.e. $(n-2)^2$ parameters which have to be found by solving the corresponding equations given by the moduli. 
The first simplest entries of the unitary matrix have the form $$a_{22}=-{1\over{(n-1)}\sqrt{n}}-{n-2\over n-1}\cos\,a_1\,e^{i\alpha_1},\dots$$ $$a_{k2}=-{1\over{(n-1)}\sqrt{n}}\, +\,\sqrt{{n-2\over n-1}}\,\left({\cos\,a_1\,e^{i\alpha_1}\over\sqrt{(n-1)(n-2)}}+\dots+ { \sin\,a_1\dots \cos\,a_{k-2}\,e^{i\alpha_{k-2}}\over\sqrt{(n-k+2)(n-k+1)}}\right.$$ $$-\left.{\sqrt{{n-k\over n-k+1}}}\sin\,a_1\dots \sin\,a_{k-2}\cos\,a_{k-1}\,e^{i\alpha_{k-1}}\right),\,\, k=3,\dots,n-1\eqno(20)$$ $$a_{2k}=-{1\over{(n-1)}\sqrt{n}}\,+\,\sqrt{{n-2\over n-1}}\,\left({\cos\,a_1\,e^{i\alpha_1}\over\sqrt{(n-1)(n-2)}}-{\sin\,a_1\cos\,b_1\,e^{i(\alpha_1+\beta_1)}\over\sqrt{(n-2)(n-3)}}\,+\,\dots\right.$$ $$\left. +(-1)^{k-1}\,\, \sqrt{n-k\over {n-k+1}}\,\sin\,a_1\sin\,b_1\dots\cos\,l(k)_1\,e^{i(\alpha_1 + \beta_1 +\dots +\lambda(k)_1)}\right),\,\,{\rm etc.}$$ where $l(k)\,{\rm and}\,\lambda(k)$ denote the letters for angle and respectively phase corresponding to index $k$ and the signs in the last bracket alternate. The matrix elements get more complicated when going from the upper left corner to right bottom corner. The entries $a_{22}, a_{32}$ and $ a_{23}$ lead, for example, to the following moduli equations $$(n-2)\,\cos^2 a_1\,+\,{2\over\sqrt{n}}\,\cos\,a_1\, \cos\,\alpha_1\,-\,1=0$$ $$\sin a_1\left((n-3)\sin a_1\,\cos^2 a_2 +\right.$$ $$~~~~\left. 2 \sqrt{n-3\over n-1}\cos a_2\left({\cos \alpha_2\over\sqrt{n}} - \cos a_1\,\cos(\alpha_1-\alpha_2)\right) - \sin\,a_1 \right)=0\eqno(21)$$ $$\sin a_1\left((n-3)\sin a_1\,\cos^2 b_1 +\right.$$ $$\left. 2 \sqrt{n-3\over n-1}\cos b_1\left(-{\cos(\alpha_1 + \beta_1)\over\sqrt{n}} + \cos a_1\,\cos \beta_1 \right) - \sin\,a_1 \right)=0$$ and so on. The form of the last two equations was obtained after the elimination of the term containing $\cos a_1\, \cos \alpha_1$ by using the first equation (21), i.e. we work in the ideal generated by the moduli equations. It is easily seen that the other equations contain as factors $\sin a_2,\dots, \sin a_{n-2}, \sin b_{1},\dots,{\rm etc.}$. Thus a particular solution can be obtained when $$\sin a_1 =0$$ which implies $ a_1=0,\pi$, and from the first equation (21) we get $$\cos \alpha_1=\pm {(n-3)\sqrt{n}\over 2}$$ It is easily seen that the above equation has solution only for $n=2,3,4$; for $n\ge 5$ the factor $\sin a_1$ will be omitted from Eqs.(21) because then $a_1\ne 0,\pi$. When $n=2$ we obtain $\alpha_1=\pi/4$, so $a_{22}=-1/\sqrt{2}$. If $n=3$, then $\alpha_1=3 \pi/2$ and from the first Eq.(20) one gets $$a_{22}=-{1\over 2\sqrt{3}}+{i\over 2}={1\over\sqrt{3}} e^{{2 \pi i\over 3}},\,\,{\rm etc.}$$ The case $n=4$ leads to $\alpha_1=\pi$ which gives $$a_{22}=-a_{23}=-a_{32}={1\over 2} \quad{\rm and} \quad a_{33}=-a_{34}=-{e^{i(\alpha_2 + \beta_1)}\over 2}$$ After the substitution $\alpha_2 + \beta_1=t$ one finds the standard complex form of the $4\times 4$ matrix found by Hadamard. To view what is the origin of the phase $\alpha_2 + \beta_1$ we have to look at the moduli equations. They have the form $$2 \cos^{2}a_1 +\cos a_1\cos{\alpha_1} -1=0$$ $$\sin a_1(\cos{\alpha_2}- 2\cos{a_1}\cos(\alpha_1-\alpha_2))=0$$ $$\sin a_1(2 \cos{a_1}\cos{\beta_1}-\cos(\alpha_1+\beta_1))=0$$ $$\cos 2a_1\, \cos(\alpha_1-\alpha_2)\cos\,\beta_1+\cos a_1\cos(\alpha_2+\beta_1)+ \sin(\alpha_1-\alpha_2)\sin \beta_1=0$$ and we see that the above system splits into two cases. 
In the first case, when $\sin a_1=0$, the rank of the system is two which explains the above dependence of $a_{33}$ on two phases and in the second case when $\sin a_1\ne 0$ the rank is three and the dependence is only on one arbitrary phase. However in this case there is no final difference between the two cases. The solution of the above system is obtained directly but for $n\ge 5$ the problem is difficult and needs more powerful techniques. Particular solutions can be obtained rather easily e.g for $n=6$ there is a matrix that has the property $a_{ij}=a_{ji}$. $${1\over\sqrt{6}}\left(\begin{array}{rrrrrr} 1&1&1&1&1&1\\ 1&-1&-1&1&i&-i\\ 1&-1&-i&-1&1&i\\ 1&1&-1&-i&-1&i\\ 1&i&1&-1&-1&-i\\ 1&-i&i&i&-i&-1 \end{array}\right)$$ There exists even a Hermitian matrix $S=S^*$ $${1\over\sqrt{6}}\left(\begin{array}{rrrrrr} 1&1&1&1&1&1\\ 1&-1&i&i&-i&-i\\ 1&-i&-1&1&-1&i\\ 1&-i&1&-1&i&-1\\ 1&i&-1&-i&1&-1\\ 1&i&-i&-1&-1&1 \end{array}\right)$$ and so on. As we said before getting the most general form of a solution is not a simple task; for $n=6$ we have $16$ complicated trigonometric equations and we remind that the simpler $ (\ref{sys})$ system was solved only for $n\le 8$ equations. Thus new approaches are necessary and in the next Section we suggest such an approach: using methods from algebraic geometry. Connection with algebraic geometry ================================== The Eqs.(21) can be transformed into polynomial equations by the known procedure $$\sin\,a \rightarrow {2\,x\over 1+x^2}\,,\quad \cos\, a \rightarrow {1-x^2\over 1+x^2}$$ such that we get from (21) $$p_1=\left[(n - 3+{2\over\sqrt{n}})x_1^4 -2(n-1)x_1^2+(n - 3-{2\over\sqrt{n}})\right]y_1^2+(n - 3-{2\over\sqrt{n}})x_1^4 -$$ $$2(n-1)x_1^2+(n - 3+{2\over\sqrt{n}})$$ $$p_2=\left\{\left[-(1-{1\over\sqrt{n}})x_1^2+C_1\,x_1 +(1+{1\over\sqrt{n}})\right]x_2^4-C_2\,x_1\,x_2^2 +(1-{1\over\sqrt{n}})x_1^2+C_1x_1-\right.$$ $$\left.(1+{1\over\sqrt{n}})\right\}y_1^2y_2^2+ \left\{\left[(1-{1\over\sqrt{n}})x_1^2+C_1x_1-(1+{1\over\sqrt{n}})\right]x_2^4-C_2\,x_1\,x_2^2-(1-{1\over\sqrt{n}})x_1^2+\right.$$ $$\left.C_1\,x_1+ (1+{1\over\sqrt{n}})\right\}y_1^2+ \left\{\left[(1+{1\over\sqrt{n}})x_1^2+C_1x_1-(1-{1\over\sqrt{n}})\right]x_2^4-C_2\,x_1\,x_2^2-\right.$$ $$\left.(1+{1\over\sqrt{n}})x_1^2+C_1\,x_1 +(1-{1\over\sqrt{n}})\right\}y_2^2-4(1-x_1^2)(1-x_2^4)y_1y_2+\left[-(1+{1\over\sqrt{n}})x_1^2 +\right.$$ $$\left.C_1\,x_1 +(1-{1\over\sqrt{n}})\right]x_2^4-C_2\,x_1\,x_2^2+(1+{1\over\sqrt{n}})x_1^2+C_1x_1-(1-{1\over\sqrt{n}}) \eqno(27)$$ $$p_3=\left\{\left[-(1-{1\over\sqrt{n}})x_1^2+C_1\,x_1 +(1+{1\over\sqrt{n}})\right]x_3^4-C_2\,x_1\,x_3^2 +(1-{1\over\sqrt{n}})x_1^2+C_1x_1-\right.$$ $$\left.(1+{1\over\sqrt{n}})\right\}y_1^2y_3^2+ \left\{\left[(1-{1\over\sqrt{n}})x_1^2+C_1x_1-(1+{1\over\sqrt{n}})\right]x_3^4-C_2\,x_1\,x_3^2-(1-{1\over\sqrt{n}})x_1^2+\right.$$ $$\left.C_1\,x_1+ (1+{1\over\sqrt{n}})\right\}y_1^2+ \left\{\left[-(1-{1\over\sqrt{n}})x_1^2+C_1x_1+(1-{1\over\sqrt{n}})\right]x_3^4-C_2\,x_1\,x_3^2+\right.$$ $$\left.(1+{1\over\sqrt{n}})x_1^2+C_1\,x_1 -(1-{1\over\sqrt{n}})\right\}y_3^2-4(1+x_1^2)(1-x_3^4)y_1y_2+\left[(1+{1\over\sqrt{n}})x_1^2 +\right.$$ $$\left.C_1\,x_1 -(1-{1\over\sqrt{n}})\right]x_3^4-C_2\,x_1\,x_3^2-(1+{1\over\sqrt{n}})x_1^2+C_1x_1+(1-{1\over\sqrt{n}})\eqno(26)$$ where $$C_1={(n-1)(n-4)\over\sqrt{(n-1)(n-3)}}, \qquad C_2={2(n-1)(n-2)\over\sqrt{(n-1)(n-3)}}$$ and the angles by the above transformation go to $x_1,x_2,x_3,\dots$ and the phases to $y_1,y_2,y_3,\dots$ From the matrices such as (18) one sees that the full set of the 
$(n-2)^2$ equations contains square roots of almost all prime numbers $\le n$ so that not all the coefficients are rational and we have to look for solutions in a field ${\mathbf{Q}}(\sqrt{d})$ for some $d\in{\mathbf{N}}$. The polynomial equation $p_1=0$ defines an algebraic curve; however the most studied are the elliptic and hyperelliptic curves, i.e. those defined by an equation of the form $ y^2=f_p(x)$ where $f_p(x)$ is a polynomial of degree $p$. From $p_1=0$ we get $$y_1^2=-{(n - 3-{2\over\sqrt{n}})x_1^4 -2(n-1)x_1^2+(n - 3+{2\over\sqrt{n}})\over (n - 3+{2\over\sqrt{n}})x_1^4 -2(n-1)x_1^2+(n - 3-{2\over\sqrt{n}})}=-{P_1(x_1)\over P_2(x_1)}$$ which defines a meromorphic function. Its zeros and poles are $$\pm \sqrt{{\sqrt{n}-1\over \sqrt{n}+1}},\qquad \pm \sqrt{{n+\sqrt{n}-2\over n-\sqrt{n}-2}}$$ and $$\pm \sqrt{{\sqrt{n}+1\over \sqrt{n}-1}},\qquad \pm \sqrt{{n-\sqrt{n}-2\over n+\sqrt{n}-2}}$$ respectively that are simple, and the poles and the zeros are interlaced. Thus apparently the above equation is not hyperelliptic, however by the birational transformation $$y_1={Y_1\over P_2(x_1)}$$ we get the equation $$Y_1^2=-P_1(x_1)\,P_2(x_1)$$ which shows that the above curve has genus $g=3$. For $n\ge 5$ the curve has no branch going to infinity since the highest power coefficient is negative and consequently the curve is made of three ovals. The polynomials $p_1=p_2=0$ define a surface, $p_1=p_2=p_3=0$ define a 3-fold, and so on. We consider that the study of these multi-fold varieties will be very interesting from the algebraic geometry point of view and their parameterizations could reveal unknown properties that may lead to a better understanding of the rational varieties. As we saw in Section 5 one can easily construct parameterizations of Hadamard matrices depending on a number of free phases at least for a non-prime $n$. That means that the set of the moduli equations has to be split in some sub-sets and for each such sub-set the solutions are in $\underbrace{S^1\otimes\dots\otimes S^1}_{k\,\, factors}$, where $k$ is the number of arbitrary phases parameterizing the considered sub-set. But this could be equivalent to the existence of a rational parameterization for the equations defining this sub-set. Unfortunately the best studied case and the best results are for algebraic curves; see [@Ko], Theorem 14, for a flavour of recent results. The study of surfaces, three-fold, etc. is at the beginning and until now the theory was developed only for the simplest varieties, the so called rationally connected varieties [@Ko]. From what we said before one may conclude that the parameterization of complex Hadamard matrices could be an interesting example of the parameterization of meromorphic varieties, which could be a mixing between a rational parameterisation and a parameterisation of hyperelliptic curves. Thus the theoretical instrument for the parameterization of complex Hadamard matrices seems to exist, the challenging problem being the transformation of the existing theorems into a symbolic manipulation software program able to find after a reasonable computer time explicit solutions at least for moderate values of $n$. Conclusion ========== All the results obtained for the complex Hadamard matrices can be used for the construction of [*real*]{} Hadamard matrices the only supplementary constraint being the natural one $n=4\,m$. 
We believe that the Hadamard conjecture can be solved in our formalism since unlike the classical combinatorial approach we have also at our disposal $(n-1)(n-2)/2$ phases, and the problem is to guess the pattern of $0$ and $\pi$ taken by them. Conversely many constructions from the theory of real Hadamard matrices can be extended to the complex case. For example a complex conference matrix will be a matrix with $a_{ii}=0,\,\, i=1,\dots,n$ and $|a_{ij}|=1/\sqrt{n}$ such that $$W\,W^* =\frac{n-1}{n}$$ It is not difficult to construct complex conference matrices, in fact it is a simpler problem than the construction of complex Hadamard matrices because the equations $a_{ii}=0,\,\,i=2,\dots,n-1$ imply the determination of $2(n-2)$ parameters which simplify the other equations. We give a few examples: $$W_4=\frac{1}{2}\left( \begin{array}{cccc} 0&1&1&1\\ 1&0&-e^{it}&e^{it}\\ 1&e^{it}&0&-e^{it}\\ 1&-e^{it}&e^{it}&0 \end{array}\right)$$ and $$\begin{aligned} W_6= \frac{1}{\sqrt{6}}\left( \begin{array}{cccccc} 0&1&1&1&1&1\\ 1&0&-e^{i\alpha} &-e^{i\alpha}&e^{i\alpha} &e^{i\alpha} \\ 1&-e^{i\alpha} &0&e^{i\alpha }&-e^{i(\alpha -\beta)}&e^{i(\alpha -\beta)}\\ 1&-e^{i\alpha} &e^{i\alpha }&0&e^{i(\alpha -\beta)} &-e^{i(\alpha -\beta)} \\ 1&e^{i\alpha} &-e^{i(\alpha+\beta) }&e^{i(\alpha+\beta)} &0&-e^{i\alpha }\\ 1&e^{i\alpha} &e^{i(\alpha+\beta)} &-e^{i(\alpha+\beta)} &-e^{i\alpha }&0\\ \end{array} \right)\nonumber\end{aligned}$$ where the second depends on two arbitrary phases. They are useful because if $W_n$ is a complex conference matrix then $$M_{2 n}= \frac{1}{\sqrt{2}}\left( \begin{array}{cc} W_n +{{\textstyle I_n}\over\sqrt{\textstyle n}} & W_n^* -{{\textstyle I_n}\over\sqrt{\textstyle n}} \\ &\\ W_n -{{\textstyle I_n}\over\sqrt{\textstyle n}} &- W_n^* - {{\textstyle I_n}\over\sqrt{\textstyle n}} \end{array}\right)$$ is a complex Hadamard matrix of order $2 n$. In this paper we have used convenient parameterisations of unitary matrices that allowed us getting a set of $(n-2)^2$ polynomial equations whose solutions will give all the posible parameterisations for Hadamard matrices. Unfortunately the system is very complicated and only particular solutions have been found; thus from a pragmatical point of view the most important issue would be the design of software packages for solving these equations. The work was completed while the author was a visitor at the Institute for Theoretical Physics, University of Bern in the frame of the Swiss National Science Foundation Program “ Scientific Co-operation between Eastern Europe and Switzerland (SCOPES 2000-2003)”. It is a pleasure for me to thank Professor H. Leutwyller for many interesting discussions. Also I want to thank Professor J. Gasser for the warm hospitality extended to me during my stay in Bern. 2.5cm [99]{} A.A. Agaian, [*Hadamard Matrices and Their Applications*]{}, Lectures Notes in Mathematics \# 1168, Springer (1985) Gr. Arsene and A. Gheondea, “Completing matrix contractions” [*J. Operator Theory*]{} [**7**]{} (1982) 179-189 G. Auberson, “On the reconstruction of a unitary matrix from its moduli. Existence of continuos ambiguities” [*Phys.Lett.*]{} [**B216**]{} (1989) 167-171 G. Auberson, A. Martin and G. Mennessier, “On the reconstruction of a unitary matrix from its moduli” [*Commun.Math.Phys.*]{} [**140**]{} (1991) 417-431 G. Björck, “Functions of modulus one on $\mathbf{Z_p}$ whose Fourier transforms have constant modulus” [*Colloquia Mathematica Societatis János Bolyai*]{} [**49**]{} (1985) 193-197 G. Björck and R. 
Fröberg, “A faster way to count the solutions of inhomogeneous systems of algebraic equations, with applications to cyclic n-roots” [*J.Symbolic Computation*]{} [**12**]{} (1991) 329-336 G. Björck and R. Fröberg, “Methods to “divide-out” certain solutions from systems of algebraic equations, applied to find all cyclic 8-roots” in [*Analysis, Algebra and Computers in Mathematical Research*]{}, Dekker (1994) 57-70 G. Björck and B. Saffari, “New classes of finite unimodular sequences with unimodular Fourier transform. Circulant Hadamard matrices with complex entries” [*C.R.Acad.Sci.Paris*]{} [**320**]{} (1995) 319-324 J.D.Bjorken and I. Dunietz, “Rephasing invariant parameterisations of generalized Kobayashi-Maskawa matrices” [*Phys.Rev.*]{} [**D36**]{} (1987) 2109-2118 G.C. Branco and L. Lavoura, “Rephasing-invariant parameterisation of the quark matrix” [*Phys.Lett.*]{} [**B208**]{} (1988) 123-130 P. Diţă, “Parameterisation of unitary matrices”, [*J.Phys.A: Math.Gen.*]{} [**15**]{} (1982) 3465-3473 P. Diţă, “Parameterisation of unitary matrices by moduli of their elements”, [*Commun.Math.Phys.*]{} [**159**]{} (1994) 581-591 P. Diţă, “Factorization of unitary matrices” [*J.Phys.A: Math.Gen*]{} [**36**]{} (2003) 2781-2789 R.G. Douglas, “On majorization, factorization, and range inclusion of operators on Hilbert space” [*Proc.Amer.Math.Soc.*]{} [ **17**]{} (1966) 413-415 J.M. Goethals and J.J. Seidel, “Orthogonal matrices with zero diagonal” [*Canad.J.Math*]{} [**19**]{} (1967) 1001-1010 U. Haagerup “Orthogonal maximal abelian $*$-subalgebra ot the $n\times n$ matrices and cyclic $n$-roots”, in [*Operator algebras and quantum field theory*]{} Rome (1996), 296-322, Internat. Press, Cambridge, MA, 1997 J. Hadamard, “ Résolution d’une question rélative aux déterminants”, [*Bull.Sci.Math.*]{} [**17**]{} (1893) 240-246 P de la Harpe and V R F Jones, “Paires de sous-algebres semi-simples et graphes fortement réguliers”, [*C R Acad.Sci. Paris*]{} [**311**]{} (1990) 147-150 J. Kollár, “Which are the simplest algebraic varieties?”, [*Bull.Amer.Math.Soc.*]{} [**38**]{} (2001) 409-433 A Munemasa and Y Watatani, “Orhogonal pairs of $*$-subalgebras and association schemes”, [*C R Acad.Sci. Paris*]{} [**314**]{} (1992) 329-331 F.D. Murnagham, [*The Unitary and Rotation Groups,*]{} (1962), Spartan Books, Washington, D.C. B. Sz-Nagy and C. Foias, [*Analyse Harmonique des Opérateurs de l’Espace de Hilbert*]{}, Masson, Paris, 1967 S. Popa, “Orthogonal pairs of $*$-subalgebras in finite von Neumann algebras, [*J. Operator Theory*]{} [**9**]{} (1983) 253-268 J.J. Sylvester, “Thoughts on inverse orthogonal matrices, simultaneous sign-succesions, and tessellated pavements in two or more colors, with applications to Newton’s rule, ornamental tile-work, and the theory of numbers”, [*Phil.Mag.*]{} [**34**]{} (1867) 461-475 K.G.H. Vollbrecht and R.F. Werner, “Why two qubits are special”, [*J.Math.Phys.*]{} [**41**]{} (2000) 6772-6782 R.F. Werner, “All teleportation and dense coding schemes”, [*Preprint*]{} quantum-ph/0003070 R.F. Werner, “Quantum information theory - an invitation”, in [*Quantum information - an introduction to the basic theoretical concepts and experiments*]{}, Springer Tracts in Modern Physics, (2003) Springer J. Williamson, “Hadamard’s determinant theorem and the sum of four squares”, [*Duke Math.J.*]{} [**11**]{} (1944) 65-81 Życzkowski K, Kus M, Słomczyński W and Sommers H-J “Random unistochastic matrices” [*J.Phys.A: Math.Gen*]{} [**36**]{} (2003) 3425-3450
--- abstract: | A second order (in time) Schrödinger equation is proposed. The additional term (in comparison to the Schrödinger equation) describes the interaction of particles with the vacuum filled with virtual particle – antiparticle pairs (*zitterbewegung*). Key words: Schrödinger equation; Nanoscience; Zitterbewegung. --- [**Schrödinger Equation for Nanoscience**]{} Quantum mechanics has been remarkably successful in all realms of atoms, molecules and solids. But even more remarkable is the fact that quantum theory still continues to fascinate researchers. Interest in quantum mechanics, both theoretical and experimental, is probably greater now than it ever has been. In this article we develop the *modified Schrödinger* equation which describes the structure of matter on the subatomic level, i.e. for characteristic dimensions $r_n<d<r_a$ where $r_n\,(\textrm{nucleus radius} )\sim\,\textrm{fm}$, $r_a\,(\textrm{atom radius})\sim~\textrm{nm}$. To that end we use the analogy between the Schrödinger equation and the diffusion equation (Fourier equation). The quantum Fourier equation which describes the heat (mass) diffusion on the atomic level has the form [@1]: $$\frac{\partial T}{\partial t}=\frac{\hbar}{m}\nabla^{2}T.\label{eq1}$$ When the real time $t\rightarrow it/2$ and $T\rightarrow\Psi$, Eq. (\[eq1\]) takes the form of the free Schrödinger equation: $$i\hbar\frac{\partial \Psi}{\partial t}=-\frac{\hbar^{2}}{2m}\nabla^{2}\Psi.\label{eq2}$$ The complete Schrödinger equation has the form: $$i\hbar\frac{\partial \Psi}{\partial t}=-\frac{\hbar^{2}}{2m}\nabla^2\Psi+V\Psi\label{eq3}$$ where $V$ denotes the potential energy. When we go back to real time, $t\rightarrow-2it$, $\Psi\rightarrow T$, the new parabolic quantum heat transport equation (quantum Fokker-Planck equation) is obtained: $$\frac{\partial T}{\partial t}=\frac{\hbar}{m}\nabla^2 T-\frac{2V}{\hbar}T.\label{eq4}$$ Equation (\[eq4\]) describes the quantum heat transport for $\triangle t>\tau$. For ultrashort time processes, $\triangle t <\tau$, one obtains the generalized quantum hyperbolic heat transport equation: $$\tau\frac{\partial^2 T}{\partial t^2}+\frac{\partial T}{\partial t}=\frac{\hbar}{m}\nabla^2 T-\frac{2V}{\hbar}T.\label{eq5}$$ The structure and the solutions of Eq. (\[eq5\]) for ultrashort thermal processes were investigated in the monograph: M. Kozlowski, J. Marciak-Kozlowska: *From quarks to bulk matter*, Hadronic Press, USA, 2001. The generalized heat transport equation (\[eq5\]) leads to the *modified Schrödinger* equation (MSE). After the substitution $t\rightarrow it/2, \,T\rightarrow\Psi$ in Eq. (\[eq5\]) one obtains: $$i\hbar\frac{\partial\Psi}{\partial t}=-\frac{\hbar^2}{2m}\nabla^2\Psi+V\Psi -2\tau\hbar\frac{\partial^2\Psi}{\partial t^2}.\label{eq6}$$ The additional term (in comparison to the Schrödinger equation) describes the interaction of electrons with the surrounding space-time filled with virtual positron-electron pairs, i.e. *zitterbewegung*. One can conclude that for time periods $\triangle t<\tau$ and distances $\triangle r<c\tau$ the description of quantum phenomena needs some revision. The numerical values for $\triangle t$ and $\triangle r$ can be calculated as follows. Considering that [@1] $$\begin{aligned} \tau&=&\frac{\hbar}{m\alpha^2 c^2}\sim 10^{-17}\,{\rm s}\\ \label{eq7} c\tau&=&\frac{\hbar c}{m\alpha^2 c^2}\sim 1\,{\rm nm},\nonumber\end{aligned}$$ we conclude that with the help of the MSE we can visit the inner atomic environment.
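For orientation, the quoted orders of magnitude are easy to reproduce (a minimal numerical sketch, ours, using standard values of the electron mass, the fine-structure constant and $\hbar$):

```python
# order-of-magnitude check of tau = hbar/(m alpha^2 c^2) and c*tau for the electron
hbar  = 1.054571817e-34   # J s
m_e   = 9.1093837015e-31  # kg
c     = 2.99792458e8      # m/s
alpha = 7.2973525693e-3   # fine-structure constant

tau = hbar/(m_e*alpha**2*c**2)
print(f"tau   = {tau:.2e} s")         # ~ 2.4e-17 s, i.e. of order 10^-17 s
print(f"c*tau = {c*tau*1e9:.2f} nm")  # ~ 7 nm, i.e. of nanometre order
```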
On the other hand, for $\triangle t>10^{-17}$ s and $\triangle r>1$ nm, the second derivative term in the MSE can be omitted and as a result the SE is obtained, i.e. $$i\hbar\frac{\partial \Psi}{\partial t}=-\frac{\hbar^2}{2m}\nabla^2\Psi+V\Psi.\label{eq8}$$ Visiting the inner structure of the atom can be quite interesting, given that the atom radius remains strictly constant during the expansion of the universe [@2]. [9]{} M. Kozlowski, J. Marciak-Kozlowska\ *From Quarks to Bulk Matter*\ Hadronic Press, USA, 2001 W. B. Bonnor,\ *Size of a hydrogen atom in the expanding universe*\ Class. Quantum Grav. **16**, (1999) 1313
--- abstract: 'This fluid dynamics video demonstrates an experiment on superfast thinning of a freestanding thin aqueous film. The production of such films is of fundamental interest for interfacial sciences and the applications in nanoscience. The stable phase of the film is of the order $5-50\,nm$; nevertheless thermal convection can be established which changes qualitatively the thinning behavior from linear to exponentially fast. The film is thermally driven on one spot by a very cold needle, establishing two convection rolls at a Rayleigh number of $10^7$. This in turn enforces thermal and mechanical fluctuations which change the thinning behavior in a peculiar way, as shown in the video.' author: - | Michael Winkler$^{1)}$, Guggi Kofod$^{1)}$, Rumen Krastev$^{3)}$, and Markus Abel$^{1),2)}$\ \ 1) Institute for Physics and Astronomy, University of Potsdam, 14476 Potsdam,\ 2) Université Henri Poincaré-LEMTA, BP 160 - 54504 Vandoeuvre,\ 3) NMI - Natural and Medical Science Institute, 72770 Reutlingen, bibliography: - 'gallery.bib' title: | Superfast Thinning of a\ Nanoscale Thin Liquid Film --- ![image](fig/sequence.pdf){width="\textwidth"} Thin liquid films may show a very thin stable phase of nanometer scale. Since the film is no longer visible by optical wavelengths at this thickness, it is called a Black Film [@derjaguin1989theory]. The evolution of an initially thick, freestanding film towards this equilibrium thickness is a slow process which can be observed as a flat boundary of Black Film on top of a periodic color pattern moving downwards. The colors correspond to the repeated negative interference of light waves when the condition $n\cdot \lambda/4$ is met. This process is driven by gravitation and surface forces, but the time scale is set by the Poiseuille flow between the film interfaces [@couder1989hydrodynamics], cf. Fig. \[fig:thinningsequence\], 75s and 115s. When the thin film is driven, the motion may change due to the altered transport properties. We explore this possibility by thermal driving with a cold copper rod at $100 \,K$, corresponding to a Rayleigh number of $10^7$. This establishes two stable convection rolls, Fig. \[fig:thinningsequence\], 195s–275s and gives rise to large mechanical and thermal fluctuations. These fluctuations in turn generate spontaneously spots of stable and light black film inside the unstable thick and heavy phase. The spots are convected for small size until they eventually escape to the top due to buoyancy. While being convected, the spots grow and leave behind tails of Black Film, thereby increasing the Black Film area in an exponential manner, cf. Fig. \[fig:thinningsequence\], $280\,s--305\,s$.
--- abstract: 'We study the classical non-relativistic two-dimensional one-component plasma at Coulomb coupling $\Gamma=2$ on the Riemannian surface known as Flamm’s paraboloid which is obtained from the spatial part of the Schwarzschild metric. At this special value of the coupling constant, the statistical mechanics of the system are exactly solvable analytically. The Helmholtz free energy asymptotic expansion for the large system has been found. The density of the plasma, in the thermodynamic limit, has been carefully studied in various situations.' author: - Riccardo Fantoni - Gabriel Téllez bibliography: - '2docp.bib' title: | Two-dimensional one-component plasma\ on a Flamm’s paraboloid --- Keywords: Coulomb systems, one-component plasma, non constant curvature. Introduction ============ The system under consideration is a classical (non quantum) two-dimensional one-component plasma: a system composed of one species of charged particles living in a two-dimensional surface, immersed in a neutralizing background, and interacting with the Coulomb potential. The one-component classical Coulomb plasma is exactly solvable in one dimension [@Edwards62]. In two dimensions, in their 1981 work, B. Jancovici and A. Alastuey [@Jancovici81b; @Alastuey81] showed how the partition function and $n$-body correlation functions of the two-dimensional one-component classical Coulomb plasma (2dOCP) on a plane can be calculated exactly analytically at the special value of the coupling constant $\Gamma=\beta q^2=2$, where $\beta$ is the inverse temperature and $q$ the charge carried by the particles. This has been a very important result in statistical physics since there are very few analytically solvable models of continuous fluids in dimensions greater than one. Since then, a growing interest in two-dimensional plasmas has lead to study this system on various flat geometries [@Rosinberg84; @Jancovici94; @Jancovici96] and two-dimensional curved surfaces: the cylinder [@Choquard81; @Choquard83], the sphere [@Caillol81; @Jancovici92; @Jancovici96b; @Tellez99; @Jancovici00] and the pseudosphere [@Jancovici98; @Fantoni03jsp; @Jancovici04]. These surface have constant curvature and the plasma there is homogeneous. Therefore, it is interesting to study a case where the surface does not have a constant curvature. In this work we study the 2dOCP on the Riemannian surface ${\cal S}$ known as the Flamm’s paraboloid, which is obtained from the spatial part of the Schwarzschild metric. The Schwarzschild geometry in general relativity is a vacuum solution to the Einstein field equation which is spherically symmetric and in a two dimensional world its spatial part has the form $$\begin{aligned} \label{metric} d\mathbf{s}^2=\left(1-\frac{2M}{r}\right)^{-1}\,dr^2+r^2\,d\varphi^2~.\end{aligned}$$ In general relativity, $M$ (in appropriate units) is the mass of the source of the gravitational field. This surface has a hole of radius $2M$ and as the hole shrinks to a point (limit $M\to 0$) the surface becomes flat. It is worthwhile to stress that, while the Flamm’s paraboloid considered here naturally arises in general relativity, we will study the classical ([*i.e.*]{} non quantum) statistical mechanics of the plasma obeying non-relativistic dynamics. Recent developments for a statistical physics theory in special relativity have been made in [@kaniadakis02; @kaniadakis05]. To the best of our knowledge no attempts have been made to develop a statistical mechanics in the framework of general relativity. 
The “Schwarzschild wormhole” provides a path from the upper “universe” to the lower one. We will study the 2dOCP on a single universe, on the whole surface, and on a single universe with the “horizon” (the region $r=2M$) grounded. Since the curvature of the surface is not a constant but varies from point to point, the plasma will not be uniform even in the thermodynamic limit. We will show how the Coulomb potential between two unit charges on this surface is given by $-\ln(|z_1-z_2|/\sqrt{|z_1z_2|})$ where $z_i=(\sqrt{r_i}+\sqrt{r_i-2M})^2e^{i\varphi_i}$. This simple form will allow us to determine analytically the partition function and the $n$-body correlation functions at $\Gamma=2$ by extending the original method of Jancovici and Alastuey [@Jancovici81b; @Alastuey81]. We will also compute the thermodynamic limit of the free energy of the system, and its finite-size corrections. These finite-size corrections to the free energy will contain the signature that Coulomb systems can be seen as critical systems in the sense explained in [@Jancovici94; @Jancovici96]. The work is organized as follows: in section \[sec:model\], we describe the one-component plasma model and the Flamm’s paraboloid, [*i.e.*]{} the Riemannian surface ${\cal S}$ where the plasma is embedded. In section \[sec:poisson\], we find the Coulomb pair potential on the surface ${\cal S}$ and the particle-background potential. We found it convenient to split this task into three cases. We first solve Poisson equation on just the upper half of the surface ${\cal S}$. We then find the solution on the whole surface and at last we determine the solution in the grounded horizon case. In section \[sec:correlations\], we determine the exact analytical expression for the partition function and density at $\Gamma=2$ for the 2dOCP on just one half of the surface, on the whole surface, and on the surface with the horizon grounded. In section \[sec:conclusions\], we outline the conclusions. The model {#sec:model} ========= A one-component plasma is a system of $N$ pointwise particles of charge $q$ and density $n$ immersed in a neutralizing background described by a static uniform charge distribution of charge density $\rho_b=-qn_b$. In this work, we want to study a two-dimensional one-component plasma (2dOCP) on a Riemannian surface ${\cal S}$ with the following metric $$\begin{aligned} d\mathbf{s}^2=g_{\mu\nu}dx^\mu dx^\nu= \left(1-\frac{2M}{r}\right)^{-1}dr^2+r^2d\varphi^2~.\end{aligned}$$ or $g_{rr}=1/(1-2M/r),g_{\varphi\varphi}=r^2$, and $g_{r\varphi}=0$. This is an embeddable surface in the three-dimensional Euclidean space with cylindrical coordinates $(r,\varphi,Z)$ with $d\mathbf{s}^2=dZ^2+dr^2+r^2d\varphi^2$, whose equation is $$\begin{aligned} \label{surf} Z(r)=\pm 2\sqrt{2M(r-2M)}~. \end{aligned}$$ This surface is illustrated in Fig. \[fig:surf\]. It has a hole of radius $2M$. We will from now on call the $r=2M$ region of the surface its “horizon”. ![The Riemannian surface ${\cal S}$: the Flamm’s paraboloid.[]{data-label="fig:surf"}](s){width="\GraphicsWidth"} ### The Flamm’s paraboloid $\cal S$ {#sec:topology} The surface ${\cal S}$ whose local geometry is fixed by the metric (\[metric\]) is known as the Flamm’s paraboloid. It is composed by two identical “universes”: ${\cal S}_+$ the one at $Z>0$, and ${\cal S}_-$ the one at $Z<0$. These are both multiply connected surfaces with the “Schwarzschild wormhole” providing the path from one to the other. 
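As a quick symbolic check (ours, using SymPy): substituting $dZ=Z'(r)\,dr$ from the embedding (\[surf\]) into $dZ^2+dr^2+r^2\,d\varphi^2$ must reproduce the $g_{rr}$ of the metric (\[metric\]); the $\varphi\varphi$ components ($r^2$) coincide trivially.

```python
import sympy as sp

r, M = sp.symbols('r M', positive=True)

Z = 2*sp.sqrt(2*M*(r - 2*M))                 # embedding of Eq. (surf), upper sheet
g_rr = sp.diff(Z, r)**2 + 1                  # coefficient of dr^2 induced by dZ^2 + dr^2

print(sp.simplify(g_rr - 1/(1 - 2*M/r)))     # 0: the induced metric matches Eq. (metric)
```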
The system of coordinates $(r,\varphi)$ with the metric (\[metric\]) has the disadvantage that it requires two charts to cover the whole surface ${\cal S}$. It can be more convenient to use the variable $$\begin{aligned} \label{u} u=\frac{Z}{4M}= \pm\sqrt{\frac{r}{2M}-1}\end{aligned}$$ instead of $r$. Replacing $r$ as a function of $Z$ using equation (\[surf\]) gives the following metric when using the system of coordinates $(u,\varphi)$, $$\begin{aligned} d\mathbf{s}^2=4M^2(1+u^2)\left[4\,du^2+(1+u^2)\,d\varphi^2\right]~.\end{aligned}$$ The region $u>0$ corresponds to ${\cal S}_+$ and the region $u<0$ to ${\cal S}_{-}$. Let us consider that the OCP is confined in a “disk” defined as $$\begin{aligned} \Omega_R^{+}=\{{{\bf q}}=(r,\varphi)\in {\cal S}_+ |0\le\varphi\le 2\pi, 2M\le r\le R\}~.\end{aligned}$$ The area of this disk is given by $$\begin{aligned} \label{volume} \mathcal{A}_R=\int_{\Omega_R} dS=\pi\left[\sqrt{R(R-2M)}(3M+R)+ 6M^2\ln\left(\frac{\sqrt{R}+\sqrt{R-2M}}{\sqrt{2M}}\right)\right]~,\end{aligned}$$ where $dS=\sqrt{g}\,dr\,d\varphi$ and $g=\det(g_{\mu\nu})$. The perimeter is $\mathcal{C}_R=2\pi R$. The Riemann tensor in a two dimensional space has only $2^2(2^2-1)/12=1$ independent component. In our case the characteristic component is $$\begin{aligned} {R^r}_{\varphi r\varphi}=-\frac{M}{r}~.\end{aligned}$$ The scalar curvature is then given by the following indexes contractions $$\begin{aligned} \mathcal{R}={R^\mu}_\mu={R^{\mu\nu}}_{\mu\nu}=2{R^{r\varphi}}_{r\varphi} =2g^{\varphi\varphi}{R^r}_{\varphi r\varphi}= -\frac{2M}{r^3}~, \end{aligned}$$ and the (intrinsic) Gaussian curvature is $K=\mathcal{R}/2=-M/r^3$. The (extrinsic) mean curvature of the manifold turns out to be $H=-\sqrt{M/8r^3}$. The Euler characteristic of the disk $\Omega_R^{+}$ is given by $$\begin{aligned} \chi=\frac{1}{2\pi}\left(\int_{\Omega_R^{+}}K\,dS+ \int_{\partial\Omega_R^{+}}k\,dl\right)~, \end{aligned}$$ where $k$ is the geodesic curvature of the boundary $\partial\Omega_R^{+}$. The Euler characteristic turns out to be zero, in agreement with the Gauss-Bonnet theorem $\chi=2-2h-b$ where $h=0$ is the number of handles and $b=2$ the number of boundaries. We can also consider the case where the system is confined in a “double” disk $$\Omega_R=\Omega_R^{+}\cup\Omega_R^{-}\,,$$ with $\Omega_R^{-}=\{{{\bf q}}=(r,\varphi)\in {\cal S}_{-} |0\le\varphi\le 2\pi, 2M\le r\le R\}$, the disk image of $\Omega_{R}^{+}$ on the lower universe ${\cal S}_{-}$ portion of ${\cal S}$. The Euler characteristic of $\Omega_R$ is also $\chi=0$. ### A useful system of coordinates {#sec:good-coordinates} The Laplacian for a function $f$ is $$\begin{aligned} \nonumber \Delta f&=& \frac{1}{\sqrt{g}}\frac{\partial}{\partial q^\mu} \left(\sqrt{g}\,g^{\mu\nu}\frac{\partial}{\partial q^\nu}\right)f\\ &=&\left[\left(1-\frac{2M}{r}\right)\frac{\partial^2}{\partial r^2} +\frac{1}{r^2}\frac{\partial^2}{\partial \varphi^2} +\left(\frac{1}{r}-\frac{M}{r^2}\right)\frac{\partial}{\partial r}\right] f~,\end{aligned}$$ where ${{\bf q}}\equiv(r,\varphi)$. In appendix \[app:green\], we show how, finding the Green function of the Laplacian, naturally leads to consider the system of coordinates $(x,\varphi)$, with $$x=(\sqrt{u^2+1}+u)^{2} \,.$$ The range for the variable $x$ is $\left]0,+\infty\right[$. The lower paraboloid ${\cal S}_{-}$ corresponds to the region $0<x<1$ and the upper one ${\cal S}_{+}$ to the region $x>1$. 
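The closed-form area (\[volume\]) can be cross-checked numerically (a small sketch of ours, for arbitrary test values of $M$ and $R$) by integrating $dS=\sqrt{g}\,dr\,d\varphi$ directly:

```python
import numpy as np
from scipy.integrate import quad

M, R = 1.0, 7.5                      # arbitrary test values with R > 2M

# dS = sqrt(g) dr dphi with sqrt(g) = r^{3/2}/sqrt(r-2M); the substitution r = 2M + t^2
# removes the integrable endpoint singularity at r = 2M
area_num, _ = quad(lambda t: 4*np.pi*(2*M + t**2)**1.5, 0, np.sqrt(R - 2*M))

# closed form of Eq. (volume)
area_exact = np.pi*(np.sqrt(R*(R - 2*M))*(3*M + R)
                    + 6*M**2*np.log((np.sqrt(R) + np.sqrt(R - 2*M))/np.sqrt(2*M)))

print(area_num, area_exact, np.isclose(area_num, area_exact))   # the two values agree
```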
A point in the upper paraboloid with coordinate $(x,\varphi)$ has a mirror image by reflection ($u\to -u$) in the lower paraboloid, with coordinates $(1/x,\varphi)$, since if $$x=(\sqrt{u^2+1}+u)^{2}$$ then $$\frac{1}{x}=(\sqrt{u^2+1}-u)^{2} \,.$$ In the upper paraboloid ${\cal S}_{+}$, the new coordinate $x$ can be expressed in terms of the original one, $r$, as $$x=\frac{(\sqrt{r}+\sqrt{r-2M})^2}{2M} \,.$$ Using this system of coordinates, the metric takes the form of a flat metric multiplied by a conformal factor $$\label{eq:metric-in-x} d\mathbf{s}^2= \frac{M^2}{4}\left(1+\frac{1}{x}\right)^4 \left(dx^2+x^2\,d\varphi^2\right)\,.$$ The Laplacian also takes a simple form $$\Delta f =\frac{4}{M^2\left(1+\frac{1}{x}\right)^4} \,\Delta_{\mathrm{flat}}f$$ where $$\Delta_{\mathrm{flat}}f= \frac{\partial^2 f}{\partial x^2} +\frac{1}{x}\frac{\partial f}{\partial x} +\frac{1}{x^2}\frac{\partial^2 f}{\partial \varphi^2}$$ is the Laplacian of the flat Euclidean space $\mathbb{R}^2$. The determinant of the metric is now given by $g=[ M^2 x (1+x^{-1})^4 /4]^2$. With this system of coordinates $(x,\varphi)$, the area of a “disk” $\Omega_{R}^{+}$ of radius $R$ \[in the original system $(r,\varphi)$\] is given by $$\mathcal{A}_R=\frac{\pi M^2}{4}\,p(x_m)$$ with $$\label{eq:p(x)} p(x)=x^2+8 x-\frac{8}{x}-\frac{1}{x^2}+12\ln x$$ and $x_m=(\sqrt{R}+\sqrt{R-2M})^2/(2M)$. Coulomb potential {#sec:poisson} ================= Coulomb potential created by a point charge {#sec:green} ------------------------------------------- The Coulomb potential $G(x,\varphi;x_0,\varphi_0)$ created at $(x,\varphi)$ by a unit charge at $(x_0,\varphi_0)$ is given by the Green function of the Laplacian $$\label{eq:LaplaceGreen} \Delta G(x,\varphi;x_0,\varphi_0) =-2\pi \delta^{(2)}(x,\varphi;x_0,\varphi_0)$$ with appropriate boundary conditions. The Dirac distribution is given by $$\delta^{(2)}(x,\varphi;x_0,\varphi_0) =\frac{4}{M^2 x (1+x^{-1})^4}\,\delta(x-x_0)\delta(\varphi-\varphi_0)$$ Notice that using the system of coordinates $(x,\varphi)$ the Laplacian Green function equation takes the simple form $$\label{eq:GreenLaplace-flat} \Delta_{\mathrm{flat}} G(x,\varphi;x_0,\varphi_0) =-2\pi\frac{1}{x}\,\delta(x-x_0)\delta(\varphi-\varphi_0)$$ which is formally the same Laplacian Green function equation for flat space. We shall consider three different situations: when the particles can be in the whole surface ${\cal S}$, or when the particles are confined to the upper paraboloid universe ${\cal S}_{+}$, confined by a hard wall or by a grounded perfect conductor. ### Coulomb potential $G^{\mathrm{ws}}$ when the particles live in the whole surface ${\cal S}$ To complement the Laplacian Green function equation (\[eq:LaplaceGreen\]), we impose the usual boundary condition that the electric field $-\nabla G$ vanishes at infinity ($x\to\infty$ or $x\to0$). Also, we require the usual interchange symmetry $G(x,\varphi;x_0,\varphi_0)=G(x_0,\varphi_0;x,\varphi)$ to be satisfied. Additionally, due to the symmetry between each universe ${\cal S}_{+}$ and ${\cal S}_{-}$, we require that the Green function satisfies the symmetry relation $$\label{eq:symmetry-S+S-} G^{\mathrm{ws}}(x,\varphi;x_0,\varphi_0)= G^{\mathrm{ws}}(1/x,\varphi;1/x_0,\varphi_0)$$ The Laplacian Green function equation (\[eq:LaplaceGreen\]) can be solved, as usual, by using the decomposition as a Fourier series. 
Since equation (\[eq:LaplaceGreen\]) reduces to the flat Laplacian Green function equation (\[eq:GreenLaplace-flat\]), the solution is the standard one $$\label{eq:Fourier} G(x,\varphi;x_0,\varphi_0)= \sum_{n=1}^{\infty} \frac{1}{n}\left(\frac{x_{<}}{x_{>}}\right)^{n} \cos\left[ n(\varphi-\varphi_0)\right] +g_0(x,x_0)$$ where $x_{>}=\max(x,x_0)$ and $x_{<}=\min(x,x_0)$. The Fourier coefficient for $n=0$ has the form $$g_0(x,x_0)= \begin{cases} a_0^{+}\ln x+b_0^{+}\,,&x>x_0\\ a_0^{-}\ln x+b_0^{-}\,,&x<x_0\,. \end{cases}$$ The coefficients $a_0^{\pm},b_0^{\pm}$ are determined by the boundary conditions that $g_0$ should be continuous at $x=x_0$, its derivative discontinuous, $\partial_x g_0|_{x=x_0^{+}}-\partial_x g_0|_{x=x_0^{-}}=-1/x_0$, and the boundary condition at infinity $\nabla g_0|_{x\to\infty}=0$ and $\nabla g_0|_{x\to 0}=0$. Unfortunately, the boundary condition at infinity is trivially satisfied for $g_0$; therefore $g_0$ cannot be determined with this condition alone. In flat space, this is the reason why the Coulomb potential can have an arbitrary additive constant added to it. However, in our present case, we have the additional symmetry relation (\[eq:symmetry-S+S-\]) which should be satisfied. This fixes the Coulomb potential up to an additive constant $b_0$. We find $$\label{eq:g0-ws} g_0(x,x_0)=-\frac{1}{2}\ln\frac{x_{>}}{x_{<}} + b_0\,,$$ and summing explicitly the Fourier series (\[eq:Fourier\]), we obtain $$\label{eq:Gws} G^{\mathrm{ws}}(x,\varphi;x_0,\varphi_0)= -\ln\frac{\left|z-z_0\right|}{\sqrt{\left|z z_0 \right|}} +b_0 ~,$$ where we defined $z=xe^{i\varphi}$ and $z_0=x_0e^{i\varphi_0}$. Notice that this potential does not reduce exactly to the flat one when $M=0$. This is due to the fact that the whole surface $\mathcal{S}$ in the limit $M\to0$ is not exactly a flat plane $\mathbb{R}^2$, but rather two flat planes connected by a hole at the origin; this hole modifies the Coulomb potential. ### Coulomb potential $G^{\mathrm{hs}}$ when the particles live in the half surface ${\cal S}_{+}$ confined by hard walls We consider now the case when the particles are restricted to live in the half surface ${\cal S}_{+}$, $x>1$, and they are confined by a hard wall located at the “horizon” $x=1$. The region $x<1$ (${\cal S}_{-}$) is empty and has the same dielectric constant as the upper region occupied by the particles. Since there are no image charges, the Coulomb potential is the same $G^{\mathrm{ws}}$ as above. However, we would like to consider here a new model with a slightly different interaction potential between the particles. Since we are dealing only with the half surface, we can relax the symmetry condition (\[eq:symmetry-S+S-\]). Instead, we would like to consider a model where the interaction potential reduces to the flat Coulomb potential in the limit $M\to0$. The solution of the Laplacian Green function equation is given in Fourier series by equation (\[eq:Fourier\]). The zeroth order Fourier component $g_0$ can be determined by the requirement that, in the limit $M\to0$, the solution reduces to the flat Coulomb potential $$G^{\mathrm{flat}}({{\bf r}},{{\bf r}}')=-\ln\frac{|{{\bf r}}-{{\bf r}}'|}{L}$$ where $L$ is an arbitrary constant length.
Recalling that $x\sim 2r/M$, when $M\to0$, we find $$\label{eq:g0-hs} g_0(x,x_0)=-\ln x_{>}-\ln\frac{M}{2L}$$ and $$\label{cgreen} G^{\mathrm{hs}} (x,\varphi;x_0,\varphi_0)=-\ln |z-z_0|-\ln\frac{M}{2L} \,.$$ ### Coulomb potential $G^{\mathrm{gh}}$ when the particles live in the half surface ${\cal S}_{+}$ confined by a grounded perfect conductor Let us consider now that the particles are confined to ${\cal S}_{+}$ by a grounded perfect conductor at $x=1$ which imposes Dirichlet boundary condition to the electric potential. The Coulomb potential can easily be found from the Coulomb potential $G^{\mathrm{ws}}$ (\[eq:Gws\]) using the method of images $$\label{ghgreen} G^{\mathrm{gh}}(x,\varphi;x_0,\varphi_0)= -\ln\frac{|z-z_0|}{\sqrt{|z z_0|}} +\ln\frac{|z-\bar{z}_0^{-1}|}{\sqrt{|z \bar{z}_0^{-1}|}} = -\ln\left|\frac{z-z_0}{1-z\bar{z}_0}\right|$$ where the bar over a complex number indicates its complex conjugate. We will call this the grounded horizon Green function. Notice how its shape is the same of the Coulomb potential on the pseudosphere [@Fantoni03jsp] or in a flat disk confined by perfect conductor boundaries [@Jancovici96]. This potential can also be found using the Fourier decomposition. Since it will be useful in the following, we note that the zeroth order Fourier component of $G^{\mathrm{gh}}$ is $$\label{eq:g0-gh} g_0(x,x_0)=\ln x_{<}\,.$$ The background -------------- The Coulomb potential generated by the background, with a constant surface charge density $\rho_b$ satisfies the Poisson equation $$\begin{aligned} \Delta v_b =-2\pi \rho_b \,.\end{aligned}$$ Assuming that the system occupies an area $\mathcal{A}_R$, the background density can be written as $\rho_b=-qN_b/\mathcal{A}_R=-qn_b$, where we have defined here $n_b=N_b/\mathcal{A}_R$ the number density associated to the background. For a neutral system $N_b=N$. The Coulomb potential of the background can be obtained by solving Poisson equation with the appropriate boundary conditions for each case. Also, it can be obtained from the Green function computed in the previous section $$v_b(x,\varphi)=\int G(x,\varphi;x',\varphi') \rho_b \,dS' \,.$$ This integral can be performed easily by using the Fourier series decomposition (\[eq:Fourier\]) of the Green function $G$. Recalling that $dS=\frac{1}{4}M^2 x (1+x^{-1})^4\,dx\,d\varphi$, after the angular integration is done, only the zeroth order term in the Fourier series survives $$v_b(x,\varphi)=\frac{\pi \rho_b M^2}{2} \int_{1}^{x_m} g_0(x,x') \, x \left(1+\frac{1}{x}\right)^4 \,dx \,.$$ The previous expression is for the half surface case and the grounded horizon case. For the whole surface case, the lower limit of integration should be replaced by $1/x_m$, or, equivalently, the integral multiplied by a factor 2. Using the explicit expressions for $g_0$, (\[eq:g0-ws\]), (\[eq:g0-hs\]), and (\[eq:g0-gh\]) for each case, we find, for the whole surface, $$v_b^{\mathrm{ws}}(x,\varphi)=-\frac{\pi \rho_b M^2}{8} \left[ h(x)-h(x_m) +2 p(x_m) \ln x_m - 4b_0 p(x_m)\right]$$ where $p(x)$ was defined in equation (\[eq:p(x)\]), and $$h(x)=x^2+16x+\frac{16}{x}+\frac{1}{x^2}+12(\ln x)^2 - 34 \,.$$ Notice the following properties satisfied by the functions $p$ and $h$ $$\label{eq:p-h-symmetry} p(x)=-p(1/x) \,,\qquad h(x)=h(1/x)$$ and $$\label{eq:p-h-deriv} p(x)=x h'(x)/2 \,,\qquad p'(x)=2x\left(1+\frac{1}{x}\right)^4$$ where the prime stands for the derivative. 
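These properties of $p$ and $h$ are immediate to confirm symbolically; a short SymPy check (ours):

```python
import sympy as sp

x = sp.symbols('x', positive=True)
p = x**2 + 8*x - 8/x - 1/x**2 + 12*sp.log(x)
h = x**2 + 16*x + 16/x + 1/x**2 + 12*sp.log(x)**2 - 34

print(sp.simplify(sp.expand_log(p + p.subs(x, 1/x))))   # 0 :  p(x) = -p(1/x)
print(sp.simplify(sp.expand_log(h - h.subs(x, 1/x))))   # 0 :  h(x) =  h(1/x)
print(sp.simplify(p - x*sp.diff(h, x)/2))               # 0 :  p(x) = x h'(x)/2
print(sp.simplify(sp.diff(p, x) - 2*x*(1 + 1/x)**4))    # 0 :  p'(x) = 2x (1+1/x)^4
```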
The background potential for the half surface case, with the pair potential $-\ln(|z-z'|M/2L)$ is $$v_b^{\mathrm{hs}}(x,\varphi)=-\frac{\pi \rho_b M^2}{8} \left[ h(x)-h(x_m) + 2 p(x_m) \ln\frac{x_m M}{2L} \right] \,.$$ Also, the background potential in the half surface case, but with the pair potential $-\ln(|z-z'|/\sqrt{|zz'|})+b_0$ is $$v_b^{\overline{\mathrm{hs}}}(x,\varphi)= -\frac{\pi \rho_b M^2}{8} \left[ h(x)-\frac{h(x_m)}{2}+p(x_m)\left(\ln\frac{x_m}{x}-2 b_0\right) \right]\,.$$ Finally, for the grounded horizon case, $$v_b^{\mathrm{gh}}(x,\varphi)=-\frac{\pi \rho_b M^2}{8} \left[ h(x) - 2 p(x_m) \ln x\right] \,.$$ Partition function and density at $\Gamma=2$ {#sec:correlations} ============================================ We will now show how at the special value of the coupling constant $\Gamma=\beta q^2=2$ the partition function and $n$-body correlation functions can be calculated exactly. In the following we will distinguish four cases labeled by $A$: $A=\rm{hs}$, the plasma on the half surface (choosing $G^{\rm{hs}}$ as the pair Coulomb potential); $A=\rm{ws}$, the plasma on the whole surface (choosing $G^{\rm{ws}}$ as the pair Coulomb potential); $A=\overline{\rm{hs}}$, the plasma on the half surface but with the Coulomb potential $G^{\rm{ws}}$ of the whole surface case; and $A=\rm{gh}$, the plasma on the half surface with the grounded horizon (choosing $G^{\rm{gh}}$ as the pair Coulomb potential). The total potential energy of the plasma is, in each case $$\begin{aligned} \label{eq:hamiltonian-gen} V^A=v_{0}^A+q\sum_i v_{b}^A(x_i)+q^2\sum_{i<j}G^A(x_i,\varphi_i;x_j,\varphi_j)~, \end{aligned}$$ where $(x_i,\varphi_i)$ is the position of charge $i$ on the surface, and $$v_{0}^A=\frac{1}{2}\int \rho_b v_b^{A}(x,\varphi)\,dS$$ is the self energy of the background in each of the four mentioned cases. In the grounded case $A=\text{gh}$, one should add to $V^{\text{gh}}$ in (\[eq:hamiltonian-gen\]) the self energy that each particle has due to the polarization it creates on the grounded conductor. The 2dOCP on half surface with potential $-\ln|z-z'|-\ln M/(2L)$ {#sec:half-surface-1} ---------------------------------------------------------------- ### Partition function For this case, we work in the canonical ensemble with $N$ particles and the background neutralizes the charges: $N_b=N$, and $n=N/\mathcal{A}_R=n_b$. The potential energy of the system takes the explicit form $$\begin{aligned} V^{\mathrm{hs}} & = & -q^2\sum_{1\leq i<j\leq N}\ln|z_i-z_j| +\frac{q^2}{2}\alpha \sum_{i=1}^N h(x_i) +\frac{q^2}{2} N \ln \frac{M}{2L} -\frac{q^2}{4}N\alpha h(x_m) \nonumber\\ &&+\frac{q^2}{2} N^2 \ln x_m -\frac{q^2}{4} \alpha^2 \int_{1}^{x_m} h(x) p'(x)\,dx \label{eq:pot1}\end{aligned}$$ where we have used the fact that $dS=\pi M^2 x (1+x^{-1})^4\,dx/2=\pi M^2 p'(x)\,dx/4$, and we have defined $$\alpha=\frac{\pi n_b M^2}{4}\,.$$ Integrating by parts the last term of (\[eq:pot1\]) and using (\[eq:p-h-deriv\]), we find $$\begin{aligned} V^{\mathrm{hs}} & = & -q^2\sum_{1\leq i<j\leq N}\ln|z_i-z_j| +\frac{q^2}{2}\alpha \sum_{i=1}^N h(x_i) +\frac{q^2}{2} N \ln \frac{M}{2L} +\frac{q^2}{2} N^2 \ln x_m \nonumber\\ && +\frac{q^2}{2}\alpha^2\int_{1}^{x_m} \frac{[p(x)]^2}{x}\,dx -\frac{q^2}{2} N \alpha h(x_m) \,. 
\label{eq:Vhs}\end{aligned}$$ When $\beta q^2=2$, the canonical partition function can be written as $$Z^{\mathrm{hs}}=\frac{1}{\lambda^{2N}}\,Z_0^{\mathrm{hs}} \exp(-\beta F_0^{\mathrm{hs}})$$ with $$\label{eq:F0} -\beta F_0^{\mathrm{hs}}= -N \ln \frac{M}{2L} - N^2 \ln x_m -\alpha^2\int_{1}^{x_m} \frac{[p(x)]^2}{x}\,dx + N \alpha h(x_m)$$ and $$Z_0^{\mathrm{hs}}=\frac{1}{N!}\int \prod_{i=1}^N dS_{i}\, e^{-\alpha h(x_i)} \prod_{1\leq i<j \leq N} |z_i-z_j|^2 \,.$$ where $\lambda=\sqrt{2\pi\beta\hbar^2/m}$ is the de Broglie thermal wavelength. $Z_0$ can be computed using the original method for the OCP in flat space [@Jancovici81b; @Alastuey81], which was originally introduced in the context of random matrices [@Mehta91; @Ginibre65]. By expanding the Vandermonde determinant $\prod_{i<j}(z_i-z_j)$ and performing the integration over the angles, the partition function can be written as $$\begin{aligned} \label{cpf} Z_0^{\mathrm{hs}}&=& \prod_{k=0}^{N-1}{\cal B}_N(k)~,\end{aligned}$$ where $$\begin{aligned} {\cal B}_N(k)&=& \int x^{2k} e^{-\alpha h(x)}\,dS \\ &=& \frac{\alpha}{n_b}\int_{1}^{x_m} x^{2k} e^{-\alpha h(x)} p'(x)\,dx \,. \label{gamma}\end{aligned}$$ In the flat limit $M\to0$, we have $x\sim 2r/M$, with $r$ the radial coordinate of the flat space $\mathbb{R}^2$, and $h(x)\sim p(x)\sim x^2$. Then, $\mathcal{B}_N$ reduces to $$\mathcal{B}_N(k)\sim\frac{1}{n_b \alpha^k}\, \gamma(k+1,N)$$ where $\gamma(k+1,N)=\int_0^{N} t^k e^{-t}\,dt$ is the incomplete Gamma function. Replacing into (\[cpf\]), we recover the partition function for the OCP in a flat disk of radius $R$ [@Alastuey81] $$\ln Z^{\mathrm{hs}}=\frac{N}{2}\ln\frac{\pi L^2}{n_b\lambda^4} +\frac{3N^2}{4}-\frac{N^2}{2}\ln N +\sum_{k=1}^{N} \ln \gamma(k,N) \,.$$ ### Thermodynamic limit $R\to\infty$, $x_m\to\infty$, and fixed $M$ Let us consider the limit of a large system when $x_m=(\sqrt{R}+\sqrt{R-2M})^2/(2M)\to\infty$, $N\to\infty$, constant density $n$, and constant $M$. Therefore $\alpha$ is also kept constant. In appendix \[app:gamma\], we develop a uniform asymptotic expansion of $\mathcal{B}_N(k)$ when $N\to\infty$ and $k\to\infty$ with $(N-k)/\sqrt{N} = O(1)$. Let us define ${\hat{x}}_k$ by $$\label{eq:def-x_k} k=\alpha p({\hat{x}}_k)\,.$$ The asymptotic expansion (\[eq:asymptics-B\]) of $\mathcal{B}_N(k)$ can be rewritten as $$\begin{aligned} \label{eq:asympt-B} \mathcal{B}_N(k)&=& \frac{1}{2n_b} \sqrt{\pi \alpha {\hat{x}}_k p'({\hat{x}}_k) }\, e^{2k\ln {\hat{x}}_k -\alpha h({\hat{x}}_k)}\left[1+{\mathop\text{erf}}\left( \epsilon_k\right)\right] \nonumber\\ &&\times \left[1+\frac{1}{12 k} +\frac{1}{\sqrt{k}}\,\xi_1(\epsilon_k) +\frac{1}{k}\,\xi_2(\epsilon_k)\right]\end{aligned}$$ where $$\epsilon_k=\frac{2p(x_k)}{x_k p'(x_k)}\frac{N-k}{\sqrt{2k}}$$ is a order one parameter, and the functions $\xi_1(\epsilon_k)$ and $\xi_2(\epsilon_k)$ can be obtained from the calculation presented in appendix \[app:gamma\]. They are integrable functions for $\epsilon_k\in[0,\infty[$. We will obtain an expansion of the free energy up to the order $\ln N$. At this order the functions $\xi_{1,2}$ do not contribute to the result. 
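Before carrying out this expansion, we note that for small systems the exact expressions are straightforward to evaluate numerically. The Python sketch below is illustrative only: it again assumes the closed form $p(x)=x^2+8x+12\ln x-8/x-1/x^2$ for the function defined in (\[eq:p(x)\]), fixes $x_m$ from $N=\alpha p(x_m)$, computes $\mathcal{B}_N(k)$ of (\[gamma\]) by quadrature, assembles $\ln Z_0^{\mathrm{hs}}$ from (\[cpf\]), and prints the ratio of $\mathcal{B}_N(k)$ to the flat-limit form $\gamma(k+1,N)/(n_b\alpha^k)$, which should approach 1 as $M\to0$.

```python
import numpy as np
from scipy.integrate import quad
from scipy.optimize import brentq
from scipy.special import gamma, gammainc

def h(x):
    return x**2 + 16*x + 16/x + 1/x**2 + 12*np.log(x)**2 - 34

def p(x):
    # hypothetical closed form, consistent with p'(x) = 2x(1+1/x)^4 and p(1) = 0
    return x**2 + 8*x + 12*np.log(x) - 8/x - 1/x**2

pprime = lambda x: 2*x*(1 + 1/x)**4

N, n_b, M = 8, 1.0, 0.002                 # a small system, close to the flat limit
alpha = np.pi * n_b * M**2 / 4
x_m = brentq(lambda x: alpha*p(x) - N, 1.0 + 1e-9, 1e8)     # N = alpha p(x_m)

def B(k):
    # exact B_N(k) of eq. (gamma), by quadrature
    val, _ = quad(lambda x: x**(2*k) * np.exp(-alpha*h(x)) * pprime(x),
                  1.0, x_m, limit=200)
    return alpha * val / n_b

lnZ0 = sum(np.log(B(k)) for k in range(N))                  # eq. (cpf)
flat = [gammainc(k+1, N) * gamma(k+1) / (n_b * alpha**k) for k in range(N)]
print("ln Z0 =", lnZ0)
print("B_N(k)/flat-limit:", [round(B(k) / flat[k], 3) for k in range(N)])
```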
Writing down $$\ln Z_0^{\mathrm{hs}} =\sum_{k=0}^{N} \ln \mathcal{B}_{N}(k)-\ln\mathcal{B}_{N}(N)$$ and using the asymptotic expansion (\[eq:asympt-B\]), we have $$\begin{aligned} \ln Z_0^{\mathrm{hs}} &=& -N\ln\frac{n_{b}}{\sqrt{2\pi}}+S_1^{\mathrm{hs}} +S_2^{\mathrm{hs}} +S_3^{\mathrm{hs}} +\frac{1}{12}\ln N \nonumber\\ && -\ln\left[\sqrt{\alpha} x_m \left(1+\frac{1}{x_m}\right)^2\right] -2N\ln x_m +\alpha h(x_m) +O(1)\end{aligned}$$ with $$\begin{aligned} S_1^{\mathrm{hs}} & = & \sum_{k=0}^N \ln \left[ \sqrt{\alpha} {\hat{x}}_k \left(1+\frac{1}{{\hat{x}}_k}\right)^2\right] \\ S_2^{\mathrm{hs}} &=& \sum_{k=0}^{N} \left [2k\ln {\hat{x}}_k - \alpha h({\hat{x}}_k) \right] \\ S_3^{\mathrm{hs}} &=& \sum_{k=0}^N \ln \frac{1+{\mathop\text{erf}}\left(\epsilon_k \right)}{2} \,.\end{aligned}$$ Notice that the contribution of $\xi_{1}(\epsilon_k)$ is of order one, since $\sum_k \xi_1(\epsilon_k)/\sqrt{k}\sim \int_0^{\infty} \xi_{1}(\epsilon)\, d\epsilon = O(1)$. Also, $\sum_k \xi_2(\epsilon_k)/k\sim (1/\sqrt{N})\int_0^{\infty} \xi_{2}(\epsilon)\, d\epsilon = O(1/\sqrt{N})$. $S_3^{\mathrm{hs}}$ gives a contribution of order $\sqrt{N}$: transforming the sum over $k$ into an integral over the variable $t=\epsilon_k$, we have $$S_3^{\mathrm{hs}}=\sqrt{2N} \int_0^{\infty} \ln \frac{1+{\mathop\text{erf}}(t)}{2}\,dt + O(1) \,.$$ This contribution is the same as the perimeter contribution in the flat case. To expand $S_1^{\mathrm{hs}}$ and $S_2^{\mathrm{hs}}$ up to order $O(1)$, we need to use the Euler-McLaurin summation formula [@Abramowitz; @Wong89] $$\sum_{k=0}^{N} f(k) = \int_0^N f(y)\,dy +\frac{1}{2}\left[f(0)+f(N)\right] +\frac{1}{12}\left[f'(N)-f'(0)\right]+\cdots \,.$$ We find $$\begin{aligned} S_1^{\mathrm{hs}}&=& \frac{N}{2}\ln\alpha + \alpha x_m^2 \left(\ln x_m-\frac{1}{2}\right) +\alpha x_m \left(8\ln x_m - 4\right) \nonumber\\ && +\left(14\alpha +\frac{1}{2} \right)\ln x_m + 6 \left(\ln x_m \right)^2\end{aligned}$$ and $$\begin{aligned} S_2^{\mathrm{hs}} &=& N^2 \ln x_m + N\ln x_m -\alpha N h(x_m) +\alpha^2 \int_{1}^{x_m} \frac{\left[p(x)\right]^2}{x}\,dx -\frac{\alpha}{2} \, h(x_m) +\frac{1}{6} \ln x_m \,.\end{aligned}$$ Summing all terms in $\ln Z_0^{\mathrm{hs}}$ and those from $\beta F_0^{\mathrm{hs}}$, we notice that all nonextensive terms cancel, as they should, and we obtain $$\ln Z^{\mathrm{hs}}= -N\beta f_B + 4 x_m \alpha - \mathcal{C}_R\,\beta\gamma_{\text{hard}} +\left(14\alpha - \frac{1}{6}\right) \ln x_m +O(1) \label{eq:Fhs-xm-infinity}$$ where $$\label{eq:bulk} \beta f_B=-\frac{1}{2} \ln\frac{2\pi^2 L^2}{n \lambda^4}$$ is the bulk free energy of the OCP in the flat geometry [@Alastuey81], $$\begin{aligned} \label{app:gam} \beta\gamma_{\text{hard}}=-\sqrt{\frac{n_b}{2\pi}} \int_0^\infty\ln\frac{1+{\mathop\text{erf}}(y)}{2}\,dy\end{aligned}$$ is the perimeter contribution to the free energy (“surface” tension) in the flat geometry near a plane hard wall [@Jancovici94], and $$\mathcal{C}_R=2\pi R=\pi M\sqrt{x_m p'(x_m)/2} = \pi M x_m + O(1)$$ is the perimeter of the boundary at $x=x_m$. The region $x\to\infty$ has zero curvature; therefore, in the limit $x_m\to\infty$, most of the system occupies an almost flat region. For this reason, the extensive term (proportional to $N$) is expected to be the same as the one in flat space, $f_B$. The largest boundary of the system, $x=x_m$, is also in an almost flat region, so it is not surprising to see the factor $\gamma_{\text{hard}}$ from the flat geometry appear there as well.
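For reference, the hard-wall surface tension (\[app:gam\]) is easily evaluated numerically; the following one-integral Python sketch (illustrative only) does so for a given background density.

```python
import numpy as np
from scipy.integrate import quad
from scipy.special import erf

def beta_gamma_hard(n_b):
    # perimeter ("surface") tension of eq. (app:gam) near a plane hard wall
    integral, _ = quad(lambda y: np.log((1 + erf(y)) / 2), 0, np.inf)
    return -np.sqrt(n_b / (2*np.pi)) * integral

print(beta_gamma_hard(1.0))   # positive, since the integrand is negative
```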
Nevertheless, we notice an additional contribution $4\alpha x_m$ to the perimeter contribution, which comes from the curvature of the system. In the logarithmic correction $\ln x_m$, we notice a $-(1/6)\ln x_m$ term, the same as in a flat disk geometry [@Jancovici94], but also a nonuniversal contribution due to the curvature $14\alpha \ln x_m$. ### Thermodynamic limit at fixed shape: $\alpha\to\infty$ and $x_m$ fixed In the previous section we studied a thermodynamic limit case where a large part of the space occupied by the particles becomes flat as $x\to \infty$ keeping $M$ fixed. Another interesting thermodynamic limit that can be studied is the one where we keep the shape of the space occupied by the particles fixed. This limit corresponds to the situation $M\to\infty$ and $R\to\infty$ while keeping the ratio $R/M$ fixed, and of course the number of particles $N\to\infty$ with the density $n$ fixed. Equivalently, recalling that $N=\alpha p(x_m)$, in this limit $x_m$ is fixed and finite, and $\alpha=\pi M^2 n_b/4\to\infty$. We shall use $\alpha$ as the large parameter for the expansion of the free energy. In this limit, we expect the curvature effects to remain important, in particular the bulk free energy (proportional to $\alpha$) will not be the same as in flat space. Using the expansion (\[eq:BN-fixed-shape\]) of $\mathcal{B}_N(k)$ for the fixed shape situation, we have $$\ln Z_0^{\mathrm{hs}}= N \ln\frac{\sqrt{\pi}}{n_b}+N \ln\sqrt{\alpha} +S_1^{\mathrm{hs,fixed}}+S_2^{\mathrm{hs,fixed}} +S_3^{\mathrm{hs,fixed}}+O(1)$$ where now $$\begin{aligned} \label{eq:S1-hs-fixed} S_1^{\mathrm{hs,fixed}} &=& \frac{1}{2}\sum_{k=0}^{N-1} \ln [{\hat{x}}_k p'({\hat{x}}_k)] \\ \label{eq:S2-hs-fixed} S_2^{\mathrm{hs,fixed}} &=& -\alpha\sum_{k=0}^{N-1} [h({\hat{x}}_k)-2p({\hat{x}}_k)\ln {\hat{x}}_k]\\ S_3^{\mathrm{hs,fixed}} &=& \sum_{k=0}^{N-1}\ln \frac{{\mathop\text{erf}}(\epsilon_{k,1})+ {\mathop\text{erf}}(\epsilon_{k,m})}{2}\end{aligned}$$ with $\epsilon_{k,m}$ and $\epsilon_{k,1}$ given in equations (\[eq:epsilon\_km\]) and (\[eq:epsilon\_k1\]), and ${\hat{x}}_k$ is given by $k=\alpha p({\hat{x}}_k)$. Using the Euler-McLaurin expansion, we obtain $$\begin{aligned} \label{eq:S1-hs-fixed-asympt} S_1^{\mathrm{hs,fixed}} &=& \alpha \int_1^{x_m} \frac{(1+x)^4}{x^3}\,\ln\frac{2(x+1)^4}{x^2}\,dx +O(1)\\ \label{eq:S2-hs-fixed-asympt} S_2^{\mathrm{hs,fixed}}&=&N^2 \ln x_m -\alpha N h(x_m) +\alpha^2 \int_{1}^{x_m} \frac{[p(x)]^2}{x}\,dx +\frac{\alpha}{2} h(x_m) - N\ln x_m + O(1) \,.\end{aligned}$$ For $S_3^{\mathrm{hs,fixed}}$, the relevant contributions are obtained when $k$ is of order $\sqrt{N}$, where $\epsilon_{k,1}$ is of order one, and when $N-k$ is of order $\sqrt{N}$, where $\epsilon_{k,m}$ is of order one. In those regions, the sum can be changed into an integral over the variable $t=\epsilon_{k,1}$ or $t=\epsilon_{k,m}$. This gives $$\begin{aligned} S_3^{\mathrm{hs,fixed}}&=&-\sqrt{\frac{4\pi\alpha}{n_b}}\, \left[x_m \left(1+\frac{1}{x_m}\right)^2 +4\right] \beta\gamma_{\text{hard}} +O(1)\end{aligned}$$ with $\gamma_{\text{hard}}$ given in equation (\[app:gam\]). Once again the nonextensive terms (proportional to $\alpha^2$) in $S_2^{\mathrm{hs,fixed}}$ cancel out with similar terms in $F_0^{\mathrm{hs,fixed}}$ from equation (\[eq:F0\]). 
The final result for the free energy $\beta F^{\mathrm{hs}}= -\ln Z^{\mathrm{hs}}$ is $$\begin{aligned} \ln Z^{\mathrm{hs}} &=& \alpha \left[ -p(x_m) \beta f_B + \frac{1}{2} \left[ h(x_m) - 2 p(x_m) \ln x_m \right] +\int_{1}^{x_m} \frac{(1+x)^4}{x^3}\,\ln\frac{(x+1)^4}{x^2}\,dx \right] \nonumber\\ && -\sqrt{\frac{4\pi\alpha}{n_b}}\, \left[x_m \left(1+\frac{1}{x_m}\right)^2+4\right] \beta\gamma_{\text{hard}} +O(1) \label{eq:free-energy-fixed-shape}\end{aligned}$$ where $f_B$, given by (\[eq:bulk\]), is the bulk free energy per particle in a flat space. We notice the additional contribution to the bulk free energy due to the important curvature effects \[second and third term of the first line of equation (\[eq:free-energy-fixed-shape\])\] that remain present in this thermodynamic limit. The boundary terms, proportional to $\sqrt{\alpha}$, turn out to be very similar to those of a flat space near a hard wall [@Jancovici82], with a contribution $\beta \gamma_{\text{hard}} \mathcal{C}_b$ for each boundary at $x_{b}=x_m$ and at $x_{b}=1$ with perimeter $$\label{eq:perimeter} \mathcal{C}_b=\pi M \sqrt{\frac{x_b p'(x_b)}{2}} =\pi M x_b \left(1+\frac{1}{x_b}\right)^2 \,.$$ Also, we notice the absence of $\ln \alpha$ corrections in the free energy. This is in agreement with the general results from Refs. [@Jancovici94; @Jancovici96], where, using arguments from conformal field theory, it is argued that for two-dimensional Coulomb systems living in a surface of Euler characteristic $\chi$, in the limit of a large surface keeping its shape fixed, the free energy should exhibit a logarithmic correction $(\chi/6) \ln R$ where $R$ is a characteristic length of the size of the surface. For our curved surface studied in this section, the Euler characteristic is $\chi=0$, therefore no logarithmic correction is expected. ### Distribution functions Following [@Jancovici81b], we can also find the $k$-body distribution functions $$\begin{aligned} \label{cf} n^{(k){\mathrm{hs}}}({{\bf q}}_1,\ldots,{{\bf q}}_k)= \det[{\cal K}_N^{\mathrm{hs}}({{\bf q}}_i,{{\bf q}}_j)]_{(i,j)\in\{1,\ldots,k\}^2}~,\end{aligned}$$ where ${{\bf q}}_i=(x_i,\varphi_i)$ is the position of the particle $i$, and $$\begin{aligned} \label{KN} {\cal K}_N^{\mathrm{hs}}({{\bf q}}_i,{{\bf q}}_j) =\sum_{k=0}^{N-1}\frac{z_{i}^{k}\bar{z}_j^{k} e^{-\alpha[h(|z_i|)+h(|z_j|)]/2}}{{\cal B}_N(k)}~.\end{aligned}$$ where $z_k=x_k e^{i\varphi_k}$. In particular, the one-body density is given by $$\label{eq:density-Nfinite} n^{\mathrm{hs}}(x)=\mathcal{K}_N({{\bf q}},{{\bf q}})= \sum_{k=0}^{N-1} \frac{x^{2k} e^{-\alpha h(x)}}{\mathcal{B}_N(k)} \,.$$ ### Internal screening Internal screening means that at equilibrium, a particle of the system is surrounded by a polarization cloud of opposite charge. 
It is usually expressed in terms of the simplest of the multipolar sum rules [@Martin88]: the charge or electroneutrality sum rule, which for the OCP reduces to the relation $$\begin{aligned} \label{csr} \int n^{(2){\mathrm{hs}}}({{\bf q}}_1,{{\bf q}}_2)\,dS_2=(N-1)n^{(1){\mathrm{hs}}}({{\bf q}}_1)~.\end{aligned}$$ This relation is trivially satisfied because of the particular structure (\[cf\]) of the correlation function, expressed as a determinant of the kernel $\mathcal{K}_{N}^{\mathrm{hs}}$, and the fact that $\mathcal{K}_{N}^{\mathrm{hs}}$ is a projector $$\int dS_3\, \mathcal{K}_N^{\mathrm{hs}}({{\bf q}}_1,{{\bf q}}_3) \mathcal{K}_N^{\mathrm{hs}}({{\bf q}}_3,{{\bf q}}_2) = \mathcal{K}_N^{\mathrm{hs}}({{\bf q}}_1,{{\bf q}}_2) \,.$$ Indeed, $$\begin{aligned} \nonumber \int n^{(2){\mathrm{hs}}}({{\bf q}}_1,{{\bf q}}_2)\,dS_2&=& \int [{\cal K}_N^{\mathrm{hs}}({{\bf q}}_1,{{\bf q}}_1) {\cal K}_N^{\mathrm{hs}}({{\bf q}}_2,{{\bf q}}_2)- {\cal K}_N^{\mathrm{hs}}({{\bf q}}_1,{{\bf q}}_2) {\cal K}_N^{\mathrm{hs}}({{\bf q}}_2,{{\bf q}}_1)]\,dS_2 \\ \nonumber &=&\int n^{(1){\mathrm{hs}}}({{\bf q}}_1)n^{(1){\mathrm{hs}}}({{\bf q}}_2)\,dS_2- {\cal K}_N^{\mathrm{hs}}({{\bf q}}_1,{{\bf q}}_1) \\ &=& (N-1)n^{(1){\mathrm{hs}}}({{\bf q}}_1) \,.\end{aligned}$$ ### External screening External screening means that, at equilibrium, an external charge introduced into the system is surrounded by a polarization cloud of opposite charge. When an external infinitesimal point charge $Q$ is added to the system, it induces a charge density $\rho_Q({{\bf q}})$. External screening means that $$\begin{aligned} \label{es} \int \rho_Q({{\bf q}})\, dS=-Q~.\end{aligned}$$ Using linear response theory, we can calculate $\rho_Q$ to first order in $Q$ as follows. Imagine that the charge $Q$ is at ${{\bf q}}$. Its interaction energy with the system is $\hat{H}_{int}=Q\hat{\phi}({{\bf q}})$ where $\hat{\phi}({{\bf q}})$ is the microscopic electric potential created at ${{\bf q}}$ by the system. Then, the induced charge density at ${{\bf q}}'$ is $$\begin{aligned} \rho_Q({{\bf q}}')=-\beta\langle\hat{\rho}({{\bf q}}')\hat{H}_{int}\rangle_T= -\beta Q \langle\hat{\rho}({{\bf q}}')\hat{\phi}({{\bf q}})\rangle_T~,\end{aligned}$$ where $\hat{\rho}({{\bf q}}')$ is the microscopic charge density at ${{\bf q}}'$, $\langle AB\rangle_T=\langle AB\rangle-\langle A\rangle\langle B\rangle$, and $\langle\ldots\rangle$ is the thermal average. Assuming external screening (\[es\]) is satisfied, one obtains the Carnie-Chan sum rule [@Martin88] $$\begin{aligned} \label{cc} \beta\int \langle\hat{\rho}({{\bf q}}')\hat{\phi}({{\bf q}})\rangle_T\,dS'=1~.\end{aligned}$$ In a uniform system, starting from this sum rule one can derive the second-moment Stillinger-Lovett sum rule [@Martin88]. This is not possible here, because our system is not homogeneous: the curvature is not constant throughout the surface but varies from point to point. If we apply the Laplacian with respect to ${{\bf q}}$ to this expression and use the Poisson equation $$\begin{aligned} \Delta_{{{\bf q}}}\langle\hat{\rho}({{\bf q}}')\hat{\phi}({{\bf q}})\rangle_T= -2\pi\langle\hat{\rho}({{\bf q}}')\hat{\rho}({{\bf q}})\rangle_T~,\end{aligned}$$ we find $$\begin{aligned} \label{csr'} \int \rho_e^{(2)}({{\bf q}}',{{\bf q}})\,dS'=0~,\end{aligned}$$ where $\rho_e^{(2)}({{\bf q}}',{{\bf q}})=\langle\hat{\rho}({{\bf q}}')\hat{\rho}({{\bf q}})\rangle_T$ is the excess pair charge density function. Eq. (\[csr'\]) is another way of writing the charge sum rule Eq.
(\[csr\]) in the thermodynamic limit. ### Asymptotics of the density in the limit $x_m\to\infty$ and $\alpha$ fixed, for $1\ll x \ll x_m$ The formula (\[eq:density-Nfinite\]) for the one-body density, although exact, does not allow a simple evaluation of the density at a given point in space, as one has first to calculate $\mathcal{B}_N(k)$ through an integral and then perform the sum over $k$. One can then try to determine the asymptotic behaviors of the density. In this section, we consider the limit $x_m\to\infty$ and $\alpha$ fixed, and we study the density in the bulk of the system $1\ll x\ll x_m$. In the sum (\[eq:density-Nfinite\]), the dominant terms are the ones for which $k$ is such that ${\hat{x}}_k=x$, with ${\hat{x}}_k$ defined in (\[eq:def-x\_k\]). Since $1\ll x \ll x_m$, the dominant terms in the calculation of the density are obtained for values of $k$ such that $1\ll k \ll N$. Therefore in the limit $N\to\infty$, in the expansion (\[eq:asympt-B\]) of $B_N(k)$, the argument of the error function is very large, then the error function can be replaced by 1. Keeping the correction $1/(12 k)$ from (\[eq:asympt-B\]) allow us to obtain an expansion of the density up to terms of order $O(1/x^2)$. Replacing the sum over $k$ into an integral over ${\hat{x}}_k$, we have $$n^{\text{hs}}(x)=\frac{n_b}{\sqrt{\pi}} \int_{-\infty}^{\infty} e^{\Psi({\hat{x}}_k)} f({\hat{x}}_k) \left(1-\frac{1}{12\alpha p({\hat{x}}_k)} \right) \,d{\hat{x}}_k$$ with $$\Psi({\hat{x}}_k)=2\alpha p({\hat{x}}_k)\ln \frac{x}{{\hat{x}}_k} -\alpha[h(x)-h({\hat{x}}_k)]$$ and $$f({\hat{x}}_k)=\sqrt{\frac{\alpha p'({\hat{x}}_k)}{{\hat{x}}_k}} \,.$$ We proceed now to use the Laplace method to compute this integral. The function $\Psi({\hat{x}}_k)$ has a maximum for $x={\hat{x}}_k$, with $\Psi(x)=0$ and \[eq:Psis\] $$\begin{aligned} \Psi''(x)&=&-\frac{2\alpha p'(x)}{x}\\ \Psi^{(3)}(x)&=&-\frac{4\alpha}{x}+O(1/x^2)\\ \Psi^{(4)}(x)&=&\frac{4\alpha}{x^2}+O(1/x^3) \,.\end{aligned}$$ Expanding for ${\hat{x}}_k$ close to $x$ and for $x\gg1$ up to order $1/x^2$, we have $$\begin{aligned} n^{\text{hs}}(x)&=&\frac{n_b}{\sqrt{\pi}} \int_{-\infty}^{+\infty} e^{-\alpha p'(x) ({\hat{x}}_k-x)^2/x} \left( f(x)+f'(x)({\hat{x}}_k-x)+\frac{f''(x)({\hat{x}}_k-x)^2}{2} \right) \nonumber\\ &&\times\left(1+\frac{1}{3!} \Psi^{(3)}(x)({\hat{x}}_k-x)^3 +\frac{1}{4!}\Psi^{(4)}(x)({\hat{x}}_k-x)^4 +\frac{[\Psi^{(3)}(x)]^2}{3!^2\ 2}({\hat{x}}_k-x)^6 \right) \nonumber\\ && \times \left(1-\frac{1}{12\alpha p(x)}+O(1/x^3)\right) \,d{\hat{x}}_k \,.\end{aligned}$$ For the expansion of $f({\hat{x}}_k)$ around ${\hat{x}}_k=x$, it is interesting to notice that $$f'(x)=O(1/x^2)\,, \qquad\text{and }f''(x)=O(1/x^3)\,.$$ In the integral, the factor containing $f'(x)$ is multiplied $({\hat{x}}_k-x)$ which after integration vanishes. Therefore, the relevant contributions to order $O(1/x^2)$ are $$\begin{aligned} n^{\text{hs}}(x)&=&\frac{n_b}{\sqrt{\pi}} \int_{-\infty}^{+\infty} e^{-\alpha p'(x) ({\hat{x}}_k-x)^2/x} \sqrt{\frac{\alpha p'(x)}{x}} \nonumber\\ &&\times\left(1+\frac{1}{3!} \Psi^{(3)}(x)({\hat{x}}_k-x)^3 +\frac{1}{4!}\Psi^{(4)}(x)({\hat{x}}_k-x)^4 +\frac{[\Psi^{(3)}(x)]^2}{3!^2\ 2}({\hat{x}}_k-x)^6 \right) \nonumber\\ && \times \left(1-\frac{1}{12\alpha p(x)}\right) \,d{\hat{x}}_k+O(1/x^3) \,.\end{aligned}$$ Then, performing the Gaussian integrals and replacing the dominant values of $\Psi(x)$ and its derivatives from Eqs. 
(\[eq:Psis\]) for $x\gg1$, we find $$n(x)=n_b \left(1+\frac{1}{12\alpha x^2}\right) \left(1-\frac{1}{12\alpha x^2}\right) +O(1/x^3) =n_b + O(1/x^3) \,.$$ In the bulk of the plasma, the density of particles equals the bulk density, as expected. The above calculation, based on the Laplace method, generates an expansion in powers of $1/x$ for the density. The first correction to the background density, in $1/x^2$, has been shown to be zero. We conjecture that this is probably true for any subsequent corrections in powers of $1/x$ if the expansion is pushed further, because the corrections to the bulk density are probably exponentially small, rather than in powers of $1/x$, due to the screening effects. In the following subsections, we consider the expansion of the density in other types of limits, in particular close to the boundaries, and the results suggest that our conjecture is true. ### Asymptotics of the density close to the boundary in the limit $x_m\to\infty$ We study here the density close to the boundary $x=x_m$ in the limit $x_m\to\infty$ and $M$ fixed. Since in this limit this region is almost flat, one would expect to recover the result for the OCP in a flat space near a wall [@Jancovici82]. Let $x=x_m-y$ where $y\ll x_m$ is of order 1. Using the dominant term of the asymptotics (\[eq:asympt-B\]), $$\begin{aligned} \label{eq:asympt-B-domin} \mathcal{B}_N(k)&=& \frac{1}{2n_b} \sqrt{\pi \alpha {\hat{x}}_k p'({\hat{x}}_k) }\, e^{2k\ln {\hat{x}}_k -\alpha h({\hat{x}}_k)}\left[1+{\mathop\text{erf}}\left( {\epsilon_k}\right)\right] \,,\end{aligned}$$ we have $$n^{\mathrm{hs}}(x)= \frac{2n_b}{\sqrt{\pi}} \sum_{k=0}^{N-1} \frac{e^{2k(\ln x-\ln {\hat{x}}_k)-\alpha [h(x)-h({\hat{x}}_k)]}}{ \sqrt{\alpha {\hat{x}}_k p' ({\hat{x}}_k)} \left[ 1+{\mathop\text{erf}}\left({\epsilon_k}\right) \right]}$$ where we recall that ${\hat{x}}_k=p^{-1}(k/\alpha)$. The exponential term in the sum has a maximum when ${\hat{x}}_k=x$ [*i.e.*]{} $k=k_{\max}=\alpha p(x)$, and since $x$ is close to $x_m\to\infty$, the function is very peaked near this maximum. Thus, we can use the Laplace method to compute the sum. Expanding the argument of the exponential up to order 2 in $k-k_{\max}$, we have $$n^{\mathrm{hs}}(x)=\frac{2n_b}{\sqrt{\pi}} \sum_{k=0}^{N-1} \frac{\exp\left[-\frac{2}{\alpha x p'(x)} (k-k_{\max})^2 \right]}{\sqrt{\alpha x p'(x)}\left[ 1+{\mathop\text{erf}}\left(\epsilon_k\right) \right] }$$ Now, replacing the sum by an integral over $t=\epsilon_k$ and replacing $x=x_m-y$, we find $$\label{eq:density-close-border} n^{\mathrm{hs}}(x)=\frac{2n_b}{\sqrt{\pi}} \int_0^{\infty} \frac{\exp\left[-(t-\sqrt{2\alpha}y)^2\right]}{1+{\mathop\text{erf}}(t)} \,dt \,.$$ Since both $x_m\to\infty$ and $x\to\infty$, in that region the space is almost flat. If $s$ is the geodesic distance from $x$ to the border, then we have $y\sim\sqrt{(\pi n_b/\alpha)}\, s$, and equation (\[eq:density-close-border\]) reproduces the result for the flat space [@Jancovici82], as expected. ### Density in the thermodynamic limit at fixed shape: $\alpha\to\infty$ and $x_m$ fixed. {#sec:density-canonic-fixed-shape} Using the expansion (\[eq:BN-fixed-shape\]) of $\mathcal{B}_N(k)$ for the fixed shape situation, we have $$n^{\mathrm{hs}}(x)=2n_b \sum_{k=0}^{N-1} \frac{e^{-\alpha [h(x)-2p({\hat{x}}_k)\ln x- h({\hat{x}}_k) + 2p({\hat{x}}_k) \ln {\hat{x}}_k]} }{ \sqrt{\alpha \pi {\hat{x}}_k p'({\hat{x}}_k)} \left[{\mathop\text{erf}}(\epsilon_{k,1})+{\mathop\text{erf}}(\epsilon_{k,m})\right] } \,.
\label{eq:density-fixed-shape-start1}$$ Once again, to evaluate this sum when $\alpha\to\infty$ it is convenient to use Laplace method. The argument of the exponential has a maximum when $k$ is such that ${\hat{x}}_k=x$. Transforming the sum into an integral over ${\hat{x}}_k$, and expanding the argument of the integral to order $({\hat{x}}_k-x)^2$, we have $$n^{\mathrm{hs}}(x)=\frac{2n_b\sqrt\alpha}{\sqrt{\pi}} \int_{1}^{x_m} \sqrt{\frac{p'({\hat{x}}_k)}{{\hat{x}}_k}} \frac{e^{-\alpha p'(x) (x-{\hat{x}}_k)^2/x} }{ {\mathop\text{erf}}(\epsilon_{k,1})+{\mathop\text{erf}}(\epsilon_{k,m}) } \,d{\hat{x}}_k \,. \label{eq:density-fixed-shape-start2}$$ Depending on the value of $x$ the result will be different, since we have to take special care of the different cases when the corresponding dominant values of ${\hat{x}}_k$ are close to the limits of integration or not. Let us first consider the case when $x-1$ and $x_m-x$ are of order one. This means we are interested in the density in the bulk of the system, far away from the boundaries. In this case, since $\epsilon_{k,1}$ and $\epsilon_{k,m}$, defined in (\[eq:epsilon\_km\]) and (\[eq:epsilon\_k1\]), are proportional to $\sqrt{\alpha}\to\infty$, then each error function in the denominator of (\[eq:density-fixed-shape-start2\]) converge to 1. Also, the dominant values of ${\hat{x}}_k$, close to $x$ (more precisely, $x-{\hat{x}}_k$ of order $1/\sqrt{\alpha}$), are far away from $1$ and $x_m$ (more precisely, ${\hat{x}}_k-1$ and $x_m-{\hat{x}}_k$ are of order 1). Then, we can extend the limits of integration to $-\infty$ and $+\infty$, and approximate ${\hat{x}}_k$ by $x$ in the term $p'({\hat{x}}_k)/{\hat{x}}_k$. The resulting Gaussian integral is easily performed, to find $$\label{eq:density-bulk} n(x)=n_b\,, \qquad \text{when $x-1$ and $x_m-x$ are of order 1.}$$ Let us now consider the case when $x-x_m$ is of order $1/\sqrt{\alpha}$, [*i.e.*]{} we study the density close to the boundary at $x_m$. In this case $\epsilon_{k,m}$ is of order 1 and the term ${\mathop\text{erf}}(\epsilon_{k,m})$ cannot be approximated to 1, whereas $\epsilon_{k,1}\propto\sqrt{\alpha}\to\infty$ and ${\mathop\text{erf}}(\epsilon_{k,1})\to 1$. The terms $p'({\hat{x}}_k)/{\hat{x}}_k$ and $p'(x)/x$ can be approximated to $p'(x_m)/x_m$ up to corrections of order $1/\sqrt{\alpha}$. Using $t=\epsilon_{k,m}$ as new variable of integration, we obtain $$\label{eq:density-border-xm} n^{\mathrm{hs}}(x)=\frac{2 n_b}{\sqrt{\pi}} \int_0^{+\infty} \frac{\exp\left[-\left(t-\sqrt{\frac{\alpha p'(x_m)}{x_m}}(x_m-x) \right)^2\right]}{1+{\mathop\text{erf}}(t)} \,dt\,, \quad \text{for $x_m-x$ of order }\frac{1}{\sqrt{\alpha}} \,.$$ In the case where $x-1$ is of order $1/\sqrt{\alpha}$, close to the other boundary, a similar calculation yields, $$\label{eq:density-border-1} n^{\mathrm{hs}}(x)=\frac{2 n_b}{\sqrt{\pi}} \int_0^{+\infty} \frac{\exp\left[-\left(t-\sqrt{\alpha p'(1)}(x-1) \right)^2\right]}{1+{\mathop\text{erf}}(t)} \,dt\,, \quad \text{for $x-1$ of order }\frac{1}{\sqrt{\alpha}} \,.$$ where $p'(1)=32$. Fig. \[fig:density-fixed-shape-hs\] compares the density profile for finite $N=100$ with the asymptotic results (\[eq:density-bulk\]), (\[eq:density-border-xm\]) and (\[eq:density-border-1\]). The figure show how the density tends to the background density, $n_{b}$, far from the boundaries. Near the boundaries it has a peak, eventually decreasing below $n_{b}$ when approaching the boundary. In the limit $\alpha\to\infty$, the value of the density at each boundary is $n_b\ln 2$. 
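The finite-$N$ curve shown in the figure below can be generated directly from (\[eq:density-Nfinite\]); the Python sketch that follows is illustrative only (the assumed closed form $p(x)=x^2+8x+12\ln x-8/x-1/x^2$ enters solely through the relation $\alpha=N/p(x_m)$) and evaluates $n^{\mathrm{hs}}(x)/n_b$ for the quoted parameters $N=100$ and $x_m=2$.

```python
import numpy as np
from scipy.integrate import quad

def h(x):
    return x**2 + 16*x + 16/x + 1/x**2 + 12*np.log(x)**2 - 34

def p(x):
    # assumed closed form, consistent with p'(x) = 2x(1+1/x)^4 and p(1) = 0
    return x**2 + 8*x + 12*np.log(x) - 8/x - 1/x**2

pprime = lambda x: 2*x*(1 + 1/x)**4

N, x_m = 100, 2.0
alpha = N / p(x_m)                         # ~ 4.15493, as quoted in the figure caption

# I_k = (n_b/alpha) B_N(k); the ratio n(x)/n_b then involves only alpha and I_k
I = [quad(lambda x, k=k: x**(2*k) * np.exp(-alpha*h(x)) * pprime(x), 1.0, x_m)[0]
     for k in range(N)]

def density_ratio(x):
    # n_hs(x)/n_b from eq. (density-Nfinite)
    return sum(x**(2*k) * np.exp(-alpha*h(x)) / (alpha * I[k]) for k in range(N))

for x in (1.0, 1.2, 1.5, 1.8, 2.0):
    print(x, density_ratio(x))
print("boundary value in the limit alpha -> infinity: ln 2 =", np.log(2))
```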
![The normalized one-body density $n^{\text{hs}}(x)/n_b$, for the 2dOCP on just one universe of the surface ${\cal S}$. The dashed line corresponds to a numerical evaluation, obtained from (\[eq:density-Nfinite\]), with $N=100$, $x_m=2$ and $\alpha=4.15493$. The full line corresponds to the asymptotic result in the fixed shape limit when $\alpha\to\infty$, and $x_m=2$ fixed. []{data-label="fig:density-fixed-shape-hs"}](density-fixed-shape-half-surface){width="\GraphicsWidth"} Interestingly, the results (\[eq:density-bulk\]), (\[eq:density-border-xm\]) and (\[eq:density-border-1\]) turn out to be the same as those for a flat space near a hard wall [@Jancovici82]. From the metric (\[eq:metric-in-x\]), we deduce that the geodesic distance to the boundary at $x_m$ is $s=M(x_m-x)\sqrt{p'(x_m)/(8x_m)}$ (when $x_m-x$ is of order $1/\sqrt{\alpha}$), and a similar expression holds for the distance to the boundary at $x=1$, replacing $x_m$ by 1. Then, in terms of the geodesic distance $s$ to the border, the results (\[eq:density-border-xm\]) and (\[eq:density-border-1\]) are exactly the same as those of an OCP in a flat space close to a plane hard wall [@Jancovici82], $$\label{eq:density-border-flat} n(s)=\frac{2 n_b}{\sqrt{\pi}} \int_0^{+\infty} \frac{\exp\left[-\left(t-s\sqrt{2\pi n_b} \right)^2\right]}{1+{\mathop\text{erf}}(t)} \,dt\,.$$ This result shows that there exists an interesting universality for the density: although we are considering a limit where curvature effects are important, the density turns out to be the same as the one for a flat space. The 2dOCP on the whole surface with potential $-\ln(|z-z'|/\sqrt{|zz'|})$ ------------------------------------------------------------------------- ### Partition function Until now we have studied the 2dOCP on just one universe. Let us find the thermodynamic properties of the 2dOCP on the whole surface ${\cal S}$. In this case, we also work in the canonical ensemble with a globally neutral system. The position $z_k=x_k e^{i\varphi_k}$ of each particle can be in the range $1/x_m<x_k<x_m$. The total number of particles $N$ is now expressed in terms of the function $p$ as $N=2\alpha p(x_m)$. Similar calculations to the ones of the previous section lead to the following expression for the partition function, when $\beta q^2=2$, $$Z^{\mathrm{ws}}=\frac{1}{\lambda^{2N}}Z_0^{\mathrm{ws}} \exp(-\beta F_0^{\mathrm{ws}})$$ now, with $$-\beta F_0^{\mathrm{ws}} = N b_0+N\alpha h(x_m)- \frac{N^2}{2} \ln x_m -\alpha^2 \int_{1/x_m}^{x_m} \frac{\left[p(x)\right]^2}{x}\,dx$$ and $$Z_0^{\mathrm{ws}}=\frac{1}{N!}\int \prod_{i=1}^N dS_{i}\, e^{-\alpha h(x_i)} x_{i}^{-N+1} \prod_{1\leq i<j \leq N} |z_i-z_j|^2 \,.$$ Expanding the Vandermonde determinant and performing the angular integrals, we find $$Z_0^{\mathrm{ws}}=\prod_{k=0}^{N-1} \tilde{\mathcal{B}}_N(k)$$ with $$\begin{aligned} \tilde{{\cal B}}_N(k)&=& \int x^{2k-N+1} e^{-\alpha h(x)}\,dS \\ &=& \frac{\alpha}{n}\int_{1/x_m}^{x_m} x^{2k-N+1} e^{-\alpha h(x)} p'(x)\,dx \,. \label{gamma-tilde}\end{aligned}$$ The function $\tilde{\mathcal{B}}_N(k)$ is very similar to $\mathcal{B}_{N}$, and its asymptotic behavior for large values of $N$ can be obtained by the Laplace method as explained in appendix \[app:gamma\].
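One consequence of the $x\to 1/x$ symmetry of the integrand (not spelled out above, but easy to derive from $h(x)=h(1/x)$ and $p'(1/x)=x^2 p'(x)$) is that $\tilde{\mathcal{B}}_N(k)=\tilde{\mathcal{B}}_N(N-1-k)$; it is the same invariance that underlies the relation ${\hat{x}}_{-\ell}=1/{\hat{x}}_{\ell}$ used below. The short Python sketch that follows (illustrative only) checks this identity by quadrature for a small $N$.

```python
import numpy as np
from scipy.integrate import quad

def h(x):
    return x**2 + 16*x + 16/x + 1/x**2 + 12*np.log(x)**2 - 34

pprime = lambda x: 2*x*(1 + 1/x)**4

def B_tilde(k, N, alpha, x_m, n=1.0):
    # eq. (gamma-tilde), integrated over the whole surface 1/x_m < x < x_m
    val, _ = quad(lambda x: x**(2*k - N + 1) * np.exp(-alpha*h(x)) * pprime(x),
                  1.0/x_m, x_m)
    return alpha * val / n

N, alpha, x_m = 6, 0.8, 3.0                # arbitrary illustrative values
for k in range(N):
    assert np.isclose(B_tilde(k, N, alpha, x_m),
                      B_tilde(N - 1 - k, N, alpha, x_m))   # x -> 1/x symmetry
```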
### Thermodynamic limit $R\to\infty$, $x_m\to\infty$, and fixed $M$ Writing the partition function as $$\ln Z_0^{\mathrm{ws}} = \sum_{k=0}^{N} \ln \tilde{\mathcal{B}}_N(k) -\ln \tilde{\mathcal{B}}_N(N) \,,$$ and using the asymptotic expansion (\[eq:asympt-tildeBN\]) for $\tilde{\mathcal{B}}_N$, we have $$\begin{aligned} \ln Z_0^{\mathrm{ws}}&=& -\ln\frac{n_b}{\sqrt{2\pi}} +S_1^{\mathrm{ws}}+S_2^{\mathrm{ws}}+S_3^{\mathrm{ws}} +S_4^{\mathrm{ws}}+S_5^{\mathrm{ws}} -\ln\left[\sqrt{\alpha}\, x_m \left(1+\frac{1}{x_m}\right)^2 \right] \nonumber\\ &&-\ln x_m -N\ln x_m +\alpha h(x_m)\end{aligned}$$ where $$\begin{aligned} S_1^{\mathrm{ws}} &=&\sum_{k=0}^{N} \ln \left[\sqrt{\alpha}\,{\hat{x}}_{k-\frac{N}{2}} \left(1+\frac{1}{{\hat{x}}_{k-\frac{N}{2}}}\right)^2\right]\\ S_2^{\mathrm{ws}} &=&\sum_{k=0}^{N} 2\left(k-\frac{N}{2}\right)\ln {\hat{x}}_{k-\frac{N}{2}} -\alpha h({\hat{x}}_{k-\frac{N}{2}})\\ S_3^{\mathrm{ws}}&=&\sum_{k=0}^N \ln \frac{{\mathop\text{erf}}(\epsilon_{k,\min})+{\mathop\text{erf}}(\epsilon_{k,\max})}{2} \\ S_4^{\mathrm{ws}}&=&\sum_{k=0}^{N} \ln {\hat{x}}_{k-\frac{N}{2}} \\ S_5^{\mathrm{ws}}&=&\sum_{k'=1}^{N/2} \left(\frac{1}{12}+\frac{3}{8}\right)\frac{1}{|k'|} + \sum_{k'=-N/2}^{-1} \left(\frac{1}{12}-\frac{1}{8}\right)\frac{1}{|k'|} =\frac{5}{6}\ln x_m + O(1)\end{aligned}$$ and $\epsilon_{k,\min}$ and $\epsilon_{k,\max}$ are defined in equation (\[eq:epsilons-min-max\]). Notice that $S_4^{\text{ws}}=0$ due to the symmetry relation ${\hat{x}}_{-\ell}=1/{\hat{x}}_{\ell}$, therefore only the sums $S_1^{\text{ws}}$, $S_2^{\text{ws}}$, $S_3^{\text{ws}}$ and $S_5^{\text{ws}}$ contribute to the result. These sums are similar to the ones defined for the half surface case, with the difference that the running index $k'=k-N/2$ varies from $-N/2$ to $N/2$ instead of $0$ to $N$ as in the half surface case. This difference is important when considering the remainder terms in the Euler-McLaurin expansion, because now both terms for $k'=-N/2$ and $k'=N/2$ are important in the thermodynamic limit. In the half surface case only the contribution for $k=N$ was important in the thermodynamic limit. The asymptotic expansion of each sum, for $x_m\to\infty$, is now $$\begin{aligned} S_1^{\mathrm{ws}}&=&\frac{N}{2}\ln \alpha + x_m^2(2\ln x_m-1) +2x_m(8\ln x_m-4) +(28\alpha+1) \ln x_m+12\alpha(\ln x_m)^2 +O(1) \nonumber\\ \\ S_2^{\mathrm{ws}}&=&\frac{N^2}{2}\ln x_m +\alpha^2 \int_{1/x_m}^{x_m} \frac{\left[p(x)\right]^2}{x}\,dx -\alpha N h(x_m) +N\ln x_m -\alpha h(x_m) +\frac{1}{3}\ln x_m +O(1) \nonumber\\ \\ S_3^{\mathrm{ws}}&=&-2 x_m \sqrt{\frac{4\pi\alpha}{n_b}}\, \beta\gamma_{\text{hard}} +O(1)\end{aligned}$$ where $\gamma$ is defined in equation (\[app:gam\]). The free energy is given by $\beta F^{\mathrm{ws}}=-\ln Z^{\mathrm{ws}}$, with $$\begin{aligned} \ln Z^{\mathrm{ws}}&=&2\alpha x_m^2 \ln x_m+ N \left(b_0+ \ln\frac{\sqrt{2\pi\alpha}}{\lambda^2 n_b}\right) -\alpha x_m^2 +8\alpha x_m (2 \ln x_m -1) -2\mathcal{C}_R \,\beta \gamma_{\text{hard}} \nonumber\\ && +12 \alpha (\ln x_m)^2+28 \alpha \ln x_m +\frac{1}{6}\ln x_m +O(1)\,.\end{aligned}$$ We notice that the free energy for this system turns out to be nonextensive with a term $2x_m^2 \ln x_m$. This is probably due to the special form of the potential $-\ln(|z-z'|/\sqrt{|zz'|})$: the contribution from the denominator in the logarithm can be written as a one-body term $[(N-1)/2]\ln x $, which is not intensive but extensive. 
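The one-body rewriting invoked here is elementary: in the pair potential $-\ln(|z_i-z_j|/\sqrt{|z_iz_j|})$, the denominators contribute $\sum_{i<j}\frac{1}{2}(\ln|z_i|+\ln|z_j|)=\frac{N-1}{2}\sum_i\ln|z_i|$, i.e. a term $\frac{N-1}{2}\ln x$ per particle. A short numerical check (purely illustrative):

```python
import numpy as np

rng = np.random.default_rng(2)
N = 6
z = rng.uniform(1.0, 3.0, N) * np.exp(1j * rng.uniform(0, 2*np.pi, N))
pair = sum(0.5 * (np.log(abs(z[i])) + np.log(abs(z[j])))
           for i in range(N) for j in range(i + 1, N))
one_body = (N - 1) / 2 * np.sum(np.log(np.abs(z)))
assert np.isclose(pair, one_body)    # sum over pairs = (N-1)/2 times the one-body sum
```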
However, this nonextensivity of the final result is mild, and can be cured by choosing the arbitrary additive constant $b_0$ of the Coulomb potential as $b_0=-\ln (M x_m)+\text{constant}$. ### Thermodynamic limit at fixed shape: $\alpha\to\infty$ and $x_m$ fixed For this situation, we use the asymptotic behavior (\[eq:tildeBN-fixed-shape\]) of $\tilde{\mathcal{B}}_N$ $$\ln Z_0^{\mathrm{ws}}= N \ln \frac{\sqrt{\pi \alpha}}{n_b}+ S_1^{\mathrm{ws,fixed}}+S_2^{\mathrm{ws,fixed}} +S_3^{\mathrm{ws,fixed}}+S_4^{\mathrm{ws,fixed}}$$ where, now $$\begin{aligned} S_1^{\mathrm{ws,fixed}} &=& \frac{1}{2}\sum_{k=0}^{N-1} \ln [{\hat{x}}_{k-\frac{N}{2}} p'({\hat{x}}_{k-\frac{N}{2}})] \\ S_2^{\mathrm{ws,fixed}} &=& -\alpha\sum_{k=0}^{N-1} [h({\hat{x}}_{k-\frac{N}{2}})-2p({\hat{x}}_{k-\frac{N}{2}}) \ln {\hat{x}}_{k-\frac{N}{2}}]\\ S_3^{\mathrm{ws,fixed}} &=& \sum_{k=0}^{N-1}\ln \frac{{\mathop\text{erf}}(\epsilon_{k,\min})+ {\mathop\text{erf}}(\epsilon_{k,\max})}{2}\\ S_4^{\mathrm{ws,fixed}}&=&\sum_{k=0}^{N-1} \ln {\hat{x}}_{k-\frac{N}{2}}\end{aligned}$$ These sums can be computed as earlier using Euler-McLaurin summation formula. We notice that $$S_4^{\mathrm{ws,fixed}} =\alpha \int_{1/x_m}^{x_m} \ln x\, p'(x)\,dx + O(1) =0+ O(1)$$ because of the symmetry properties $\ln(1/x)=-\ln x$ and $p'(1/x)d(1/x)=-p'(x)dx$. In the computation of $S_2^{\mathrm{ws,fixed}}$ there is an important difference with the case of the half surface section, due to the contribution when $k=0$, since ${\hat{x}}_{-N/2}=1/{\hat{x}}_{N/2}=1/x_m$ $$\begin{aligned} S_2^{\mathrm{ws,fixed}}=-\alpha N h(x_m) - \frac{N^2}{2}\ln x_m +\alpha^2 \int_{1/x_m}^{x_m} \frac{\left[p(x)\right]^2}{x}\,dx +O(1) \,.\end{aligned}$$ There is no $O(\alpha)$ contribution from $S_2^{\mathrm{ws,fixed}}$. Finally, the free energy $\beta F^{\mathrm{ws}}=-\ln Z^{\mathrm{ws}}$ is given by $$\begin{aligned} \ln Z^{\mathrm{ws}} &=& \alpha \left[ -2p(x_m) \left(\ln \frac{\sqrt{2\pi \alpha}}{\lambda^2 n_b}+b_0\right) +\int_{1/x_m}^{x_m} \frac{(1+x)^4}{x^3}\,\ln\frac{(x+1)^4}{x^2}\,dx \right] \nonumber\\ && -2\sqrt{\frac{4\pi\alpha}{n_b}}\, x_m \left(1+\frac{1}{x_m}\right)^2 \beta\gamma_{\text{hard}} +O(1) \label{eq:free-energy-fixed-shape-full-surface}\end{aligned}$$ We notice that the free energy has again a nonextensive term proportional to $\alpha\ln {\alpha}$, but, once again, it can be cured by choosing the constant $b_0$ as $b_0=-\ln( M x_m)+\text{constant}$. The perimeter correction, $2\mathcal{C}_R\beta\gamma_{\text{hard}}$, proportional to $\sqrt{\alpha}$, has the same form as for the half surface case, with equal contributions from each boundary at $x=1/x_m$ and $x=x_m$. Once again, there is no $\ln \alpha$ correction in agreement with the general theory of Ref. [@Jancovici94; @Jancovici96] and the fact that the Euler characteristic of this manifold is $\chi=0$. 
### Density The density is now given by $$n^{\mathrm{ws}}(x)=\sum_{k=0}^{N-1} \frac{x^{2k-N+1}\,e^{-\alpha h(x)}}{\tilde{\mathcal{B}}_N(k)}$$ Due to the fact that the asymptotic behavior of $\tilde{\mathcal{B}}_N(k)$ is almost the same as the one of $\mathcal{B}_N(k')$ with $k'=|k-\frac{N}{2}|$, the behavior of the density turn out to be the same as for the half surface case, in the thermodynamic limit $\alpha\to\infty$, $x_m$ fixed, $$n(x)=n_b\,, \qquad\text{in the bulk, ie., when $x-x_m$ and $x-\frac{1}{x_m}$ are of order 1.}$$ And, close to the boundaries, $x\to x_b$ with $x_b=x_m$ or $x_b=1/x_m$, $$\label{eq:density-border-xb} n(x)=\frac{2 n_b}{\sqrt{\pi}} \int_0^{+\infty} \frac{\exp\left[-\left(t-\sqrt{\frac{\alpha p'(x_b)}{x_b}}|x-x_b|) \right)^2\right]}{1+{\mathop\text{erf}}(t)} \,dt\,, \quad \text{for $x_b-x$ of order }\frac{1}{\sqrt{\alpha}} \,.$$ If the result is expressed in terms of the geodesic distance $s$ to the border, we recover, once again, the result of the OCP in a flat space near a hard wall (\[eq:density-border-flat\]). The 2dOCP on the half surface with potential $-\ln(|z-z'|/\sqrt{|zz'|})$ ------------------------------------------------------------------------ ### Partition function In this case, we have $N=\alpha p(x_m)$. Following similar calculations to the ones of the previous cases, we find that the partition function, at $\beta q^2=2$, is $$Z^{\overline{\mathrm{hs}}}=Z_0^{\overline{\mathrm{hs}}} e^{-\beta F_0^{\overline{\mathrm{hs}}}}$$ with $$-\beta F_0^{\overline{\mathrm{hs}}} = \alpha^2 p(x_m) h(x_m) - p(x_m)^2\ln x_m +\int_1^{x_m} \frac{\left[p(x)\right]^2}{x}\,dx -Nb_0$$ and $$Z_0^{\overline{\mathrm{hs}}}=\prod_{k=0}^{N-1} \hat{\mathcal{B}}_N(k)$$ with $$\hat{\mathcal{B}}_N(k)=\frac{\alpha}{n_b}\int_1^{x_m} x^{2k+1}e^{-\alpha h(x)}\,dx$$ ### Thermodynamic limit $R\to\infty$, $x_m\to\infty$, and fixed $M$ The asymptotic expansion of $\hat{\mathcal{B}}_{N}(k)$ is obtained from equation (\[eq:asympt-tildeBN\]) replacing $k'$ by $k$ and considering only the case $k>0$. As explained in appendix \[app:gamma\], the main difference with the other half surface case (section \[sec:half-surface-1\]), is an additional term ${\hat{x}}_k$ in each factor of the partition function and the additional term $(3/(8k))$ in the expansion (\[eq:asympt-tildeBN\]). Therefore, the partition function can be obtained from the one of the half surface with potential $-\ln|z-z'|$ by adding the terms $$\begin{aligned} S_4^{\overline{\mathrm{hs}}}&=&\sum_{k=0}^{N-1} \ln {\hat{x}}_k\,,\\ S_5^{\overline{\mathrm{hs}}} &=&\sum_{k=1}^{N-1} \frac{3}{8k}=\frac{3}{8}\ln N+O(1)=\frac{3}{4}\ln x_m +O(1)\,.\end{aligned}$$ Using Euler-McLaurin expansion, we have $$\begin{aligned} S_4^{\overline{\mathrm{hs}}}&=&\sum_{k=0}^{N} \ln {\hat{x}}_k -\ln x_m \nonumber\\ &=&\int_1^{x_m} \alpha p'(x)\ln x\,dx +\frac{1}{2}\ln x_m - \ln x_m+ O(1) \nonumber\\ &=& \alpha p(x_m)\ln x_m -\alpha \int_1^{x_m} \frac{p(x)}{x}\,dx -\frac{1}{2} \ln x_m+O(1) \nonumber\\ &=& \alpha p(x_m)\ln x_m - \frac{1}{2}\alpha h(x_m) -\frac{1}{2}\ln x_m+ O(1) \,,\end{aligned}$$ where we used the property (\[eq:p-h-deriv\]). 
Finally, $$\begin{aligned} \ln Z^{\overline{\mathrm{hs}}}&=&\alpha x_m^2 \ln x_m+ N \left(b_0+ \ln\frac{\sqrt{2\pi\alpha}}{\lambda^2 n_b}\right) -\frac{\alpha}{2} x_m^2 +4\alpha x_m (2 \ln x_m -1) \nonumber\\ && - \mathcal{C}_R\,\beta \gamma_{\text{hard}} +6 \alpha (\ln x_m)^2+14 \alpha \ln x_m +\frac{1}{12}\ln x_m +O(1)\,.\end{aligned}$$ The result is one-half of the one for the full surface, $\ln Z^{\mathrm{ws}}$, as might be expected. ### Thermodynamic limit at fixed shape: $\alpha\to\infty$ and $x_m$ fixed For this case, the asymptotics of $\hat{\mathcal{B}}_N$ are very similar to those of $\mathcal{B}_N$ from equation (\[eq:BN-fixed-shape\]) $$\hat{\mathcal{B}}_N(k)\sim {\hat{x}}_k \mathcal{B}_N(k) \,.$$ Therefore, the only difference between the calculations of the half surface case with potential $-\ln|z-z'|+\text{constant}$ and the present case is the sum $$S_4^{\overline{\mathrm{hs}},\mathrm{fixed}}=\sum_{k=0}^{N-1} \ln {\hat{x}}_k\,.$$ We have $$\begin{aligned} S_4^{\overline{\mathrm{hs}},\mathrm{fixed}} &=&\int_1^{x_m} \alpha p'(x)\ln x\,dx + O(1) \nonumber\\ &=& \alpha p(x_m)\ln x_m - \frac{1}{2}\alpha h(x_m)+ O(1) \,.\end{aligned}$$ Here, the term $k=N$ and the remainder of the Euler-McLaurin expansion give corrections of order $O(\alpha^0)=O(1)$, as opposed to the previous section where they gave contributions of order $O(\ln x_m)$. Finally, we find $$\begin{aligned} \ln Z^{\overline{\mathrm{hs}}} &=& \alpha \left[ p(x_m)\left( \frac{1}{2}\ln\frac{\sqrt{2\alpha \pi}}{n_b}+b_0 \right) + \int_{1}^{x_m} \frac{(1+x)^4}{x^3}\ln\frac{(1+x)^4}{x^2}\,dx\right] \nonumber\\ && -\sqrt{\frac{4\pi\alpha}{n_b}} \left[x_m\left(1+\frac{1}{x_m}\right)^2+4\right]\beta\gamma_{\text{hard}} +O(1)\,.\end{aligned}$$ The bulk free energy, proportional to $\alpha$, plus the nonextensive term proportional to $\alpha\ln\alpha$, are one-half of those from equation (\[eq:free-energy-fixed-shape-full-surface\]) for the full surface case, as expected. The perimeter contribution, proportional to $\sqrt{\alpha}$, is again the same as in all the previous cases of thermodynamic limit at fixed shape, i.e. a contribution $\beta \gamma_{\text{hard}} \mathcal{C}_b$ for each boundary at $x_b=x_m$ and at $x_b=1$ with perimeter $\mathcal{C}_b$ (\[eq:perimeter\]). Once again, there is no $\ln \alpha$ correction, in agreement with the fact that the Euler characteristic of this manifold is $\chi=0$. The grounded horizon case ------------------------- ### Grand canonical partition function In order to find the partition function for the system in the half surface, with a metallic grounded boundary at $x=1$, when the charges interact through the pair potential of Eq. (\[ghgreen\]), it is convenient to work in the grand canonical ensemble instead, and use the techniques developed in Refs. [@Forrester85; @Jancovici96]. We consider a system with a fixed background density $\rho_b$. The fugacity $\tilde{\zeta}=e^{\beta \mu}/\lambda^2$, where $\mu$ is the chemical potential, controls the average number of particles $\langle N\rangle$, and in general the system is nonneutral, $\langle N\rangle \neq N_b$, where $N_b=\alpha p(x_m)$. The excess charge is expected to be found near the boundaries at $x=1$ and $x=x_m$, while in the bulk the system is expected to be locally neutral. In order to avoid the collapse of a particle into the metallic boundary, due to its attraction to the image charges, we confine the particles to be in a “disk” domain $\tilde{\Omega}_R$, where $x\in[1+w,x_m]$.
We introduced a small gap $w$ between the metallic boundary and the domain containing the particles, the geodesic width of this gap is $W=\sqrt{\alpha p'(1)/(2\pi n_b)}\,w$. On the other hand, for simplicity, we consider that the fixed background extends up to the metallic boundary. In the potential energy of the system (\[eq:hamiltonian-gen\]) we should add the self energy of each particle, that is due to the fact that each particle polarizes the metallic boundary, creating an induced surface charge density. This self energy is $\frac{q^2}{2}\ln [|x^2-1|M/2L]$, where the constant $\ln (M/2L)$ has been added to recover, in the limit $M\to0$, the self energy of a charged particle near a plane grounded wall in flat space. The grand partition function, when $\beta q^2=2$, is $$\Xi=e^{-\beta F_0^{\text{gh}}} \left[1+\sum_{N=1}^{\infty} \frac{\zeta^N}{N!} \int \prod_{i=1}^N dS_i \prod_{i<j}\left|\frac{z_i-z_j}{1-z_i \bar{z}_j}\right|^2 \prod_{i=1}^{N} \left| |z_i|^2-1\right|^{-1} \prod_{i=1}^{N} e^{-\alpha [ h(x_i)-2N_b \ln x_i ]} \right]$$ where for $N=1$ the product $\prod_{i<j}$ must be replaced by 1. The domain of integration for each particle is $\tilde{\Omega}_R$. We have defined a rescaled fugacity $\zeta=2L\tilde{\zeta }/M$ and $$-\beta F_0^{\text{gh}}=\alpha N_b h(x_m) - N_b^2\ln x_m -\alpha^2 \int_{1}^{x_m} \frac{[p(x)]^2}{x}\,dx$$ which is very similar to $F_0^{\text{hs}}$, except that here $N_b=\alpha p(x_m)$ is not equal to $N$ the number of particles. Let us define a set of reduced complex coordinates $u_i=z_i$ and its corresponding images $u_i^*=1/\bar{z}_i$. By using Cauchy identity $$\begin{aligned} \label{Cauchy} \det \left( \frac{1}{u_i-u_j^*} \right)_{(i,j)\in\{1,\cdots,N\}^2} = (-1)^{N(N-1)/2}\: \frac{\prod_{i<j} (u_i-u_j)(u^*_i-u^*_j)}{\prod_{i,j} (u_i-u_j^*)}\end{aligned}$$ the particle-particle interaction and self energy terms can be cast into the form $$\begin{aligned} \prod_{i<j}\left| \frac{z_i-z_j}{1-z_i \bar{z}_j} \right|^2 \prod_{i=1}^N \left(|z_i|^2-1\right)^{-1} =(-1)^{N} \det \left( \frac{1}{1-z_i \bar{z}_j} \right)_{(i,j)\in\{1,\cdots,N\}^2} \,.\end{aligned}$$ The grand canonical partition function is then $$\begin{aligned} \label{eq:GrandPart-prelim} \Xi= e^{-\beta F_0^{\text{gh}}}\left[1+\sum_{N=1}^{\infty} \frac{1}{N!} \int \prod_{i=1}^N dS_i \prod_{i=1}^{N} \left[-\zeta(x_i)\right]\,\det \left( \frac{1}{1-z_i \bar{z}_j} \right) \right] \,,\end{aligned}$$ with $\zeta(x)=\zeta e^{-\alpha[h(x)-2N_b \ln x]}$. We shall now recall how this expression can be reduced to a Fredholm determinant [@Forrester85]. Let us consider the Gaussian partition function $$\label{eq:part-fun-Grass-libre} Z_0=\int {\cal D}\psi {\cal D}\bar{\psi} \,\exp\left[\int \bar{\psi}({{\bf q}}) A^{-1}(z,\bar{z}') \psi({{\bf q}}')\, dS\, dS' \right] \,.$$ The fields $\psi$ and $\bar{\psi}$ are anticommuting Grassmann variables. The Gaussian measure in (\[eq:part-fun-Grass-libre\]) is chosen such that its covariance is equal to $$\left<\bar{\psi}({{\bf q}}_i)\psi({{\bf q}}_j)\right> = A(z_i,\bar{z}_j)=\frac{1}{1-z_i \bar{z}_j}$$ where $\langle\ldots\rangle$ denotes an average taken with the Gaussian weight of (\[eq:part-fun-Grass-libre\]). 
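The Cauchy identity (\[Cauchy\]) that underlies this determinant rewriting can be verified numerically for small $N$; the Python sketch below (illustrative, with $u_i$ and $u_i^*$ taken as independent random points rather than the specific images $1/\bar{z}_i$) does exactly that, before we return to the Grassmann representation.

```python
import numpy as np

rng = np.random.default_rng(3)
N = 4
u = rng.normal(size=N) + 1j * rng.normal(size=N)      # u_i
us = rng.normal(size=N) + 1j * rng.normal(size=N)     # u_i^* (independent points here)

lhs = np.linalg.det(1.0 / (u[:, None] - us[None, :]))
num = np.prod([(u[i] - u[j]) * (us[i] - us[j])
               for i in range(N) for j in range(i + 1, N)])
den = np.prod(u[:, None] - us[None, :])
rhs = (-1)**(N*(N - 1)//2) * num / den
assert np.isclose(lhs, rhs)                           # Cauchy identity, eq. (Cauchy)
```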
By construction we have $$\label{Z_0} Z_0=\det(A^{-1})$$ Let us now consider the following partition function $$Z=\int {\cal D}\psi {\cal D}\bar{\psi} \exp\left[\int \bar{\psi}({{\bf q}}) A^{-1}(z,\bar{z}') \psi({{\bf q}}') dS dS' -\int \zeta(x) \bar{\psi}({{\bf q}})\psi({{\bf q}}) \,dS \right]$$ which is equal to $$Z=\det(A^{-1}-\zeta)$$ and then $$\label{Z/Z_0} \frac{Z}{Z_0}=\det[A(A^{-1}-\zeta)]=\det(1+K)$$ where $K$ is an integral operator (with integration measure $dS$) with kernel $$\label{K} K({{\bf q}},{{\bf q}}')=-\zeta(x')\, A(z,\bar{z}')= -\frac{\zeta(x')}{1-z\bar{z}'} \,.$$ Expanding the ratio $Z/Z_0$ in powers of $\zeta$ we have $$\label{eq:expans-ZZ0} \frac{Z}{Z_0}= 1+ \sum_{N=1}^{\infty} \frac{1}{N!} \int \prod_{i=1}^N dS_i (-1)^{N}\prod_{i=1}^N \zeta(x_i) \left<\bar{\psi}({{\bf q}}_1)\psi({{\bf q}}_1)\cdots \bar{\psi}({{\bf q}}_N)\psi({{\bf q}}_N)\right>$$ Now, using Wick theorem for anticommuting variables [@ZinnJustin], we find that $$\label{eq:WickFerms} \left<\bar{\psi}({{\bf q}}_1)\psi({{\bf q}}_1)\cdots \bar{\psi}({{\bf q}}_N)\psi({{\bf q}}_N)\right> =\det A(z_i,\bar{z}_j)=\det\left(\frac{1}{1-z_i \bar{z}_j}\right)$$ Comparing equations (\[eq:expans-ZZ0\]) and (\[eq:GrandPart-prelim\]) with the help of equation (\[eq:WickFerms\]) we conclude that $$\label{eq:Xi-det} \Xi=e^{-\beta F_0^{\text{gh}}}\,\frac{Z}{Z_0}=e^{-\beta F_0^{\text{gh}}}\det(1+K)$$ The problem of computing the grand canonical partition function has been reduced to finding the eigenvalues $\lambda$ of the operator $K$. The eigenvalue problem for $K$ reads $$\label{eq:vpK} -\int_{\tilde{\Omega}_R} \frac{\zeta(x')} { 1-z\bar{z}'}\, \Phi(x',\varphi') dS' = \lambda \Phi(x,\varphi)$$ For $\lambda\neq 0$ we notice from equation (\[eq:vpK\]) that $\Phi(x,\varphi)=\Phi(z)$ is an analytical function of $z=xe^{i\varphi}$ in the region $|z|>1$. Because of the circular symmetry, it is natural to try $\Phi(z)=\Phi_{\ell}(z)=z^{-\ell}$ with $\ell\ge 1$ a positive integer. Expanding $$\frac{1}{1-z\bar{z}'}= -\sum_{n=1}^{\infty}\left(z\bar{z}'\right)^{-n}$$ and replacing $\Phi_{\ell}(z)=z^{-\ell}$ in equation (\[eq:vpK\]), we show that $\Phi_{\ell}$ is indeed an eigenfunction of $K$ with eigenvalue $$\label{eq:lambda-vp-de-K} \lambda_{\ell}= \zeta \mathcal{B}_{N_b}^{\text{gh}}(N_b-\ell)$$ where $$\label{eq:BNgh} \mathcal{B}_{N_b}^{\text{gh}}(k)=\frac{\alpha}{n_b}\int_{1+w}^{x_m} x^{2k} e^{- \alpha h(x)}\,p'(x)\,dx$$ which is very similar to $\mathcal{B}_N$ defined in Eq. (\[gamma\]), except for the small gap $w$ in the lower limit of integration. So, we arrive to the result for the grand potential $$\label{eq:grand-potential-somme} \beta\Omega = -\ln\Xi = \beta F_0^{\text{gh}} - \sum_{\ell=1}^{\infty} \ln\left[ 1+\zeta {\cal B}_{N_b}^{\text{gh}}(N_b-\ell) \right]\,.$$ ### Thermodynamic limit at fixed shape: $\alpha\to\infty$ and $x_m$ fixed Let us define $k=N_b-\ell$ for $\ell\in\mathbb{N}^{*}$, thus $k$ is positive, then negative when $\ell$ increases. Therefore, it is convenient to split the sum (\[eq:grand-potential-somme\]) in $\ln\Xi$ into two parts $$\begin{aligned} \label{eq:S6gh} S_{6}^{\text{gh,fixed}} &=& \sum_{k=-\infty}^{-1} \ln[1+\zeta \mathcal{B}_{N_b}^{\text{gh}}(k)] \\ \label{eq:S7gh} S_{7}^{\text{gh,fixed}}&=&\sum_{k=0}^{N_b-1} \ln[1+\zeta \mathcal{B}_{N_b}^{\text{gh}}(k)]\,.\end{aligned}$$ The asymptotic behavior of $\mathcal{B}_{N_b}^{\text{gh}}(k)$ when $\alpha\to\infty$ can be directly deduced from the one of $\mathcal{B}_N$ found in appendix \[app:gamma\], Eq. 
(\[eq:BN-fixed-shape\]), taking into account the small gap $w$ near the boundary at $x=1+w$. When $k<0$, we have ${\hat{x}}_k<1$, then we notice that $\epsilon_{k,1}$ defined in (\[eq:epsilon\_k1\]) is negative, and that the relevant contributions to the sum $S_6^{\text{gh,fixed}}$ are obtained when $k$ is close to 0, more precisely $k$ of order $O(\sqrt{N_b})$. So, we expand ${\hat{x}}_k$ around ${\hat{x}}_k=1$ up to order $({\hat{x}}_k-1)^2$ in the exponential term $e^{-\alpha[h({\hat{x}}_k)-2p({\hat{x}}_k)\ln {\hat{x}}_k]}$ from Eq. (\[eq:BN-fixed-shape\]). Then, we have, for $k<0$ of order $O(\sqrt{N_b})$, $$\label{eq:BNgh-asympt} \mathcal{B}_{N_b}^{\text{gh}}(k)= \frac{\sqrt{\alpha \pi p'(1)}}{2 n_b} e^{\alpha p'(1)\,(1-{\hat{x}}_k)^2} {\mathop\text{erfc}}[\sqrt{\alpha p'(1)}\,(1+w-{\hat{x}}_k)]$$ where ${\mathop\text{erfc}}(u)=1-{\mathop\text{erf}}(u)$ is the complementary error function. Then, up to corrections of order $O(1)$, the sum $S_6^{\text{gh,fixed}}$ can be transformed into an integral over the variable $t=\sqrt{\alpha p'(1)}\,(1-{\hat{x}}_k)$, to find $$S_6^{\text{gh,fixed}}=\sqrt{\alpha p'(1)}\int_{0}^{\infty} \ln\left[ 1+\frac{\zeta \sqrt{\alpha\pi p'(1)}}{2n_b} e^{t^2}\,{\mathop\text{erfc}}\left(t+\sqrt{2\pi n_b} W\right) \right]\,dt+O(1) \,.$$ Let $\mathcal{C}_1=\sqrt{2\pi\alpha p'(1)/n_b}$, be total length of the boundary at $x=1$. We notice that $$\zeta\frac{\sqrt{\alpha \pi p'(1)}}{2n_b} =\frac{\zeta \mathcal{C}_{1}}{\sqrt{2n_b}} =\frac{2\tilde{\zeta}L}{\sqrt{2n_b}} \frac{\mathcal{C}_{1}}{M}$$ is fixed and of order $O(1)$ in the limit $M\to\infty$, since in the fixed shape limit $\mathcal{C}_{1}/M$ is fixed. Therefore $S_6^{\text{gh,fixed}}$ gives a contribution proportional to the perimeter $\mathcal{C}_1$. For $S_7^{\text{gh,fixed}}$, we define $$\tilde{\epsilon}_{k,1}=\sqrt{\alpha p'(1)}\,(1+w-{\hat{x}}_k) \,,$$ and we write $$\begin{aligned} S_7^{\text{gh,fixed}}&=&\sum_{k=0}^{N_b-1} \ln \left[ 1+\frac{\zeta\sqrt{\alpha \pi {\hat{x}}_k p'({\hat{x}}_k)}}{2n_b} e^{-\alpha[h({\hat{x}}_k)-2p({\hat{x}}_k)\ln {\hat{x}}_k]} \left[{\mathop\text{erf}}(\tilde{\epsilon}_{k,1})+{\mathop\text{erf}}({\epsilon}_{k,m}) \right] \right] \nonumber\\ &=& S_8^{\text{gh,fixed}} + S_{1}^{\text{hs,fixed}} +S_{2}^{\text{hs,fixed}}+N_b\ln \frac{\zeta \sqrt{\alpha\pi}}{n_b} \end{aligned}$$ where $$S_8^{\text{gh,fixed}}=\sum_{k=0}^{N_b-1} \ln\left[ \frac{n_b e^{\alpha[h({\hat{x}}_k)-2p({\hat{x}}_k)\ln {\hat{x}}_k]}}{\zeta\sqrt{\alpha\pi {\hat{x}}_k p'({\hat{x}}_k)}} +\frac{1}{2}\left[ {\mathop\text{erf}}(\tilde{\epsilon}_{k,1})+{\mathop\text{erf}}(\epsilon_{k,m}) \right] \right]$$ and we see that the sums $S_{1}^{\text{hs,fixed}}$ and $S_{2}^{\text{hs,fixed}}$ reappear. These are defined in equations (\[eq:S1-hs-fixed\]) and (\[eq:S2-hs-fixed\]) and computed in (\[eq:S1-hs-fixed-asympt\]) and (\[eq:S2-hs-fixed-asympt\]). In a similar way to $S_6^{\text{gh,fixed}}$, $S_8^{\text{gh,fixed}}$ gives only boundary contributions when $k$ is close to 0, of order $\sqrt{N_b}$ (grounded boundary at $x=1$) and when $k$ is close to $N_b$ with $N_b-k$ of order $\sqrt{N_b}$ (boundary at $x=x_m$). 
We have, $$\begin{aligned} S_8^{\text{gh,fixed}}&=& \sqrt{\alpha p'(1)} \int_{0}^{\infty} \ln\left[ \frac{n_b e^{-t^2}}{\zeta \sqrt{\alpha \pi p'(1)}} +\frac{1}{2}\left[{\mathop\text{erf}}(t-\sqrt{2\pi n_b} W)+1\right] \right]\,dt \nonumber\\ && + \sqrt{\alpha x_m p'(x_m)} \int_{0}^{\infty} \ln\left[ \frac{{\mathop\text{erf}}(t)+1}{2} \right]\,dt\end{aligned}$$ Let us introduce again the perimeter of the outer boundary at $x=x_m$, $\mathcal{C}_{R}=\sqrt{2\pi\alpha x_m p'(x_m)/n_b}$. Putting together all terms, we finally have $$\begin{aligned} \ln \Xi &=& -N_b \beta\omega_{B}+\frac{\alpha}{2}\left[h(x_m)-2p(x_m)\ln x_m\right] +\alpha \int_{1}^{x_m} \frac{(1+x)^4}{x^3}\ln\frac{(1+x)^4}{x^2}\,dx \nonumber\\ && -\mathcal{C}_1 \beta\gamma_{\text{metal}} -\mathcal{C}_{R} \beta\gamma_{\text{hard}} +O(1)\end{aligned}$$ where $$\beta\omega_{B} = -\ln \frac{2\pi\tilde{\zeta}L}{\sqrt{2n_b}}$$ is the bulk grand potential per particle of the OCP near a plane metallic wall in the flat space. The surface (perimeter) tensions $\gamma_{\text{metal}}$ and $\gamma_{\text{hard}}$ associated to each boundary (metallic at $x_b=1$, and hard wall at $x_b=x_m$) are given by $$\begin{aligned} \beta \gamma_{\text{metal}}&=&- \sqrt{\frac{n_b}{2\pi}} \int_{0}^{\infty} \ln\left[ 1+\frac{\zeta \sqrt{\alpha\pi x_b p'(x_b)}}{2n_b} e^{t^2}\,{\mathop\text{erfc}}\left(t+\sqrt{2\pi n_b} W\right) \right]\,dt \nonumber\\ && - \sqrt{\frac{n_b}{2\pi}} \int_{0}^{\infty} \ln\left[ \frac{n_b e^{-t^2}}{\zeta \sqrt{\alpha \pi p'(x_b) x_b}} +\frac{1}{2}\left[{\mathop\text{erf}}(t-\sqrt{2\pi n_b} W)+1\right] \right] \,dt\end{aligned}$$ with $x_b=1$, and (\[app:gam\]) for $\beta \gamma_{\text{hard}}$. Notice, once again, that the combination $$\frac{\zeta \sqrt{\alpha \pi x_b p'(x_b)}}{2n_b} =\frac{2\tilde{\zeta}L}{\sqrt{2 n_b}} \frac{\mathcal{C}_b}{M}$$ is finite in this fixed shape limit, since the perimeter $\mathcal{C}_{b}$ of the boundary at $x_b$ scales as $M$. Up to a rescaling of the fugacity $\tilde{\zeta}$ to absorb the factor $\mathcal{C}_b/M$, the surface tension near the metallic boundary $\gamma_{\text{metal}}$ is the same as the one found in Ref. [@Jancovici96] in flat space. It is also similar to the one found in Ref. [@Forrester85] with a small difference due to the fact that in that reference the background does not extend up to the metallic boundary, but has also a small gap near the boundary. There is no $\ln \alpha$ correction in the grand potential in agreement with the fact that the Euler characteristic of the manifold is $\chi=0$. Let us decompose $\ln \Xi$ into its bulk and perimeter parts, $$\label{eq:lnXi-bulk-surface} \ln \Xi = -\beta \Omega_b^{\text{gh}} -\mathcal{C}_1 \beta\gamma_{\text{metal}} -\mathcal{C}_{R} \beta\gamma_{\text{hard}} +O(1)$$ with the bulk grand potential $\Omega_{b}^{\text{gh}}$ given by $$-\beta \Omega_b^{\text{gh}} = -N_b \beta\omega_{B}+\frac{\alpha}{2}\left[h(x_m)-2p(x_m)\ln x_m\right] +\alpha \int_{1}^{x_m} \frac{(1+x)^4}{x^3}\ln\frac{(1+x)^4}{x^2}\,dx \,.$$ The average number of particles is given by the usual thermodynamic relation $\langle N \rangle = \zeta \partial(\ln\Xi)/\partial \zeta$. Following (\[eq:lnXi-bulk-surface\]), it can be decomposed into bulk and perimeter contributions, $$\langle N \rangle = N_b - \mathcal{C}_1\zeta \frac{\partial \beta \gamma_{\text{metal}}}{\partial \zeta} \,.$$ The boundary at $x=x_m$ does not contribute because $\gamma_{\text{hard}}$ does not depend on the fugacity. 
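At finite $N_b$, everything above can also be evaluated directly from the eigenvalues $\lambda_\ell=\zeta\mathcal{B}_{N_b}^{\text{gh}}(N_b-\ell)$: the grand potential follows from (\[eq:grand-potential-somme\]) up to the $\zeta$-independent term $\beta F_0^{\text{gh}}$, and the average particle number from $\langle N\rangle=\zeta\partial_\zeta\ln\Xi=\sum_\ell\lambda_\ell/(1+\lambda_\ell)$. The Python sketch below is illustrative only: it truncates the sum over $\ell$, uses the assumed closed form of $p$ only to set $N_b=\alpha p(x_m)$, and picks arbitrary values for $\alpha$, $x_m$, $w$ and $\zeta$.

```python
import numpy as np
from scipy.integrate import quad

def h(x):
    return x**2 + 16*x + 16/x + 1/x**2 + 12*np.log(x)**2 - 34

def p(x):
    # assumed closed form, consistent with p'(x) = 2x(1+1/x)^4 and p(1) = 0
    return x**2 + 8*x + 12*np.log(x) - 8/x - 1/x**2

pprime = lambda x: 2*x*(1 + 1/x)**4

alpha, x_m, w, zeta, n_b = 2.0, 2.0, 0.05, 1.0, 1.0
N_b = alpha * p(x_m)                         # background charge N_b = alpha p(x_m)

def B_gh(k):
    # eq. (eq:BNgh): like B_N(k) but with the small gap w above the horizon
    val, _ = quad(lambda x: x**(2*k) * np.exp(-alpha*h(x)) * pprime(x), 1.0 + w, x_m)
    return alpha * val / n_b

lam = np.array([zeta * B_gh(N_b - l) for l in range(1, 200)])   # truncated in l
ln_Xi_plus_F0 = np.sum(np.log1p(lam))        # ln Xi + beta F0, eq. (grand-potential-somme)
N_avg = np.sum(lam / (1.0 + lam))            # <N> = zeta d(ln Xi)/d(zeta)
print(ln_Xi_plus_F0, N_avg, "N_b =", N_b)
```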
From this decomposition, we can deduce the perimeter linear charge density $\sigma$ which accumulates near the metallic boundary $$\sigma = - \zeta \frac{\partial \beta \gamma_{\text{metal}}}{\partial \zeta} \,.$$ We can also notice that the bulk Helmholtz free energy $F_{b}^{\text{gh}} = \Omega_{b}^{\text{gh}} + \mu N_b$ is the same as for the half surface, with Coulomb potential $G^{\text{hs}}$, given in (\[eq:free-energy-fixed-shape\]). ### Thermodynamic limit $R\to\infty$, $x_m\to \infty$, and fixed $M$ This limit is of restricted interest: since the metallic boundary perimeter remains of order $O(1)$, we expect to find the same thermodynamic quantities as in the half surface case with hard wall “horizon” boundary up to order $O(\ln x_m)$. This is indeed the case: let us split $\ln \Xi$ into two sums $S_6^{\text{gh}}$ and $S_7^{\text{gh}}$ as in (\[eq:S6gh\]) and (\[eq:S7gh\]). For $k<0$, the asymptotic expansion of $\mathcal{B}_{N_b}(k)$ derived in appendix \[app:gamma\] should be revised, because the absolute maximum of the integrand is obtained for values of the variable of integration outside the domain of integration. Within the domain of integration the maximum value of the integrand in (\[eq:BNgh\]) is obtained when $x=1+w$. Expanding the integrand around that value, we obtain to first order, for large $|k|$, $$\mathcal{B}_{N_b}^{\text{gh}}(k)\sim\frac{\alpha p'(1+w)}{2n_b |k|}e^{-2w |k|} \,.$$ Then $$\begin{aligned} S_6^{\text{gh}}&=&\sum_{k=-\infty}^{0} \ln\left[1+\zeta \mathcal{B}_{N_b}^{\text{gh}}(k)\right] \nonumber\\ &=& \int_{0}^{\infty} dk \ln \left[1+\zeta \frac{\alpha p'(1+w)}{2n_b |k|}e^{-2w |k|}\right] + O(1) \nonumber\\ &=& O(1)\,,\end{aligned}$$ which does not contribute to the result at orders greater than $O(1)$. For the other sum, we have $$\begin{aligned} S_7^{\text{gh}}&=&\sum_{k=0}^{N_b} \ln\left[\zeta \mathcal{B}_{N_b}^{\text{gh}}(k) \right] +\sum_{k=0}^{N_b} \ln \left[ 1 + \frac{1}{\zeta \mathcal{B}_{N_b}^{\text{gh}}(k)} \right] \nonumber\\ &=& \sum_{k=0}^{N_b} \ln\left[\zeta \mathcal{B}_{N_b}^{\text{gh}}(k) \right] +O(1)\,.\end{aligned}$$ The second sum is indeed $O(1)$, because $1/[\zeta \mathcal{B}_{N_b}^{\text{gh}}(k)]$ has a fast exponential decay for large $k$; therefore the sum can be converted into a finite \[order $O(1)$\] integral over the variable $k$. Now, since the asymptotic behavior of $\mathcal{B}_{N_b}^{\text{gh}}(k)$, for $k>0$ and large, is essentially the same as the one for $\mathcal{B}_{N_b}(k)$, we immediately find, up to $O(1)$ corrections, $$\ln \Xi = \beta \mu N_b + \ln Z^{\text{hs}} +O(1)$$ where $\ln Z^{\text{hs}}$ is minus the free energy in the half surface case with hard wall boundary, given by (\[eq:Fhs-xm-infinity\]). ### The one-body density As usual one can compute the density by taking a functional derivative of the grand potential with respect to a position-dependent fugacity $\zeta({{\bf q}})$ $$\label{eq:n-funct-deriv} n^{\text{gh}}({{\bf q}})= \zeta({{\bf q}})\frac{\delta\ln\Xi}{\delta \zeta({{\bf q}})} \,.$$ For the present case of a curved space, we shall understand the functional derivative with the rule $\frac{\delta \zeta({{\bf q}}')}{\delta \zeta({{\bf q}})}=\delta({{\bf q}},{{\bf q}}')$ where $\delta({{\bf q}},{{\bf q}}')=\delta(x-x')\delta(\varphi-\varphi')/\sqrt{g}$ is the Dirac distribution on the curved surface.
Using a Dirac-like notation, one can formally write $$\ln\Xi=\mbox{Tr} \ln(1+K)-\beta F_0^{\text{gh}}= \int \left<{{\bf q}}\left| \ln(1-\zeta({{\bf q}})A)\right|{{\bf q}}\right> \,dS -\beta F_0^{\text{gh}}$$ Then, taking the functional derivative (\[eq:n-funct-deriv\]), one obtains $$n^{\text{gh}}({{\bf q}})= \zeta\left<{{\bf q}}\left| (1+K)^{-1}(-A) \right|{{\bf q}}\right> = \zeta G({{\bf q}},{{\bf q}})$$ where we have defined $G({{\bf q}},{{\bf q}}')$ by $G=(1+K)^{-1}(-A)$. More explicitly, $G$ is the solution of $(1+K)G=-A$, that is $$\label{eq:eq-Green-function} G({{\bf q}},{{\bf q}}') - \int_{\tilde{\Omega}_R} \zeta(x'')\frac{G({{\bf q}}'',{{\bf q}}')}{1-z\bar{z}''} \, dS'' = -\frac{1}{1-z\bar{z}'} \,.$$ From this integral equation, one can see that ${G}({{\bf q}},{{\bf q}}')$ is an analytic function of $z$ in the region $|z|>1$. Then, looking for a solution in the form of a Laurent series $${G}({{\bf q}},{{\bf q}}')=\sum_{\ell=1}^{\infty} a_{\ell}({{\bf q}}') z^{-\ell}$$ and inserting it into equation (\[eq:eq-Green-function\]) yields $$\label{eq:solution-G} {G}({{\bf q}},{{\bf q}}')= \sum_{\ell=1}^{\infty} \frac{\left(z\bar{z}'\right)^{-\ell}}{1+\lambda_\ell} \,.$$ Recalling that $\lambda_{\ell}=\zeta\mathcal{B}_{N}^{\text{gh}}(N_b-\ell)$, the density is given by $$\label{eq:densite-somme} n^{\text{gh}}(x)= \zeta \sum_{k=-\infty}^{N_b-1} \frac{x^{2k}e^{-\alpha h(x)}}{1+\zeta \mathcal{B}_{N}^{\text{gh}}(k)}$$ ### Density in the thermodynamic limit at fixed shape $\alpha\to\infty$ and $x_m$ fixed. {#density-in-the-thermodynamic-limit-at-fixed-shape-alphatoinfty-and-x_m-fixed.} Using the asymptotic behavior (\[eq:BN-fixed-shape\]) of $\mathcal{B}_N^{\text{gh}}$, we have $$n^{\text{gh}}(x)=\zeta \sum_{k=-\infty}^{N_b} \frac{\exp\left( -\alpha[h(x)-2p({\hat{x}}_k)\ln x - h({\hat{x}}_k) + 2 p({\hat{x}}_k)\ln {\hat{x}}_k] \right)}{ e^{\alpha[h({\hat{x}}_k)-2p({\hat{x}}_k)\ln {\hat{x}}_k]} +\frac{\zeta \sqrt{\alpha \pi {\hat{x}}_k p'({\hat{x}}_k)}}{2n_b} \left[ {\mathop\text{erf}}(\tilde{\epsilon}_{k,1})+{\mathop\text{erf}}(\epsilon_{k,m}) \right]} \,.$$ Once again, this sum can be evaluated using the Laplace method. The exponential in the numerator presents a peaked maximum for $k$ such that ${\hat{x}}_k=x$. Expanding the argument of the exponential around its maximum, we have $$n^{\text{gh}}(x)=\zeta \sum_{k=-\infty}^{N_b} \frac{e^{-\alpha p'(x) (x-{\hat{x}}_k)^2/x}}{ e^{\alpha[h({\hat{x}}_k)-2p({\hat{x}}_k)\ln {\hat{x}}_k]} +\frac{\zeta \sqrt{\alpha \pi {\hat{x}}_k p'({\hat{x}}_k)}}{2n_b} \left[ {\mathop\text{erf}}(\tilde{\epsilon}_{k,1})+{\mathop\text{erf}}(\epsilon_{k,m}) \right]} \,.$$ Now, three cases have to be considered, depending on the value of $x$. If $x$ is in the bulk, [*i.e.*]{} $x-1$ and $x_m-x$ of order 1, the exponential term in the denominator vanishes in the limit $\alpha\to \infty$, and we end up with an expression which is essentially the same as in the canonical case (\[eq:density-fixed-shape-start1\]) \[the difference in the lower limit of summation is irrelevant in this case since the summand vanishes very fast when ${\hat{x}}_k$ differs from $x$\]. Therefore, in the bulk, $n^{\text{gh}}(x)=n_b$ as expected. When $x_m-x$ is of order $O(1/\sqrt{\alpha})$, once again the exponential term in the denominator vanishes in the limit $\alpha\to\infty$.
The resulting expression is transformed into an integral over the variable $\epsilon_{k,m}$, and following identical calculations as the ones from subsection \[sec:density-canonic-fixed-shape\], we find that $n^{\text{gh}}(x)=n^{\text{hs}}(x)$, that is, the same result (\[eq:density-border-xm\]) as for the hard wall boundary. This is somewhat expected, since the boundary at $x=x_m$ is of the hard wall type. Notice that the density profile near this boundary does not depend on the fugacity $\zeta$. The last case is for the density profile close to the metallic boundary, when $x-1$ is of order $O(1/\sqrt{\alpha})$. In this case, contrary to the previous ones, the exponential term in the denominator does not vanish. Expanding it around ${\hat{x}}_k=1$, we have $$n^{\text{gh}}(x)=\zeta \sum_{k=-\infty}^{N_b} \frac{e^{-\alpha p'(x) (x-{\hat{x}}_k)^2/x}}{ e^{-\epsilon_{k,1}^2} +\frac{\zeta \sqrt{\alpha \pi {\hat{x}}_k p'({\hat{x}}_k)}}{2 n_b} \left[ {\mathop\text{erf}}(\tilde{\epsilon}_{k,1})+1 \right]} \,.$$ Transforming the summation into an integral over the variable $t=-\epsilon_{k,1}$, we find $$n^{\text{gh}}(x)= \zeta \sqrt{\alpha p'(1)} \int_{-\infty}^{+\infty} \frac{e^{-[t+\sqrt{\alpha p' (1)}(x-1)]^2}\ dt}{ e^{-t^2}+\frac{\zeta\sqrt{\alpha \pi p'(1)}}{2 n_b} {\mathop\text{erfc}}(t+\sqrt{2\pi n_b}W)} \,.$$ For purposes of comparison with Ref. [@Forrester85], this can be rewritten as $$n^{\text{gh}}(x)= \zeta \sqrt{\alpha p'(1)} e^{-\alpha p'(1)[(x-1-w)^2-w^2]} \int_{-\infty}^{+\infty} \frac{e^{-2\sqrt{\alpha p'(1)} (x-1)t}\ dt}{ 1+\frac{\zeta\sqrt{\alpha \pi p'(1)}}{2 n_b} {\mathop\text{erfc}}(t)e^{(t-\sqrt{2\pi n_b}W)^2}} \,,$$ which is very similar to the density profile near a plane metallic wall in flat space found in Ref. [@Forrester85] \[there is a small difference, due to the fact that in [@Forrester85] the background did not extend up to the metallic wall, but also had a gap, contrary to our present model\]. Fig. \[fig:density-metal-fixed-shape\] shows the density profile for two different values of the fugacity, and compares the asymptotic results with a direct numerical evaluation of the density. ![The normalized one-body density $n^{\text{gh}}(x)/n_b$, in the grounded horizon case. The dashed lines correspond to a numerical evaluation, obtained from (\[eq:densite-somme\]), with $N=100$, $x_m=2$ and $\alpha=4.15493$ and truncating the sum to 301 terms (the lower value of $k$ is $-200$). The gap close to the metallic boundary has been chosen equal to $w=0.01$. The full lines correspond to the asymptotic result in the fixed shape limit when $\alpha\to\infty$, and $x_m=2$ fixed. The two upper curves correspond to a fugacity given by $\zeta\sqrt{\alpha}/(2n_b)=\tilde{\zeta}L\sqrt{\pi/n_b}=1$, while the two lower ones correspond to $\tilde{\zeta}L\sqrt{\pi/n_b}=0.1$. Notice how the value of the fugacity only affects the density profile close to the metallic boundary $x=1$. []{data-label="fig:density-metal-fixed-shape"}](density-metal-fixed-shape){width="\GraphicsWidth"} Interestingly, once again the density profile shows a universality feature, in the sense that it is essentially the same as for a flat space. As in the flat space, the fugacity controls the excess charge which accumulates near the metallic wall. Only the density profile close to the metallic wall depends on the fugacity. In the bulk, the density is constant, equal to the background density.
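The first integral form above is also convenient for a direct numerical evaluation of this limiting profile. The following sketch computes $n^{\text{gh}}(x)/n_b$ as a function of the scaled distance $s=\sqrt{\alpha p'(1)}\,(x-1)$ from the metallic boundary, for assumed sample values of the order-one fugacity combination $c=\zeta\sqrt{\alpha\pi p'(1)}/(2n_b)$ and of the gap parameter $a=\sqrt{2\pi n_b}\,W$ (illustrative choices, not the values used in the figure); because of the small gap, $x\geq 1+w$ and $s$ stays strictly positive. Far from the wall the ratio should return to $1$.

```python
import numpy as np
from scipy.integrate import quad
from scipy.special import erfc

a = 0.1   # assumed sample value of sqrt(2*pi*n_b)*W (small gap parameter)

def density_ratio(s, c):
    """n^gh(x)/n_b at scaled distance s = sqrt(alpha*p'(1))*(x-1) > 0 from the wall."""
    # the prefactor zeta*sqrt(alpha*p'(1)) equals 2*n_b*c/sqrt(pi)
    integrand = lambda t: np.exp(-(t + s)**2) / (np.exp(-t*t) + c*erfc(t + a))
    value, _ = quad(integrand, -np.inf, np.inf)
    return 2.0*c/np.sqrt(np.pi)*value

for c in (1.0, 0.1):                                    # assumed sample fugacity combinations
    profile = [density_ratio(s, c) for s in np.linspace(0.2, 4.0, 8)]
    print(c, np.round(profile, 4))                      # tends to 1 away from the wall
```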
Close to the other boundary (the hard wall one), the density profile is the same as in the other models from previous sections, and it does not depend on the fugacity. Conclusions {#sec:conclusions} =========== The two-dimensional one-component classical plasma has been studied on Flamm’s paraboloid (the Riemannian surface obtained from the spatial part of the Schwarzschild metric). The one-component classical plasma has long been used as the simplest microscopic model to describe many Coulomb fluids such as electrolytes, plasmas, and molten salts [@March84]. Recently it has also been studied on curved surfaces such as the cylinder, the sphere, and the pseudosphere. From this point of view, this work presents new results as it describes the properties of the plasma on a surface that had never been considered before in this context. The Coulomb potential on this surface has been carefully determined. When we limit ourselves to studying only the upper or lower half parts ($\mathcal{S}_{\pm}$) of the surface (see Fig. \[fig:surf\]) the Coulomb potential is $G^{\text{hs}}({{\bf q}},{{\bf q}}')= -\ln |z-z'|+ \text{constant}$, with the appropriate set of coordinates $(x,\varphi)$ defined in section \[sec:good-coordinates\], and $z=xe^{i\varphi}$. When charges from the upper part are allowed to interact with particles from the lower part then the Coulomb potential turns out to be $G^{\text{ws}}({{\bf q}},{{\bf q}}')= -\ln(|z-z'|/\sqrt{|zz'|})+\text{constant}$. When the charges live in the upper part with the horizon grounded, the Coulomb potential can be determined using the method of images from electrostatics. Since the Coulomb potential takes a form similar to the one of a flat space, this allows one to use the usual techniques [@Jancovici81b; @Alastuey81] to compute the thermodynamic properties when the coupling constant $\Gamma=\beta q^2=2$. Two different thermodynamic limits have been considered: the one where the radius $R$ of the “disk” confining the plasma is allowed to become very large while keeping the surface hole radius $M$ constant, and the one where both $R\to\infty$ and $M\to\infty$ with the ratio $R/M$ kept constant (fixed shape limit). In both limits we computed the free energy up to corrections of order $O(1)$. The plasma on half surface is found to be thermodynamically stable, in both types of thermodynamic limit, upon choosing the arbitrary additive constant in the Coulomb potential equal to $-\ln M+\text{constant}$. The system on the full surface is found to be stable upon choosing the constant in the Coulomb potential equal to $-\ln(M x_m) + \text{constant}$ where $x_m=(\sqrt{R}+\sqrt{R-2M})^2/(2M)$. In the limit $R\to\infty$ while keeping $M$ fixed, most of the surface available to the particles is almost flat; therefore the bulk free energy is the same as in flat space, but corrections from the flat case, due to the curvature effects, appear in the terms proportional to $R$ and the terms proportional to $\ln R$. These corrections are different for each case (half or whole surface). The asymptotic expansion at fixed shape ($\alpha\to\infty$) presents a different value for the bulk free energy than in the flat space, due to the curvature corrections. On the other hand, the perimeter corrections to the free energy turn out to be the same as for a flat space. This expansion of the free energy does not exhibit the logarithmic correction, $\ln \alpha$, in agreement with the fact that the Euler characteristic of this surface vanishes.
For completeness, we also studied the system on half surface letting the particles interact through the Coulomb potential $G^{\text{ws}}$. In this mixed case the result for the free energy is simply one-half the one found for the system on the full surface. In the case where the “horizon” is grounded (metallic boundary), the system is studied in the grand canonical ensemble. The limit $R\to \infty$ with $M$ fixed reproduces the same results as the case of the half surface with potential $G^{\text{hs}}$ up to $O(1)$ corrections, because the effects of the size of the metallic boundary remain $O(1)$. More interesting is the thermodynamic limit at fixed shape, where we find that the bulk thermodynamics are the same as for the half surface with potential $G^{\text{hs}}$, but a perimeter correction associated with the metallic boundary appears. This turns out to be the same as for a flat space. This perimeter correction (“surface” tension) $\beta \gamma_{\text{metal}}$ depends on the value of the fugacity. In the grand canonical formalism, the system can be nonneutral: in the bulk the system is locally neutral, and the excess charge is found near the metallic boundary. In contrast, the outer hard wall boundary (at $x=x_m$) exhibits the same density profile as in the other cases, independent of the value of the fugacity. This is reflected in a perimeter contribution $\beta \gamma_{\text{hard}}$ equal to the one of the previous cases. The plasma on Flamm’s paraboloid is not homogeneous due to the fact that the curvature of the surface is not constant. When the horizon shrinks to a point the upper half surface reduces to a plane and one recovers the well-known result valid for the one-component plasma on the plane. In the same limit the whole surface reduces to two flat planes connected by a hole at the origin. We carefully studied the one-body density for several different situations: plasma on half surface with potential $G^{\text{hs}}$ and $G^{\text{ws}}$, plasma on the whole surface with potential $G^{\text{ws}}$, and plasma on half surface with the horizon grounded. When only one-half of the surface is occupied by the plasma, if we use $G^{\text{hs}}$ as the Coulomb potential, the density shows a peak in the neighborhood of each boundary, tends to a finite value at the boundary, and approaches the background density far from it, in the bulk. If we use $G^{\text{ws}}$, instead, the qualitative behavior of the density remains the same. In the thermodynamic limit at fixed shape, we find that the density profile is the same as in flat space near a hard wall, regardless of the Coulomb potential used. In the grounded horizon case the density reaches the background density far from the boundaries. In this case, the fugacity and the background density control the density profile close to the metallic boundary (horizon). In the bulk and close to the outer hard wall boundary, the density profile is independent of the fugacity. In the thermodynamic limit at fixed shape, the density profile is the same as for a flat space. Internal and external screening sum rules have been briefly discussed. Nevertheless, we think that systems with non-constant curvature deserve a revisiting of all the common sum rules for charged fluids. Riccardo Fantoni would like to acknowledge the support from the Italian MIUR (PRIN-COFIN 2006/2007). He would also like to dedicate this work to his wife Ilaria Tognoni, who is undergoing a very delicate and reflexive period of her life. G. T.
acknowledges partial financial support from Comité de Investigaciones y Posgrados, Facultad de Ciencias, Universidad de los Andes. Green function of Laplace equation {#app:green} ================================== In this appendix, we illustrate the calculation of the Green function using the original system of coordinates $(r,\varphi)$. The Coulomb potential generated at ${{\bf q}}=(r,\varphi)$ by a unit charge placed at ${{\bf q}}_0=(r_0,\varphi_0)$ with $r_0>2M$ satisfies the Poisson equation $$\begin{aligned} \Delta G(r,\varphi;r_0,\varphi_0) = -2\pi\delta(r-r_0)\delta(\varphi-\varphi_0)/\sqrt{g}~,\end{aligned}$$ where $g=\det (g_{\mu\nu})=r^2/(1-2M/r)$. To solve this equation, we expand the Green function $G$ and the second delta distribution in a Fourier series as follows $$\begin{aligned} \label{gexp} G(r,\varphi;r_0,\varphi_0)&=& \sum_{n=-\infty}^\infty e^{in(\varphi-\varphi_0)} g_n(r,r_0)~,\\ \delta(\varphi-\varphi_0)&=&\frac{1}{2\pi} \sum_{n=-\infty}^\infty e^{in(\varphi-\varphi_0)}~,\end{aligned}$$ to obtain an ordinary differential equation for $g_n$ $$\begin{aligned} \left[\left(1-\frac{2M}{r}\right)\frac{\partial^2}{\partial r^2} +\left(\frac{1}{r}-\frac{M}{r^2}\right)\frac{\partial}{\partial r}-\frac{n^2}{r^2}\right]g_n(r,r_0)=-\delta(r-r_0)/\sqrt{g}~.\end{aligned}$$ To solve this equation we first solve the homogeneous one, for $r<r_0$ \[solution $g_{n,-}(r,r_0)$\] and for $r>r_0$ \[solution $g_{n,+}(r,r_0)$\]. The solution is, for $n\neq 0$, $$\begin{aligned} g_{n,\pm}(r,r_0)=A_{n,\pm}(\sqrt{r}+\sqrt{r-2M})^{2n}+ B_{n,\pm}(\sqrt{r}+\sqrt{r-2M})^{-2n}~,\end{aligned}$$ and, for $n=0$, one finds $$\begin{aligned} g_{0,\pm}(r,r_0)=A_{0,\pm}+B_{0,\pm}\ln(\sqrt{r}+\sqrt{r-2M})~.\end{aligned}$$ The form of the solution immediately suggests that it is more convenient to work with the variable $x=(\sqrt{r}+\sqrt{r-2M})^{2}/(2M)$. For this reason, we introduced this new system of coordinates $(x,\varphi)$, which is used in the main text. Asymptotic expansions of $\mathcal{B}_N(k)$, $\tilde{\mathcal{B}}_N(k)$ and $\hat{\mathcal{B}}_N(k)$ {#app:gamma} ==================================================================================================== Asymptotic expansion of $\mathcal{B}_N(k)$ ------------------------------------------ ### Limit $N\to\infty$, $x_m\to\infty$, and fixed $\alpha$ Doing the change of variable $s=\alpha p(x)$ in the integral (\[gamma\]), we have $$\mathcal{B}_N(k)=\frac{1}{n_b} \int_0^N x^{2k} e^{-\alpha h(x)}\, ds$$ where $x$ is related to the variable of integration $s$ by $s=\alpha p(x)$. The limit $k\to\infty$ and $N\to\infty$ can be obtained using the Laplace method [@Bender99]. To this end, let us write $\mathcal{B}_N(k)$ as $$\mathcal{B}_N(k)=\frac{k}{n}\int_0^{N/k} e^{k\phi_k(t)}\,dt$$ where we made the change of variable $t=s/k$ and we defined $$\label{eq:phi-k} \phi_{k}(t)= 2 \ln x -\frac{\alpha}{k}\, h(x)$$ where $$\label{eq:def-x} x=p^{-1}(kt/\alpha) \,.$$ The derivative of $\phi_k$ is $$\begin{aligned} \phi_k'(t)&=& \frac{2}{x}\frac{dx}{dt} (1-t)\\ &=& \frac{2k}{\alpha x p' (x)}(1-t)\end{aligned}$$ where we have used the definition (\[eq:def-x\]) of $x$ and the properties (\[eq:p-h-deriv\]) of $h$ and $p$. The maximum of $\phi_k(t)$ is obtained when $t=1$.
At this point we have $$\begin{aligned} \phi_k''(1)&=&-\frac{2 k}{\alpha {\hat{x}}_k p' ({\hat{x}}_k)} =-1+O(1/\sqrt{k}) \\ \phi_k^{(3)}(1)&=&\frac{4 k^2}{\alpha^2} \frac{p'({\hat{x}}_k)+x_k p''({\hat{x}}_k)}{{\hat{x}}_k^2 p'({\hat{x}}_k)^3} =2+O(1/\sqrt{k})\\ \phi_k^{(4)}(1)&=&\frac{6k^3}{\alpha^3 p' ({\hat{x}}_k)} \frac{d}{dx}\left[ \frac{p'(x)+x p''(x)}{x^2 p'(x)^3} \right]_{x={\hat{x}}_k} =-6+O(1/\sqrt{k})\end{aligned}$$ \[eq:phis-k\] where $${\hat{x}}_k=p^{-1}(k/\alpha) \,.$$ Expanding $\phi_k(t)$ up to order $(t-1)^4$, and defining $v=\sqrt{k|\phi''_k(1)|}\,(t-1)$, we have $$\begin{aligned} \mathcal{B}_N(k)&=& \frac{\sqrt{k}e^{k\phi_k(1)}}{n \sqrt{|\phi''_k(1)|}}\, \int_{-\sqrt{k|\phi''_k(1)|}}^{(N-k)\sqrt{|\phi''_k(1)|/k}} e^{-v^2/2} \nonumber\\ && \times \left[ 1+\frac{v^3 \phi^{(3)}_{k}(1)}{3!\sqrt{k}|\phi''_k(1)|^{3/2}} +\frac{v^4 \phi_k^{(4)}(1)}{4! k |\phi''_k(1)|^2} +\frac{v^6 [\phi^{(3)}_{k}(1)]^2}{3!^2 2k |\phi''_k(1)|^3} +o\left(\frac{1}{k}\right) \right] \,dv\,.\end{aligned}$$ Let us define $$\label{eq:epsilon-k} \epsilon_k=\sqrt{|\phi''_k(1)|}\,\frac{N-k}{\sqrt{2k}} =\frac{N-k}{\sqrt{2N}}+O(1/\sqrt{N})$$ which is an order one parameter, since we are interested in an expansion for $N$ and $k$ large with $N-k$ of order $\sqrt{N}$. Using the integrals $$\begin{aligned} \int_{-\infty}^{\epsilon} e^{-v^2/2}\,dv & = & \sqrt{\frac{\pi}{2}} \left[ 1 + {\mathop\text{erf}}\left(\frac{\epsilon}{\sqrt{2}}\right) \right] \\ \int_{-\infty}^{\epsilon} e^{-v^2/2}\, v^3 \,dv & = & -(2+\epsilon^2) e^{-\epsilon^2/2} \\ \int_{-\infty}^{\epsilon} e^{-v^2/2}\, v^4 \,dv & = & 3 \sqrt{\frac{\pi}{2}} \left[ 1 + {\mathop\text{erf}}\left(\frac{\epsilon}{\sqrt{2}}\right) \right] -e^{-\epsilon^2/2}\epsilon(3+\epsilon^2) \\ \int_{-\infty}^{\epsilon} e^{-v^2/2}\, v^6 \,dv & = & 15 \sqrt{\frac{\pi}{2}} \left[ 1 + {\mathop\text{erf}}\left(\frac{\epsilon}{\sqrt{2}}\right) \right] -e^{-\epsilon^2/2}\epsilon(15+5\epsilon^2+\epsilon^4)\end{aligned}$$ where ${\mathop\text{erf}}(z)=(2/\sqrt{\pi}) \int_0^{z} e^{-u^2}\,du$ is the error function, we find in the limit $N\to\infty$, $k\to\infty$, and finite $\epsilon_k$, $$\begin{aligned} \label{eq:asymptics-B} \mathcal{B}_N(k)&=& \sqrt{\frac{\pi k}{2 |\phi''_k(1)|}} \frac{e^{k\phi_k(1)}}{n} \left[1+{\mathop\text{erf}}\left({\epsilon_k}\right)\right] \left[1+\frac{1}{12 k} + \frac{1}{\sqrt{k}}\,\xi_1(\epsilon_k) +\frac{1}{k}\,\xi_2(\epsilon_k)\right] \,.\end{aligned}$$ The functions $\xi_1(\epsilon_k)$ and $\xi_2(\epsilon_k)$ contain terms proportional $e^{-\epsilon_k^2}$, from the Gaussian integrals above. However, as explained in the main text, these do not contribute to the final result for the partition function up to order $O(1)$, because the exponential term $e^{-\epsilon_k^2}$ make convergent and finite the integrals of these functions that appear in the calculations, giving terms of order $O(1)$ and $O(1/\sqrt{N})$ respectively. ### Limit $N\to\infty$, $\alpha\to\infty$, fixed $x_m$ For the determination of the thermodynamic limit at fixed shape, we also need the asymptotic behavior of $\mathcal{B}_N(k)$ when $\alpha\to\infty$ at fixed $x_m$. We write $\mathcal{B}_N(k)$ as $$\mathcal{B}_N(k)=\frac{\alpha}{n_b} \int_{1}^{x_m} e^{-\alpha [ h(x)-2p({\hat{x}}_k) \ln x ]} \, p'(x)\,dx \,,$$ where we have defined once again ${\hat{x}}_k$ by $k=\alpha p({\hat{x}}_k)$. We apply Laplace method for $\alpha\to\infty$. Let $$F(x)=h(x)-2p({\hat{x}}_k)\ln x\,.$$ $F$ has a minimum for $x={\hat{x}}_k$ with $F''({\hat{x}}_k)=2p'({\hat{x}}_k)/{\hat{x}}_k$. 
Expanding to the order $(x-{\hat{x}}_k)^2$ the argument of the exponential and following calculations similar to the ones of the previous section, we find $$\begin{aligned} \mathcal{B}_N(k) &=& \frac{\sqrt{\alpha\pi {\hat{x}}_k p'({\hat{x}}_k)}}{2n_b} e^{-\alpha [h({\hat{x}}_k)-2p({\hat{x}}_k) \ln {\hat{x}}_k]} \left[ {\mathop\text{erf}}(\epsilon_{k,1}) +{\mathop\text{erf}}(\epsilon_{k,m}) \right] \nonumber\\ && \times \left(1+ \frac{1}{\alpha} \xi_0({\hat{x}}_k) + \frac{1}{\sqrt{\alpha}} \left[\xi_{1,m}(\epsilon_{k,m}) + \xi_{1,1}(\epsilon_{k,1}) \right] \right) \label{eq:BN-fixed-shape}\end{aligned}$$ where $$\begin{aligned} \label{eq:epsilon_km} \epsilon_{k,m} &=&\sqrt{\frac{ \alpha p'(x_m)}{x_m}}(x_m-{\hat{x}}_k) \,, \\ \label{eq:epsilon_k1} \epsilon_{k,1} &=&\sqrt{ \alpha p'(1)}({\hat{x}}_k-1) \,.\end{aligned}$$ The terms with the error functions come from incomplete Gaussian integral and take into account the contribution of values of $k$ such that $x_m-{\hat{x}}_k$ (or ${\hat{x}}_k-1$) is of order $1/\sqrt{\alpha}$, or equivalently $N-k$ (or $k$) of order $\sqrt{N}$. The functions $\xi_0({\hat{x}}_k)$, $\xi_{1,1}(\epsilon_{k,1})$, and $\xi_{1,m}(\epsilon_{k,m})$ can be computed explicitly, pushing the expansion one order further. These next order corrections are different than in the previous section, in particular $(1/\alpha) \xi_0({\hat{x}}_k)\neq 1/(12 k)$. However, these next order terms are not needed in the computation of the partition function at order $O(1)$, since they give contributions of order $O(1)$. Note in particular that the term $\xi_0({\hat{x}}_k)/\alpha$ gives contributions of order $O(1)$, contrary to the previous limit studied earlier where it gave contributions of order $\ln N$. Indeed, in the logarithm of the partition function, this term gives a contribution $$\sum_{k=0}^{N} \frac{\xi_0({\hat{x}}_k)}{\alpha } = \frac{1}{\alpha} \int_{1}^{x_m} \alpha p'(x) \xi_0(x) \, dx + o(1) = O(1) \,.$$ Asymptotic expansions of $\tilde{\mathcal{B}}_N(k)$ and $\hat{\mathcal{B}}_N(k)$ -------------------------------------------------------------------------------- To study $\tilde{\mathcal{B}}_N(k)$, it is convenient to define $k'=k-\frac{N}{2}$, then $$\tilde{\mathcal{B}}_N(k)=\frac{\alpha}{n_b} \int_{1/x_m}^{x_m} x^{2k'} e^{-\alpha h(x)}\,x\,p'(x)\,dx \,,$$ which is very similar to $$\hat{\mathcal{B}}_N(k)=\frac{\alpha}{n_b} \int_{1}^{x_m} x^{2k} e^{-\alpha h(x)}\,x\,p'(x)\,dx \,.$$ changing $k'$ by $k$, and taking into account the extended domain of integration $[1/x_m,1]$ for $\tilde{\mathcal{B}}_N$. As in the previous section, the asymptotic expansions for $\tilde{\mathcal{B}}_N(k)$ and $\hat{\mathcal{B}}_N(k)$ can be obtained using Laplace method. Notice that for $\tilde{\mathcal{B}}_N(k)$, $k'$ is in the range $[-\frac{N}{2},\frac{N}{2}]$. When $k'<0$, the maximum of the integrand is in the region $[1/x_m,1]$, and when $k'>0$, the maximum is in the region $[1,x_m]$. Due to the fact that the contribution to the integral from the region $[1/x_m,1]$ is negligible when $k'>0$, the asymptotics for $\hat{\mathcal{B}}_N(k)$ will be the same as those for $\tilde{\mathcal{B}}_N(k)$, for $k'>0$, doing the change $k\to k'$. Therefore, we present only the derivation of the asymptotics of $\tilde{\mathcal{B}}_N$. 
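Before turning to $\tilde{\mathcal{B}}_N$ and $\hat{\mathcal{B}}_N$, the leading order of (\[eq:BN-fixed-shape\]) is easy to test numerically. The sketch below does this for an assumed toy profile $p(x)=x^2$ (chosen only for illustration, not the $p$ of the model); consistency with the stationarity of $F$ at $x={\hat{x}}_k$, which forces $h'(x)=2p(x)/x$, then allows $h(x)=x^2$. The ratio of the exact integral to the leading-order Laplace formula should stay close to $1$, both in the bulk and when ${\hat{x}}_k$ approaches a boundary.

```python
import numpy as np
from scipy.integrate import quad
from scipy.special import erf

# Assumed toy profile (illustration only, not the model's p and h):
# p(x) = x**2, and h'(x) = 2*p(x)/x then allows h(x) = x**2.
p  = lambda x: x**2
pp = lambda x: 2.0*x          # p'(x)
h  = lambda x: x**2

alpha, xm = 100.0, 2.0

def exact_over_asymptotic(xhat):
    """ratio of the exact integral to the leading order of (eq:BN-fixed-shape) at hat{x}_k = xhat"""
    F    = lambda x: h(x) - 2.0*p(xhat)*np.log(x)
    Fmin = F(xhat)
    exact, _ = quad(lambda x: np.exp(-alpha*(F(x) - Fmin))*pp(x), 1.0, xm)
    eps1 = np.sqrt(alpha*pp(1.0))*(xhat - 1.0)
    epsm = np.sqrt(alpha*pp(xm)/xm)*(xm - xhat)
    laplace = 0.5*np.sqrt(np.pi*xhat*pp(xhat)/alpha)*(erf(eps1) + erf(epsm))
    return exact/laplace

for xhat in (1.05, 1.5, 1.95):
    print(xhat, round(exact_over_asymptotic(xhat), 4))   # close to 1, up to O(1/alpha) corrections
```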
### Limit $N\to\infty$, $x_m\to\infty$, and fixed $\alpha$ We proceed as for ${\mathcal{B}}_N(k)$, defining the variable of integration $t=\alpha p(x)/k'$, then $$\tilde{\mathcal{B}}_N(k)=\frac{|k'|}{n_b} \int_{-\frac{N}{2|k'|}}^{\frac{N}{2|k'|}} x\,e^{k'\phi_{k'}(t)}\,dt$$ where $\phi_{k'}(t)$ is the same function defined in equation (\[eq:phi-k\]). Now we apply Laplace method to compute this integral. The main difference with the calculations done for $\mathcal{B}_{N}$ are the following. First, taking into account that $k'$ can be positive or negative, we should note that $$\begin{aligned} \phi_{k'}''(1)&=& \begin{cases} -1+O(1/\sqrt{|k'|})& k'>0 \\ 1+O(1/\sqrt{|k'|})& k'<0 \end{cases} \\ \phi_{k'}^{(3)}(1)&=& \begin{cases} 2+O(1/\sqrt{|k'|})& k'>0\\ -2+O(1/\sqrt{|k'|})& k'<0 \end{cases} \\ \phi_{k'}^{(4)}(1) &=& \begin{cases} -6+O(1/\sqrt{|k'|})&k'>0\\ 6+O(1/\sqrt{|k'|})&k'<0 \end{cases}\end{aligned}$$ Second, we also need to expand $x$ close to the maximum which is obtained for $t=1$, $$x={\hat{x}}_{k'}[1+a (t-1) + b(t-1)^2 +O((t-1)^3)]$$ with $$a=\frac{p({\hat{x}}_{k'})}{{\hat{x}}_{k'} p'({\hat{x}}_{k'})} = \begin{cases} \frac{1}{2}+O(1/\sqrt{|k'|})&k'>0\\ -\frac{1}{2}+O(1/\sqrt{|k'|})&k'<0 \end{cases}$$ and $$b=-\frac{p({\hat{x}}_{k'})^2 p''({\hat{x}}_{k'})}{2 {\hat{x}}_{k'} p'({\hat{x}}_{k'})^3} = \begin{cases} -\frac{1}{8}+O(1/\sqrt{|k'|})& k'>0\\ \frac{3}{8}+O(1/\sqrt{|k'|})& k'<0 \end{cases}$$ Notice in particular that for the term $b$, the difference between positive and negative values of $k'$ is not only a change of sign. This is to be expected since the function $x$ is not invariant under the change $x\to 1/x$. Following very similar calculations to the ones done for $\mathcal{B}_N$ with the appropriate changes mentioned above, we finally find $$\begin{aligned} \label{eq:asympt-tildeBN} \tilde{B}_N(k)&=&\frac{{\hat{x}}_{k'}}{2n_b}\sqrt{\pi\alpha {\hat{x}}_{k'} p'({\hat{x}}_{k'})} e^{-\alpha [h({\hat{x}}_{k'})-2p({\hat{x}}_{k'}) \ln {\hat{x}}_{k'}]} \nonumber\\ &&\times \left[{\mathop\text{erf}}\left(\epsilon_{k,\min}\right)+{\mathop\text{erf}}\left(\epsilon_{k,\max}\right) \right] \left[1+\left(\frac{1}{12}+ c \right)\frac{1}{|k'|}+\cdots\right]\end{aligned}$$ with $$c= \begin{cases} \frac{3}{8}&k'>0\\ -\frac{1}{8}&k'<0 \end{cases}$$ and \[eq:epsilons-min-max\] $$\begin{aligned} \epsilon_{k,\max} &=&\sqrt{\frac{ \alpha p'(x_m)}{x_m}} (x_m-{\hat{x}}_{k-\frac{N}{2}}) \,, \\ \epsilon_{k,\min} &=&\sqrt{\frac{ \alpha p'(1/x_m) }{1/x_m}}\left({\hat{x}}_{k-\frac{N}{2}}-\frac{1}{x_m}\right) \,.\end{aligned}$$ The dots in (\[eq:asympt-tildeBN\]) represent contributions of lower order and of functions of $\epsilon_{k,\min}$ and $\epsilon_{k,\max}$ that give $O(1)$ contributions to the partition function. Comparing to the asymptotics of $\mathcal{B}_N$ we notice two differences: the factor ${\hat{x}}_{k'}$ multiplying all the expressions and the correction $c/|k'|$. ### Limit $N\to\infty$, $\alpha\to\infty$, and fixed $x_m$ The asymptotic expansion of $\tilde{\mathcal{B}}_N$ in this fixed shape situation is simpler, since we do not need the terms of order $1/\alpha$. Doing similar calculations as the ones done for $\mathcal{B}_N$ taking into account the additional factor $x$ in the integral we find $$\begin{aligned} \tilde{\mathcal{B}}_N(k) &=& \frac{{\hat{x}}_{k'}\sqrt{\alpha\pi {\hat{x}}_{k'} p'({\hat{x}}_{k'})}}{2n_b} e^{-\alpha [h({\hat{x}}_{k'})-2p({\hat{x}}_{k'}) \ln {\hat{x}}_{k'}]} \left[ {\mathop\text{erf}}(\epsilon_{k,\min}) +{\mathop\text{erf}}(\epsilon_{k,\max}) \right]\,. 
\label{eq:tildeBN-fixed-shape}\end{aligned}$$
--- abstract: 'Five-dimensional (5D) generalized Gödel-type manifolds are examined in the light of the equivalence problem techniques, as formulated by Cartan. The necessary and sufficient conditions for local homogeneity of these 5D manifolds are derived. The local equivalence of these homogeneous Riemannian manifolds is studied. It is found that they are characterized by three essential parameters $k$, $m^2$ and $\omega\,$: identical triads $(k, m^2, \omega)$ correspond to locally equivalent 5D manifolds. An irreducible set of isometrically nonequivalent 5D locally homogeneous Riemannian generalized Gödel-type metrics is exhibited. A classification of these manifolds based on the essential parameters is presented, and the Killing vector fields as well as the corresponding Lie algebra of each class are determined. It is shown that the generalized Gödel-type 5D manifolds admit a maximal group of isometry $G_r$ with $r=7$, $r=9$ or $r=15$ depending on the essential parameters $k$, $m^2$ and $\omega\,$. The breakdown of causality in all these classes of homogeneous Gödel-type manifolds is also examined. It is found that in three out of the six irreducible classes the causality can be violated. The unique generalized Gödel-type solution of the induced matter (IM) field equations is found. The question as to whether the induced matter version of general relativity is an effective therapy for this type of causal anomaly of general relativity is also discussed in connection with a recent work by Romero, Tavakol and Zalaletdinov.' author: - | H.L. Carrion[^1],    M.J. Rebouças[^2]    and   A.F.F. Teixeira[^3]\ \ Centro Brasileiro de Pesquisas Físicas\ Departamento de Relatividade e Partículas\ Rua Dr. Xavier Sigaud 150\ 22290-180 Rio de Janeiro – RJ, Brazil\ \ title: | **Gödel-type Spacetimes in Induced Matter\ Gravity Theory\ ** --- Introduction {#intro} ============ The field equations of the general relativity theory, which in the usual notation are written in the form $$G_{\alpha \beta} = \kappa \,\; T_{\alpha \beta}\;, \label{ein}$$ relate the geometry of the spacetime to its source. The general relativity theory, however, does not prescribe the various forms of matter, and takes over the energy-momentum tensor $\,T_{\alpha \beta}\,$ from other branches of physics. In this sense, general relativity (GR) is not a closed theory. The separation between the gravitational field and its source has often been considered an undesirable feature of GR [@Einstein56] – [@Salam80]. Recently, Wesson and co-workers [@Wesson90; @WessonLeon92a] have introduced a new approach to GR, in which the matter and its role in the determination of the spacetime geometry are given from a purely five-dimensional geometrical point of view. In their five-dimensional (5D) version of general relativity the field equations are given by $$\label{5DfeqsG} \widehat{G}_{AB} = 0 \;.$$ Henceforth, five-dimensional geometrical objects are denoted by overhats, and Latin letters are 5D indices running from $0$ to $4$. In this new approach to GR the 5D vacuum field equations (\[5DfeqsG\]) give rise to both curvature and matter in 4D.
Indeed, it can be shown [@WessonLeon92a] that it is always possible to rewrite the fifteen field equations (\[5DfeqsG\]) as a set of equations, ten of which are precisely Einstein’s field equations (\[ein\]) in 4D with an [*induced*]{} energy-momentum tensor $$\begin{aligned} \label{Tinduced} \kappa \; T_{\alpha\beta} & = & \frac{\phi_{\alpha\,;\:\beta}}{\phi} - \frac{\varepsilon}{2\,\phi^2} \left\{ \frac{\phi^{*} \, g^{*}_{\alpha\beta}}{\phi} - g^{**}_{\alpha\beta} + g^{\gamma\delta} \, g^{*}_{\alpha\gamma} \, g^{*}_{\beta\delta} - \frac{g^{\gamma\delta} \, g^{*}_{\gamma\delta} \, g^{*}_{\alpha\beta}}{2} \right. \nonumber \\ & & + \; \left. \frac{g_{\alpha\beta}}{4} \, \left[\,g^{*}{}^{\gamma\delta} \, g^{*}_{\gamma\delta} + (g^{\gamma\delta} \,g^{*}_{\gamma\delta})^2 \, \right] \,\right\} \;,\end{aligned}$$ where the Greek letters denote 4D indices and run from $0$ to $3$, $g_{44} \equiv \varepsilon\, \phi^2$ with $\varepsilon=\pm 1 $, $\phi_\alpha \equiv \partial \phi / \partial x^\alpha$, a star denotes $\partial / \partial x^4$, and a semicolon denotes the usual 4D covariant derivative. Obviously, the remaining five equations (a wave equation and four conservation laws) are automatically satisfied by any solution of the 5D vacuum equations (\[5DfeqsG\]). Thus, not only the matter but also its role in the determination of the geometry of the 4D spacetime can be considered to have a five-dimensional geometrical origin. This approach unifies the gravitational field with its source (not just with a particular field) within a purely 5D geometrical framework. This 5D version of general relativity is often referred to as induced matter gravity theory (IM gravity theory, for short). The IM theory has become the focus of a recent research field [@Overduin97]. The basic features of the theory have been explored by Wesson and others [@Leon93] – [@Wesson96a], whereas the implications for cosmology and astrophysics have been investigated by a number of researchers [@Wesson92b] – [@Liu96b]. For a fairly updated list of references on IM gravity theory and related issues we refer the reader to Overduin and Wesson [@Overduin97]. In general relativity, the causal structure of 4D spacetime has locally the same qualitative nature as the flat spacetime of special relativity: causality holds locally. The global question, however, is left open and significant differences can occur. On large scales, the violation of causality is not excluded. Actually, it has long been known that there are solutions to the Einstein field equations which possess causal anomalies in the form of closed timelike curves. The famous solution found by Gödel [@Godel49] in 1949 might not be the first, but it certainly is the best-known example of a cosmological model which makes it apparent that general relativity, as it is normally formulated, does not exclude the existence of closed timelike world lines, despite its Lorentzian character which leads to the local validity of the causality principle. Owing to its striking properties Gödel’s model has a well-recognized importance and has to a certain extent motivated the investigations on rotating cosmological Gödel-type models and on causal anomalies in the framework of general relativity [@Som68] – [@Krasinski98] and other theories of gravitation [@Vaidya84] – [@FonsecaReboucas98]. Two recent articles have been concerned with [*five-dimensional*]{} Gödel-type spacetimes. Firstly, in Ref.
[@ReboucasTeixeira98a] the main geometrical properties of five-dimensional Riemannian manifolds endowed with a 5D counterpart of the 4D Gödel-type metric of general relativity were investigated. Among several results, an irreducible set of isometrically nonequivalent 5D locally homogeneous Gödel-type metrics was exhibited. Therein it was also shown that, apart from the degenerated Gödel-type metric, in all classes of homogeneous Gödel-type geometries there is breakdown of causality. As no use of any particular field equations was made in this first paper, its results hold for any 5D Gödel-type manifolds regardless of the underlying 5D Kaluza-Klein gravity theory. In the second article [@ReboucasTeixeira98b] the classes of 5D Gödel-type spacetimes discussed in [@ReboucasTeixeira98a] were investigated from a more physical viewpoint. In particular, the question as to whether the induced matter theory of gravitation permits the family of noncausal solutions of Gödel-type metrics studied in [@ReboucasTeixeira98a] was examined. It was shown that the IM gravity theory excludes this class of 5D Gödel-type non-causal geometries as solutions to its field equations. In both articles [@ReboucasTeixeira98a; @ReboucasTeixeira98b] the 5D Gödel-type family of metrics discussed is the simplest 5D class of geometries for which the section $u = \mbox{const}$ ($u$ is the extra coordinate) is the 4D Gödel-type metric of general relativity. Actually the 5D Gödel-type line element of both papers does not depend on the fifth coordinate $u$, and therefore as regards the IM theory a radiation-like equation of state is an underlying assumption of both articles. However, it is well known [@Overduin97] that the dependence of the 5D metric on the extra coordinate is necessary to ensure that the 5D IM theory permits the induction of matter of a very general type in 4D. In this work, on the one hand, we shall examine the main geometrical properties of a class of [*generalized*]{} Gödel-type geometries in which the 5D metric depends on the fifth coordinate, thereby generalizing the results found in Ref. [@ReboucasTeixeira98a]. On the other hand, we shall also investigate the question as to whether the induced matter gravity theory, as formulated by Wesson and co-workers [@Wesson90; @WessonLeon92a], admits these generalized Gödel-type metrics as solutions to its field equations, thus also extending the investigations of Ref. [@ReboucasTeixeira98b]. The outline of this article is as follows. In the next section we present a summary of some important prerequisites for Section 3, where, using the equivalence problem techniques as formulated by Cartan [@Cartan], we derive the necessary and sufficient conditions for local homogeneity of this class of 5D generalized Gödel-type manifolds. In Section 3 we also exhibit an irreducible set of isometrically nonequivalent homogeneous generalized Gödel-type metrics. In Section 4 we discuss the integration of the Killing equations and present the Killing vector fields as well as the corresponding Lie algebra for all homogeneous generalized Gödel-type metrics. In the last section we examine whether the IM field equations permit solutions of this generalized Gödel-type class of geometries. The unique solution of this type is found therein. The question as to whether the IM version of general relativity rules out the existence of closed timelike curves of Gödel type is also discussed (Section 5) in connection with a recent paper by Romero [*et al.*]{} [@Romero96].
Prerequisites {#prereq} ============= The arbitrariness in the choice of coordinates in the metric theories of gravitation gives rise to the problem of deciding whether or not two manifolds whose metrics $g$ and $\tilde{g}$ are given explicitly in terms of coordinates, viz., $$ds^2 = g_{\mu \nu} \,dx^\mu \, dx^\nu \qquad \; \mbox{and} \qquad \; d\tilde{s}^2 = \tilde{g}_{\mu \nu}\,d\tilde{x}^\mu\,d\tilde{x}^\nu\:,$$ are locally isometric. This is the so-called equivalence problem (see Cartan [@Cartan] for the local equivalence of $n$-dimensional Riemannian manifolds, Karlhede [@Karlhede80] and MacCallum [@MacCallumSkea94] for the special case $n=4$ of general relativity). The Cartan solution [@Cartan] to the equivalence problem for Riemannian manifolds can be summarized as follows. Two $n$-dimensional Lorentzian Riemannian manifolds ${\cal M}_n$ and $\widetilde{\cal M}_n$ are locally equivalent if there exist coordinate and generalized $n$-dimensional Lorentz transformations such that the following [*algebraic*]{} equations relating the frame components of the curvature tensor and their covariant derivatives, $$\label{eqvcond} R^{A}_{\ BCD} = \tilde{R}^{A}_{\ BCD}\,, \quad R^{A}_{\ BCD;M_{1}} = \tilde{R}^{A}_{\ BCD;M_{1}}\,, \quad \ldots \,, \quad R^{A}_{\ BCD;M_{1}\ldots M_{p+1}} = \tilde{R}^{A}_{\ BCD;M_{1}\ldots M_{p+1}}\:,$$ are compatible as [*algebraic*]{} equations in $\left( x^{\mu}, \xi^{A} \right)$. Here and in what follows we use a semicolon to denote covariant derivatives. Note that $x^{\mu}$ are coordinates on the manifold ${\cal M}_n$ while $ \xi^{A}$ parametrize the group of allowed frame transformations \[$n$-dimensional generalized Lorentz group usually denoted [@HawkingEllis73] by $O(n-1, 1)\,$\]. Reciprocally, equations (\[eqvcond\]) imply local equivalence between the $n$-dimensional manifolds ${\cal M}_n$ and $\widetilde{\cal M}_n$. In practice, a fixed frame is chosen to perform the calculations so that only coordinates appear in the components of the curvature tensor, i.e. there is no explicit dependence on the parameters $\xi^{A}$ of the generalized Lorentz group. Another important practical point to be considered, when one wishes to test the local equivalence of two Riemannian manifolds, is that before attempting to solve eqs. (\[eqvcond\]) one can extract and compare partial pieces of information at each step of differentiation, such as the number $\{t_{0},t_1, \dots ,t_{p}\}$ of functionally independent functions of the coordinates $x^\mu$ contained in the corresponding set $$\label{CartanScl} I_{p} = \{ R^{A}_{\ BCD} \,, \,R^{A}_{\ BCD;M_{1}} \,, \, R^{A}_{\ BCD;M_1 M_2}\,,\,\ldots,\,R^{A}_{\ BCD;M_1 M_2\ldots M_{p}}\}\,,$$ and the isotropy subgroup $\{H_{0}, H_1, \ldots ,H_{p}\}$ of the symmetry group $G_r$ under which the corresponding set $I_p$ is invariant. They must be the same for each step $q= 0, 1, \cdots ,p$ if the manifolds are locally equivalent. In practice it is also important to note that in calculating the curvature and its covariant derivatives, in a chosen frame, one can stop as soon as one reaches a step at which the $p^{th}$ derivatives (say) are algebraically expressible in terms of the previous ones, and the residual isotropy group (residual frame freedom) at that step is the same as the isotropy group of the previous step, i.e. $H_p = H_{(p-1)}$. In this case further differentiation will not yield any new piece of information. Actually, if $H_p = H_{(p-1)}$ and, in a given frame, the $p^{th}$ derivative is expressible in terms of its predecessors, for any $q > p$ the $q^{th}$ derivatives can all be expressed in terms of the $0^{th}$, $1^{st}$, $\cdots$, $(p-1)^{th}$ derivatives [@Cartan; @MacCallumSkea94].
Since there are $t_p$ essential coordinates, in 5D clearly $5-t_p$ are ignorable, so the isotropy group will have dimension $s = \mbox{dim}\,( H_p )$, and the group of isometries of the metric will have dimension $r$ given by (see Cartan [@Cartan]) $$r = s + 5 - t_p \:, \label{gdim}$$ acting on an orbit with dimension $$d = r - s = 5 - t_p \:. \label{ddim}$$ Homogeneity and Nonequivalent Metrics {#homoconds} ===================================== The line element of the five-dimensional [*generalized*]{} Gödel-type manifolds ${\cal M}_5$ we are concerned with is given by $$\label{ds2a} d\hat{s}^{2} = dt^2 + 2\,H(x)\, dt\,dy - dx^2 - G(x)\,dy^2 - \widetilde{F}^2(\tilde{u})\,(d\tilde{z}^2 + d\tilde{u}^2) \:,$$ where $H(x)$, $G(x)$ and $\widetilde{F}(\tilde{u})$ are arbitrary real functions. By a suitable choice of coordinates the line element (\[ds2a\]) can be brought into the form $$\label{ds2} d\hat{s}^{2} = [\,dt + H(x)\,dy\,]^2 - dx^2 - D^2(x)\,dy^2 - F^2(u)\,\,dz^2 - du^2 \:,$$ where $D^2(x) = G + H^2$ and $u$ clearly is a new fifth coordinate. At an arbitrary point of ${\cal M}_5$ one can choose the following set of linearly independent one-forms $\widehat{\Theta}^A$: $$\label{lorpen} \widehat{\Theta}^{0} = dt + H(x)\,dy\:, \: \quad \widehat{\Theta}^{1} = dx\:, \: \quad \widehat{\Theta}^{2} = D(x)\,dy\:, \:\quad \widehat{\Theta}^{3} = F(u)\, dz \:, \: \quad \widehat{\Theta}^{4} = du \:,$$ such that the Gödel-type line element (\[ds2\]) can be written as $$\label{ds2f} d\hat{s}^2 = \widehat{\eta}^{}_{AB} \: \widehat{\Theta}^A \,\, \widehat{\Theta}^B = (\widehat{\Theta}^0)^2 - (\widehat{\Theta}^1)^2 - (\widehat{\Theta}^2)^2 - (\widehat{\Theta}^3)^2 - (\widehat{\Theta}^4)^2\:.$$ Here and in what follows capital letters are 5D Lorentz frame indices and run from 0 to 4; they are raised and lowered with Lorentz matrices $\widehat{\eta}^{AB} = \widehat{\eta}^{}_{AB} = \mbox{diag} (+1, -1, -1, -1, -1)$, respectively. Using as input the one-forms (\[lorpen\]) and the Lorentz frame (\[ds2f\]), the computer algebra package [classi]{} [@MacCallumSkea94; @Aman87], e.g., gives the following nonvanishing Lorentz frame components $\widehat{R}_{ABCD}$ of the curvature: $$\begin{aligned} \widehat{R}_{0101} &=& \widehat{R}_{0202}=- \frac{1}{4} \, \left( \frac{H'}{D}\, \right)^2\:, \label{rie1st} \\ \widehat{R}_{0112} & =& \frac{1}{2}\, \left(\frac{H'}{D}\,\right)' \:, \label{rie2nd} \\ \widehat{R}_{1212} &=& \frac{D''}{D}-\frac{3}{4}\, \left( \frac{H'}{D}\,\right)^2 \label{rie3rd}\:, \\ \widehat{R}_{3434} &=& \frac{\ddot{F}}{F} \label{rielast} \;\,,\end{aligned}$$ where the prime and the dot denote, respectively, derivative with respect to $x$ and $u$. For 5D (local) homogeneity from eq. (\[ddim\]) one must have $t_q=0$ for $q=0, 1, \cdots\, p$, that is, the number of functionally independent functions of the coordinates $x^\mu$ in the set $I_p$ must be zero. Therefore, from eqs. (\[rie1st\]) – (\[rielast\]) we conclude that for 5D homogeneity it is necessary that $$\begin{aligned} \frac{H'}{D} &=&\mbox{const} \equiv -\,2\,\omega \label{cond1} \:, \\ \frac{D''}{D}&=&\mbox{const} \equiv m^2 \label{cond2} \:, \\ \frac{\ddot{F}}{F}&=&\mbox{const} \equiv k \:. \label{cond3}\end{aligned}$$ The above necessary conditions are also sufficient for 5D local homogeneity. 
Indeed, under these conditions the nonvanishing frame components of the curvature reduce to $$\begin{aligned} \widehat{R}_{0101} &=& \widehat{R}_{0202}=- \omega^2 \label{rieh1st} \:, \\ \widehat{R}_{1212} &=& m^2 - 3\,\omega^2 \label{rieh2nd} \:, \\ \widehat{R}_{3434} &=& k \label{riehlast} \:.\end{aligned}$$ Following Cartan’s method for the local equivalence, we calculate the first covariant derivative of the Riemann tensor. One obtains the following non-null covariant derivatives of the curvature: $$\label{drieh} \widehat{R}_{0112;1} = \widehat{R}_{0212;2}= \omega\, (m^2 - 4\,\omega^2) \:.$$ Clearly, regardless of the value of the constant $k\,$, the first covariant derivative of the curvature is algebraically expressible in terms of the Riemann tensor. Moreover, the number of functionally independent functions of the coordinates $x^\mu$ among the components of the curvature and its first covariant derivative is zero ($t_0=t_1=0$). As far as the dimension of the residual isotropy group is concerned we distinguish three different classes of locally homogeneous 5D generalized Gödel-type curved manifolds, according to the relevant parameters $m^2$, $\omega$ and $k\,$, namely [@foot1] 1. $\mbox{dim}\,(H_0) = \mbox{dim}\, (H_1)= 2\,$ when 1. $\,\omega \not=0\,$, any real $k\,$, $\,m^2 \not=4\,\omega^2\,$ ; 2. $\,\omega =0\,$, $k\not= 0\,$, $\,m^2 \not=0\,$ ; 2. $\,\mbox{dim}\,(H_0) = \mbox{dim}\, (H_1)= 4\,$ when 1. $\,\omega \not=0\,$, any real $k\,$, $\,m^2=4\,\omega^2\,$ ; 2. $\,\omega =0\,$, $k=0\,$, $\,m^2 \not=0\,$ ; 3. $\,\omega =0\,$, $k\not=0\,$, $\,m^2=0\,$ ; 3. $\,\mbox{dim}\,(H_0) = \mbox{dim}\, (H_1)= 10\,$ when $\omega = k = m^2 = 0\,$. Thus, from eqs. (\[gdim\]) and (\[ddim\]) one finds that the locally homogeneous 5D generalized Gödel-type manifolds admit a (local) $G_r$, with either $r =7$, $r=9\,$, or $r=15$ acting on an orbit of dimension $d = 5$, that is on the manifold ${\cal M}_5$. The above results can be collected together in the following theorems: \[TheoHom\] The necessary and sufficient conditions for a five-dimensional generalized Gödel-type manifold to be locally homogeneous are those given by equations (\[cond1\]) – (\[cond3\]). \[EquivTheo\] The five-dimensional homogeneous generalized Gödel-type manifolds are locally characterized by three independent real parameters $\omega$, $k$ and $m^2\,$: identical triads ($\omega, k,\, m^2$) specify locally equivalent manifolds. \[GroupTheo\] The five-dimensional locally homogeneous generalized Gödel-type manifolds admit group of isometry $G_r$ with 1. $r=7\:$ if either of the above conditions (1.a) and (1.b) is fulfilled; 2. $r=9\:$ if one of the above set of conditions (2.a), (2.b) and (2.c) is fulfilled; 3. $r=15\;$ if the above condition (3) is satisfied. We shall now focus our attention on the irreducible set of isometrically nonequivalent homogeneous generalized Gödel-type metrics. These nonequivalent classes of metrics can be obtained by a similar procedure to that used by Rebouças and Tiomno [@Reboucas83], namely by integrating equations (\[cond1\]) – (\[cond3\]), and eliminating through coordinate transformations the non-essential integration constants taking into account the relevant parameters according to the above theorem \[EquivTheo\]. For the sake of brevity, however, we shall only present the irreducible classes without going into details of calculations. It turns out that one ought to distinguish six classes of metrics according to: [**Class I**]{} : $\,m^2 > 0\,$, any real $k\,$, $\,\omega \not=0 $. 
The line element for this class of homogeneous generalized Gödel-type manifolds can always be brought \[in cylindrical coordinates $(r, \phi, z)$\] into the form $$\label{ds2c} d\hat{s}^{2}=[\,dt+H(r)\, d\phi\,]^{2} -D^{2}(r)\, d\phi^{2} -dr^{2} -F^2(u)\,dz^{2} - du^2$$ with the metric functions given by $$\begin{aligned} H(r) &=&\frac{2\,\omega}{m^{2}}\: [1 - \cosh\,(mr)]\;, \label{Hh} \\ D(r) &=& m^{-1}\, \sinh\,(mr) \label{Dh} \;,\end{aligned}$$ $$\begin{aligned} \label{Ffun} F\,(u) = \left\{ \begin{array} {l@{\qquad \mbox{if} \qquad}l} \alpha^{-1}\, \sin\, (\alpha\, u ) & k = - \alpha^2 < 0 \;, \\ u & k = 0 \;, \\ \alpha^{-1}\, \sinh\, (\alpha\,u) & k = \alpha^2 > 0 \;. \end{array} \right.\end{aligned}$$ According to theorem \[GroupTheo\] the possible isometry groups for this class are either $G_7$ (for $m^2 \not= 4\,\omega^2$) or $G_9$ (when $m^2 = 4\,\omega^2$), irrespective of the value of $k\,$. [**Class II**]{} : $\,m^2 = 0\,$, any real $k\,$, $\,\omega \not=0 $. The line element for this class can be brought into the form (\[ds2c\]), with the metric function $F(u)$ given by (\[Ffun\]), but now the functions $H(r)$ and $D(r)$ are given by $$\label{DHsr} H(r) = - \,\omega\, r^{2} \: \qquad \mbox{and} \: \qquad D(r) = r \:.$$ For this class from theorem \[GroupTheo\] there is a group $G_7$ of isometries, regardless of the value of $k$. [**Class III**]{} : $\,m^{2} \equiv - \mu^{2} < 0\,$, any real $k\,$, $\,\omega \not=0 $. Similarly for this class the line element reduces to  (\[ds2c\]) with $F(u)$ given by (\[Ffun\]) and $$\begin{aligned} H(r) &=& \frac{2\,\omega}{\mu^{2}} \:[\cos\,(\mu r) - 1 ] \;, \label{Hc} \\ D(r) &=& \mu^{-1}\, \sin\,(\mu r)\;. \label{Dc} \end{aligned}$$ From theorem \[GroupTheo\], regardless the value of $k$ for this class there is a group $G_7$ of isometries. [**Class IV**]{} : $\,m^{2} \not= 0\,$, any real $k\,$, and $\,\omega = 0$. We shall refer to this class as degenerated Gödel-type manifolds, since the cross term in the line element, related to the rotation $\omega$ in 4D Gödel model, vanishes. By a trivial coordinate transformation one can make $H = 0$ with $D(r)$ given, respectively, by (\[Dh\]) or (\[Dc\]) depending on whether $m^2>0$ or $m^{2} \equiv - \mu^{2} < 0$. The function $F(u)$ depends on the sign of $k$ and is again given by (\[Ffun\]). For this class according to theorem \[GroupTheo\] one may have either a $G_7$ for $k\not=0$, or a $G_9$ for $k=0\,$. [**Class V**]{} : $\,m^{2} = 0\,$, $k \not=0 \,$, and $\,\omega = 0$. By a trivial coordinate transformation one can make $H=0\,$, $D = r\,$ and $F(u) = \alpha^{-1}\,\sin\, (\alpha\,u)\,$ or $\,F(u) = \alpha^{-1}\sinh\, (\alpha\,u) \,$ depending on whether $\,k<0\,$ or $\,k>0\,$, respectively. From theorem \[GroupTheo\] there is a group $G_9$ of isometries. [**Class VI**]{} : $\,m^{2} = 0\,$, $k=0\,$, and $\,\omega = 0$. From (\[rieh1st\]) – (\[riehlast\]) this corresponds to the 5D flat manifold. Therefore, one can make $H=0\,$, $D(r)= r\,$ and $F(u) = u \,$. Theorem \[GroupTheo\] ensures that there is a group $G_{15}$ of isometries. 
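As a consistency check, the metric functions of each class can be verified symbolically against the homogeneity conditions (\[cond1\])–(\[cond3\]) and the constant frame components (\[rieh1st\])–(\[riehlast\]). A minimal SymPy sketch for Class I, assuming the $k=\alpha^{2}>0$ branch of (\[Ffun\]) (the other branches and classes can be treated in the same way):

```python
import sympy as sp

r, u = sp.symbols('r u', real=True)
m, omega, a = sp.symbols('m omega a', positive=True)   # a stands for the alpha of (Ffun), k = a**2 > 0

# Class I metric functions, eqs. (Hh), (Dh) and the k > 0 branch of (Ffun)
H = 2*omega/m**2*(1 - sp.cosh(m*r))
D = sp.sinh(m*r)/m
F = sp.sinh(a*u)/a

Hp_over_D = sp.simplify(sp.diff(H, r)/D)

print(Hp_over_D)                                  # (cond1): expected -2*omega
print(sp.simplify(sp.diff(D, r, 2)/D))            # (cond2): expected m**2
print(sp.simplify(sp.diff(F, u, 2)/F))            # (cond3): expected a**2, i.e. k = R_3434

# frame curvature components (rie1st)-(rie3rd) evaluated on this class
print(sp.simplify(-sp.Rational(1, 4)*Hp_over_D**2))                        # R_0101 = R_0202 -> -omega**2
print(sp.simplify(sp.Rational(1, 2)*sp.diff(sp.diff(H, r)/D, r)))          # R_0112 -> 0
print(sp.simplify(sp.diff(D, r, 2)/D - sp.Rational(3, 4)*Hp_over_D**2))    # R_1212 -> m**2 - 3*omega**2
```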
Killing Vector Fields {#Killing} ===================== In this section we shall present the infinitesimal generators of isometries of the 5D homogeneous generalized Gödel-type manifolds, whose line element (\[ds2c\]) can be brought into the Lorentzian form (\[ds2f\]) with $\widehat{\Theta}^A$ given by $$\label{lorpen1} \widehat{\Theta}^{0} = dt + H(r)\,d\phi\:, \: \quad \widehat{\Theta}^{1} = dr\:, \: \quad \widehat{\Theta}^{2} = D(r)\,d\phi\:, \:\quad \widehat{\Theta}^{3} = F(u)\,dz \:, \: \quad \widehat{\Theta}^{4} = du \:,$$ where the functions $H(r)\,$, $D(r)$ and $F(u)$ depend upon the essential parameters $m^2\,$, $k\,$ and $\omega\,$ according to the above classes of locally homogeneous manifolds. Denoting the coordinate components of a generic Killing vector field $\widehat{K}$ by $\widehat{K}^{u} \equiv (Q, R, S,\bar{Z},U)$, where $Q, R, S, \bar{Z} $ and $U$ are functions of all coordinates $t,r,\phi,z$, $u$, then the fifteen Killing equations $$\label{killeqs} \widehat{K}_{(A;B)} \equiv \widehat{K}_{A;B} + \widehat{K}_{B;A} = 0$$ can be written in the Lorentz frame (\[ds2f\]) – (\[lorpen1\]) as $$\begin{aligned} &T_t = 0 \:, \qquad T_{u} - U_t = 0 \label{um} \:, \\ &R_r = 0 \:, \qquad U_r + R_{u} = 0 \label{dois} \:, \\ &U_{u} = 0 \:, \label{tres} \\ &D\,(T_r - R_t) - H_r P = 0 \label{quatro} \:, \\ & D P_{u} + U_{\phi} - H U_t = 0 \label{cinco} \:, \\ &T_{\phi} + H_r R - D P_t = 0 \label{seis} \:, \\ &R_{\phi} - H R_t - D_r P + D P_r = 0 \label{sete} \:, \\ &P_{\phi} - H P_t + D_r R = 0 \label{oito} \:, \\ & T_z - F\,Z_t = 0 \label{nove} \:, \\ & F\, Z_r + R_z = 0 \label{dez} \:, \\ & Z_z + U\, F_u = 0 \label{onze} \:, \\ &U_z + F\, Z_{u} - Z\, F_u = 0 \label{doze} \:, \\ & D P_z + F\,( Z_{\phi} - H Z_t) = 0 \label{treze} \:,\end{aligned}$$ where the subscripts denote partial derivatives, and where we have made $$\label{pencor} T \equiv H\,S + Q, \qquad P \equiv D\,S, \qquad \mbox{and} \qquad Z \equiv F\, \bar{Z}$$ to make easier the comparison and the use of the results obtained in [@Teixeira85]. To this end we note that with the changes $u \rightarrow z$ and $U \rightarrow Z$ the above equations (\[um\]) – (\[oito\]) are formally identical to the Killing equations (4) to (11) of [@Teixeira85]. However, in the equations (\[um\]) – (\[oito\]) the functions $T, R, P, U$ depend additionally on the fifth coordinate $u$. Taking into account this similitude, the integration of the Killing equations (\[um\]) – (\[treze\]) can be obtained in two steps as follows. First, by analogy with (4) to (11) of Ref. [@Teixeira85] one integrates (\[um\]) – (\[oito\]), but at this step instead of the integration constants one has integration functions of the fifth coordinate $u$. Second, one uses the remaining eqs. (\[nove\]) – (\[treze\]) to achieve explicit forms for these integration functions and to obtain the last component $U$ of the generic Killing vector $K$. We have used the above two-steps procedure to integrate the Killing equations (\[um\]) – (\[treze\]) for all class of homogeneous generalized Gödel-type manifolds. However, for the sake of brevity, we shall only present the Killing vector fields and the corresponding Lie algebras without going into details of calculations, which can be verified by using, for example, the computer algebra program [killnf]{}, written in [classi]{} by [Å]{}man [@Aman87]. [**Class I**]{} : $\,m^2 > 0\,$, any real $k\,$, $\,\omega \not=0 $. 
In the integration of the Killing equation for this general class one is led to distinguish two different subclasses of solutions depending on whether $m^2 \not= 4 \,\omega^2$ or $m^2 =4 \,\omega^2$. We shall refer to these subclasses as classes Ia and Ib, respectively. [**Class Ia**]{} : $\,m^2 >0\,$, any real $k\,$, $\,m^2\not=4\,\omega^2$. In the coordinate basis in which (\[ds2c\]) is given, a set of linearly independent Killing vector fields $K_N$ ($N$ is an enumerating index) is given by $$\begin{aligned} K_1 &=&\partial_t \:, \qquad \quad K_2 = \frac{2\,\omega}{m}\, \,\partial_t - m \,\partial_{\phi} \:, \label{KIa1} \\ K_3 &=& -\,\frac{H}{D}\, \sin\phi\, \,\partial_t +\cos\phi\, \,\partial_r -\,\frac{D_r}{D}\, \sin\phi\, \,\partial_{\phi} \:, \label{KIa2} \\ K_4 &=& -\,\frac{H}{D}\, \cos\phi\, \,\partial_t -\sin\phi\, \,\partial_r -\,\frac{D_r}{D}\, \cos\phi\, \,\partial_{\phi} \:, \label{KIa3} \\ K_5 &=& \sin z\, \,\partial_u +\frac{F_u}{F}\,\cos z\, \,\partial_z \:, \label{KIa4} \\ K_6 &=& \cos z\, \,\partial_u - \frac{F_u}{F}\,\sin z\, \,\partial_{z} \:, \label{KIa5} \\ K_7 &=& \partial_{z} \;. \label{KIa6}\end{aligned}$$ The Lie algebra has the following nonvanishing commutators: $$\begin{aligned} & \left[ K_2, K_3 \right] = - m\, K_4 \:, \qquad \left[ K_2, K_4 \right] = m\, K_3 \:, \qquad \left[ K_3, K_4 \right] = m\, K_2 \:, \\ &\left[ K_5, K_6 \right] = -\, k \, K_7 \:, \qquad \left[ K_5, K_7 \right] = - K_6 \:, \qquad \left[ K_6, K_7 \right] = K_5 \:.\end{aligned}$$ Therefore the corresponding algebra is ${\cal L}_{Ia} = {\cal L}_k \oplus \tau \oplus so\,(2,1)$. Here and in what follows the symbols $\oplus\,$ and $\,{\mbox{$\subset \!\!\!\!\!\!+$}}$ denote direct and semi-direct sums of sub-algebras, and the sub-algebra ${\cal L}_k$ is $so\,(3)$ for $k < 0 \,$, $so\,(2,1)$ for $k > 0 \,$, and $t^2 \,{\mbox{$\subset \!\!\!\!\!\!+$}}\, so\,(2)\,$ for $k=0\,$. For the present class ${\cal L}_k$ is generated by $K_5, K_6\,$ and $K_7$, the symbol $\tau$ is associated to the time translation $K_1$, and finally the infinitesimal generators of the sub-algebra $so\,(2,1)$ are $K_2,\, K_3\,$ and $K_4$. [**Class Ib**]{} : $\,m^2 =4\, \omega^2\,$, any real $k\,$, $\,\omega\not=0$.
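Before the class Ib generators are listed, the following symbolic sketch (an illustration only, not the [killnf]{} verification mentioned above) cross-checks one of the class Ia commutators just quoted, namely $\left[K_3,K_4\right]=m\,K_2$; only the $(t,r,\phi)$ components are needed, since $K_2$, $K_3$ and $K_4$ have no $z$ or $u$ components.

```python
import sympy as sp

t, r, phi, omega, m = sp.symbols('t r phi omega m', positive=True)
coords = (t, r, phi)

# class I metric functions
H = (2*omega/m**2)*(1 - sp.cosh(m*r))
D = sp.sinh(m*r)/m

# (K^t, K^r, K^phi) components of the class Ia fields quoted above
K2 = (2*omega/m, sp.Integer(0), -m)
K3 = (-(H/D)*sp.sin(phi),  sp.cos(phi), -(sp.diff(D, r)/D)*sp.sin(phi))
K4 = (-(H/D)*sp.cos(phi), -sp.sin(phi), -(sp.diff(D, r)/D)*sp.cos(phi))

def commutator(X, Y):
    # [X, Y]^mu = X^nu d_nu Y^mu - Y^nu d_nu X^mu, restricted to the (t, r, phi) block
    return [sum(X[n]*sp.diff(Y[mu], coords[n]) - Y[n]*sp.diff(X[mu], coords[n])
                for n in range(3)) for mu in range(3)]

lhs = commutator(K3, K4)
rhs = [m*c for c in K2]
residual = [sp.simplify((a - b).rewrite(sp.exp)) for a, b in zip(lhs, rhs)]
print(residual)    # expected output: [0, 0, 0]
```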
For this class the Killing vector fields are $$\begin{aligned} K_1 &=&\partial_t \:, \quad \qquad K_2 = \partial_t - m \,\partial_{\phi} \:, \label{KIb1} \\ K_3 &=& -\,\frac{H}{D}\, \sin\phi\, \,\partial_t +\cos\phi\, \,\partial_r -\,\frac{D_r}{D}\, \sin\phi\, \,\partial_{\phi} \:, \label{KIb2} \\ K_4 &=& -\,\frac{H}{D}\, \cos\phi\, \,\partial_t -\sin\phi\, \,\partial_r -\,\frac{D_r}{D}\, \cos\phi\, \,\partial_{\phi} \:, \label{KIb3} \\ K_5 &=&-\,\frac{H}{D}\,\cos(mt+\phi)\,\,\partial_t +\sin(mt+\phi)\,\,\partial_r +\,\frac{1}{D}\, \cos(mt+\phi)\,\,\partial_{\phi} \:, \label{KIb4} \\ K_6 &=&-\,\frac{H}{D}\,\sin(mt+\phi)\,\,\partial_t -\cos(mt+\phi)\,\,\partial_r +\,\frac{1}{D}\, \sin(mt+\phi)\,\,\partial_{\phi} \:, \label{KIb5} \\ K_7 &=& \sin z\, \,\partial_u +\frac{F_u}{F}\,\cos z\, \,\partial_z \:, \label{KIb6} \\ K_8 &=& \cos z\, \,\partial_u - \frac{F_u}{F}\,\sin z\, \,\partial_{z} \:, \label{KIb7} \\ K_9 &=& \partial_{z} \:, \label{KIb8} \end{aligned}$$ whose Lie algebra is given by $$\begin{aligned} &\left[ K_1, K_5 \right] = -m\, K_6 \:, \qquad \left[ K_1, K_6 \right] = m\, K_5 \:, \qquad \left[ K_2, K_3 \right] = - m\, K_4 \:, \\ &\left[ K_2, K_4 \right] = m\, K_3 \:, \qquad \left[ K_3, K_4 \right] = m\, K_2 \:, \qquad \left[ K_5, K_6 \right] = m\, K_1 \:, \\ &\left[ K_7, K_8 \right] = -\, k \, K_9 \:, \qquad \left[ K_7, K_9 \right] = - K_8 \:, \qquad \left[ K_8, K_9 \right] = K_7 \:. \end{aligned}$$ So, the corresponding algebra for this case is ${\cal L}_{Ib} = {\cal L}_k \oplus so\,(2,1) \oplus so\,(2,1)$. As in the previous class the sub-algebra ${\cal L}_k$ depends on the sign of $k$, and here is generated by $K_7,K_8$ and $K_9$. The two sub-algebras $so\,(2,1)$ are generated by the Killing vector fields $K_1, K_5, K_6$ and $K_2, K_3, K_4$. [**Class II**]{} : $\,m^2 = 0\,$, any real $k\,$, $\,\omega \not=0$. For this class the Killing vector fields turns out to be the following: $$\begin{aligned} K_1 &=&\partial_t \:, \quad \qquad K_2 = \partial_{\phi} \:, \label{KII1} \\ K_3 &=& -\,\omega\,r\, \sin\phi\, \,\partial_t -\cos\phi\, \,\partial_r +\,\frac{1}{r}\, \sin\phi\, \,\partial_{\phi} \:, \label{KII2} \\ K_4 &=& -\,\omega\,r\, \cos\phi\, \,\partial_t +\sin\phi\, \,\partial_r +\,\frac{1}{r}\, \cos\phi\, \,\partial_{\phi} \:, \label{KII3} \\ K_5 &=& \sin z\, \,\partial_u +\frac{F_u}{F}\,\cos z\, \,\partial_z \:, \label{KII4} \\ K_6 &=& \cos z\, \,\partial_u - \frac{F_u}{F}\,\sin z\, \,\partial_{z} \:, \label{KII5} \\ K_7 &=& \partial_{z} \:. \label{KII6}\end{aligned}$$ The Lie algebra has the following nonvanishing commutators: $$\begin{aligned} &\left[ K_2, K_3 \right] = K_4 \:, \quad \left[ K_2, K_4 \right] = - K_3 \:, \quad \left[ K_3, K_4 \right] = 2\, \omega\, K_1 \:, \\ &\left[ K_5, K_6 \right] = -\, k \, K_7 \:, \qquad \left[ K_5, K_7 \right] = - K_6 \:, \qquad \left[ K_6, K_7 \right] = K_5 \:. \end{aligned}$$ Therefore, the corresponding algebra for this case is ${\cal L}_{II} = {\cal L}_k \oplus {\cal L}_4$. The sub-algebra ${\cal L}_4$ is generated by $K_1, K_2, K_3$ and $K_4$. This algebra ${\cal L}_4$ is soluble and does not contain abelian 3D sub-algebras; it is classified as type $III$ with $q=0$ by Petrov [@Petrov69]. The sub-algebra ${\cal L}_k$ is the same of the previous classes and is generated by $K_5, K_6$ and $K_7$. [**Class III**]{} : $\,m^{2} \equiv - \mu^{2} < 0\,$, any real $k\,$, $\,\omega \not=0 $. 
For this class the set of linearly independent Killling vector fields we have found is given by $$\begin{aligned} K_1 &=&\partial_t \:, \quad \: K_2 = \frac{2\,\omega}{\mu} \, \partial_t + \mu\, \partial_{\phi} \:, \label{KIII1} \\ K_3 &=& -\,\frac{H}{D}\, \sin\phi\, \,\partial_t +\cos\phi\, \,\partial_r -\,\frac{D_r}{D}\, \sin\phi\, \,\partial_{\phi} \:, \label{KIII2} \\ K_4 &=& -\,\frac{H}{D}\, \cos\phi\, \,\partial_t -\sin\phi\, \,\partial_r -\,\frac{D_r}{D}\, \cos\phi\, \,\partial_{\phi} \:, \label{KIII3} \\ K_5 &=& \sin z\, \,\partial_u +\frac{F_u}{F}\,\cos z\, \,\partial_z \:, \label{KIII4} \\ K_6 &=& \cos z\, \,\partial_u - \frac{F_u}{F}\,\sin z\, \,\partial_{z} \:, \label{KIII5} \\ K_7 &=& \partial_{z} \:. \label{KIII6}\end{aligned}$$ The Lie algebra has the following nonvanishing commutators: $$\begin{aligned} &\left[ K_2, K_3 \right] = \mu \, K_4 \:, \qquad \left[ K_2, K_4 \right] = -\mu \, K_3 \:, \qquad \left[ K_3, K_4 \right] = \mu \, K_2 \:, \\ &\left[ K_5, K_6 \right] = -\, k \, K_7 \:, \qquad \left[ K_5, K_7 \right] = - K_6 \:, \qquad \left[ K_6, K_7 \right] = K_5 \:. \end{aligned}$$ Thus, the corresponding algebra for this case is ${\cal L}_{III} = {\cal L}_k \oplus \tau \oplus so\,(3)$. Here $\tau$ is associated to the Killing vector field $K_1$, whereas to the sub-algebra $so\,(3)$ correspond $K_2$, $K_3$ and $K_4$. Again ${\cal L}_k$ is generated by $K_5, K_6$ and $K_7$. [**Class IV**]{} : $\,m^2 \not= 0\,$, any real $k\,$, $\,\omega=0 $. In the integration of the Killing equation for this general class one is led to distinguish two different subclasses according to $k\not=0$ or $k=0$. We shall denote these subclasses as classes IVa and IVb, respectively. [**Class IVa**]{} : $\,m^{2} \not= 0\,$, $k\not=0$, $\,\omega = 0$. This class corresponds to the so-called degenerated Gödel-type manifolds. One obtains for this class the following Killing vector fields: $$\begin{aligned} K_1 &=&\partial_t \:, \qquad \quad K_2 = \partial_\phi \:, \label{KIVa1} \\ K_3 &=& \cos\phi\, \,\partial_r- \,\frac{D_r}{D}\, \sin\phi\, \,\partial_{\phi} \:, \label{KIVa2} \\ K_4 &=& -\sin\phi\, \,\partial_r -\,\frac{D_r}{D}\, \cos\phi\, \,\partial_{\phi} \:, \label{KIVa3} \\ K_5 &=& \sin z\, \,\partial_u +\frac{F_u}{F}\,\cos z\, \,\partial_z \:, \label{KIVa4} \\ K_6 &=& \cos z\, \,\partial_u - \frac{F_u}{F}\,\sin z\, \,\partial_{z} \:, \label{KIVa5} \\ K_7 &=& \partial_{z} \:, \label{KIVa6}\end{aligned}$$ where $D(r) = (1/m)\, \sinh mr$ for $m^2>0\,$, or $\,D(r) = (1/\mu)\, \sin \mu r$ for $m^2 \equiv - \mu^2 < 0 $, and the function $F(u)$ for $k\not=0$ is given by (\[Ffun\]). The Lie algebra has the following nonvanishing commutators: $$\begin{aligned} &\left[ K_2, K_3 \right] = K_4 \:, \qquad \left[ K_2, K_4 \right] = - K_3 \:, \qquad \left[ K_3, K_4 \right] = - m^2 \, K_2 \:, \\ &\left[ K_5, K_6 \right] = -\, k \, K_7 \:, \qquad \left[ K_5, K_7 \right] = - K_6 \:, \qquad \left[ K_6, K_7 \right] = K_5 \:,\end{aligned}$$ where one should substitute $-m^2$ by $\mu^2$ if $m^2 < 0 $. So, the corresponding Lie algebra is ${\cal L}_{IVa}={\cal L}_k \oplus \tau \oplus {\cal L}_m$, where ${\cal L}_m$ is $so\,(2,1)$ for $m^2>0$, and $so\,(3)$ for $m^2 = - \mu^2 <0$. The sub-algebra ${\cal L}_k$ (generated by $K_5, K_6$ and $K_7$) is $so\,(3)$ for $k < 0\,$, and $so\,(2,1)$ for $k > 0 \,$. Again $\tau$ is associated to the Killing vector field $K_1$. [**Class IVb**]{} : $\,m^{2} \not= 0\,$, $k=0$, $\,\omega = 0$. We shall refer to this class as doubly-degenerated Gödel-type manifolds. 
One obtains for this class the following Killing vector fields: $$\begin{aligned} K_1 &=&\partial_t \:, \qquad \quad K_2 = \partial_\phi \:, \label{KIVb1} \\ K_3 &=& \cos\phi\, \,\partial_r- \,\frac{D_r}{D}\, \sin\phi\, \,\partial_{\phi} \:, \label{KIVb2} \\ K_4 &=& -\sin\phi\, \,\partial_r -\,\frac{D_r}{D}\, \cos\phi\, \,\partial_{\phi} \:, \label{KIVb3} \\ K_5 &=& \sin z\, \,\partial_u +\frac{1}{u}\,\cos z\, \,\partial_z \:, \label{KIVb4} \\ K_6 &=& \cos z\, \,\partial_u - \frac{1}{u}\,\sin z\, \,\partial_{z} \:, \label{KIVb5} \\ K_7 &=& \partial_{z} \:, \label{KIVb6} \\ K_8 &=& u\,\sin z\, \,\partial_t + t \, \sin z\, \,\partial_u + \frac{1}{u}\,\,t\, \cos z \, \,\partial_z \:, \label{KIVb7} \\ K_9 &=& u\,\cos z\, \,\partial_t + t\,\cos z\, \,\partial_{u} - \frac{1}{u}\,\,t\, \sin z\, \,\partial_z \:, \label{KIVb8} \end{aligned}$$ where again $D(r) = (1/m)\, \sinh mr$ for $m^2>0\,$, or $\,D(r) = (1/\mu)\, \sin \mu r$ for $m^2 \equiv - \mu^2 < 0 $. The Lie algebra has the following nonvanishing commutators: $$\begin{aligned} &\left[ K_2, K_3 \right] = K_4 \:, \qquad \left[ K_2, K_4 \right] = - K_3 \:, \qquad \left[ K_3, K_4 \right] = - m^2 \, K_2 \:, \\ &\left[ K_5, K_7 \right] = - K_6 \:, \qquad \left[ K_6, K_7 \right] = K_5 \:, \qquad \left[ K_1, K_8 \right] = K_5 \:, \\ &\left[ K_1, K_9 \right] = K_6 \:, \qquad \left[ K_5, K_8 \right] = K_1 \:, \qquad \left[ K_6, K_9 \right] = K_1 \:, \\ &\left[ K_7, K_8 \right] = K_9 \:, \qquad \left[ K_7, K_9 \right] = - K_8 \:, \qquad \left[ K_8, K_9 \right] = - K_7 \:, \end{aligned}$$ where one should substitute $-m^2$ by $\mu^2$ if $m^2 < 0 $. So, the corresponding Lie algebra is ${\cal L}_{IVb} = t^3\, {\mbox{$\subset \!\!\!\!\!\!+$}}\, so\,(2,1) \oplus {\cal L}_m$, where ${\cal L}_m$ is generated by $K_2, K_3, K_4$, and is either $so\,(2,1)$ or $so\,(3)$ depending on whether $m^2>0$ or $m^2 = -\mu^2 <0$. The sub-algebra $t^3\,{\mbox{$\subset \!\!\!\!\!\!+$}}\, so\,(2,1)\,$ is generated by $K_1, K_5, K_6, K_7, K_8, K_9$. [**Class V**]{} : $\,m^{2} = 0\,$, $k\not=0$, $\,\omega = 0$. A set of linearly independent Killing vector field for this class is $$\begin{aligned} K_1 &=&\partial_t \:, \qquad \quad K_2 = \partial_\phi \:, \label{KV1} \\ K_3 &=& \cos\phi\, \,\partial_r- \,\frac{1}{r}\, \sin\phi\, \,\partial_{\phi} \:, \label{KV2} \\ K_4 &=& -\sin\phi\, \,\partial_r -\,\frac{1}{r}\, \cos\phi\, \,\partial_{\phi} \:, \label{KV3} \\ K_5 &=& \sin z\, \,\partial_u +\frac{F_u}{F}\,\cos z\, \,\partial_z \:, \label{KV4} \\ K_6 &=& \cos z\, \,\partial_u - \frac{F_u}{F}\,\sin z\, \,\partial_{z} \:, \label{KV5} \\ K_7 &=& \partial_{z} \:, \label{KV6} \\ K_8 &=& r\,\sin \phi\, \,\partial_t + t \, \sin\phi\, \,\partial_r + \frac{1}{r}\,\,t\, \cos \phi \, \,\partial_\phi \:, \label{KV7} \\ K_9 &=& r\,\cos \phi\, \,\partial_t + t\,\cos \phi\, \,\partial_r - \frac{1}{r}\,\,t\, \sin \phi\, \,\partial_\phi \:, \label{KV8} \end{aligned}$$ where $F(u)$ depends upon the sign of $k$ and is given by eq. (\[Ffun\]). 
The Lie algebra has the following nonvanishing commutators: $$\begin{aligned} &\left[ K_2, K_3 \right] = K_4 \:, \qquad \left[ K_2, K_4 \right] = - K_3 \:, \qquad \left[ K_5, K_6 \right] = -\, k \, K_7 \:, \\ &\left[ K_5, K_7 \right] = - K_6 \:, \qquad \left[ K_6, K_7 \right] = K_5 \:, \qquad \left[ K_1, K_8 \right] = - K_4 \:, \\ &\left[ K_1, K_9 \right] = K_3 \:, \qquad \left[ K_4, K_8 \right] = - K_1 \:, \qquad \left[ K_3, K_9 \right] = K_1 \:, \\ &\left[ K_2, K_8 \right] = K_9 \:, \qquad \left[ K_2, K_9 \right] = - K_8 \:, \qquad \left[ K_8, K_9 \right] = - K_2 \:. \end{aligned}$$ So, the corresponding Lie algebra is ${\cal L}_{V} = t^3\, {\mbox{$\subset \!\!\!\!\!\!+$}}\, so\,(2,1) \oplus {\cal L}_k$, where ${\cal L}_k$ is generated by $K_5, K_6, K_7$, and is either $so\,(2,1)$ or $so\,(3)$ depending on whether $k>0$ or $k<0$. The sub-algebra $t^3\,{\mbox{$\subset \!\!\!\!\!\!+$}}\, so\,(2,1)\,$ is generated by $K_1, K_2, K_3, K_4, K_8, K_9$. [**Class VI**]{} : $\,m^{2} = 0\,$, $k= 0$, $\,\omega = 0$. From (\[rieh1st\]) – (\[riehlast\]) this case corresponds to the 5D flat manifold whose Lie algebra is ${\cal L}_{VI} = so\,(4,1)\,$ since it clearly has the well known fifteen Killing vector fields, namely five translations, four spacetime rotations, and six space rotations. It is worth noting that none of the above Lie algebras is semi-simple, but some of their sub-algebras are. Besides, most of the simple sub-algebras are noncompact. The 3D sub-algebra $so\,(3)$ present in all classes is compact, though. The number of Killing vector fields we have found for each of the above six classes makes explicit that the 5D locally homogeneous generalized Gödel-type manifolds admit a group of isometry $G_7$ when ([*1a*]{}): $\: m^2 \not = 4\,\omega^2\,$, any real $k\,$, $\,\omega \not=0$, or when ([*1b*]{}): $\,m^2 \not=0$, $k\not=0$, $\omega=0\,$. Groups $G_9$ of isometry occur when ([*2a*]{}): $\:\,m^2 = 4\, \omega^2$, any real $k$, $\omega \not=0$, or ([*2b*]{}): $m^2 \not= 0$, $k = 0$, $\omega =0$, or when ([*2c*]{}): $m^2 = 0$, $k \not= 0$, $\omega =0$. Clearly when $\: m^2 = \omega = k = 0\,$ there is $G_{15}$. These possible groups are in agreement with theorem \[GroupTheo\] of the previous section. Actually the integration of the Killing equations constitutes a different way of deriving that theorem. Furthermore, these equations also show that the isotropy subgroup $H$ of $G_r$ is such that $\,\mbox{dim}\,(H) = 2\,$ when the above conditions ([*1a*]{}) and ([*1b*]{}) are satisfied, while the conditions ([*2a*]{}), ([*2b*]{}) and ([*2c*]{}) lead to $\,\mbox{dim}\,(H) = 4\,$, also in agreement with the previous section. Clearly $\,\mbox{dim}\,(H) = 10\,$ when $m^2 = \omega = k = 0$. Causal Anomalies and Final Remarks {#Anom} ================================== In this section we shall initially be concerned with the problem of causal anomalies in the generalized Gödel-type manifolds. Then we proceed by examining whether the IM gravity allows solutions of generalized Gödel-type metrics (\[ds2c\]). Finally, we conclude by addressing to the general question as to whether the IM gravity theory rules out the 4D noncausal Gödel-type solutions to Einstein’s equations of general relativity. In the first three of the six classes of homogeneous generalized Gödel-type manifolds we have discussed in Section \[homoconds\], there are closed timelike curves. 
Indeed, the analysis made in a previous paper [@ReboucasTeixeira98a] can be easily extended to the generalized 5D Gödel-type manifolds of the present article. To this end, we write the line element (\[ds2c\]) in the form $$\label{ds2cx} ds^2=dt^2 + 2\,H(r)\, dt\,d\phi -dr^2 -G(r)\,d\phi^2 - F^2(u)\,dz^2 - du^2\,,$$ where $G(r)= D^2 - H^2$ and $(r,\phi, z)$ are cylindrical coordinates. Now, the existence of closed timelike curves of the Gödel-type depends on the behavior of $G(r)$. Indeed, if $G(r) < 0$ for a certain range of $r$ ($r_1 < r < r_2$, say), Gödel’s circles [@Calvao88] $u,t,z,r =const$ are closed timelike curves. Since one can always make $H=0$ for the generalized Gödel-type manifolds of classes IV, V and VI, then $G(r) > 0$ for all $r>0$. Thus there are no closed timelike Gödel’s circles in these classes of manifolds. On the other hand, following the above-outlined reasoning it is easy to show (see [@ReboucasTeixeira98a] for details) that for each of the remaining three classes (Class I to Class III) one can always find a critical radius $r_c$ such that for all $r > r_c$ one has $G(r) < 0$, making clear that there are closed timelike curves in these families of homogeneous generalized Gödel-type manifolds. However, in what follows we shall show that these types of noncausal [*curved*]{} manifolds are not permitted in the context of the induced matter theory. In the Lorentz frame $\widehat{\Theta}^A$ given by (\[lorpen1\]) the nonvanishing frame components of the Einstein tensor $\widehat{G}_{AB}=\widehat{R}_{AB}-\frac{1}{2}\,R\,\widehat{\eta}_{AB}$ are $$\begin{aligned} \widehat{G}_{00} &=& - \,\frac{D''}{D} + \frac{3}{4}\, \left( \frac{H'}{D}\,\right)^2 - \frac{\ddot{F}}{F} \label{ein00} \;, \\ \widehat{G}_{02} &=& \frac{1}{2} \, \left( \frac{H'}{D}\, \right)' \;, \label{ein02} \\ \widehat{G}_{11} &=& \widehat {G}_{22}\; = \; \frac{1}{4} \, \left( \frac{H'}{D}\, \right)^2 + \frac{\ddot{F}}{F} \:, \label{ein11} \\ \widehat{G}_{33} &=& \widehat{G}_{44}\; = \; \frac{D''}{D} - \frac{1}{4}\, \left( \frac{H'}{D}\,\right)^2 \label{ein33} \;, \end{aligned}$$ where the prime and dot denote derivatives with respect to $r$ and $u$, respectively. The field equations (\[5DfeqsG\]) require that $\widehat{G}_{02}=0$, which in turn implies that $$\label{G02} \frac{H'}{D} = \mbox{const} \equiv - 2\,\omega\;.$$ Inserting (\[G02\]) into (\[ein11\]), (\[ein33\]) and (\[ein00\]) one easily finds that the IM field equations are fulfilled if and only if the independent parameters $\omega$, $k$ and $m^2$ \[see eqs. (\[cond1\]) – (\[cond2\])\] vanish identically, which leads to the only solution given by $$\label{sol} H = a \;, \qquad D = b\,r + c \;, \qquad \mbox{and} \qquad F = \beta \, u + \gamma,$$ where $a$, $b$, $c$, $\beta$, and $\gamma$ are arbitrary real constants. However, these constants have no physical meaning, and can be taken to be $a = c = \gamma = 0$ and $b=\beta= 1$ by a suitable choice of coordinates. Indeed, if one performs the coordinate transformations $$\begin{aligned} t & = & \bar{t} - \frac{a}{b}\,\, \bar{\phi}\:, \qquad \qquad r = \bar{r} - \frac{c}{b}\:, \label{tr} \\ \phi & = & \frac{\bar{\phi}}{b}\, \:, \; \quad z = \frac{\bar{z}}{\beta} \:, \qquad u = \bar{u}- \frac{\gamma}{\beta} \:, \label{ppz}\end{aligned}$$ the line element (\[ds2cx\]) becomes $$\label{ds2flat} d\hat{s}^2=d\bar{t}^2 -d\bar{r}^2 -\bar{r}^2 \,d\bar{\phi}^2 - d\bar{z}^2 -d\bar{u}^2\;,$$ in which we obviously have $\,G(\bar{r})= \bar{r}^2 >0\,$ for $\bar{r} \not=0$.
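As a purely numerical aside on the critical radius $r_c$ invoked above for the noncausal classes (illustrative only, and not a computation taken from Ref. [@ReboucasTeixeira98a]), the short script below locates the zero of $G(r)=D^2-H^2$ for a class I geometry with the hypothetical choice $m^2=2\,\omega^2$, the value of the original 4D Gödel solution, and compares it with the closed form $r_c=(2/m)\,{\rm arctanh}\left[m/(2\omega)\right]$ obtained by inserting the class I functions into $G(r_c)=0$:

```python
import numpy as np
from scipy.optimize import brentq

omega = 1.0
m = np.sqrt(2.0)*omega             # hypothetical Godel-like choice m^2 = 2 omega^2

def G(r):
    H = (2.0*omega/m**2)*(1.0 - np.cosh(m*r))
    D = np.sinh(m*r)/m
    return D**2 - H**2

r_c = brentq(G, 1e-6, 10.0)        # G > 0 near the origin and G < 0 at large r
print(r_c, 2.0/m*np.arctanh(m/(2.0*omega)))   # both ~ 1.2465; G(r) < 0 for all r > r_c
```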
The line element (\[ds2flat\]) corresponds to a manifestly flat 5D manifold, making it clear that the underlying manifold can be taken to be the simply connected Euclidean manifold $\mathbb{R}^5$, and therefore as $\,G(\bar{r})>0\,$ no closed timelike circles are permitted. Furthermore, the above results clearly show that the IM theory does not admit any [*curved*]{} 5D Gödel-type metric (\[ds2c\]) as solution to its field equations (\[5DfeqsG\]). However, in a recent work Mc Manus [@McManus94] has shown that a one-parameter family of solutions of the field equations (\[5DfeqsG\]) previously found by Ponce de Leon [@Leon88] was in fact flat in five dimensions. And yet the corresponding 4D induced models were shown to be a perfect fluid family of Friedmann-Robertson-Walker curved models (see Refs. [@Wesson96a; @Wesson92c; @Coley95] and also [@Abolghasem96] – [@Liu98], where other Riemann-flat solutions are also discussed). Therefore a question which naturally arises here is whether the above 5D flat metric, which is the only solution to the IM field equations, can similarly give rise to any 4D [*curved*]{} spacetime. However, from (\[ds2flat\]) one obviously has that the corresponding 4D spacetime is nothing but the Minkowski flat space (this result can also be derived by using a computer algebra package such as, e.g., [classi]{} [@Aman87; @MacCallumSkea94] to calculate the 4D curvature tensor for $m^2=\omega=0\,$). In brief, the only solution of the IM field equations (\[5DfeqsG\]) of generalized Gödel-type is the 5D flat space (\[sol\]), which gives rise only to the 4D Minkowski (flat) spacetime, whose topology can be taken to be the simply connected Euclidean $\mathbb{R}^5$, in which no closed timelike curves are permitted. Although the above results can be looked upon as if the induced matter theory works as an effective therapy for the causal anomalies which arise when one starts from the specific generalized 5D Gödel-type family of metrics (\[ds2cx\]), this does not ensure that the induced matter version of general relativity is an efficient treatment for the causal anomalies (solutions with closed timelike curves) in general relativity as it has been conjectured in [@ReboucasTeixeira98b]. Actually, in a recent paper (which unfortunately had not initially been noticed by Rebouças and Teixeira [@ReboucasTeixeira98b]) Romero [*et al.*]{} [@Romero96] (see also [@Lidsey97]) have shown that the induced matter 5D scheme is indeed general enough to locally generate all solutions to 4D Einstein’s field equations. This is ensured by a theorem due to Campbell [@Campbell26] which states that any analytic $n$-dimensional Riemannian space can be locally embedded in an $(n+1)$-dimensional Ricci-flat space. In our context this amounts to saying that there must exist a five-dimensional Ricci-flat space which locally gives rise to the 4D Gödel noncausal solution of Einstein’s equations of general relativity. Thus, what still remains to be done regarding Gödel-type spaces is to find this 5D Ricci-flat space which gives rise (locally) to the 4D Gödel-type spacetimes of general relativity. To conclude, it is worth stressing some features of the local underlying embedding of the induced matter theory. Any Riemann-flat manifold obviously is also Ricci-flat. The reverse, however, does not necessarily hold, and one can have Ricci-flat spaces which are not Riemann-flat.
For the generalized 5D Gödel-type geometries we have discussed in this paper the condition for Ricci-flatness ($\widehat{R}_{AB}=0$) necessarily leads to Riemann-flat spaces. Remarkably many solutions of the field equations (\[5DfeqsG\]) are indeed Riemann-flat (see [@Wesson96a; @McManus94; @Coley95] and  [@Leon88] – [@Liu98]). From a purely mathematical 5D point of view all Riemann-flat spaces are locally equivalent (locally isometric). However, from the viewpoint of the 5D induced matter gravity all the above-referred 5D Riemann-flat solutions give rise to physically (and geometrically) distinct 4D spacetimes [@Wesson96a; @McManus94; @Coley95],  [@Leon88] – [@Liu98]. On the other hand, in the light of the equivalence problem techniques we have discussed in Section \[prereq\], these 5D Riemann-flat examples also show that all 5D Cartan scalars (\[CartanScl\]) can vanish identically, with or without the vanishing of the corresponding (induced) 4D Cartan scalars. Acknowledgement {#acknowl .unnumbered} =============== The authors are grateful to the scientific agency CNPq for the grants under which this work was carried out. M.J. Rebouças thanks C. Romero, V.B. Bezerra and J.B. Fonseca-Neto for motivating and fruitful discussions. [99]{} A. Einstein, [*The meaning of relativity*]{}, Princeton U. P., Princeton (1956). J. A. Wheeler, [*Einstein’s Vision*]{}, Springer-Verlag, Berlin (1968). A. Salam, Rev. Mod. Phys. [**52**]{}, 525 (1980). P. S. Wesson, Gen. Rel. Grav. [**22**]{}, 707 (1990). P. S. Wesson and J. Ponce de Leon, J. Math.Phys. [**33**]{}, 3883 (1992). J. M. Overduin and P. S. Wesson, Phys. Rep.  [**283**]{}, 303 (1997). J. Ponce de Leon and P. S. Wesson, J. Math. Phys. [**34**]{}, 4080 (1993). P. S. Wesson, J. Ponce de Leon, P. H. Lim and H. Liu, Int. J. Mod. Phys. D [**2**]{}, 163 (1993). P. S. Wesson, Mod. Phys. Lett. A [**10**]{}, 15 (1995). S. Rippl, C. Romero and R. Tavakol, Class. Quant. Grav. [**12**]{}, 2411 (1995). P. S. Wesson, J. Ponce de Leon, H. Liu, B. Mashhoon, D. Kalligas, C. W. F. Everitt, A. Billyard, P. H. Lim and J. M. Overduin, Int. J. Mod. Phys. A [**11**]{}, 3247 (1996). P. S. Wesson, Mod. Phys. Lett. A [**7**]{}, 921 (1992). P. S. Wesson, Astrophys. J. [**394**]{}, 19 (1992). P. S. Wesson and H. Liu, Astrophys. J. [**440**]{}, 1 (1995). P. S. Wesson and J. Ponce de Leon, Astron. Astrophys. [**294**]{}, 1 (1995). H. Liu and P. S. Wesson, Int. J. Mod. Phys. D [**3**]{}, 627 (1994). D. J. Mc Manus, J. Math. Phys. [**35**]{}, 4889 (1994). A. A. Coley, Astrophys. J. [**427**]{}, 585 (1994). A. A. Coley and D. J. Mc Manus, J. Math. Phys. [**36**]{}, 335 (1995). P. S. Wesson and J. Ponce de Leon, Gen. Rel. Grav. [**26**]{}, 555 (1994). B. Mashhoon, H. Liu and P. S. Wesson, Phys. Lett. B [**331**]{}, 305 (1994). P. S. Wesson, Phys. Lett. B [**276**]{}, 299 (1992). H. Liu and P. S. Wesson, J. Math. Phys. [**33**]{}, 3888 (1992). P. S. Wesson and J. Ponce de Leon, Class. Quant. Grav. [**11**]{}, 1341 (1994). H. Liu and P. S. Wesson, Class. Quant. Grav. [**13**]{}, 2311 (1996). A. Billyard and P. S. Wesson, Phys. Rev. D [**53**]{} 731 (1996). P. H. Lim and P. S. Wesson, Astrophys. J. [**397**]{}, L91 (1992). D. Kalligas, P. S. Wesson and C. W. F. Everitt, Astrophys. J. [**439**]{}, 548 (1995). P. H. Lim, J. M. Overduin and P. S. Wesson, J. Math. Phys. [**36**]{}, 6907 (1995). P. S. Wesson, H. Liu and P. H. Lim, Phys. Lett. B [**298**]{}, 69 (1993). H. Liu, P. S. Wesson and J. Ponce de Leon, J. Math. Phys. [**34**]{}, 4070 (1993). H. Liu and P. S. Wesson, Phys. Lett. 
B [**381**]{}, 420 (1996). K. Gödel, Rev.  Mod. Phys. [**21**]{}, 447 (1949). M. M. Som and A. K. Raychaudhuri, Proc. Roy.Soc. London A [**304**]{}, 81 (1968). A. Banerjee and S. Banerji, J. Phys. A [**1**]{}, 188 (1968). F. Bampi and C. Zordan, Gen. Rel. Grav. [**9**]{}, 393 (1978). M. Novello and M. J. Rebouças, Phys. Rev. D [**19**]{}, 2850 (1979). M. J. Rebouças, Phys. Lett. A, [**70**]{}, 161 (1979). A. K. Raychaudhuri and S. N. G. Thakurta, Phys. Rev.  D [**22**]{}, 802 (1980). S. K. Chakraborty, Gen. Rel. Grav. [**12**]{}, 925 (1980). M. J. Rebouças and J. Tiomno, Phys.  Rev. D [**28**]{}, 1251 (1983). A. F. F. Teixeira, M. J. Rebouças and J. E. [Å]{}man, Phys.  Rev.  D [**32**]{}, 3309 (1985). Note that in this reference there are two misprints, namely: in eq.(5) the term $-R_z$ should read $+R_z$, and in eq. (38) the term $\kappa_3\, m D$ should read $\kappa_3\, m^2 D$. M. J. Rebouças and A. F. F. Teixeira, Phys. Rev. D [**34**]{}, 2985 (1986). M. J. Rebouças, J. E. [Å]{}man and A. F. F. Teixeira, J.  Math. Phys. [**27**]{}, 1370 (1986). M. J. Rebouças and J. E. [Å]{}man, J.  Math. Phys. [**28**]{}, 888 (1987). F. M. Paiva, M. J. Rebouças and A. F. F. Teixeira, Phys. Lett. A [**126**]{}, 168 (1987). K. Dunn, Gen. Rel. Grav. [**21**]{}, 137 (1989). R. X. Saibatalov, Gen. Rel. Grav. [**7**]{}, 697 (1995). M. Rooman and Ph. Spindel, Class. Quant. Grav., 3241 (1998). M. Tsamparlis, D. Nikolopoulos and P.S. Apostolopoulos, Class.  Quat.  Grav. [**15**]{}, 2909 (1998). A.M. Candela and M. Sánchez, “Geodesic Connectedness in Gödel-type Space-times” Report 17/98, Dipartimento Interuniversitario di Matematica, Università degli Studi - Politecnico di Bari (1998). A. Krasiński, J. Math. Phys. [**39**]{}, 2148 (1998). This reference contains a quite good overview of the literature on rotating models in general relativity. E. P. V. Vaidya, M. L. Bedran and M. M. Som, Prog. Theor. Phys. [**72**]{}, 857 (1984). L. L. Smalley, Phys. Rev. D [**32**]{}, 3124 (1985). J. D. Oliveira, A. F. F. Teixeira and J. Tiomno, Phys. Rev. D [**34**]{}, 3661 (1986). L. L. Smalley, Phys. Lett. A [**113**]{}, 463 (1986). W. M. Silva-Jr., J. Math. Phys. [**32**]{}, 3223 (1991). A. J. Accioly and A. T. Gonçalves, J. Math. Phys. [**28**]{}, 1547 (1987). A. J. Accioly and G. E. A. Matsas, Phys. Rev. D [**38**]{}, 1083 (1988). T. Singh and A. K. Agrawal, Fortschr. Phys. [**42**]{}, 71 (1994). J. D. Barrow and M. P. Dabrowski, Phys. Rev. D [**58**]{}, 103502 (1998). J. E. [Å]{}man, J. B. Fonseca-Neto, M. A. H. MacCallum and M. J. Rebouças, Class.  Quant.  Grav.  [**15**]{}, 1089 (1998). J.B. Fonseca-Neto and M.J. Rebouças, Gen.  Rel.  Grav.  [**30**]{}, 1301 (1998). M. J. Rebouças and A. F. F. Teixeira, J. Math. Phys. [**39**]{}, 2180 (1998). M. J. Rebouças and A. F. F. Teixeira, Int. J. Mod. Phys. A [**13**]{}, 3181 (1998). E. Cartan, “Leçons sur la Géométrie des Éspaces de Riemann”, Gauthier-Villars, Paris (1928, 2nd ed. 1946, reprinted 1951). English translation by J. Glazebrook, Math Sci Press, Brookline (1983). C. Romero, R. Tavakol and R. Zalaletdinov, Gen. Rel. Grav. [**3**]{}, 365 (1996). A. Karlhede, Gen. Rel. Grav. [**12**]{}, 693 (1980). M. A. H. MacCallum and J. E. F. Skea, “[sheep]{}: A Computer Algebra System for General Relativity”, in [*Algebraic Computing in General Relativity, Lecture Notes from the First Brazilian School on Computer Algebra*]{}, Vol. II, edited by M. J. Rebouças and W. L. Roque. Oxford U. P., Oxford (1994). See also references therein. S. W. Hawking and G. F. R. 
Ellis, “The Large Scale Structure of Space-Time”, Cambridge U. P., Cambridge (1973). J. E. [Å]{}man, “Manual for [classi]{}: Classification Programs for Geometries in General Relativity”, Institute of Theoretical Physics Technical Report, 1987. Third provisional edition. Distributed with the [sheep]{} sources. The integration of the Killing equations for these classes of generalized Gödel-type manifolds, which will be performed in the next section, shows that the isotropy group indeed depends on these special relations between the essential parameters $\,\omega$, $k$ and $m^2\,$. A. Z. Petrov, “Einstein Spaces”, first English edition, Pergamon Press (1969). See page 63. M. O. Calvão, M. J. Rebouças, A. F. F. Teixeira, W. M. Silva-Jr., J. Math. Phys. [**29**]{}, 1127 (1988). J. Ponce de Leon, Gen. Rel. Grav.  [**20**]{}, 539 (1988). G. Abolghasem, A. A. Coley and D. J. Mc Manus, J. Math. Phys. [**37**]{}, 361 (1996). A. Billyard and P. S. Wesson, Gen. Rel. Grav. [**28**]{}, 137 (1996). H. Liu and P. S. Wesson, Gen. Rel. Grav. [**30**]{}, 509 (1998). J. E. Lidsey, C. Romero, R. Tavakol and S. Rippl, Class. Quant. Grav. [**14**]{}, 865 (1997). J. E. Campbell “A Course of Differential Geometry”, Clarendon Press, Oxford (1926). [^1]: [e-mail:]{} [email protected] [^2]: [e-mail:]{} [email protected] [^3]: [e-mail:]{} [email protected]
--- abstract: 'Using single star models including the effects of shellular rotation with and without magnetic fields, we show that massive stars at solar metallicity with initial masses lower than about 20-25 M$_\odot$ and with an initial rotation above $\sim 350$ km s$^{-1}$ likely reach the critical velocity during their Main-Sequence phase. This results from the efficient outwards transport of angular momentum by the meridional circulation. This could be a scenario for explaining the Be stars. After the Main-Sequence phase, single star in this mass range can again reach the critical limit when they are on a blue loop after a red supergiant phase [@HL98]. This might be a scenario for the formation of B\[e\] stars, however as discussed by Langer & Heger (1998), this scenario would predict a short B\[e\] phase (only some 10$^4$ years) with correspondingly small amounts of mass lost.' author: - Georges Meynet and André Maeder title: 'Single massive stars at the critical rotational velocity: possible links with Be and B\[e\] stars' --- Link between Be, B\[e\] supergiants stars and rotation ====================================================== A common feature of Be and B\[e\] supergiants is the non-sphericity of their circumstellar envelopes (see e.g. the review by Zickgraf 2000). More precisely, in both cases, disks are supposed to be present, likely disk of outflowing material. How do these disk form ? How long are their lifetimes ? Are they intermittent ? Are they Keplerian ? Many of these questions have been discussed in this conference and are still subject of lively debate. A point however which seems well accepted is the fact that the origin of an axisymmetric wind structure such as a disk might be connected to the fast rotation of the star [@Pe00]. If correct, this connection between fast rotation and the Be and B\[e\] phenomena leads to the question, when such fast rotation can be encountered in the course of the evolution of massive single stars ? In this paper, we give some elements of answer to that question based on models accounting for the effects of shellular rotation [@Za92]. The critical velocity ===================== The critical angular velocity corresponds to the angular velocity at the equator of the star such that the centrifugal force exactly balances the gravity. The critical angular velocity $\Omega_{\rm crit,1}$ in the frame of the Roche model for computing the gravity due to the deformed star, is given by $$\Omega_{\rm crit,1}=\left({2 \over 3}\right)^{3 \over 2}\left({GM \over R^3_{\rm pc}}\right)^{1 \over 2}, \label{eqn1}$$ where $R_{\rm pc}$ is the polar radius when the surface rotates with the critical velocity. Looking at Eq. (\[eqn1\]), one can be surprised that the stellar luminosity does not appear. Indeed, we could expect that, in addition to the centrifugal acceleration, the radiative acceleration would help in balancing the gravity. When the stellar luminosity is sufficiently far from the Eddington limit (see below for a more precise statement), it has been shown by Glatzel (1998) and Maeder & Meynet (2000) that radiative acceleration does not play any role. Physically, this comes from the fact that when the star reaches the critical limit at the equator, the effective gravity (gravity decreased by the effect of the centrifugal acceleration) becomes zero there and the radiative flux, responsible for the radiative acceleration, tends also toward zero due to the von Zeipel theorem (von Zeipel 1924; Maeder 1999). 
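For orientation, Eq. (\[eqn1\]) can be evaluated with the short script below. The stellar mass and polar radius used here are hypothetical round numbers chosen only for illustration (they are not taken from the models discussed in this paper), and the factor 1.5 between the equatorial and polar radii at the critical limit is the standard Roche-model value, assumed rather than derived here:

```python
import numpy as np

G_N   = 6.674e-11        # gravitational constant [SI]
M_sun = 1.989e30         # [kg]
R_sun = 6.957e8          # [m]

def omega_crit1(M, R_pc):
    """Classical critical angular velocity of Eq. (1) in the Roche model [rad/s]."""
    return (2.0/3.0)**1.5*np.sqrt(G_N*M/R_pc**3)

# purely illustrative parameters (hypothetical, not from the models of this paper)
M, R_pc = 20.0*M_sun, 6.0*R_sun
Om = omega_crit1(M, R_pc)
v_eq = Om*1.5*R_pc       # assumes the Roche-model equatorial radius 1.5 R_pc at the critical limit
print(Om, v_eq/1e3)      # ~1e-4 rad/s and a critical equatorial velocity of ~650 km/s
```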
In contrast, when the stellar luminosity approaches the Eddington limit, the radiative acceleration becomes a dominant effect. Why such a difference between the case far from the Eddington limit and the case near the Eddington limit ? One could indeed argue that even near the Eddington limit, when the critical limit is approached, the radiative flux becomes zero at the equator. This is correct, but another mechanism comes into play here: the fact that the Eddington limit is modified when the star is rotating. Let us first recall that the classical Eddington luminosity is given by the expression $$L_{\rm Edd}=4\pi c G M/\kappa, \label{eqn2}$$ where $\kappa$ is the total opacity, $L$ the luminosity, $M$ the mass of the star and the other symbols have their usual meanings. Now, when the star is rotating, two important differences appear: first the Eddington limit varies as a function of the colatitude $\theta$, second, it is decreased when the rotational velocity increases. The Eddington limit modified by rotation is given by (Glatzel 1998; Maeder & Meynet 2000) $$L_{\rm Edd}(\Omega)=4\pi c G M\left(1-{\Omega^2 \over 2\pi G \overline{\rho}}\right)/\kappa(\theta), \label{eqn3}$$ where $\overline{\rho}=M/V(\omega)$ is the average density of the star, $V(\omega)$ being the stellar volume when the star is rotating and $\omega$ is the ratio of the angular velocity to the critical angular velocity given in Eq. (\[eqn1\]). Thus, when the star is near the Eddington limit, rotation may sufficiently decrease $L_{\rm Edd}$ to make it equal to the actual luminosity of the star. In that case, we say the star has reached the $\Omega\Gamma$-limit and strong mass loss ensues. Now at which velocity does this occur ? To obtain it, one has to find the value of $\Omega$ such that $L=L_{\rm Edd}(\Omega)$, where $L_{\rm Edd}(\Omega)$ is given by Eq. (\[eqn3\]). This is equivalent to find $\Omega$ such that $$\Gamma_{\rm max}\equiv{\kappa_{\rm max} L \over 4\pi c G M}=1-{\Omega^2 \over 2\pi G \overline{\rho}}, \label{eqn4}$$ where we have added the subscript “max” to indicate that the critical limit will be reached first at the position on the surface where the opacity is maximum. If the value of $\Omega$ satisfying this equality is higher than $\Omega_{\rm crit,1}$ (from now on called the classical limit), then only the classical limit is relevant since it will be reached first. To see if Eq. (\[eqn4\]) can be fulfilled with $\Omega < \Omega_{\rm crit,1}$, let us use the definition of $\overline{\rho}$ and of $\omega$ to write $${\Omega^2 \over 2\pi G \overline{\rho}}={16 \over 81}\omega^2 V'(\omega), \label{eqn5}$$ with $V'(\omega)={V(\omega) \over {4 \over 3} \pi R^3_{\rm pc}}$. From the Roche model, it can be shown that the quantity ${16 \over 81}\omega^2 V'(\omega)$ increases from 0 to 0.361 when $\omega$ varies from 0 to 1 (see Maeder & Meynet 2000). It means that if $\Gamma_{\rm max}$ is strictly inferior to 1 - 0.361 = 0.639, Eq. (\[eqn4\]) cannot be fulfilled. If $\Gamma_{\rm max}=0.639$, Eq. (\[eqn4\]) can be fulfilled with $\Omega = \Omega_{\rm crit,1}$ and if $\Gamma_{\rm max}$ is superior to 0.639, then values of $\Omega < \Omega_{\rm crit,1}$ can satisfy Eq. (\[eqn4\]). A new expression for the critical velocity, valid when the star is sufficiently near the Eddington limit, can be derived. 
It is given by $$\Omega_{\rm crit,2}=\left({9 \over 4}\right)\Omega_{\rm crit,1} \sqrt{{1-\Gamma_{\rm max}\over V'(\omega)}} \label{eqn6}$$ where $R_{\rm e}(\omega)$ is the equatorial radius for a given value of the rotation parameter $\omega$. These expressions for the critical velocities are different from the expression $$\Omega_{\rm crit}=\Omega_{\rm crit,1} (1-\Gamma) \label{eqn7}$$ used by some authors. Expression (\[eqn7\]) is correct only if the surface is uniformly bright, which is not the case when the star is rotating fast. Evolution of the surface velocity: two simple extreme cases =========================================================== The evolution of the rotational velocity at the surface of stars depends mainly on three physical processes: - The efficiency of the angular momentum transport mechanisms in the interior, - The movement of expansion/contraction of the surface, - The mass loss. An extreme case of internal angular momentum transport is the one which imposes solid body rotation at each time in the course of the evolution of the star. A strong coupling is then realised between the contracting core and the expanding envelope. In that case, the angular velocity $\Omega$ is given by the ratio of the total angular momentum $J$ and the total momentum of inertia of the star $I$. Fig. \[momi\] shows the variation of $I$ as a function of the growing stellar radius for a few stellar models at solar metallicity. In the case of the 9 M$_\odot$ model, $I$ varies as $R^\alpha$ with $\alpha\sim 1$. In the case of the 60 M$_\odot$ model, $\alpha$ becomes negative. This results from the strong mass loss experienced by this star. For the 9 M$_\odot$, since mass loss by stellar winds remains very modest during the Main Sequence phase, the total angular momentum is conserved during this phase. As a consequence, $\Omega$ varies as the inverse of $I$ and since $I\propto R$, $\Omega\propto 1/R$. From the previous section, we saw that when the star is sufficiently far from the Eddington limit, which is the case for the 9 M$_\odot$ stellar model, then $\Omega_{\rm crit,1} \propto R^{-3/2}$. Thus the critical angular velocity decreases more rapidly than the surface angular velocity when the star expands. Clearly this favors the reaching of the critical limit (see also Sackmann and Anand 1970; Langer 1998). To illustrate this last point, let us consider the rotating track for the 20 M$_\odot$ model (solar metallicity and initial rotational velocity of 300 km s$^{-1}$) computed by Meynet & Maeder (2003) for obtaining values of the momentum of inertia during the evolution. From these values of $I$ and also using the values for the actual total angular momentum, we deduce the surface velocity that the star would have in case of solid body rotation. We obtain the short dashed line in Fig. \[momi\]. Although the model is not self consistently computed (the evolutionary tracks were not computed imposing solid body rotation), it illustrates the fact that indeed when solid body rotation is achieved, the star may reach very easily the critical limit (here represented by the long–dashed line). Another extreme case is the case of no transport of angular momentum. Each stellar layer keeps its own angular momentum. The variation of $\Omega$ is then simply governed by the local conservation of the angular momentum and, at the surface, $\Omega \propto 1/R^2$. 
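A compact way of restating the scalings just described (assuming a constant total angular momentum $J$ and a star far enough from the Eddington limit that $\Omega_{\rm crit,1}\propto R^{-3/2}$, as above) is to follow the ratio of the surface angular velocity to the critical one: $$\frac{\Omega}{\Omega_{\rm crit,1}} \;\propto\; \left\{ \begin{array}{ll} \displaystyle \frac{J/I}{R^{-3/2}} \;\propto\; \frac{R^{3/2}}{I} \;\propto\; R^{1/2} & \quad \mbox{(solid body rotation with } I \propto R\mbox{)}\;, \\[3mm] \displaystyle \frac{R^{-2}}{R^{-3/2}} \;\propto\; R^{-1/2} & \quad \mbox{(local conservation of angular momentum)}\;, \end{array} \right.$$ so that, under these assumptions, an expanding star is driven toward the critical limit in the solid-body case (illustrated by the 9 M$_\odot$ scaling $I\propto R$ quoted above) and away from it when each layer conserves its own angular momentum.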
When the radius increases, the surface angular velocity decreases more rapidly than the classical critical angular velocity, thus the star evolves away from the critical limit. Reality is likely in between the cases of solid body rotation and of local conservation of the angular momentum. Let us now see what are the predictions of more physical models. Evolution of the surface velocity in models with shellular rotation =================================================================== In the interior of stars, at least three mechanisms can transport angular momentum along a radial direction: - Convection:here we suppose that convective zones have solid body rotation - Meridional circulation:this is the main mechanism for the transport of the angular momentum in radiative zones. - Shear turbulence:only the secular shear turbulence, occurring on thermal timescales, appears to be important. Dynamical shear, occurring on dynamical timescales, appears in the advanced stages of the evolution of massive stars and only affects very locally the profile of $\Omega$ (Hirschi et al. 2003). The efficiency of the vertical secular shear turbulence for transporting the angular momentum is in general much smaller than that of the meridional currents. Let us emphasize that the evolution of the angular velocity inside the star depends on the gradients of the chemical species and of the angular velocity, these gradients being themselves deduced from the variation of $\Omega$ as a function of the radius. Thus the problem has to be solved self-consistently, which is done in the computations presented here. More precisely, in the models discussed in this section, the effects of the centrifugal acceleration in the stellar structure equations are accounted for as explained in Kippenhahn & Thomas (1970) (see also Meynet and Maeder 1997). The equations describing the transport of the chemical species and angular momentum resulting from meridional circulation and shear turbulence are given in Zahn (1992) and Maeder & Zahn (1998) (see details on the derivation of the angular momentum transport equation in Meynet & Maeder 2005b). The expressions for the diffusion coefficients are taken from Talon & Zahn (1997), Maeder (1997). The effects of rotation on the mass loss rates is taken into account as explained in Maeder & Meynet (2000). Let us stress also that these models are able to account for many observational constraints that non–rotating models cannot account for: they can reproduce surface enrichments (Heger & Langer 2000; Meynet & Maeder 2000), the blue to red supergiant ratios at low metallicity [@MMVII], the variation with the metallicity of the Wolf-Rayet populations and of the number ratios of type Ibc to type II supernovae [@MMXI]. In the right part of Fig. \[momi\], the evolution of the surface velocity for a 20 M$_\odot$ stellar model at solar metallicity with $\upsilon_{\rm ini}$= 300 km s$^{-1}$ is shown (see the continuous line). Interestingly we note that the evolution of the surface velocity given by consistently taking into account the above transport mechanisms is not very far from the solid body rotation case, except at the very beginning and at the end of the Main Sequence phase. At the beginning, the differentially rotating model presents a decrease of the surface velocity, not shown by the solid body rotation model. 
This initial decrease is due to the action of the meridional currents, which build up a gradient of $\Omega$ inside the star, transporting angular momentum from the outer regions to the inner ones. This slows down the surface of the star. Then, in the interior, shear turbulence becomes active and erodes the gradients built by the meridional circulation. Under the influence of these two counteracting effects the $\Omega$-profile converges toward an equilibrium configuration. This occurs on a very small timescale (a few percent of the Main-Sequence lifetime, see Denissenkov et al. 1999). After this short phase, the variation of $\Omega$ in the radiative zone continues to be shaped by shear turbulence, meridional circulation and the change of stellar structure (expansion/contraction of the stellar layers). At the end of the Main Sequence phase, when the star is older than about 9 Myr, the surface velocity rapidly decreases. This is a consequence of the mass loss rate recipe we used in this computation (Vink et al. 2000; 2001), which shows an important enhancement of the mass loss rates when certain critical effective temperatures are crossed (bistability limits, see the above references). In the absence of such strong stellar winds, the surface velocity would increase during this phase. From this computation we can deduce the following results: first, during the Main-Sequence phase the transport mechanisms are efficient enough to maintain a relatively weak gradient of $\Omega$ inside the star (see the left part of Fig. \[vh\]). The situation is thus not too far from the solid body rotation case. Let us however note that the gradients of $\Omega$, although modest, are sufficient to drive chemical mixing. These models predict changes of the surface abundances during the Main-Sequence phase in good agreement with what is observed (Heger and Langer 2000; Meynet & Maeder 2000; Maeder and Meynet 2001). Second, this numerical example illustrates the importance of the mass loss in shaping the evolution of the surface velocity (see also the discussion below and Fig. \[vz\]). Third, such models, starting with a higher initial velocity (typically above $\sim$ 350 km s$^{-1}$), would easily reach the critical limit during the Main-Sequence phase (see the results shown in Meynet and Maeder 2005b). What happens after the Main-Sequence phase ? The evolution speeds up and the variation of $\Omega$ inside the star is mainly governed by the local conservation of the angular momentum. We just saw in Sect. 3 above that, in situations where the radius increases, this makes the surface velocity evolve away from the critical limit. Only when the star contracts, for instance when a blue loop occurs in the HR diagram, may the star reach the critical limit [@HL98]. This would be a possible scenario for the occurrence of B\[e\] stars; however, as discussed by Langer & Heger (1998), this scenario would predict a short B\[e\] phase (only some 10$^4$ years) with correspondingly small amounts of mass lost.
Since such stars would have had their surface abundances changed by the deep outer convective zone appearing at the red supergiant phase, one expects their surface abundances to be highly enriched in CNO-processed material: as a numerical example, the N/C and N/O ratios at the surface of a 9 M$_\odot$ stellar model at solar metallicity, with an initial rotation of $\upsilon_{\rm ini}= 300$ km s$^{-1}$, are enhanced by factors equal to 2.5 and 2.0 respectively during the first crossing of the HR gap from the blue to the red. These two ratios become 8.5 and 5.3 on the blue loop after a red supergiant stage. The models without rotation would predict for these ratios the following values: first crossing, no enhancement for both ratios; on the blue loop, enhancement factors of 5 and 3.5 respectively [@MMX]. This numerical example shows that, first, whatever the initial rotation velocity, models would predict some CNO-processed material at the surface if the star comes back from a red supergiant stage; secondly, rotation reinforces the surface enrichment on the blue loop with respect to non–rotating models. Let us now see what happens to more massive stars. In Fig. \[vz\], the evolution of the surface velocity for various 60 M$_\odot$ stellar models at two different metallicities is shown. We see that at solar metallicity, the mass loss rates are so high that, even starting with an initial velocity of 500 km s$^{-1}$, the star does not reach the critical limit. The situation is quite different at low $Z$, due to the metallicity dependence of the mass loss rates (here we used $\dot M (Z)=(Z/Z_\odot)^{\alpha}\dot M(Z_\odot)$ with $\alpha$ equal to 0.5 as devised by Kudritzki & Puls 2000). Indeed, at low $Z$ only small amounts of mass are removed by stellar winds, and therefore only small amounts of angular momentum. As a consequence, the angular momentum brought to the surface by the meridional circulation is not removed and accelerates the outer layers. Starting with $\upsilon_{\rm ini}=$ 300 km s$^{-1}$, the star reaches the critical limit at the end of the Main-Sequence phase. Starting with an initial velocity of 800 km s$^{-1}$, the reaching of the critical limit occurs at a much earlier time. For still higher initial masses, the stellar luminosity approaches the Eddington limit and, as explained above, the critical velocity becomes smaller than the value given by Eq. (\[eqn1\]). The star may reach the $\Omega\Gamma$-limit and lose very large amounts of mass [@MMVI]. Interestingly, in the HR diagram, the observed position of the de Jager or Humphreys-Davidson limit coincides with the position where this $\Omega\Gamma$-limit would occur. This may be an indication that in the physics underlying this limit, both rotation and supra-Eddington luminosity play an important role.

Evolution of the surface velocity in models with shellular rotation and magnetic field
=======================================================================================

Spruit (2002) has proposed a dynamo mechanism operating in stellar radiative layers in differential rotation. This dynamo is based on the Tayler instability, which is the first one to occur in a radiative zone (Tayler 1973; Pitts & Tayler 1986). Even a very weak horizontal magnetic field is subject to Tayler instability, which then creates a vertical field component, which is wound up by differential rotation. As a result, the field lines become progressively closer and denser and thus a strong horizontal field is created at the expense of the energy of differential rotation.
In a first paper [@Magn1], we have shown that in a rotating star a magnetic field can be created during MS evolution by the Spruit dynamo. We have examined the timescale for the field creation, its amplitude and the related diffusion coefficients. The clear result is that the magnetic field and its effects are quite important. In the second paper [@Magn2], a generalisation of the equations of the dynamo has been developed. The solutions fully agree with Spruit’s solution in the two limiting cases this author has considered [@Spruit02], i.e. “Case 0” when the $\mu$–gradient dominates and “Case 1” when the $T$–gradient dominates with large non–adiabatic effects. Our more general solution encompasses all cases of $\mu$– and $T$–gradients, as well as all cases from the fully adiabatic to non–adiabatic solutions. In a last paper [@Magn3], we examine the effects of the magnetic field created by the Tayler–Spruit dynamo in differentially rotating stars. Magnetic fields of the order of a few $10^4$ G are present through most of the stellar envelope, with the exception of the outer layers. The diffusion coefficient for the transport of angular momentum is very large and it imposes nearly solid body rotation during the MS phase. This can be seen in Fig. \[vh\], where the evolutions of the angular velocity inside models with and without magnetic fields are compared. The surface velocities resulting from these two models are shown in Fig. \[v15\]. Except at the end of the Main Sequence phase, the model with magnetic field is strictly equivalent to the solid body rotation case. Does the model with magnetic field predict surface enrichments ? Shear turbulence in magnetic models is very weak due to the flatness of the $\Omega$ internal profile. The excess of the energy in the shear in the magnetic model is only a few ten-thousandths of the excess of the energy in the shear in the non–magnetic one (see Fig. \[energie\]). On the other hand, solid body rotation drives meridional circulation currents which are much faster than usual and lead to much larger diffusion coefficients than the shear diffusivity and than the magnetic diffusivity for the chemical species. As a consequence, the surface enrichments obtained in the models with rotation and magnetic fields are higher than in models with rotation only (see Fig. \[abond\]).

Conclusion
==========

Be stars might be the natural outcome of stars with initial rotational velocity in the upper tail of the initial velocity distribution. Depending on when the critical limit is reached, one expects more or less pronounced surface enrichments. If the critical limit is reached very early during the Main-Sequence phase, no enrichment is expected, while if the critical limit is reached at the end of the Main-Sequence phase, high N/C and N/O ratios are expected. At solar metallicity, for initial masses above about 50 M$_\odot$, mass loss rates prevent the stars from reaching the critical limit. The lower initial value for reaching the critical limit is likely limited by variation of the distribution of the initial velocities. After the Main-Sequence phase, the variation of $\Omega$ inside the star is governed by the local conservation of the angular momentum. In phases during which the radius expands, this makes the surface velocity evolve away from the critical limit. In contracting phases, the reverse occurs. In the frame of single star evolution models, B\[e\] stars could be in a stage on a blue loop where the star contracted from a previous red supergiant phase.
In that case, the surface is predicted to be enriched in CNO processed material. The effects of magnetic fields in this context remain to be studied. However, already at this stage, it appears that magnetic field will facilitate the reaching of the critical limit. Denissenkov, P.A., Ivanova, N.P., Weiss, A. 1991, A&A, 341, 181 Glatzel, W. 1998, A&A, 339, L5 Heger, A., Langer, N. 1998a, A&A, 334, 210 Heger, A., Langer, N. 2000, ApJ, 544, 1016 Hirschi, R., Maeder, A., Meynet, G. 2003, in Stellar Rotation, IAU Symp. 215, A. Maeder & P. Eenens (eds.), ASPC, p. 510 Kippenhahn, R., Thomas, H.C. 1970, in [*Stellar Rotation*]{}, IAU Coll. 4, Ed. A. Slettebak, p. 20 Kudritzki R.P., Puls J. 2000, ARAA, 38, 613 Langer, N. 1998, A&A, 329, 551 Langer, N., Heger, A. 1998, in B\[e\] stars, A.M. Hubert & C. Jaschek (eds.), Ap&SS, 233, 235 Maeder A. 1997, A&A, 321, 134 Maeder, A. 1999, A&A, 347, 185 Maeder, A., Meynet, G. 2000, A&A, 361, 159 Maeder, A., Meynet, G. 2001, A&A, 373, 555 Maeder, A., Meynet, G. 2003, A&A, 411, 543 Maeder, A., Meynet, G. 2004, A&A, 422, 225 Maeder, A., Meynet, G. 2005, A&A, 440, 1041 Maeder A., Zahn J.P. 1998, A&A, 334, 1000 Meynet, G., Maeder, A. 1997, A&A, 321, 465 Meynet, G., Maeder, A. 2000, A&A, 361, 101 Meynet, G., Maeder, A. 2003, A&A, 404, 975 Meynet, G., Maeder, A. 2005a, A&A, 429, 581 Meynet, G., Maeder, A. 2005b, in The Nature and Evolution of Disks Around Hot Stars, R. Ignace and K. G. Gayley (eds.), ASP Conf Ser. 337, p.15 Pelupessy, I., Lamers, H.J.G.L.M., Vink, J.S. 2000, ApJ, 359, 695 Pitts, E., Tayler, R.J. 1986, MNRAS, 216, 139 Sackmann, I.-J., Anand, S.P.S. 1970, ApJ, 162, 105 Spruit, H.C. 2002, A&A, 381, 923 Talon, S., Zahn, J.P. 1997 , A&A, 317, 749 Tayler, R.J. 1973, MNRAS, 161, 365 Vink, J.S., de Koter, A., Lamers, H.J.G.L.M. 2000, A&A, 362, 295 Vink, J.S., de Koter, A., Lamers, H.J.G.L.M. 2001, A&A, 369, 574 von Zeipel, H. 1924, MNRAS, 84, 665 Zahn, J.P. 1992, A&A, 265, 115 Zickgraf, F.-J. 2000, in IAU Coll. 175, Myron A. Smith and Huib F. Henrichs (eds), ASPC, 214, p. 26
--- abstract: 'Asymptotic stability of small solitons in one dimension is proved in the framework of a discrete nonlinear Schrödinger equation with septic and higher power-law nonlinearities and an external potential supporting a simple isolated eigenvalue. The analysis relies on the dispersive decay estimates from Pelinovsky & Stefanov (2008) and the arguments of Mizumachi (2008) for a continuous nonlinear Schrödinger equation in one dimension. Numerical simulations suggest that the actual decay rate of perturbations near the asymptotically stable solitons is higher than the one used in the analysis.' author: - | P.G. Kevrekidis$^1$, D.E. Pelinovsky$^2$, and A. Stefanov$^3$\ [$^{1}$ Department of Mathematics and Statistics, University of Massachusetts, Amherst, MA 01003]{}\ [$^{2}$ Department of Mathematics and Statistics, McMaster University, Hamilton, Ontario, Canada, L8S 4K1]{}\ [$^{3}$ Department of Mathematics, University of Kansas, 1460 Jayhawk Blvd, Lawrence, KS 66045–7523]{} title: '**Asymptotic stability of small solitons in the discrete nonlinear Schrödinger equation in one dimension**' --- Introduction ============ Asymptotic stability of solitary waves in the context of continuous nonlinear Schrödinger equations in one, two, and three spatial dimensions was considered in a number of recent works (see Cuccagna [@cuccagna] for a review of literature). Little is known, however, about asymptotic stability of solitary waves in the context of discrete nonlinear Schrödinger (DNLS) equations. Orbital stability of a global energy minimizer under a fixed mass constraint was proved by Weinstein [@weinstein] for the DNLS equation with power nonlinearity $$i \dot{u}_n + \Delta_d u_n + |u_n|^{2 p} u_n = 0, \quad n \in \mathbb{Z}^d,$$ where $\Delta_d$ is a discrete Laplacian in $d$ dimensions and $p > 0$. For $p < \frac{2}{d}$ (subcritical case), it is proved that the ground state of an arbitrary energy exists, whereas for $p \geq \frac{2}{d}$ (critical and supercritical cases), there is an energy threshold, below which the ground state does not exist. Ground states of the DNLS equation with power-law nonlinearity correspond to single-humped solitons, which are excited in numerical and physical experiments by a single-site initial data with sufficiently large amplitude [@KEDS]. Such experiments have been physically realized in optical settings with both focusing [@mora] and defocusing [@rosberg] nonlinearities. We would like to consider long-time dynamics of the ground states and prove their asymptotic stability under some assumptions on the spectrum of the linearized DNLS equation. From the beginning, we would like to work in the space of one spatial dimension $(d = 1)$ and to add an external potential $V$ to the DNLS equation. These specifications are motivated by physical applications (see, e.g., the recent work of [@kroli] and references therein for a relevant discussion). We hence write the main model in the form $$\label{dNLS} i \dot{u}_n = (-\Delta + V_n) u_n + \gamma |u_n|^{2p} u_n, \quad n \in \mathbb{Z},$$ where $\Delta u_n := u_{n+1} - 2 u_n + u_{n-1}$ and $\gamma = 1$ ($\gamma = -1$) for defocusing (focusing) nonlinearity. Besides physical applications, the role of potential $V$ in our work can be explained by looking at the differences between the recent works of Mizumachi [@Miz] and Cuccagna [@Cuc] for a continuous nonlinear Schrödinger equation in one dimension. 
Using an external potential, Mizumachi proved asymptotic stability of small solitons bifurcating from the ground state of the Schrödinger operator $H_0 = -\partial_x^2 + V$ under some assumptions on the spectrum of $H_0$. He needed only the spectral theory of the self-adjoint operator $H_0$ in $L^2$ since spectral projections and small nonlinear terms were controlled in the corresponding norm. Pioneering works along the same lines are attributed to Soffer & Weinstein [@SW1; @SW2; @SW3], Pillet & Wayne [@PW], and Yau & Tsai [@YT1; @YT2; @YT3]. Compared to this approach, Cuccagna proved asymptotic stability of nonlinear space-symmetric ground states in the energy space of the continuous nonlinear Schrödinger equation with $V \equiv 0$. He had to invoke the spectral theory of non-self-adjoint operators arising in the linearization of the nonlinear Schrödinger equation at the ground state, following earlier works of Buslaev & Perelman [@BP1; @BP2], Buslaev & Sulem [@BS], and Gang & Sigal [@GS1; @GS2]. Since our work is novel in the context of the DNLS equation, we would like to simplify the spectral formalism and to focus on the nonlinear analysis of asymptotic stability. This is the main reason why we work with small solitons bifurcating from the ground state of the discrete Schrödinger operator $H = -\Delta + V$. We will make use of the dispersive decay estimates obtained recently for the operator $H$ by Stefanov & Kevrekidis [@SK] (for $V \equiv 0$), Komech, Kopylova & Kunze [@KKK] (for compact $V$), and Pelinovsky & Stefanov [@PS] (for decaying $V$). With more effort and a more elaborate analysis, our results can be generalized to large solitons with or without potential $V$ under some restrictions on the spectrum of the non-self-adjoint operator associated with the linearization at the nonlinear ground state. From a technical point of view, many previous works on asymptotic stability of solitary waves in continuous nonlinear Schrödinger equations address critical and supercritical cases, which in $d = 1$ correspond to $p \geq 2$. Because the dispersive decay in the $l^1-l^{\infty}$ norm is slower for the DNLS equation, the critical power appears at $p = 3$ and the proof of asymptotic stability of discrete solitons can be developed for $p \geq 3$. The most interesting case of the cubic DNLS equation for $p = 1$ is excluded from our consideration. To prove asymptotic stability of discrete solitons for $p \geq 3$, we extend the pointwise dispersive decay estimates from [@PS] to Strichartz estimates, which allow for better control of the dispersive parts of the solution. The nonlinear analysis follows the steps in the proof of asymptotic stability of continuous solitons by Mizumachi [@Miz]. In addition to analytical results, we also approximate the time evolution of small solitons numerically in the DNLS equation (\[dNLS\]) with $p = 1,2,3$. Not only do we confirm the asymptotic stability of discrete solitons in all these cases, but we also find that the actual decay rate of perturbations near the small soliton is faster than the one used in our analytical arguments. The article is organized as follows. The main result for $p \geq 3$ is formulated in Section 2. Linear estimates are derived in Section 3. The proof of the main theorem is developed in Section 4. Numerical illustrations for $p = 1, 2, 3$ are discussed in Section 5. Appendix A gives proofs of technical formulas used in Section 3. 
[**Acknowledgement.**]{} When the paper was essentially complete, we became aware of a similar work of Cuccagna & Tarulli [@CT], where asymptotic stability of small discrete solitons of the DNLS equation (\[dNLS\]) was proved for $p \geq 3$. Stefanov’s research is supported in part by NSF-DMS 0701802. Kevrekidis’ research is supported in part by NSF-DMS-0806762, NSF-CAREER and the Alexander von Humboldt Foundation. Preliminaries and the main result ================================= In what follows, we use bold-faced notations for vectors in discrete spaces $l_s^1$ and $l_s^2$ on $\mathbb{Z}$ defined by their norms $$\| {\bf u} \|_{l^1_s} := \sum_{n \in \mathbb{Z}} (1+n^2)^{s/2} |u_n|, \quad \| {\bf u} \|_{l^2_s} := \left( \sum_{n \in \mathbb{Z}} (1+n^2)^{s} |u_n|^2 \right)^{1/2}.$$ Components of ${\bf u}$ are denoted by regular font, e.g. $u_n$ for $n \in \mathbb{Z}$. We shall make the following assumptions on the external potential ${\bf V}$ defined on the lattice $\mathbb{Z}$ and on the spectrum of the self-adjoint operator $H = -\Delta + {\bf V}$ in $l^2$. - ${\bf V} \in l^1_{2\sigma}$ for a fixed $\sigma > \frac{5}{2}$. - ${\bf V}$ is generic in the sense that no solution $\mbox{\boldmath $\psi$}_0$ of equation $H \mbox{\boldmath $\psi$}_0 = 0$ exists in $l^2_{-\sigma}$ for $\frac{1}{2} < \sigma \leq \frac{3}{2}$. - ${\bf V}$ supports exactly one negative eigenvalue $\omega_0 < 0$ of $H$ with an eigenvector $\mbox{\boldmath $\psi$}_0 \in l^2$ and no eigenvalues above $4$. The first two assumptions (V1) and (V2) are needed for the dispersive decay estimates developed in [@PS]. The last assumption (V3) is needed for existence of a family $\mbox{\boldmath $\phi$}(\omega)$ of real-valued decaying solutions of the stationary DNLS equation $$\label{stationaryDNLS} (-\Delta + V_n) \phi_n(\omega) + \gamma \phi_n^{2p+1}(\omega) = \omega \phi_n(\omega), \quad n \in \mathbb{Z},$$ near $\omega = \omega_0 < 0$. This is a standard local bifurcation of decaying solutions in a system of infinitely many algebraic equations (see [@Nirenberg] for details). \[lemma-bifurcation\] Assume that ${\bf V} \in l^{\infty}$ and that $H$ has an eigenvalue $\omega_0$ with a normalized eigenvector $\mbox{\boldmath $\psi$}_0 \in l^2$ such that $\| \mbox{\boldmath $\psi$}_0 \|_{l^2} = 1$. Let $\epsilon := \omega - \omega_0$, $\gamma = +1$, and $\epsilon_0 > 0$ be sufficiently small. For any $\epsilon \in (0,\epsilon_0)$, there exists an $\epsilon$-independent constant $C > 0$ such that the stationary DNLS equation (\[stationaryDNLS\]) admits a solution $\mbox{\boldmath $\phi$}(\omega) \in C^2([\omega_0,\omega_0+\epsilon_0],l^2)$ satisfying $$\left\| \mbox{\boldmath $\phi$}(\omega) - \frac{\epsilon^{\frac{1}{2p}} \mbox{\boldmath $\psi$}_0}{\| \mbox{\boldmath $\psi$}_0 \|^{1+\frac{1}{p}}_{l^{2p+2}}} \right\|_{l^2} \leq C \epsilon^{1 + \frac{1}{2p}}.$$ Moreover, the solution $\mbox{\boldmath $\phi$}(\omega)$ decays exponentially to zero as $|n| \to \infty$. \[remark-bifurcation\] Because of the exponential decay of $\mbox{\boldmath $\phi$}(\omega)$ as $|n| \to \infty$, the solution $\mbox{\boldmath $\phi$}(\omega)$ exists in $l^2_{{\sigma}}$ for all ${\sigma}\geq 0$. In addition, since $ \| \mbox{\boldmath $\phi$}\|_{l^1} \leq C_{{\sigma}} \| \mbox{\boldmath $\phi$} \|_{l^2_{{\sigma}}}, $ for any ${\sigma}> \frac{1}{2}$, the solution $\mbox{\boldmath $\phi$}(\omega)$ also exists in $l^1$. The case $\gamma = -1$ with the local bifurcation to the domain $\omega < \omega_0$ is absolutely analogous. 
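Although no numerics are needed for Lemma \[lemma-bifurcation\], the local bifurcation is easy to visualize on a truncated lattice. The following minimal sketch is our illustration, not part of the analysis; the single-site potential of Section 5, the value $p = 3$, and the choices of $N$ and $\epsilon$ are arbitrary. It seeds Newton's method for the stationary DNLS equation (\[stationaryDNLS\]) with the leading-order approximation $\epsilon^{1/(2p)} \mbox{\boldmath $\psi$}_0 / \| \mbox{\boldmath $\psi$}_0 \|^{1+1/p}_{l^{2p+2}}$ and checks the error bound of the lemma.

```python
import numpy as np

# Truncated lattice n = -N,...,N with Dirichlet truncation of H = -Delta + V.
# V is the single-site potential of Section 5, used here only as an example.
N, p, gamma = 50, 3, +1.0
n = np.arange(-N, N + 1)
V = -1.0 * (n == 0)
H = np.diag(2.0 + V) - np.diag(np.ones(2 * N), 1) - np.diag(np.ones(2 * N), -1)

# Linear ground state (omega_0, psi_0) of H.
evals, evecs = np.linalg.eigh(H)
omega0, psi0 = evals[0], evecs[:, 0]
psi0 = psi0 * np.sign(psi0[N])            # fix the sign so that psi_0 > 0

# Leading-order seed of the bifurcation lemma for gamma = +1, omega = omega_0 + eps.
eps = 1.0e-2
omega = omega0 + eps
seed = eps ** (1.0 / (2 * p)) * psi0 / np.linalg.norm(psi0, 2 * p + 2) ** (1.0 + 1.0 / p)

# Newton iterations for F(phi) = (H - omega) phi + gamma phi^{2p+1} = 0.
phi = seed.copy()
I = np.eye(2 * N + 1)
for _ in range(50):
    F = (H - omega * I) @ phi + gamma * phi ** (2 * p + 1)
    J = H - omega * I + gamma * (2 * p + 1) * np.diag(phi ** (2 * p))
    step = np.linalg.solve(J, F)
    phi -= step
    if np.linalg.norm(step) < 1e-13:
        break

print("omega_0                =", omega0)
print("||phi - seed||_{l^2}   =", np.linalg.norm(phi - seed))
print("eps^{1 + 1/(2p)}       =", eps ** (1.0 + 1.0 / (2 * p)))
```

Decreasing $\epsilon$ should decrease the printed distance between the computed soliton and the leading-order seed at the rate $\epsilon^{1+1/(2p)}$, in line with the statement of the lemma.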
For simplification, we shall develop analysis for $\gamma = +1$ only. To work with solutions of the DNLS equation (\[dNLS\]) for all $t \in {\mathbb R}_+$ starting with some initial data at $t = 0$, we need global well-posedness of the Cauchy problem for (\[dNLS\]). Because $H$ is a bounded operator from $l^2$ to $l^2$, global well-posedness for (\[dNLS\]) follows from simple arguments based on the flux conservation equation $$\label{balance} i \frac{d}{dt} |u_n|^2 = u_n (\bar{u}_{n+1} + \bar{u}_{n-1}) - \bar{u}_n (u_{n+1}+u_{n-1})$$ and the contraction mapping arguments (see [@PP] for details). \[lemma-wellposedness\] Fix ${\sigma}\geq 0$. For any ${\bf u}_0 \in l^2_{{\sigma}}$, there exists a unique solution ${\bf u}(t) \in C^1(\mathbb{R}_+,l^2_{{\sigma}})$ such that ${\bf u}(0) = {\bf u}_0$ and ${\bf u}(t)$ depends continuously on ${\bf u}_0$. Global well-posedness holds also on $\mathbb{R}_-$ (and thus on $\mathbb{R}$) since the DNLS equation (\[dNLS\]) is a reversible dynamical system. We shall work in the positive time intervals only. Equipped with the results above, we decompose a solution to the DNLS equation (\[dNLS\]) into a family of stationary solutions with time varying parameters and a radiation part using the substitution $$\label{decomposition} {\bf u}(t) = e^{-i \theta(t)} \left( \mbox{\boldmath $\phi$}(\omega(t)) + {\bf z}(t) \right),$$ where $(\omega,\theta) \in \mathbb{R}^2$ represents a two-dimensional orbit of stationary solutions ${\bf u}(t) = e^{-i\theta -i \omega t} \mbox{\boldmath $\phi$}(\omega)$ (their time evolution will be specified later) and ${\bf z}(t) \in C^1(\mathbb{R}_+,l^2_{\sigma})$ solves the time-evolution equation in the form $$\begin{aligned} \label{time-evolution-z} i \dot{{\bf z}} = (H-\omega) {\bf z} - (\dot{\theta} - \omega) (\mbox{\boldmath $\phi$}(\omega) + {\bf z}) - i \dot{\omega} \partial_{\omega} \mbox{\boldmath $\phi$}(\omega) + {\bf N}(\mbox{\boldmath $\phi$}(\omega)+{\bf z}) - {\bf N}(\mbox{\boldmath $\phi$}(\omega)),\end{aligned}$$ where $H = -\Delta + {\bf V}$, $[{\bf N}(\mbox{\boldmath $\psi$})]_n = \gamma |\psi_n|^{2p} \psi_n$, and $\partial_{\omega} \mbox{\boldmath $\phi$}(\omega)$ exists thanks to Lemma \[lemma-bifurcation\]. The linearized time evolution at the stationary solution $ \mbox{\boldmath $\phi$}(\omega)$ involves operators $$L_- = H - \omega + {\bf W}, \quad L_+ = H - \omega + (2p+1) {\bf W},$$ where $W_n = \gamma \phi_n^{2p}(\omega)$ and ${\bf W}$ decays exponentially as $|n| \to \infty$ thanks to Lemma \[lemma-bifurcation\]. The linearized time evolution in variables ${\bf v} = {\rm Re}({\bf z})$ and ${\bf w} = {\rm Im}({\bf z})$ involves a symplectic structure which can be characterized by the non-self-adjoint eigenvalue problem $$\label{linearizedNLS} L_+ {\bf v} = - \lambda {\bf w}, \quad L_- {\bf w} = \lambda {\bf v}.$$ Using Lemma \[lemma-bifurcation\], we derive the following result. For any $\epsilon \in (0,\epsilon_0)$, the linearized eigenvalue problem (\[linearizedNLS\]) admits a double zero eigenvalue with a one-dimensional kernel, isolated from the rest of the spectrum. 
The generalized kernel is spanned by vectors $({\bf 0},\mbox{\boldmath $\phi$}(\omega)), (- \partial_{\omega} \mbox{\boldmath $\phi$}(\omega),{\bf 0}) \in l^2$ satisfying $$L_- \mbox{\boldmath $\phi$}(\omega) = {\bf 0}, \qquad L_+ \partial_{\omega} \mbox{\boldmath $\phi$}(\omega) = \mbox{\boldmath $\phi$}(\omega).$$ If $({\bf v},{\bf w}) \in l^2$ is symplectically orthogonal to the two-dimensional generalized kernel, then $$\langle {\bf v},\mbox{\boldmath $\phi$}(\omega) \rangle = 0, \quad \langle {\bf w},\partial_{\omega} \mbox{\boldmath $\phi$}(\omega) \rangle = 0,$$ where $\langle {\bf u},{\bf v} \rangle := \sum_{n \in \mathbb{Z}} u_n \bar{v}_n$. By Lemma 1 in [@PS], operator $H$ has the essential spectrum on $[0,4]$. Because of the exponential decay of ${\bf W}$ as $|n| \to \infty$, the essential spectrum of $L_+$ and $L_-$ is shifted by $-\omega \approx -\omega_0 > 0$, so that the zero point in the spectrum of the linearized eigenvalue problem (\[linearizedNLS\]) is isolated from the continuous spectrum and other isolated eigenvalues. The geometric kernel of the linearized operator $L = {\rm diag}(L_+,L_-)$ is one-dimensional for $\epsilon \in (0,\epsilon_0)$ since $L_- \mbox{\boldmath $\phi$}(\omega) = {\bf 0}$ is nothing but the stationary DNLS equation (\[stationaryDNLS\]), whereas $L_+$ has an empty kernel thanks to perturbation theory and Lemma \[lemma-bifurcation\]. Indeed, for a small $\epsilon \in (0,\epsilon_0)$, we have $$\langle \mbox{\boldmath $\psi$}_0, L_+ \mbox{\boldmath $\psi$}_0 \rangle = 2p \gamma \epsilon + {\cal O}(\epsilon^2) \neq 0.$$ By perturbation theory, a simple zero eigenvalue of $L_+$ for $\epsilon = 0$ becomes a positive eigenvalue for $\epsilon > 0$ (if $\gamma = +1$). The second (generalized) eigenvector $(- \partial_{\omega} \mbox{\boldmath $\phi$}(\omega),{\bf 0})$ is found by direct computation thanks to Lemma \[lemma-bifurcation\]. It remains to show that the third (generalized) eigenvector does not exist. If it did, it would satisfy the equation $$L_- {\bf w}_0 = -\partial_{\omega} \mbox{\boldmath $\phi$}(\omega).$$ However, $$\langle \mbox{\boldmath $\phi$}(\omega),\partial_{\omega} \mbox{\boldmath $\phi$}(\omega) \rangle = \frac{1}{2} \frac{d}{d \omega} \| \mbox{\boldmath $\phi$}(\omega) \|^2_{l^2} = \frac{\epsilon^{\frac{1}{p}-1}}{2p \| \mbox{\boldmath $\psi$}_0\|^{2 + \frac{2}{p}}_{l^{2p+2}}} \left( 1 + {\cal O}(\epsilon) \right) \neq 0$$ for $\epsilon \in (0,\epsilon_0)$ by Lemma \[lemma-bifurcation\]. Therefore, no ${\bf w}_0 \in l^2$ exists. To determine the time evolution of the varying parameters $(\omega,\theta)$ in the evolution equation (\[time-evolution-z\]), we shall add the condition that ${\bf z}(t)$ is symplectically orthogonal to the two-dimensional null subspace of the linearized problem (\[linearizedNLS\]). 
To normalize the eigenvectors uniquely, we set $$\label{eigenvectors-normalized} \mbox{\boldmath $\psi$}_1 = \frac{\mbox{\boldmath $\phi$}(\omega)}{\|\mbox{\boldmath $\phi$}(\omega)\|_{l^2}}, \quad \mbox{\boldmath $\psi$}_2 = \frac{\partial_{\omega} \mbox{\boldmath $\phi$}(\omega)}{ \|\partial_{\omega} \mbox{\boldmath $\phi$}(\omega)\|_{l^2}}$$ and require that $$\label{constraints} \langle {\rm Re}{\bf z}(t),\mbox{\boldmath $\psi$}_1 \rangle = \langle {\rm Im}{\bf z}(t),\mbox{\boldmath $\psi$}_2 \rangle = 0.$$ By Lemma \[lemma-bifurcation\], both eigenvectors $\mbox{\boldmath $\psi$}_1$ and $\mbox{\boldmath $\psi$}_2$ are locally close to $\mbox{\boldmath $\psi$}_0$, the eigenvector of $H$ for eigenvalue $\omega_0$, in any norm, e.g. $$\| \mbox{\boldmath $\psi$}_1 - \mbox{\boldmath $\psi$}_0 \|_{l^2} + \| \mbox{\boldmath $\psi$}_2 - \mbox{\boldmath $\psi$}_0 \|_{l^2} \leq C \epsilon,$$ for some $C > 0$. Although the vector field of the time evolution problem (\[time-evolution-z\]) does not lie in the orthogonal complement of $\mbox{\boldmath $\psi$}_0$, that is in the absolutely continuous spectrum of $H$, the difference is small for small $\epsilon > 0$. We shall prove that the conditions (\[constraints\]) define a unique decomposition (\[decomposition\]). \[lemma-decomposition\] Fix $\epsilon > 0$ and $\delta > 0$ be sufficiently small. Assume that there exists $T = T(\epsilon,\delta)$ and $C_0 > 0$, such that ${\bf u}(t) \in C^1([0,T],l^2)$ satisfies $$\label{u-bound} \| {\bf u}(t) - \mbox{\boldmath $\phi$}(\omega_0 + \epsilon))\|_{l^2} \leq C_0 \delta \epsilon^{\frac{1}{2p}},$$ uniformly on $[0,T]$. There exists a unique choice of $(\omega,\theta) \in C^1([0,T],\mathbb{R}^2)$ and ${\bf z}(t) \in C^1([0,T],l^2)$ in the decomposition (\[decomposition\]) provided the constraints (\[constraints\]) are met. Moreover, there exists $C > 0$ such that $$\label{theta-omega-bounds} |\omega(t) - \omega_0 - \epsilon | \leq C \delta \epsilon, \quad | \theta(t)| \leq C \delta, \quad \| {\bf z}(t) \|_{l^2} \leq C \delta \epsilon^{\frac{1}{2p}},$$ uniformly on $[0,T]$. We write the decomposition (\[decomposition\]) in the form $$\label{z-representation} {\bf z} = e^{i \theta} \left({\bf u} - \mbox{\boldmath $\phi$}(\omega_0+\epsilon)\right) + \left( e^{i \theta} \mbox{\boldmath $\phi$}(\omega_0+\epsilon) - \mbox{\boldmath $\phi$}(\omega) \right).$$ First, we show that the constraints (\[constraints\]) give unique values of $(\omega,\theta)$ satisfying bounds (\[theta-omega-bounds\]) uniformly in $[0,T]$ provided the bound (\[u-bound\]) holds. To do so, we rewrite (\[constraints\]) and (\[z-representation\]) as a fixed-point problem ${\bf F}(\omega,\theta) = {\bf 0}$, where ${\bf F} : \mathbb{R}^2 \mapsto \mathbb{R}^2$ is given by $${\bf F}(\omega,\theta) = \left[ \begin{array}{c} \langle {\rm Re} ({\bf u} - \mbox{\boldmath $\phi$}^{(0)}) e^{i \theta},\mbox{\boldmath $\psi$}_1 \rangle + \langle \mbox{\boldmath $\phi$}^{(0)} \cos \theta - \mbox{\boldmath $\phi$}(\omega),\mbox{\boldmath $\psi$}_1 \rangle \\ \langle {\rm Im} ({\bf u} - \mbox{\boldmath $\phi$}^{(0)}) e^{i \theta},\mbox{\boldmath $\psi$}_2 \rangle + \langle \mbox{\boldmath $\phi$}^{(0)} \sin \theta, \mbox{\boldmath $\psi$}_2 \rangle \end{array} \right],$$ where $\mbox{\boldmath $\phi$}^{(0)} := \mbox{\boldmath $\phi$}(\omega_0 + \epsilon)$. We note that ${\bf F}$ is $C^1$ in $(\theta,\omega)$ thanks to Lemma \[lemma-bifurcation\]. 
Direct computations give the vector field $${\bf F}(\omega_0+\epsilon,0) = \left[ \begin{array}{c} \langle {\rm Re} ({\bf u} - \mbox{\boldmath $\phi$}^{(0)}),\mbox{\boldmath $\psi$}^{(0)}_1 \rangle \\ \langle {\rm Im} ({\bf u} - \mbox{\boldmath $\phi$}^{(0)}),\mbox{\boldmath $\psi$}^{(0)}_2 \rangle \end{array} \right]$$ and the Jacobian $D {\bf F}(\omega_0+\epsilon,0) = {\bf D}_1 + {\bf D}_2$ with $$\begin{aligned} {\bf D}_1 & = & \left[ \begin{array}{cc} - \langle \partial_{\omega} \mbox{\boldmath $\phi$}^{(0)},\mbox{\boldmath $\psi$}_1^{(0)} \rangle & 0 \\ 0 & \langle \mbox{\boldmath $\phi$}^{(0)}, \mbox{\boldmath $\psi$}^{(0)}_2 \rangle \end{array} \right], \\ {\bf D}_2 & = & \left[ \begin{array}{cc} \langle {\rm Re} ({\bf u} - \mbox{\boldmath $\phi$}^{(0)}), \partial_{\omega} \mbox{\boldmath $\psi$}_1^{(0)} \rangle & - \langle {\rm Im} ({\bf u} - \mbox{\boldmath $\phi$}^{(0)}), \mbox{\boldmath $\psi$}^{(0)}_1 \rangle \\ \langle {\rm Im} ({\bf u} - \mbox{\boldmath $\phi$}^{(0)}), \partial_{\omega} \mbox{\boldmath $\psi$}^{(0)}_2 \rangle & \langle {\rm Re} ({\bf u} - \mbox{\boldmath $\phi$}^{(0)}),\mbox{\boldmath $\psi$}^{(0)}_2 \rangle \end{array} \right],\end{aligned}$$ where $\mbox{\boldmath $\psi$}^{(0)}_{1,2} = \mbox{\boldmath $\psi$}_{1,2} |_{\omega = \omega_0 + \epsilon}$ and $\partial_{\omega} \mbox{\boldmath $\psi$}^{(0)}_{1,2} = \partial_{\omega} \mbox{\boldmath $\psi$}_{1,2} |_{\omega = \omega_0 + \epsilon}$. Thanks to the bound (\[u-bound\]) and the normalization of $\mbox{\boldmath $\psi$}_{1,2}$, there exists an $(\epsilon,\delta)$-independent constant $C_0 > 0$ such that $$\| {\bf F}(\omega_0+\epsilon,0) \| \leq C_0 \delta \epsilon^{\frac{1}{2p}}.$$ On the other hand, $D {\bf F}(\omega_0+\epsilon,0)$ is invertible for small $\epsilon > 0$ since $$|({\bf D}_1)_{11}| \geq C_1 \epsilon^{\frac{1}{2p}-1}, \quad |({\bf D}_1)_{22}| \geq C_2 \epsilon^{\frac{1}{2p}}$$ and $$|({\bf D}_2)_{11}| + |({\bf D}_2)_{21}| \leq C_3 \delta \epsilon^{\frac{1}{2p} - 1}, \quad |({\bf D}_2)_{12}| + |({\bf D}_2)_{22}| \leq C_4 \delta \epsilon^{\frac{1}{2p}},$$ for some ($\epsilon$,$\delta$)-independent constants $C_1,C_2,C_3,C_4 > 0$. By the Implicit Function Theorem, there exists a unique root of ${\bf F}(\omega,\theta) = {\bf 0}$ near $(\omega_0+\epsilon,0)$ for any ${\bf u}(t)$ satisfying (\[u-bound\]) such that $$|\omega(t) - \omega_0 - \epsilon | \leq C \delta \epsilon, \quad | \theta(t)| \leq C \delta,$$ for some $C > 0$. Moreover, if ${\bf u}(t) \in C^1([0,T],l^2)$, then $(\omega,\theta) \in C^1([0,T],\mathbb{R}^2)$. Finally, existence of a unique ${\bf z}(t)$ and the bound $\| {\bf z}(t) \|_{l^2} \leq C \delta \epsilon^{\frac{1}{2p}}$ follow from the representation (\[z-representation\]) and the triangle inequality. Assuming $(\omega,\theta) \in C^1([0,T],\mathbb{R}^2)$ at least locally in time and using Lemma \[lemma-decomposition\], we define the time evolution of $(\omega,\theta)$ from the projections of the time evolution equation (\[time-evolution-z\]) with the symplectic orthogonality conditions (\[constraints\]). 
The resulting system is written in the matrix–vector form $$\label{3} {\bf A}(\omega,{\bf z}) \left[ \begin{array}{cc} \dot{\omega} \\ \dot{\theta} - \omega \end{array} \right] = {\bf f}(\omega,{\bf z}),$$ where $${\bf A}(\omega,{\bf z}) = \left[ \begin{array}{ccc} \langle \partial_{\omega} \mbox{\boldmath $\phi$}(\omega),\mbox{\boldmath $\psi$}_1 \rangle - \langle {\rm Re} {\bf z},\partial_{\omega} \mbox{\boldmath $\psi$}_1 \rangle & \langle {\rm Im} {\bf z},\mbox{\boldmath $\psi$}_1 \rangle \\ \langle {\rm Im} {\bf z}, \partial_{\omega} \mbox{\boldmath $\psi$}_2 \rangle & \langle \mbox{\boldmath $\phi$}(\omega) + {\rm Re} {\bf z}, \mbox{\boldmath $\psi$}_2 \rangle \end{array} \right]$$ and $${\bf f}(\omega,{\bf z}) = \left[ \begin{array}{l} \langle {\rm Im} {\bf N}(\mbox{\boldmath $\phi$}+{\bf z})- {\bf W} {\bf z}, \mbox{\boldmath $\psi$}_1 \rangle \\ \langle {\rm Re} {\bf N}(\mbox{\boldmath $\phi$}+{\bf z}) - {\bf N}(\mbox{\boldmath $\phi$})-(2p+1) {\bf W} {\bf z}, \mbox{\boldmath $\psi$}_2 \rangle \end{array} \right].$$ Using an elementary property for power functions $$||a+b|^{2p}(a+b)-|a|^{2p}a|\leq C_p (|a|^{2p}|b|+|b|^{2p+1}),$$ for some $C_p > 0$, where $a,b \in \mathbb{C}$ are arbitrary, we bound the vector fields of (\[time-evolution-z\]) and (\[3\]) by $$\begin{aligned} \label{estimate-N} \| {\bf N}(\mbox{\boldmath $\phi$}(\omega)+{\bf z}) - {\bf N}(\mbox{\boldmath $\phi$}(\omega) \|_{l^2} & \leq & C \left( \| |\mbox{\boldmath $\phi$}(\omega)|^{2p} |{\bf z}| \|_{l^2} + \| {\bf z} \|_{l^2}^{2p+1} \right), \\ \label{estimate-f} \| {\bf f}(\omega,{\bf z}) \| & \leq & C \sum_{j=1}^2 \left( \| |\mbox{\boldmath $\phi$}(\omega)|^{2p-1} |\mbox{\boldmath $\psi$}_j| |{\bf z}|^2 \|_{l^1} + \| |\mbox{\boldmath $\psi$}_j| |{\bf z}|^{2p+1} \|_{l^1} \right),\end{aligned}$$ for some $C > 0$, where the pointwise multiplication of vectors on $\mathbb{Z}$ is understood in the sense $$(|\mbox{\boldmath $\phi$}| |\mbox{\boldmath $\psi$}|)_n = \phi_n \psi_n.$$ By Lemmas \[lemma-bifurcation\] and \[lemma-decomposition\], ${\bf A}(\omega,{\bf z})$ is invertible for a small ${\bf z} \in l^2$ and a small $\epsilon \in (0,\epsilon_0)$ so that solutions of system (\[3\]) satisfy the estimates $$\begin{aligned} \label{33} |\dot{\omega}| & \leq & C \epsilon^{2-\frac{1}{p}} \left( \| |\mbox{\boldmath $\psi$}_1| |{\bf z}|^2 \|_{l^1} + \| |\mbox{\boldmath $\psi$}_2| |{\bf z}|^2 \|_{l^1} \right), \\ \label{33a} |\dot{\theta}-\omega| & \leq & C \epsilon^{1-\frac{1}{p}} \left( \| |\mbox{\boldmath $\psi$}_1| |{\bf z}|^2 \|_{l^1} + \| |\mbox{\boldmath $\psi$}_2| |{\bf z}|^2 \|_{l^1} \right),\end{aligned}$$ for some $C > 0$ uniformly in $\| {\bf z} \|_{l^2} \leq C_0 \epsilon^{\frac{1}{2p}}$ for some $C_0 > 0$. [The estimates (\[33\]) and (\[33a\]) show that if $\| {\bf z} \|_{l^2} \leq C \delta \epsilon^{\frac{1}{2p}}$ for some $C > 0$, then $$|\omega(t) - \omega(0)| \leq C \delta^2 \epsilon^2, \quad \left| \theta(t) - \int_0^t \omega(t') dt' \right| \leq C \delta^2 \epsilon,$$ uniformly on $[0,T]$ for any fixed $T > 0$. These bounds are smaller than bounds (\[theta-omega-bounds\]) of Lemma \[lemma-decomposition\]. They become comparable with bounds (\[theta-omega-bounds\]) for larger time intervals $[0,T]$, where $T \leq \frac{C_0}{\delta \epsilon}$ for some $C_0 > 0$. 
Our main task is to extend these bounds globally to $T = \infty$.]{} By the theorem on orbital stability in [@weinstein], the trajectory of the DNLS equation (\[dNLS\]) originating from a point in a local neighborhood of the stationary solution $\mbox{\boldmath $\phi$}(\omega(0))$ remains in a local neighborhood of the stationary solution $\mbox{\boldmath $\phi$}(\omega(t))$ for all $t \in \mathbb{R}_+$. By a definition of orbital stability, for any $\mu_0 > 0$ there exists a $\nu_0 > 0$ such that if $|\omega(0) - \omega_0| \leq \nu_0$ then $|\omega(t) - \omega_0| \leq \mu_0$ uniformly on $t \in \mathbb{R}_+$. Therefore, there exists a $\delta(\epsilon)$ for each $\epsilon \in (0,\epsilon_0)$ such that $T(\epsilon,\delta) = \infty$ for any $\delta \in (0,\delta(\epsilon))$ in Lemma \[lemma-decomposition\]. To prove the main result on asymptotic stability, we need to show that the trajectory approaches to the stationary solution $\mbox{\boldmath $\phi$}(\omega_{\infty})$ for some $\omega_{\infty} \in (\omega_0,\omega_0 + \epsilon_0)$. Our main result is formulated as follows. \[theorem-main\] Assume (V1)–(V3), fix $\gamma = +1$ and $p \geq 3$. Fix $\epsilon > 0$ and $\delta > 0$ be sufficiently small and assume that $\theta(0) = 0$, $\omega(0) = \omega_0 + \epsilon$, and $$\| {\bf u}(0) - \mbox{\boldmath $\phi$}(\omega_0 + \epsilon) \|_{l^2} \leq C_0 \delta \epsilon^{\frac{1}{2p}}$$ for some $C_0 > 0$. Then, there exist $\omega_{\infty} \in (\omega_0,\omega_0 + \epsilon_0)$, $(\omega,\theta) \in C^1(\mathbb{R}_+,\mathbb{R}^2)$, and a solution ${\bf u}(t) \in X:= C^1(\mathbb{R}_+,l^2)\cap L^6(\mathbb{R}_+,l^\infty)$ to the DNLS equation (\[dNLS\]) such that $$\lim_{t \to \infty} \omega(t) = \omega_{\infty}, \quad \| {\bf u}(t) - e^{-i\theta(t)} \mbox{\boldmath $\phi$}(\omega(t)) \|_{X} \leq C{\delta}{\varepsilon}^{1/(2p)}.$$ Theorem \[theorem-main\] is proved in Section 4. To bound solutions of the time-evolution problem (\[time-evolution-z\]) in the space $X$ (intersected with some other spaces of technical nature), we need some linear estimates, which are described in Section 3. Linear estimates ================ We need several types of linear estimates, each is designed to control different nonlinear terms of the vector field of the evolution equation (\[time-evolution-z\]). For notational convenience, we shall use $L^p_t$ and $l^q_n$ to denote $L^p$ space on $t \in [0,T]$ and $l^q$ space on $n \in \mathbb{Z}$, where $T > 0$ is an arbitrary time including $T = \infty$. The notation $<n> = (1 + n^2)^{1/2}$ is used for the weights in $l^q_n$ norms. The constant $C > 0$ is a generic constant, which may change from one line to another line. Decay and Strichartz estimates ------------------------------ Under assumptions (V1)–(V2) on the potential, the following result was proved in [@PS]. \[lemma-dispersive\] Fix $\sigma > \frac{5}{2}$ and assume (V1)–(V2). There exists a constant $C > 0$ depending on ${\bf V}$ such that $$\begin{aligned} \label{eq:15} \left\| \langle n \rangle^{-{\sigma}} e^{-i t H}P_{a.c.}(H) {\bf f} \right\|_{l^2_n} & \leq & C (1+t)^{-3/2} \| \langle n \rangle^{{\sigma}} {\bf f} \|_{l^2_n}, \\ \label{eq:16} \left\| e^{-i t H}P_{a.c.}(H) {\bf f} \right\|_{l^\infty_n} & \leq & C (1+t)^{-1/3} \| {\bf f} \|_{l^1_n},\end{aligned}$$ for all $t \in \mathbb{R}_+$, where $P_{a.c.}(H)$ is the projection to the absolutely continuous spectrum of $H$. 
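Before using these bounds, it may help to see the discrete decay rate $t^{-1/3}$ of (\[eq:16\]) concretely. The sketch below is our illustration and treats only the free case $V \equiv 0$, where $P_{a.c.}(H)$ is the identity and the solution is explicit: $u_n(t) = e^{-2it} i^n J_n(2t)$ solves $i \dot{u}_n = -(\Delta u)_n$ with $u_n(0) = \delta_{n,0}$, so $\sup_n |u_n(t)| = \max_n |J_n(2t)|$.

```python
import numpy as np
from scipy.special import jv

# Free discrete Schrodinger equation (V = 0): with u_n(0) = delta_{n,0},
# the exact solution is u_n(t) = exp(-2it) * i^n * J_n(2t), so the l^infty
# norm equals max_n |J_n(2t)|; it should scale like t^{-1/3} (cf. eq. (eq:16)).
for t in (10.0, 100.0, 1000.0):
    n = np.arange(-int(2 * t) - 100, int(2 * t) + 101)  # wave packet lives in |n| <~ 2t
    sup = np.abs(jv(n, 2.0 * t)).max()
    print(f"t = {t:7.1f}   sup_n |u_n(t)| = {sup:.3e}   t^(1/3) * sup = {t ** (1.0 / 3.0) * sup:.3f}")
```

The printed combination $t^{1/3} \sup_n |u_n(t)|$ stays approximately constant in $t$, consistent with the $(1+t)^{-1/3}$ bound of the lemma.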
Unlike the continuous case, the upper bound (\[eq:16\]) is non-singular as $t \to 0$ because the discrete case always enjoys an estimate $\left\| {\bf f} \right\|_{l^\infty_n} \leq \| {\bf f} \|_{l^2_n} \leq \| {\bf f} \|_{l^1_n}$. Using Lemma \[lemma-dispersive\] and Theorem 1.2 of Keel-Tao [@KT], the following corollary transfers pointwise decay estimates into Strichartz estimates. \[corollary-Strichartz\] There exists a constant $C > 0$ such that $$\begin{aligned} \label{eq:Strichartz1} \left\| e^{-i t H} P_{a.c.}(H) {\bf f} \right\|_{L^6_t l^{\infty}_n \cap L^{\infty}_t l^2_n} & \leq & C \| {\bf f} \|_{l^2_n}, \\ \label{eq:Strichartz2} \left\| \int_0^t e^{-i (t-s) H} P_{a.c.}(H) {\bf g}(s) ds \right\|_{L^6_t l^{\infty}_n \cap L^{\infty}_t l^2_n} & \leq & C \| {\bf g} \|_{L^1_t l^2_n},\end{aligned}$$ where the norm in $L^p_t l^q_n$ is defined by $$\| {\bf f} \|_{L^p_t l^q_n} = \left( \int_{\mathbb{R}_+} \left( \| {\bf f}(t) \|_{l^q_n} \right)^p dt \right)^{1/p}.$$ Time averaged estimates ----------------------- To control the evolution of the varying parameters $(\omega,\theta)$, we derive additional time averaged estimates. Similar to the continuous case, these estimates are only needed in one dimension, because the time decay provided by the Strichartz estimates is insufficient to guarantee time integrability of $\dot{\omega}(t)$ and $\dot{\theta}(t)-\omega(t)$ bounded from above by the estimates (\[33\]) and (\[33a\]). Without the time integrability of these quantities, the arguments on the decay of various norms of ${\bf z}(t)$ satisfying the time evolution problem (\[time-evolution-z\]) cannot be closed. \[le:01\] Fix $\sigma > \frac{5}{2}$ and assume (V1) and (V2). There exists a constant $C > 0$ depending on ${\bf V}$ such that $$\begin{aligned} \label{eq:01} \|<n>^{-3/2} e^{-i t H} P_{a.c.}(H) {\bf f} \|_{l^\infty_n L^2_t} & \leq & C\| {\bf f} \|_{l^2_n} \\ \label{eq:02} \left\|\int_{\mathbb{R}_+} e^{-i t H} P_{a.c.}(H) {\bf F}(s)dt \right\|_{l^2_n} & \leq & C\|<n>^{3/2} {\bf F} \|_{l^1_nL^2_t}, \\ \label{eq:033} \left\|<n>^{-{\sigma}} \int_0^t e^{-i(t-s)H} P_{a.c.}(H) {\bf F}(s) ds \right\|_{l^\infty_n L^2_t} & \leq & C \|<n>^{{\sigma}} {\bf F} \|_{l^1_n L^2_t} \\ \label{eq:0333} \left\|<n>^{-{\sigma}} \int_0^t e^{-i(t-s)H} P_{a.c.}(H) {\bf F}(s) ds \right\|_{l^\infty_n L^2_t} & \leq & C \| {\bf F} \|_{L^1_t l^2_n} \\ \label{eq:03} \left\|\int_0^t e^{-i(t-s)H} P_{a.c.}(H) {\bf F}(s) ds \right\|_{L^6_tl^\infty_n \cap L^{\infty}_t l^2_n} & \leq & C \|<n>^3 {\bf F} \|_{L^2_t l^2_n}.\end{aligned}$$ To proceed with the proof, let us set up a few notations. First, introduce the perturbed resolvent $R_V({\lambda}):=(H-{\lambda})^{-1}$ for ${\lambda}\in \mathbb{C} \backslash [0,4]$. We proved in [@PS Theorem 1] that for any fixed $\omega \in (0,4)$, there exists $R_V^{\pm}(\omega) = \lim_{\epsilon \downarrow 0} R(\omega \pm i \epsilon)$ in the norm of $B({\sigma},-{\sigma})$ for any $\sigma > \frac{1}{2}$, where $B({\sigma},-{\sigma})$ denotes the space of bounded operators from $l^2_{{\sigma}}$ to $l^2_{-{\sigma}}$. Next, we recall the Cauchy formula for $e^{i t H}$ $$\label{eq:010} e^{-i t H} P_{a.c.}(H) = {\frac{1}{\pi}} \int_0^4 e^{-i t \omega} {\rm Im} R_V (\omega) d\omega = \frac{1}{2\pi i} \int_0^4 e^{-i t \omega} \left[ R^+(\omega) - R^-(\omega) \right] d\omega,$$ where the integral is understood in norm $B({\sigma},-{\sigma})$. We shall parameterize the interval $[0,4]$ by $\omega = 2 - 2 \cos(\theta)$ for $\theta \in [-\pi,\pi]$. 
Let $\chi_0, \chi \in C^{\infty}_0$ with $\chi_0 +\chi = 1$ for all $\theta\in [-\pi, \pi]$, so that $${\rm supp} \chi_0 \subset [-\theta_0,\theta_0] \cup (-\pi, -\pi+\theta_0) \cup (\pi-\theta_0, \pi)$$ and $${\rm supp} \chi \subset [\theta_0/2,\pi-\theta_0/2] \cup [-\pi+\theta_0/2,-\theta_0/2],$$ where $0< \theta_0 \leq \frac{\pi}{4}$. Note that the support of $\chi$ stays away from both $0$ and $\pi$. Following Mizumachi [@Miz], the proof of Lemma \[le:01\] relies on the following technical lemma. \[le:08\] Assume (V1) and (V2). There exists a constant $C > 0$ such that $$\begin{aligned} \label{eq:05} & & \sup_{n \in \mathbb{Z}} \|\chi R^{\pm}_V(\omega) {\bf f} \|_{L^2_{\omega}(0,4)} \leq C\| {\bf f} \|_{l^2_n}, \\ \label{eq:06} & & \sup_{n \in \mathbb{Z}} \| <n>^{-3/2} \chi_0 R^{\pm}_V({\omega}) {\bf f}\|_{L^2_{\omega}(0,4)}\leq C\| {\bf f} \|_{l^2_n}.\end{aligned}$$ The proof of Lemma \[le:08\] is developed in Appendix A. Using Lemma \[le:08\], we can now prove Lemma \[le:01\]. [*Proof of Lemma \[le:01\].*]{} Let us first show (\[eq:033\]), since it can be deduced directly from the dispersive decay estimate (\[eq:15\]), although it can also be viewed (and proved) by a duality argument. Indeed, (\[eq:033\]) is equivalent to $$\|<n>^{-{\sigma}} \int_0^t e^{-i (t-s)H} P_{a.c.}(H) <n>^{-{\sigma}} {\bf G}(s) ds \|_{l^\infty_n L^2_t}\leq \| {\bf G} \|_{l^1_n L^2_t}.$$ By Krein’s theorem, for every Banach space $X$, the elements of the space $l^1_n(X)$ are weak limits of linear combinations of functions in the form $\delta_{n,n_0} x$, where $x\in X$, $n_0\in \mathbb{Z}$, and $\delta_{n,n_0}$ is Kronecker’s symbol. Thus, to prove the last estimate, we need to check that it holds for $G_n(s) = {\delta}_{n,n_0} g(s)$, where $g\in L^2_t$. By Minkowski’s inequality, the obvious embedding $l^2\hookrightarrow l^\infty$, and the dispersive decay estimate (\[eq:15\]) for any $\sigma > \frac{5}{2}$, we have $$\begin{aligned} & & \left\| <n>^{-{\sigma}} \int_0^t e^{- i (t-s)H} P_{a.c.}(H) <n>^{-{\sigma}} {\delta}_{n,n_0} g(s) ds \right\|_{l^\infty_n L^2_t} \\ & & \leq C \left\| <n>^{-{\sigma}} \int_0^t \left\| e^{- i (t-s)H} P_{a.c.}(H) <n>^{-{\sigma}} {\delta}_{n,n_0} \right\|_{l^2_n} |g(s)| ds \right\|_{L^2_t} \\ & & \leq C \left\| \int_0^t \frac{|g(s)| ds}{(1 + t-s)^{3/2}} \right\|_{L^2_t}\leq C \|g\|_{L^2_t},\end{aligned}$$ where in the last step, we have used Young’s inequality $L^1*L^2 \hookrightarrow L^2$. We show next that (\[eq:02\]), (\[eq:0333\]), and (\[eq:03\]) follow from (\[eq:01\]). Indeed, (\[eq:02\]) is simply a dual of (\[eq:01\]) and is hence equivalent to it. For (\[eq:0333\]), we apply the so-called averaging principle, which tells us that to prove (\[eq:0333\]), it is sufficient to show it for ${\bf F}(t)= \delta(t-t_0) {\bf f}$, where ${\bf f} \in l^2_n$ and $\delta(t-t_0)$ is Dirac’s delta-function. Therefore, we obtain $$\begin{aligned} \left\|<n>^{- {\sigma}} \int_0^t e^{- i (t-s)H} \delta(s - t_0) P_{a.c.}(H) {\bf f} ds \right\|_{l^\infty_n L^2_t} & = & \|<n>^{-{\sigma}} e^{- i (t-t_0)H} P_{a.c.}(H) {\bf f} \|_{l^\infty_n L^2_t} \\ & \leq & \|<n>^{-3/2} e^{- i (t-t_0)H} P_{a.c.}(H) {\bf f} \|_{l^\infty_n L^2_t} \\ & \leq & C\| {\bf f} \|_{l^2_n},\end{aligned}$$ where in the last step, we have used (\[eq:01\]). For (\[eq:03\]), we argue as follows. Define $$\begin{aligned} T {\bf F}(t) & = & \int_{\mathbb{R}} e^{-i(t-s)H} P_{a.c.}(H) {\bf F}(s)ds \\ & = & e^{-i t H} P_{a.c.}(H) \left( \int_{\mathbb{R}} e^{- i s H} P_{a.c.}(H) {\bf F}(s)ds \right) \\ & = & e^{-i t H} P_{a.c.}(H) {\bf f},\end{aligned}$$ where ${\bf f} = \int_{\mathbb{R}} e^{-i s H} P_{a.c.}(H) {\bf F}(s) ds$. 
By an application of the Strichartz estimate (\[eq:Strichartz1\]) and subsequently (\[eq:02\]), we obtain $$\begin{aligned} \| T {\bf F} \|_{L^6_t l^\infty_n \cap L^\infty_t l^2_n} \leq C \|{\bf f}\|_{l^2_n} \leq \|<n>^{3/2} {\bf F}\|_{l^1_n L^2_t} \leq C \|<n>^{3} {\bf F}\|_{l^2_n L^2_t} = C \|<n>^{3} {\bf F}\|_{L^2_t l^2_n},\end{aligned}$$ where in the last two steps, we have used Hölder’s inequality and the fact that $l^2_n$ and $L^2_t$ commute. Now, by the Christ–Kiselev lemma (e.g. Theorem 1.2 in [@KT]), we conclude that the estimate (\[eq:03\]) applies to $\int_{0}^t e^{-i(t-s)H} P_{a.c.}(H) {\bf F}(s) ds$, similar to $T {\bf F}(t)$. To complete the proof of Lemma \[le:01\], it only remains to prove (\[eq:01\]). Let us write $$\begin{aligned} e^{-i t H} P_{a.c.}(H) = \chi e^{-i t H} P_{a.c.}(H) + \chi_0 e^{-i t H} P_{a.c.}(H).\end{aligned}$$ Take a test function ${\bf g}(t)$ such that $\| {\bf g} \|_{l^1_n L^2_t}=1$ and obtain $$\begin{aligned} \left| {\langle \chi e^{-i t H} P_{a.c.}(H) {\bf f},{\bf g}(t) \rangle}_{n,t} \right| & = & {\frac{1}{\pi}} \left|\int_0^4 {\langle \chi {\rm Im} R_V({\omega}) {\bf f},\int_{\mathbb{R}} e^{-i t {\omega}} {\bf g}(t)dt \rangle}_n d{\omega}\right| \\ & \leq & C \int_0^4 {\langle |\chi R_V({\omega}) {\bf f}|,|\hat{{\bf g}}({\omega})| \rangle}_n d{\omega}\\ & \leq & C \|\chi R^{\pm}_V({\omega}) {\bf f}\|_{l^{\infty}_n L^2_{{\omega}}(0,4)} \|\hat{{\bf g}}\|_{l^1_n L^2_{\omega}(0,4)}.\end{aligned}$$ By Plancherel’s theorem, $\|\hat{{\bf g}}\|_{l^1_n L^2_{\omega}(0,4)} \leq \|\hat{{\bf g}}\|_{l^1_n L^2_{\omega}(\mathbb{R})} \leq \| {\bf g} \|_{l^1_n L^2_t}=1$. Using (\[eq:05\]), we obtain $$\left\| \chi e^{-i t H} P_{a.c.}(H) {\bf f} \right\|_{l^\infty_n L^2_t} = \sup_{ \|{\bf g}\|_{l^1_n L^2_t}=1} \left| {\langle \chi e^{-i t H} P_{a.c.}(H) {\bf f},{\bf g}(t) \rangle}_{n,t} \right| \leq C \|{\bf f} \|_{l^2_n}.$$ Similarly, using (\[eq:06\]) instead of (\[eq:05\]), one concludes $$\left\|<n>^{-3/2} \chi_0 e^{-i t H} P_{a.c.}(H) {\bf f} \right\|_{l^\infty_n L^2_t} = \sup_{ \|<n>^{3/2} {\bf g}\|_{l^1_n L^2_t}=1} \left| {\langle \chi_0 e^{-i t H} P_{a.c.}(H) {\bf f},{\bf g}(t) \rangle}_{n,t} \right| \leq C \|{\bf f}\|_{l^2_n}.$$ Combining the two estimates, we obtain (\[eq:01\]). Proof of Theorem \[theorem-main\] ================================= Let ${\bf y}(t) = e^{-i \theta(t)} {\bf z}(t)$ and write the time-evolution problem for ${\bf y}(t)$ in the form $$i \dot{\bf y} = H {\bf y} + {\bf g}_1 + {\bf g}_2 + {\bf g}_3,$$ where $$\begin{aligned} {\bf g}_1 = \left( {\bf N}(\mbox{\boldmath $\phi$} + {\bf y} e^{i \theta}) - {\bf N}(\mbox{\boldmath $\phi$}) \right) e^{- i \theta}, \;\; {\bf g}_2 = -(\dot{\theta} - \omega) \mbox{\boldmath $\phi$} e^{-i \theta}, \;\; {\bf g}_3 = - i \dot{\omega} \partial_{\omega} \mbox{\boldmath $\phi$}(\omega) e^{-i \theta}.\end{aligned}$$ Let $P_0 = \langle \cdot,\mbox{\boldmath $\psi$}_0 \rangle \mbox{\boldmath $\psi$}_0$, $Q = (I - P_0) \equiv P_{a.c.}(H)$, and decompose the solution ${\bf y}(t)$ into two orthogonal parts $${\bf y}(t) = a(t) \mbox{\boldmath $\psi$}_0 + \mbox{\boldmath $\eta$}(t),$$ where $\langle \mbox{\boldmath $\psi$}_0, \mbox{\boldmath $\eta$} \rangle=0$ and $a(t) = \langle {\bf y}(t), \mbox{\boldmath $\psi$}_0\rangle$. 
The new coordinates $a(t)$ and $\mbox{\boldmath $\eta$}(t)$ satisfy the time evolution problem $$\left\{ \begin{array}{ccl} i \dot{a} & = & \omega_0 a + \langle {\bf g}, \mbox{\boldmath $\psi$}_0 \rangle, \\ i \dot{\mbox{\boldmath $\eta$}} & = & H \mbox{\boldmath $\eta$} + Q {\bf g} \end{array} \right.$$ where ${\bf g} = \sum_{j=1}^3 {\bf g}_j$. The time-evolution problem for $\mbox{\boldmath $\eta$} \equiv P_{a.c.}(H)\mbox{\boldmath $\eta$}$ can be rewritten in the integral form as $$\label{integral} \mbox{\boldmath $\eta$}(t) = e^{-i t H} Q \mbox{\boldmath $\eta$}(0) - i \int_0^t e^{-i (t-s) H} Q {\bf g}(s) ds,$$ Fix $\sigma > \frac{5}{2}$ and introduce the norms $$\begin{aligned} && M_1 = \| \mbox{\boldmath $\eta$} \|_{L^6_t l^\infty_n}, \quad M_2 = \| \mbox{\boldmath $\eta$} \|_{L^\infty _t l^2_n}, \quad M_3 = \| <n>^{-{\sigma}} \mbox{\boldmath $\eta$} \|_{l^\infty_n L^{2}_t}, \\ && M_4 = \| a \|_{L^2_t}, \quad M_5 = \| a \|_{L^{\infty}_t}, \quad M_6 = \| \omega -\omega(0) \|_{L^{\infty}_t},\end{aligned}$$ where the integration in $L^p_t$ is performed on an interval $[0,T]$ for any $T \in (0,\infty)$. Our goal is to show that $\dot{\omega}$ and $\dot{\theta} -\omega$ are in $L^1_t$, while the norms above satisfy an estimate of the form $$\label{eq:055} {\sum\limits}_{j=1}^5 M_j \leq C \|{\bf y}(0)\|_{l^2_n} + C \left( {\sum\limits}_{j=1}^6 M_j \right)^2$$ and $$\label{eq:055a} M_6 \leq C \epsilon^{2 - \frac{1}{p}} (M_3 + M_4)^2,$$ for some $T$-independent constant $C > 0$ uniformly in ${\sum\limits}_{j=1}^6 M_j \leq C \delta \epsilon^{\frac{1}{2 p}}$, where small positive values of $(\epsilon,\delta)$ are fixed by the initial conditions $\omega(0) = \omega_0 + \epsilon$ and $\|{\bf y}(0)\|_{l^2_n} \leq C_0 \delta \epsilon^{\frac{1}{2 p}}$ for some $C_0 > 0$. The estimate (\[eq:055\]) and (\[eq:055a\]) allow us to conclude, by elementary continuation arguments, that $${\sum\limits}_{j=1}^5 M_j \leq C \|{\bf y}(0)\|_{l^2_n} \leq C \delta \epsilon^{\frac{1}{2 p}}$$ and $|\omega(t) - \omega_0 - \epsilon| \leq C \delta^2 \epsilon^2$ uniformly on $[0,T]$ for any $T \in (0,\infty)$. By interpolation, $a \in L^6_t$ so that ${\bf z}(t) \in L^6([0,T],l^{\infty}_n)$. Theorem \[theorem-main\] then holds for $T = \infty$. In particular, since $\dot{\omega}(t) \in L^1_t(\mathbb{R}_+)$ and $|\omega(t) - \omega_0 - \epsilon| \leq C \delta^2 \epsilon^2$, there exists $\omega_{\infty} := \lim_{t \to \infty} \omega(t)$ so that $\omega_{\infty} \in (\omega_0,\omega_0 + \epsilon_0)$. In addition, since ${\bf z}(t) \in L^6(\mathbb{R}_+,l^{\infty}_n)$, then $$\lim_{t \to \infty} \| {\bf u}(t) - e^{-i \theta(t)} \mbox{\boldmath $\phi$}(\omega(t)) \|_{l^{\infty}_n} = \lim_{t \to \infty} \| {\bf z}(t) \|_{l^{\infty}_n} = 0.$$ [**Estimates for $M_6$:**]{} By the estimate , we have $$\begin{aligned} \int_0^T |\dot{\omega}| dt & \leq & C \epsilon^{2-\frac{1}{p}} \|<n>^{-2 {\sigma}} |{\bf y}|^2\|_{l^\infty_n L^1_t} \left( \| <n>^{2 {\sigma}} \mbox{\boldmath $\psi$}_1 \|_{l^1} + \| <n>^{2 {\sigma}} \mbox{\boldmath $\psi$}_2 \|_{l^1} \right) \\ & \leq & C \epsilon^{2-\frac{1}{p}} \|<n>^{-{\sigma}} {\bf y} \|_{l^\infty_n L^2_t}^2 \\ & \leq & C \epsilon^{2-\frac{1}{p}} (M_3+M_4)^2,\end{aligned}$$ where we have used the fact that $\mbox{\boldmath $\psi$}_1$ and $\mbox{\boldmath $\psi$}_2$ decay exponentially as $|n| \to \infty$. 
As a result, we obtain $$M_6 \leq \| \dot{\omega} \|_{L^1_t} \leq C \epsilon^{2-\frac{1}{p}} (M_3 + M_4)^2.$$ Similarly, we also obtain that $$\begin{aligned} \int_0^T |\dot{\theta} - \omega| dt \leq C \epsilon^{1-\frac{1}{p}} (M_3+M_4)^2.\end{aligned}$$ [**Estimates for $M_4$ and $M_5$:**]{} We use the projection formula $a = \langle {\bf y}, \mbox{\boldmath $\psi$}_0\rangle$ and recall the orthogonality relation , so that $$\langle {\bf z}, \mbox{\boldmath $\psi$}_0\rangle = \langle {\rm Re}{\bf z}, \mbox{\boldmath $\psi$}_0-\mbox{\boldmath $\psi$}_1\rangle + i \langle {\rm Im} {\bf z}, \mbox{\boldmath $\psi$}_0-\mbox{\boldmath $\psi$}_2\rangle.$$ By Lemma \[lemma-bifurcation\] and definitions of $\mbox{\boldmath $\psi$}_{1,2}$ in (\[eigenvectors-normalized\]), we have $$\| <n>^{2 {\sigma}} (\mbox{\boldmath $\psi$}_0- \mbox{\boldmath $\psi$}_{1,2})\|_{l^2_n} \leq C |\omega-\omega_0|$$ for some $C > 0$. Provided $\sigma > \frac{1}{2}$, we obtain $$\begin{aligned} M_4 & = & \|\langle {\bf y}, \mbox{\boldmath $\psi$}_0\rangle\|_{L^2_t} \leq \| \langle {\rm Re} {\bf z}, \mbox{\boldmath $\psi$}_0-\mbox{\boldmath $\psi$}_1\rangle \|_{L^2_t} + \| \langle {\rm Im} {\bf z}, \mbox{\boldmath $\psi$}_0-\mbox{\boldmath $\psi$}_2\rangle \|_{L^2_t}\\ & \leq & \|<n>^{-2 {\sigma}} {\bf z}\|_{L^2_t l^2_n} \left( \| <n>^{2 {\sigma}} (\mbox{\boldmath $\psi$}_0-\mbox{\boldmath $\psi$}_1)\|_{L^{\infty}_t l^2_n} + \| <n>^{2 {\sigma}} (\mbox{\boldmath $\psi$}_0-\mbox{\boldmath $\psi$}_2) \|_{L^{\infty}_t l^2_n} \right) \\ & \leq & C \|<n>^{-{\sigma}} {\bf y} \|_{l^\infty_n L^2_t} \| \omega -\omega_0 \|_{L^{\infty}_t} \leq C (M_3+M_4) M_6\end{aligned}$$ and, similarly, $$\begin{aligned} M_5 & = & \|\langle {\bf y}, \mbox{\boldmath $\psi$}_0\rangle\|_{L^{\infty}_t} \leq \| \langle {\rm Re} {\bf z}, \mbox{\boldmath $\psi$}_0-\mbox{\boldmath $\psi$}_1\rangle \|_{L^{\infty}_t} + \| \langle {\rm Im} {\bf z}, \mbox{\boldmath $\psi$}_0-\mbox{\boldmath $\psi$}_2\rangle \|_{L^{\infty}_t} \\ &\leq & \|{\bf y}\|_{L^\infty_t l^2_n} \left( \|(\mbox{\boldmath $\psi$}_0-\mbox{\boldmath $\psi$}_1)\|_{L^{\infty}_t l^2_n} + \| (\mbox{\boldmath $\psi$}_0-\mbox{\boldmath $\psi$}_2) \|_{L^{\infty}_t l^2_n} \right) \leq C(M_2 + M_5) M_6.\end{aligned}$$ [**Estimates for $M_3$:**]{} The free solution in the integral equation (\[integral\]) is estimated by as $$\begin{aligned} \|<n>^{-{\sigma}} e^{- i t H} Q \mbox{\boldmath $\eta$}(0)\|_{l^\infty_n L^2_t} \leq \|<n>^{-3/2} e^{- i t H} Q \mbox{\boldmath $\eta$}(0)\|_{l^\infty_n L^2_t} \leq C \|\mbox{\boldmath $\eta$}(0) \|_{l^2_n}.\end{aligned}$$ Since $\dot{\omega}$ and $\dot{\theta} - \omega$ are $L^1_t$ thanks to the estimates above, we treat the terms of the integral equation (\[integral\]) with ${\bf g}_2$ and ${\bf g}_3$ similarly. 
By , we obtain $$\begin{aligned} \|<n>^{-{\sigma}} \int_0^t e^{-i (t-s) H} Q {\bf g}_{2,3}(s) ds\|_{l^\infty_n L^2_t} & \leq & C \|{\bf g}_{2,3}\|_{L^1_t l^2_n} \\ & \leq & C \left(\|\dot{\theta}- \omega\|_{L^1_t} \|\mbox{\boldmath $\phi$}(\omega)\|_{L^{\infty}_t l^2_n} + \|\dot{\omega}\|_{L^1_t} \|\partial_{\omega} \mbox{\boldmath $\phi$}(\omega) \|_{L^{\infty}_t l^2_n} \right) \\ & \leq & C \epsilon^{1-\frac{1}{2p}} (M_3 + M_4)^2.\end{aligned}$$ On the other hand, using the bound (\[estimate-N\]) on the vector field ${\bf g}_1$, we estimate by and $$\begin{aligned} && \|<n>^{- {\sigma}} \int_0^t e^{-i (t-s) H} Q {\bf g}_1(s) ds\|_{l^\infty_n L^2_t} \leq C(\|<n>^{{\sigma}} |\mbox{\boldmath $\phi$}(\omega)|^{2p} |{\bf z}|\|_{l^1_n L^2_t}+ \||{\bf z}|^{2p+1}\|_{L^1_t l^2_n})\\ && \leq C \left(\|<n>^{-{\sigma}} {\bf y}\|_{l^\infty_n L^2_t} \|<n>^{{\sigma}}|\mbox{\boldmath $\phi$}(\omega)|^{2p}\|_{L^{\infty}_t l^1_n} + \|a \|_{L^{2p+1}_t}^{2p+1}\|\mbox{\boldmath $\psi$}_0\|_{l^{2(2p+1)}_n}^{2p+1} + \|\mbox{\boldmath $\eta$}\|_{L^{2p+1}_t l^{2(2p+1)}_n}^{2p+1} \right) \\ && \leq C \left( (M_3+M_4) M_6 + M_4^2 M_5^{2p-1} + \|\mbox{\boldmath $\eta$}\|_{L^{2p+1}_t l^{2(2p+1)}_n}^{2p+1} \right),\end{aligned}$$ where we have $$\|a \|_{L^{2p+1}_t}^{2p+1} \leq \| a \|_{L^{\infty}_t}^{2p-1} \| a \|_{L^2_t}^2.$$ and $$\|<n>^{{\sigma}}|\mbox{\boldmath $\phi$}(\omega)|^{2p}\|_{l^1_n} \leq C \| \omega - \omega_0\|_{L^{\infty}_t},$$ the latter estimate follows from Lemma \[lemma-bifurcation\]. To deal with the last term in the estimate, we use the Gagliardo-Nirenberg inequality, that is, for all $2\leq r,w\leq \infty$ such that $\frac{6}{r} + \frac{2}{w} \leq 1$, there is a $C > 0$ such that $$\|\mbox{\boldmath $\eta$}\|_{L^r_t l^w_n} \leq C \left( \| \mbox{\boldmath $\eta$} \|_{L^6_t l^{\infty}_n} + \| \mbox{\boldmath $\eta$} \|_{L^{\infty}_t l^2_n} \right) = C (M_1+M_2).$$ If $p\geq 3$, then $((2p+1), 2(2p+1))$ is a Strichartz pair satisfying $\frac{6}{2p + 1} + \frac{2}{2(2p+1)} \leq 1$ and hence, combining all previous inequalities, we have $$\begin{aligned} M_3 \leq C\left( \|\mbox{\boldmath $\eta$}(0)\|_{l^2_n}+ \epsilon^{1-\frac{1}{p}} (M_3+M_4)^2 + (M_3+M_4) M_6 + M_4^2 M_5^{2p-1} + (M_1+M_2)^{2p+1}\right),\end{aligned}$$ which agrees with the estimate (\[eq:055\]) for any $p \geq 3$.\ [**Estimates for $M_1$ and $M_2$:**]{} With the help of , the free solution is estimated by $$\|e^{- i t H} Q \mbox{\boldmath $\eta$}(0)\|_{L^6_t l^\infty_n \cap L^\infty_t l^2_n} \leq C \|\mbox{\boldmath $\eta$}(0)\|_{l^2_n}.$$ With the help of , the nonlinear terms involving ${\bf g}_{2,3}$ are estimated by $$\begin{aligned} \left\| \int_0^t e^{-i (t-s) H} Q {\bf g}_{2,3}(s) ds \right\|_{L^6_t l^\infty_n \cap L^\infty_t l^2_n} & \leq & C \|{ \bf g}_{2,3}\|_{L^1_t l^2_n} \\ & \leq & C \epsilon^{1-\frac{1}{2p}} (M_3 + M_4)^2.\end{aligned}$$ The nonlinear term involving ${\bf g}_1$ is estimated by the sum of two computations thanks to the bound (\[estimate-N\]). 
The first computation is completed with the help of , $$\begin{aligned} \left\| \int_0^t e^{-i (t-s) H}Q |\mbox{\boldmath $\phi$}(\omega)|^{2p} |{\bf y}| ds \right\|_{L^6_t l^\infty_n \cap L^\infty_t l^2_n} & \leq & C \| <n>^3 |\mbox{\boldmath $\phi$}(\omega)|^{2p} |{\bf y}|\|_{L^2_t l^2_n} \\ & \leq & \| <n>^{3 + {\sigma}} |\mbox{\boldmath $\phi$}(\omega)|^{2p} \|_{L^{\infty}_t l^2_n} \|<n>^{-{\sigma}} {\bf y}\|_{l^\infty_n L^2_t} \\ & \leq & C (M_3+M_4) M_6,\end{aligned}$$ whereas the second computation is completed with the help of , $$\begin{aligned} \left\| \int_0^t e^{-i (t-s) H}Q |{\bf y}|^{2p+1} ds \right\|_{L^6_t l^\infty_n \cap L^\infty_t l^2_n} & \leq & C \||{\bf y}|^{2p+1}\|_{L^1_t l^2_n} \leq C \|{\bf y}\|_{L^{2p+1}_t l^{2(2p+1)}_n}^{2p+1} \\ & \leq & C \left( M_4^2 M_5^{2p-1} + (M_1 + M_2)^{2p+1} \right),\end{aligned}$$ provided $p\geq 3$ holds. We conclude that the estimates for $M_1$ and $M_2$ are the same as the one for $M_3$. Numerical results ================= We now add some numerical computations which illustrate the asymptotic stability result of Theorem \[theorem-main\]. In particular, we shall obtain numerically the rate, at which the localized perturbations approach to the asymptotic state of the small discrete soliton. One advantage of numerical computations is that they are not limited to the case of $p \geq 3$ (which is the realm of our theoretical analysis above), but can be extended to arbitrary $p \geq 1$. In what follows, we illustrate the results for $p=1$ (the cubic DNLS), $p = 2$ (the quintic DNLS), and $p = 3$ (the septic DNLS). Let us consider the single-node external potential with $V_n = - \delta_{n,0}$ for any $n \in \mathbb{Z}$. This potential is known (see Appendix A in [@KKK]) to have only one negative eigenvalue at $\omega_0 < 0$, the continuous spectrum at $[0,4]$, and no resonances at $0$ and $4$, so it satisfies assumptions (V1)–(V3). Explicit computations show that the eigenvalue exists at $\omega_0 = 2 - \sqrt{5}$ with the corresponding eigenvector $\psi_{0,n} = e^{-\kappa |n|}$ for any $n \in \mathbb{Z}$, where $\kappa = {\rm arcsinh}(2^{-1})$. The stationary solutions of the nonlinear difference equation (\[stationaryDNLS\]) exist in a local neighborhood of the ground state of $H = -\Delta + {\bf V}$, according to Lemma \[lemma-bifurcation\]. We shall consider numerically the case $\gamma = -1$, for which the stationary solution bifurcates to the domain $\omega < \omega_0$. Figure \[afig2\] illustrates the stationary solutions for $p = 1$ and two different values of $\omega$, showcasing its increased localization (decreasing width and increasing amplitude), as $\omega$ deviates from $\omega_0$ towards the negative domain. 
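The explicit formulas quoted above for the single-node potential are easy to confirm numerically; the following sketch (our illustration, with the truncation size $N$ chosen arbitrarily) verifies $\omega_0 = 2 - \sqrt{5}$ and $\psi_{0,n} = e^{-\kappa |n|}$ with $\kappa = {\rm arcsinh}(2^{-1})$ on a truncated lattice.

```python
import numpy as np

# Single-node potential V_n = -delta_{n,0} on the truncated lattice n = -N,...,N.
N = 200
n = np.arange(-N, N + 1)
V = -1.0 * (n == 0)
H = np.diag(2.0 + V) - np.diag(np.ones(2 * N), 1) - np.diag(np.ones(2 * N), -1)

# Ground state of H and comparison with the explicit formulas quoted in the text.
evals, evecs = np.linalg.eigh(H)
omega0, psi0 = evals[0], evecs[:, 0]
psi0 = psi0 / psi0[N]                     # rescale so that psi_{0,0} = 1

kappa = np.arcsinh(0.5)
print("numerical omega_0        :", omega0)
print("2 - sqrt(5)              :", 2.0 - np.sqrt(5.0))
print("max |psi0_n - e^{-k|n|}| :", np.max(np.abs(psi0 - np.exp(-kappa * np.abs(n)))))
```

For $N = 200$ the truncation error is exponentially small, so both printed differences should be at the level of machine precision.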
![Two profiles of the stationary solution of the nonlinear difference equation (\[stationaryDNLS\]) for $V_n = -\delta_{n,0}$, $p = 1$, and for $\omega=-2$ (solid line with circles) and $\omega=-5$ (dashed line with stars).[]{data-label="afig2"}](newat1.eps){height="7cm"} ![Evolution for $p=3$ (top), $2$ (middle), $1$ (bottom) of $\| {\bf u}(t) - e^{-i \theta(t)} \mbox{\boldmath $\phi$}(\omega_{\infty})\|$ as a function of time in a log-log scale (solid) and comparison with a $t^{-3/2}$ power law decay (dashed) as a guide to the eye.[]{data-label="afig4"}](atan_sig3_linf.eps "fig:"){height="6.8cm"} ![Evolution for $p=3$ (top), $2$ (middle), $1$ (bottom) of $\| {\bf u}(t) - e^{-i \theta(t)} \mbox{\boldmath $\phi$}(\omega_{\infty})\|$ as a function of time in a log-log scale (solid) and comparison with a $t^{-3/2}$ power law decay (dashed) as a guide to the eye.[]{data-label="afig4"}](atan_sig2_linf.eps "fig:"){height="6.8cm"} ![Evolution for $p=3$ (top), $2$ (middle), $1$ (bottom) of $\| {\bf u}(t) - e^{-i \theta(t)} \mbox{\boldmath $\phi$}(\omega_{\infty})\|$ as a function of time in a log-log scale (solid) and comparison with a $t^{-3/2}$ power law decay (dashed) as a guide to the eye.[]{data-label="afig4"}](atan_sig1_linf.eps "fig:"){height="6.8cm"} In order to examine the dynamics of the DNLS equation (\[dNLS\]), we consider single-node initial data $u_n=A \delta_{n,0}$ for any $n \in \mathbb{Z}$, with $A=0.75$, and observe the temporal dynamics of the solution ${\bf u}(t)$. The resulting dynamics involves the asymptotic relaxation of the localized perturbation into a discrete soliton after shedding some “radiation”. This dynamics was found to be typical for all values of $p = 1,2,3$. In Figure \[afig4\], upon suitable subtraction of the phase dynamics, we illustrate the approach of the wave profile to its asymptotic form in the $l^{\infty}$ norm. The asymptotic form is obtained by running the numerical simulation for sufficiently long times, so that the profile has relaxed to the stationary state. Using a fixed-point algorithm, we identify the stationary state with the same $l^2$ norm (as the central portion of the lattice) and confirm that the result of further temporal dynamics is essentially identical to the stationary state. Subsequently, the displayed $l^{\infty}$ norm of the deviation from the asymptotic profile is computed, appropriately eliminating the phase by using the gauge invariance of the DNLS equation (\[dNLS\]). We have found from Figure \[afig4\] in the cases $p=3$ (top panel), $p=2$ (middle panel) and $p=1$ (bottom panel) that the approach to the stationary state follows a power law which is well approximated as $\propto t^{-3/2}$. The dashed line on all three figures represents such a decay in each of the cases. We note that the decay rate observed in numerical simulations of the DNLS equation (\[dNLS\]) is faster than the decay rate $\propto t^{-1/6-p}$ for any $p > 0$ in Theorem \[theorem-main\]. Proof of Lemma \[le:08\] ======================== For the proof of Lemma \[le:08\], we will have to show both the “high frequency” estimate (\[eq:05\]) and the “low frequency” estimate (\[eq:06\]). To simplify notations, we drop the bold-face font for vectors on $\mathbb{Z}$ in the appendix. Proof of (\[eq:05\]) --------- Recall the finite Born series representation of $R_V$ $$\label{eq:012} R(\omega) = R_0(\omega) - R_0(\omega) V R_0(\omega) + R_0(\omega) V R(\omega) V R_0(\omega),$$ which is basically nothing but the resolvent identity iterated twice. 
We have shown in [@PS] that for the “sandwiched resolvent” $G_{U,W}(\omega) = U R_V(\omega) W$, we have the bounds (see estimate (33) in [@PS]) $$\label{eq:011} \sup_{\theta\in [-\pi, \pi]} \sum_m |G_m(\omega)|+ \left| {\frac{d}{d\theta}} G_m(\omega) \right| \leq C\|U\|_{l^2_{\sigma}}\|W\|_{l^2_{\sigma}},$$ for any fixed ${\sigma}> \frac{5}{2}$, where $\omega = 2 - 2\cos(\theta)$. For the three pieces arising from (\[eq:012\]), similar arguments apply. Starting with the free resolvent term, we have $$\begin{aligned} & & \sup_{n \in \mathbb{Z}} \int_0^4 \chi |(R_0^{\pm}({\omega}) f)_n|^2 d{\omega}\leq C \sup_{n \in \mathbb{Z}} \int_{-\pi}^{\pi} {\frac{\chi}{\sin(\theta)}} \left|\sum_{m \in \mathbb{Z}} e^{i \theta |m-n|} f_m \right|^2 d\theta \leq \\ & & \leq C \sup_{n \in \mathbb{Z}} \int_{|\theta| \in [\theta_0/2, \pi-\theta_0/2]} \left( \left| \sum_{m\geq n} e^{i \theta m} f_m \right|^2 + \left| \sum_{m< n} e^{-i \theta m} f_m \right|^2 \right) d\theta.\end{aligned}$$ Introducing the sequence $$(g^n)_m:=\left\{\begin{array}{l l} f_m & m\geq n \\ 0 & m<n \end{array}\right.$$ we see that the last expression is simply $C(\|\widehat{g^n}\|_{L^2[\theta_0/2, \pi-\theta_0/2]}^2 + \|\widehat{f-g^n}\|_{L^2[\theta_0/2, \pi-\theta_0/2]}^2)$, which, by Plancherel’s identity, is bounded by $$C\left(\|g^n\|_{l^2}^2+\|f-g^n\|_{l^2}^2\right)\leq 2C \|f\|_{l^2}^2.$$ For the second piece in (\[eq:012\]), we use that $\|R_0^{\pm}({\omega})\|_{l^1\to l^\infty}\leq C/\sin(\theta)$ and $|\sin(\theta)| \geq C_0$ on $[\theta_0/2, \pi-\theta_0/2]$ for some $C_0 > 0$, to conclude $$\begin{aligned} \sup_{n \in \mathbb{Z}} \| \chi R^{\pm}_0(\omega) V R^{\pm}_0(\omega) f\|_{L^2_{\omega}(0,4)}^2 & \leq & \int_{-\pi}^{\pi} {\frac{\chi}{\sin^3(\theta)}} \left(\sum_{n \in \mathbb{Z}} |V_n| |(R^{\pm}_0(\omega) f)_n| \right)^2 d\theta \\ & \leq & C \| V \|_{l^1} \sup_{n \in \mathbb{Z}} \int_{-\pi}^{\pi} \chi \left| (R^{\pm}_0(\omega) f)_n \right|^2 d\theta,\end{aligned}$$ by the triangle inequality. At this point, we have reduced the estimate to the previous case, provided that $V\in l^1$. For the third piece in (\[eq:012\]), we make use of (\[eq:011\]). We have, similar to the previous estimate, $$\begin{aligned} & & \sup_{n \in \mathbb{Z}} \| \chi R^{\pm}_0(\omega) V R^{\pm}_V(\omega) V R^{\pm}_0(\omega) f\|_{L^2_{\omega}(0,4)}^2= \\ & & \sup_{n \in \mathbb{Z}} \| \chi R^{\pm}_0(\omega) V R_V^{\pm}(\omega) |V|^{1/2} {\rm sgn}(V) |V|^{1/2} R^{\pm}_0(\omega) f\|_{L^2_{\omega}(0,4)}^2= \\ & & \sup_{n \in \mathbb{Z}} \| \chi R^{\pm}_0(\omega) G_{V, |V|^{1/2} {\rm sgn}(V) }[ |V|^{1/2} R^{\pm}_0(\omega) f]\|_{L^2_{\omega}(0,4)}^2 \\ & & \leq C \|V\|_{l^2_{\sigma}}^2 \||V|^{1/2}\|_{l^2_{\sigma}}^2 \||V|^{1/2}\|_{l^1}^2 \sup_{n \in \mathbb{Z}} \int_{-\pi}^{\pi} \chi |(R^{\pm}_0(\omega) f)_n|^2 d\theta,\end{aligned}$$ where in the last inequality, we have again reduced the estimate to the first case. Proof of (\[eq:06\]) --------- We only consider the interval $[-\theta_0,\theta_0]$ in the compact support of $\chi_0(\theta)$ since the arguments for other intervals are similar. Following the algorithm in [@Miz] and the formalism in [@PS], we let $\psi^{\pm}(\theta)$ be two linearly independent solutions of $$\label{Schrodinger-scattering} \psi_{n+1} + \psi_{n-1} + ({\omega}- 2) \psi_n = V_n \psi_n, \quad n \in \mathbb{Z},$$ according to the boundary conditions $\left| \psi^{\pm}_n - e^{\mp i n \theta} \right| \to 0$ as $n \to \pm \infty$. Let $\psi^{\pm}_n(\theta) = e^{\mp i n \theta} \Psi_n^{\pm}(\theta)$ for all $n \in \mathbb{Z}$. 
Using the Green function representation, we obtain $$\begin{aligned} \Psi^+_n(\theta) & = & 1 - \frac{i}{2 \sin \theta} \sum_{m = n}^{\infty} \left( 1 - e^{-2 i \theta (m-n)} \right) V_m \Psi^+_m(\theta), \\ \Psi^-_n(\theta) & = & 1 - \frac{i}{2 \sin \theta} \sum_{m = -\infty}^{n} \left( 1 - e^{-2i \theta (n-m)} \right) V_m \Psi^-_m(\theta).\end{aligned}$$ The discrete Green function for the resolvent operators $R^{\pm}(\omega)$ has the kernel $$\left[ R_V^{\pm}(\omega) \right]_{n,m} = \frac{1}{W(\theta_{\pm})} \left\{ \begin{array}{cc} \psi_n^+(\theta_{\pm}) \psi_m^-(\theta_{\pm}) \;\; \mbox{for} \;\; n \geq m \\ \psi_m^+(\theta_{\pm}) \psi_n^-(\theta_{\pm}) \;\; \mbox{for} \;\; n < m \end{array} \right.$$ where $\theta_- = -\theta_+$, $\theta_- \in [0,\pi]$ for $\omega \in [0,4]$, and $W(\theta) = W[\psi^+,\psi^-] = \psi_n^+ \psi^-_{n+1} - \psi^+_{n+1} \psi^-_n$ is the discrete Wronskian, which is independent of $n \in \mathbb{Z}$. We need to estimate $$\| \chi_0 R_V^{\pm}(\omega) f \|^2_{L^2_{\omega}(0,4)} = \int_{-\pi}^{\pi} \frac{2 \chi^2_0 \sin \theta d \theta}{W^2(\omega)} \left( \sum_{m = -\infty}^{n-1} \psi_n^+(\theta) \psi_m^-(\theta) f_m + \sum_{m = n}^{\infty} \psi_n^-(\theta) \psi_m^+(\theta) f_m \right)^2.$$ We may assume that $n \geq 1$ for definiteness and split $$\sum_{m = -\infty}^{n-1} \psi_m^-(\theta) f_m = \sum_{m=0}^{n-1} \psi_m^-(\theta) f_m + \sum_{m=-\infty}^{-1} e^{i m \theta} f_m + \sum_{m=-\infty}^{-1} e^{i m \theta} (\Psi_m^- - 1) f_m := I_1 + I_2 + I_3$$ and $$\sum_{m = n}^{\infty} \psi_m^+(\theta) f_m = \sum_{m = n}^{\infty} e^{-i m \theta} f_m + \sum_{m = n}^{\infty} e^{-i m \theta} \left( \Psi_m^+(\theta) - 1 \right) f_m := I_4 + I_5$$ We are using the scattering theory from [@PS] to claim that $$\label{properties-limiting-functions} \sup_{\theta \in [-\theta_0,\theta_0]} \left( \| \Psi^{\pm}(\theta) \|_{l^{\infty}(\mathbb{Z}_{\pm})} + \| \langle n \rangle^{-1} \Psi^{\pm}(\theta) \|_{l^{\infty}(\mathbb{Z}_{\mp})} \right) < \infty,$$ where $\langle n \rangle = (1 + n^2)^{1/2}$. Then, we have $$\begin{aligned} | I_1 | & \leq & \left( \sum_{m=0}^{n-1} |\Psi_m^-(\theta)|^2 \right)^{1/2} \left( \sum_{m=0}^{n-1} |f_m|^2 \right)^{1/2} \leq C_1 \langle n \rangle^{3/2} \| f \|_{l^2}, \\ | I_3 | & \leq & \left( \sum_{m=-\infty}^{-1} |\Psi_m^-(\theta) - 1|^2 \right)^{1/2} \left( \sum_{m=-\infty}^{-1} |f_m|^2 \right)^{1/2} \leq C_3 \left\| \sum_{k =-\infty}^m |m-k| |V_k| \right\|_{l^2_m(\mathbb{Z}_-)} \| f \|_{l^2},\\ | I_5 | & \leq & \left( \sum_{m=n}^{\infty} |\Psi_m^+(\theta) - 1|^2 \right)^{1/2} \left( \sum_{m=n}^{\infty} |f_m|^2 \right)^{1/2} \leq C_5 \left\| \sum_{l = m}^{\infty} |k-m| |V_k| \right\|_{l^2_m(\mathbb{Z}_+)} \| f \|_{l^2},\end{aligned}$$ for some $C_1,C_3,C_5 > 0$. We note that $$\left\| \sum_{k =-\infty}^m |m-k| |V_k| \right\|_{l^2_m(\mathbb{Z}_-)} \leq \left\| \sum_{k =-\infty}^m |m-k| |V_k| \right\|_{l^1_m(\mathbb{Z}_-)} \leq C_4 \| V \|_{l^1_2},$$ for some $C_4 > 0$. Therefore, the brackets in $I_3$ and $I_5$ are bounded if $V \in l^1_{2{\sigma}}$ for ${\sigma}> \frac{5}{2}$. Since $I_2$ and $I_4$ are given by the discrete Fourier transform, Parseval’s equality implies that $$\int_{-\pi}^{\pi} \left( I^2_2 + I^2_4 \right) d \theta \leq C_2 \| f \|_{l^2}^2,$$ for some $C_2 > 0$. 
Using now the fact that $|W(\theta)| \geq W_0$ and $|\sin \theta| \leq C_0$ uniformly in $[-\theta_0,\theta_0]$, the support of $\chi_0(\theta)$, and using the property (\[properties-limiting-functions\]), we obtain $$\| \chi_0 R_V^{\pm}(\omega) f \|^2_{L^2_{\omega}(0,4)} \leq C \left( 1 + \langle n \rangle^2 + \langle n \rangle^3 \right) \| f \|^2_{l^2},$$ which gives (\[eq:06\]). [99]{} Buslaev V.S; Perelman G.S. “Scattering for the nonlinear Schrödinger equation: states close to a soliton”, St. Petersburg Math. J. [**4**]{} (1993), 1111–1142. Buslaev V.S; Perelman G.S. “On the stability of solitary waves for nonlinear Schrödinger equations”, Amer. Math. Soc. Transl. [**164**]{} (1995), 75–98. Buslaev V.S; Sulem C. “On asymptotic stability of solitary waves for nonlinear Schrödinger equations”, Ann. Inst. H. Poincaré Anal. Non Lineare [**20**]{} (2003), 419–475. Cuccagna, S. “A survey on asymptotic stability of ground states of nonlinear Schrödinger equations” in *Dispersive nonlinear problems in mathematical physics*, pp. 21–57 (Quad. Mat., 15, Dept. Math., Seconda Univ. Napoli, Caserta, 2004) Cuccagna S. “On asymptotic stability in energy space of ground states of NLS in 1D", preprint, arXiv:0711.4192v2 (November, 2007) Cuccagna S.; Tarulli, M. “On asymptotic stability of standing waves of discrete Schrödinger equation in $\mathbb{Z}$", preprint, arXiv:0808.2024v1 (August, 2008) H.S. Eisenberg, Y. Silberberg, R. Morandotti, A.R. Boyd and J.S. Aitchison, “Discrete spatial optical solitons in waveguide arrays”, Phys. Rev. Lett. [**81**]{}, 3383-3386 (1998). Gang Z; Sigal, I.M. “Asymptotic stability of nonlinear Schrödinger equations with potential”, Rev. Math. Phys. [**17**]{} (2005), 1143–1207. Gang Z; Sigal, I.M. “Relaxation of solitons in nonlinear Schrödinger equations with potential”, Adv. Math. 216 (2007), 443–490. M. Keel, T. Tao, *Endpoint Strichartz estimates*, [*Amer. J. Math.*]{}, [**120**]{} (1998), 955–980. Kevrekidis, P.G.; Espinola–Rocha, J.A.; Drossinos, Y.; Stefanov, A. “Dynamical barrier for the formation of solitary waves in discrete lattices”, Phys. Lett. A [**372**]{} (2008), 2237–2253. Komech, A; Kopylova, E.; Kunze, M. “Dispersive estimates for 1D discrete Schrödinger and Klein-Gordon equations”, [*Appl. Anal.*]{} [**85**]{} (2006), 1487–1508. M. Matuszewski, C.R. Rosberg, D.N. Neshev, A.A. Sukhorukov, A. Mitchell, M. Trippenbach, M.W. Austin, W. Krolikowski and Yu.S. Kivshar, “Crossover from self-defocusing to discrete trapping in nonlinear waveguide arrays”, Opt. Express [**14**]{}, 254-259 (2006). Nirenberg, L. [*Topics in nonlinear functional analysis*]{}, Courant Lecture Notes in Mathematics [**6**]{} (AMS, New York, 2001). Mizumachi T. “Asymptotic stability of small solitons to 1D NLS with potential", preprint, arXiv:math/0605031v2 (May, 2008). F. Palmero, R. Carretero-Gonz[á]{}lez, J. Cuevas, P.G. Kevrekidis and W. Kr[ó]{}likowski, “Solitons in one-dimensional nonlinear Schr[ö]{}dinger lattices with a local inhomogeneity”, Phys. Rev. E [**77**]{}, 036614 (2008). Panayotaros, P.; Pelinovsky, D; “Periodic oscillations of discrete NLS solitons in the presence of diffraction management”, Nonlinearity [**21**]{} (2008), 1265–1279. Pelinovsky, D; Stefanov, A. “On the spectral theory and dispersive estimates for a discrete Schrödinger equation in one dimension”, J. Math. Phys., to be printed (November, 2008). Pillet, C.A.; Wayne, C.E. “Invariant manifolds for a class of dispersive, Hamiltonian, partial differential equations”, J. Diff. Eqs. 
[**141**]{} (1997), 310–326. Soffer, A; Weinstein, M.I. “Multichannel nonlinear scattering theory for nonintegrable equations”, Comm. Math. Phys. [**133**]{} (1990), 119–146. Soffer, A; Weinstein, M.I. “Multichannel nonlinear scattering theory for nonintegrable equations II: The case of anisotropic potentials and data”, J. Diff. Eqs. [**98**]{} (1992), 376–390. Soffer, A; Weinstein, M.I. “Selection of the ground state for nonlinear Schrödinger equations”, Rev. Math. Phys. [**16**]{} (2004), 977–1071. Stefanov, A.; Kevrekidis, P. “Asymptotic behaviour of small solutions for the discrete nonlinear Schrödinger and Klein-Gordon equations”, [*Nonlinearity*]{} [**18**]{} (2005), 1841–1857. Weinstein, M. I. “Excitation thresholds for nonlinear localized modes on lattices”, *Nonlinearity* 12 (1999), 673–691. Yau, H.T.; Tsai, T.P. “Asymptotic dynamics of nonlinear Schrödinger equations: resonance dominated and radiation dominated solutions”, Comm. Pure Appl. Math. [**55**]{} (2002), 1–64. Yau, H.T.; Tsai, T.P. “Stable directions for excited states of nonlinear Schrödinger equations”, Comm. Part. Diff. Eqs. [**27**]{} (2002), 2363–2402. Yau, H.T.; Tsai, T.P. “Relaxation of excited states in nonlinear Schrödinger equations”, Int. Math. Res. Not. (2002), 1629–1673.
--- abstract: 'Knowledge-based programs are programs with explicit tests for knowledge. They have been used successfully in a number of applications. Sanders has pointed out what seems to be a counterintuitive property of knowledge-based programs. Roughly speaking, they do not satisfy a certain monotonicity property, while standard programs (ones without tests for knowledge) do. It is shown that there are two ways of defining the monotonicity property, which agree for standard programs. Knowledge-based programs satisfy the first, but do not satisfy the second. It is further argued by example that the fact that they do not satisfy the second is actually a feature, not a problem. Moreover, once we allow the more general class of [*knowledge-based specifications*]{}, standard programs do not satisfy the monotonicity property either.' author: - | Joseph Y. Halpern[^1]\ Computer Science Dept.\ Cornell University\ Ithaca, NY 14853\ [email protected]\ http://www.cs.cornell.edu/home/halpern bibliography: - 'z.bib' - 'joe.bib' date: nocite: '[@FHMV]' title: 'A Note on Knowledge-Based Programs and Specifications' --- Introduction ============ Consider a simple program such as $$\begin{array}{l} {\bf do\ forever}\\ \ \ \ \ \ \mbox{{\bf if} $x=0$ {\bf then} $y := y+1$ {\bf end}}\\ {\bf end}. \end{array}$$ This program, denoted ${{\sf Pg}}_1$ for future reference, describes an action that a process (or agent—I use the two words interchangeably here) should take, namely, setting $y$ to $y+1$, under certain conditions, namely, if $x=0$. One way to provide formal semantics for such a program is to assume that each agent is in some [*local state*]{}, which, among other things, describes the value of the variables of interest. For this simple program, we need to assume that the local state contains enough information to determine the truth of the test $x=0$. We can then associate with the program a [*protocol*]{}, that is, a function describing what action the agent should take in each local state. Note that a program is a [*syntactic*]{} object, given by some program text, while a protocol is a function, a [*semantic*]{} object. [*Knowledge-based programs*]{}, introduced in [@FHMV; @FHMV94] (based on the [*knowledge-based protocols*]{} of [@HF87]), are intended to provide a high-level framework for the design and specification of protocols. The idea is that, in knowledge-based programs, there are explicit tests for knowledge. Thus, a knowledge-based program might have the form $$\begin{array}{l} {\bf do\ forever}\\ \ \ \ \ \ \mbox{{\bf if} $K(x=0)$ {\bf then} $y := y+1$ {\bf end}}\\ {\bf end}, \end{array}$$ where $K(x=0)$ should be read as “you know $x=0$”. We can informally view this knowledge-based program, denoted ${{\sf Pg}}_2$, as saying “if you know that $x=0$, then set $y$ to $y+1$”. Roughly speaking, an agent knows $\phi$ if, in all situations consistent with the agent’s information, $\phi$ is true. Knowledge-based programs are an attempt to capture the intuition that what an agent does depends on what it knows. They have already met with some degree of success, having been used in papers such as [@DM; @Had; @HMW; @HZ; @Maz; @Maz90; @MT; @NT] both to help in the design of new protocols and to clarify the understanding of existing protocols. However, Sanders [-@Sanders] has pointed out what seems to be a counterintuitive property of knowledge-based programs. 
Roughly speaking, she claims that knowledge-based programs do not satisfy a certain monotonicity property: a knowledge-based program can satisfy a specification under a given initial condition, but fail to satisfy it if we strengthen the initial condition. On the other hand, standard programs (ones without tests for knowledge) do satisfy the monotonicity property. In this paper, I consider Sanders’ claim more carefully. I show that it depends critically on what it means for a program to satisfy a specification. There are two possible definitions, which agree for standard programs. If we use the one closest in spirit to the ideas presented in [@HF87], the claim is false, although it is true for the definition used by Sanders. But, even in the case of Sanders’ definition, rather than being a defect of knowledge-based programs, this lack of monotonicity is actually a feature. In general, we do not want monotonicity. Moreover, once we allow a more general class of [*knowledge-based specifications*]{}, then standard programs do not satisfy the monotonicity property either. The rest of this paper is organized as follows: In the next section, there is an informal review of the semantics of standard and knowledge-based programs. In Section \[specs\], I discuss standard and knowledge-based specifications. In Section \[monotonicity\], I consider the monotonicity property described by Sanders, and show in what sense it is and is not satisfied by knowledge-based programs. I give some examples in Section \[examples\] showing why monotonicity is not always desirable. I conclude in Section \[conclusion\] with some discussion of knowledge-based programs and specifications. Standard and knowledge-based programs: an informal review ========================================================= Formal semantics for standard and knowledge-based programs are provided in [@FHMV; @FHMV94]. To keep the discussion in this paper at an informal level, I simplify things somewhat here, and review what I hope will be just enough of the details so that, together with the examples given here, the reader will be able to follow the main points; the interested reader should refer to [@FHMV; @FHMV94] for further discussion and all the formal details. Informally, we view a distributed system as consisting of a number of interacting agents. We assume that, at any given point in time, each agent in the system is in some [*local state*]{}. A [*global state*]{} is just a tuple consisting of each agent’s local state, together with the state of the [*environment*]{}, where the environment consists of everything that is relevant to the system that is not contained in the state of the processes. The agents’ local states typically change over time, as a result of actions that they perform. A [*run*]{} is a function from time to global states. Intuitively, a run is a complete description of what happens over time in one possible execution of the system. A [*point*]{} is a pair $(r,m)$ consisting of a run $r$ and a time $m$. At a point $(r,m)$, the system is in some global state $r(m)$. For simplicity, time here is taken to range over the natural numbers (so that time is viewed as discrete, rather than continuous). A [*system*]{} $\R$ is a set of runs; intuitively, these runs describe all the possible executions of the system. For example, in a poker game, the runs could describe all the possible deals and bidding sequences. Of major interest in this paper are the systems that we can associate with a program. 
To do this, we must first associate a system with a [*joint protocol*]{}. As was said in the introduction, a protocol is a function from local states to actions. (This function may be nondeterministic, so that in a given local state, there is a set of actions that may be performed.) A joint protocol is just a set of protocols, one for each process. While the joint protocol describes what each process does, it does not give us enough information to generate a system. It does not tell us what the legal behaviors of the environment are, the effects of the actions, or the initial conditions. We specify these in the [*context*]{}. Formally, a context $\gamma$ is a tuple $(P_e,\Gz,\tau,\Psi)$, where $P_e$ is a protocol for the environment, $\Gz$ is a set of initial global states, $\tau$ is a [*transition function*]{}, and $\Psi$ is a set of [*admissible*]{} runs. The environment is viewed as running a protocol just like the agents; its protocol is used to capture features of the setting like “all messages are delivered within 5 rounds” or “messages may be lost”. Given a joint protocol $P = (P_1, \ldots, P_n)$ for the agents, an environment protocol $P_e$, and a global state $(s_e, s_1, \ldots, s_n)$, there is a set of possible [*joint actions*]{} $({\sf a}_e, {\sf a}_1, \ldots, {\sf a}_n)$ that can be performed in this global state according to the protocols of the agents and the environment. (It is a set since the protocols may be nondeterministic.) The transition function $\tau$ describes how these joint actions change the global state by associating with each joint action a [*global state transformer*]{}, that is, a mapping from global states to global states. The set $\Psi$ of admissible runs is used to characterize notions like fairness. For the simple programs considered in this paper, the transition function will be almost immediate from the description of the global states and $\Psi$ will typically consist of all runs (so that it effectively plays no interesting role). What will change as we vary the context is the set of possible initial global states. A run $r$ is [*consistent with joint protocol $P$ in context $\gamma$*]{} if (1) $r(0)$, the initial global state of $r$, is one of the initial global states in $\Gz$, (2) for all $m$, the transition from global state $r(m)$ to $r(m+1)$ is the result of applying $\tau$ to a joint action that can be performed by $(P_e,P)$ in the global state $r(m)$, and (3) $r \in \Psi$. A system $\R$ [*represents*]{} a joint protocol $P$ in context $\gamma$ if it consists of all runs consistent with $P$ in $\gamma$. Assuming that each test in a standard program run by process $i$ can be evaluated in each local state, we can derive a protocol from the program in an obvious way: to find out what process $i$ does in a local state $\ell$, we evaluate the tests in ${{\sf Pg}}$ in $\ell$ and perform the appropriate action.[^2] A run is [*consistent with ${{\sf Pg}}$ in context $\gamma$*]{} if it is consistent with the protocol derived from ${{\sf Pg}}$. Similarly, a system [*represents ${{\sf Pg}}$ in context $\gamma$*]{} if it represents the protocol derived from ${{\sf Pg}}$. We use $ {{\bf R}}({{\sf Pg}},\gamma)$ to denote the system representing ${{\sf Pg}}$ in context $\gamma$. \[pgxam\] Consider the simple standard program ${{\sf Pg}}_1$ in Figure \[fig1\] and suppose there is only one agent in the system. 
Further suppose the agent’s local state is a pair of natural numbers $(a,b)$, where $a$ is the current value of variable $x$ and $b$ is the current value of $y$. The protocol derived from ${{\sf Pg}}_1$ increments the value of $b$ by 1 precisely if $a=0$. In this simple case, we can ignore the environment state, and just identify the global state of the system with the agent’s local state. Suppose we consider the context $\gamma$ where the initial states consist of all possible local states of the form $(a,0)$ for $a \ge 0$ and the transition function is such that the action $y := y+1$ transforms $(a,b)$ to $(a,b+1)$. We ignore the environment protocol (or, equivalently, assume that $P_e$ performs the action ${\mbox{{\sf no--op}}}$ at each step) and assume $\Psi$ consists of all runs. A run $r$ is then consistent with ${{\sf Pg}}_1$ in context $\gamma$ if either (1) $r(0)$ is of the form $(0,b)$ and $r(m)$ is of the form $(0,b+m)$ for all $m \ge 1$, or (2) $r(m)$ is of the form $(a,b)$ for all $m$ and $a > 0$. That is, either the $x$ component is originally 0, in which case the $y$ component is continually increased by 1, or else nothing happens. Now we turn to knowledge-based programs. Here the situation is somewhat more complicated. In a given context, a process can determine the truth of a test such as “$x=0$” by simply looking at its local state. However, in a knowledge-based program, there are tests for knowledge. According to the definition of knowledge in systems, an agent $i$ knows a fact $\phi$ at a given point $(r,m)$ in system $\R$ if $\phi$ is true at all points in $\R$ in which $i$ has the same local state as it does at $(r,m)$. Thus, $i$ knows $\phi$ at the point $(r,m)$ if $\phi$ holds at all points consistent with $i$’s information at $(r,m)$. The truth of a test for knowledge cannot in general be determined simply by looking at the local state in isolation. We need to look at the whole system. As a consequence, given a run, we cannot in general determine if it is consistent with a knowledge-based program in a given context. This is because we cannot tell how the tests for knowledge turn out without being given the other possible runs of the system; what a process knows at one point will depend in general on what other points are possible. This stands in sharp contrast to the situation for standard programs. This means it no longer makes sense to talk about a run being consistent with a knowledge-based program in a given context. However, notice that, given a system $\R$, we can derive a protocol from a knowledge-based program ${{\sf Pg}_{{\it kb}}}$ for process $i$ by using $\R$ to evaluate the knowledge tests in ${{\sf Pg}_{{\it kb}}}$. That is, a test such as $K \phi$ holds in a local state $l$ if $\phi$ holds at all points in $\R$ where process $i$ has local state $l$. In general, different protocols can be derived from a given knowledge-based program, depending on what system we use to evaluate the tests. Let ${{\sf Pg}_{{\it kb}}}^\R$ denote the protocol derived from ${{\sf Pg}_{{\it kb}}}$ given system $\R$. We say that a system $\R$ [*represents*]{} a knowledge-based program ${{\sf Pg}_{{\it kb}}}$ in context $\gamma$ if $\R$ represents the protocol ${{\sf Pg}_{{\it kb}}}^\R$. That is, $\R$ represents ${{\sf Pg}_{{\it kb}}}$ if $\R = {{\bf R}}({{\sf Pg}_{{\it kb}}}^\R,\gamma)$. Thus, a system represents ${{\sf Pg}_{{\it kb}}}$ if it satisfies a certain fixed-point equation. 
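These definitions are concrete enough to execute. The sketch below is a minimal Python rendering of them for a single agent, under simplifying assumptions that are mine and not the paper’s: runs are truncated at a finite horizon, the set of initial values of $x$ is bounded, the environment is ignored, and all function names are just labels for the illustration. The agent’s local state is obtained from the global state by a `view` function, so the same code covers both the case where the agent sees the whole state and the case where part of the state is hidden in the environment.

```python
# Minimal sketch of the semantics described above (one agent, finite truncation).
HORIZON = 4
ACTIONS = {"inc": lambda s: (s[0], s[1] + 1), "noop": lambda s: s}

def generate_system(protocol, initial_states):
    """The set of runs consistent with `protocol' from the given initial global states."""
    runs = set()
    for s in initial_states:
        run = [s]
        for _ in range(HORIZON):
            s = ACTIONS[protocol(s)](s)
            run.append(s)
        runs.add(tuple(run))
    return frozenset(runs)

def knows(system, view, local_state, fact):
    """The agent knows `fact' at a point iff `fact' holds at every point of the system
    at which the agent has the same local state (i.e., the same value of view(state))."""
    points = [s for run in system for s in run if view(s) == local_state]
    return bool(points) and all(fact(s) for s in points)

# a knowledge-based program determines a protocol only relative to a system R ...
def derived_protocol(kb_program, system, view):
    return lambda s: kb_program(view(s), lambda fact: knows(system, view, view(s), fact))

# ... and R represents the program iff R equals the system generated by the protocol
# that R itself induces -- the fixed-point equation just stated.
def represents(system, kb_program, initial_states, view):
    return generate_system(derived_protocol(kb_program, system, view), initial_states) == system

# the standard program Pg_1 of Example [pgxam]: global state = local state = (x, y)
initial_states = [(a, 0) for a in range(4)]     # a bounded stand-in for "all (a,0), a >= 0"
pg1 = lambda s: "inc" if s[0] == 0 else "noop"
print(sorted(generate_system(pg1, initial_states))[0])   # the run from (0,0): y keeps growing
```

Taking `view` to be the identity recovers the standard analysis of Example \[pgxam\]; taking `view(s) = s[1]`, so that the agent sees only $y$ while $x$ sits in the environment, and plugging the knowledge test of ${{\sf Pg}}_2$ into `represents` gives a mechanical way to check the fixed-point claims of Example \[pgxam2\] below.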
This definition is somewhat subtle, and determining the system representing a given knowledge-based program may be nontrivial. Indeed, as shown in [@FHMV; @FHMV94], in general, there may be no systems representing a knowledge-based program ${{\sf Pg}_{{\it kb}}}$ in a given context, only one, or more than one, since the fixed-point equation may have no solutions, one solution, or many solutions. Moreover, computing the solutions may be a difficult task, even if we have only finitely many possible global states. There are conditions sufficient to guarantee that there is exactly one system representing ${{\sf Pg}_{{\it kb}}}$, and these conditions are satisfied by many knowledge-based programs of interest, and, in particular, by the programs discussed in this paper. If ${{\sf Pg}_{{\it kb}}}$ has a unique system representing it in context $\gamma$, then we again denote this system ${{\bf R}}({{\sf Pg}_{{\it kb}}},\gamma)$. \[pgxam2\] The knowledge-based program ${{\sf Pg}}_2$ in Figure \[fig2\], with the test $K(x=0)$, is particularly simple to analyze. If we consider the context $\gamma$ discussed in Example \[pgxam\], then whether or not $x=0$ holds is determined by the process’ local state. Thus, in context $\gamma$, $x=0$ holds iff $K(x=0)$ holds, and the knowledge-based program reduces to the standard program. On the other hand, consider the context $\gamma'$ where the agent’s local state consists just of the value of $y$, while the value of $x$ is part of the environment state. Again, we can identify the global state with a pair $(a,b)$, where $a$ is the current value of $x$ and $b$ is the current value of $y$, but now $a$ represents the environment’s state, while $b$ represents the agent’s state. We can again assume the environment performs the ${\mbox{{\sf no--op}}}$ action at each step, $\Psi$ consists of all runs, the transition function is as in Example \[pgxam\], and the initial states are all possible global states of the form $(a,0)$. In this context, there is also a unique system representing ${{\sf Pg}}_2$: The agent never knows whether $x=0$, so there is a unique run corresponding to each initial state $(a,0)$, in which the global state is $(a,0)$ throughout the run. Finally, let $\gamma''$ be identical to $\gamma'$ except that the only initial state is $(0,0)$. Again, there will be a unique system representing ${{\sf Pg}}_2$ in $\gamma''$, but it is quite different from ${{\bf R}}({{\sf Pg}}_2,\gamma')$. In ${{\bf R}}({{\sf Pg}}_2,\gamma'')$, the agent knows that $x=0$ at all times. There is only one run, where the value of $y$ is augmented at every step. This discussion suggests that a knowledge-based program can be viewed as specifying a set of systems, the ones that satisfy a certain fixed-point property, while a standard program can be viewed as specifying a set of runs, the ones consistent with the program. Standard and knowledge-based specifications {#specs} =========================================== Typically, we think of a protocol being designed to satisfy a [*specification*]{}, or set of properties. Although a specification is often written in some specification language (such as temporal logic), many specifications can usefully be viewed as predicates on runs. This means that we can associate a set of runs with a specification; namely, all the runs that satisfy the required properties. 
Thus, a specification such as “all processes eventually decide on the same value” would be associated with the set of runs in which the processes do all decide the same value.[^3] Researchers have often focused attention on two types of specifications: [*safety properties*]{}—these are invariant properties that have the form “a particular bad thing never happens”—and [*liveness properties*]{}—these are properties that essentially say “a particular good thing eventually does happen” [@OL]. Thus, a run $r$ has a safety property $p$ if $p$ holds at all points $(r,m)$, while $r$ has the liveness property $q$ if $q$ holds at some point $(r,m)$. Suppose we are interested in a program that guarantees that all the processes eventually decide on the same value. We model this by assuming that each process $i$ has a decision variable $x_i$, initially undefined, in its local state (we can assume a special “undefined” value in the domain), which is set once in the course of a run, when the decision is made. Given the way we have chosen to model this problem, we would expect this program to satisfy two safety properties: (1) each process’ decision variable is changed at most once (so that it is never the case that it is set more than once); and (2) if neither $x_i$ nor $x_j$ has value “undefined”, then they are equal. We also expect it to satisfy one liveness property: each decision variable is eventually set. We say that a standard program ${{\sf Pg}}$ [*satisfies*]{} a specification $\sigma$ in a context $\gamma$ if every run consistent with ${{\sf Pg}}$ in $\gamma$ (that is, every run in the system representing ${{\sf Pg}}$ in $\gamma$) satisfies $\sigma$. Similarly, we can say that a knowledge-based program ${{\sf Pg}_{{\it kb}}}$ satisfies specification $\sigma$ in context $\gamma$ if every run in every system representing ${{\sf Pg}_{{\it kb}}}$ satisfies $\sigma$. The notion of specification we have considered so far can be thought of as being [*run based*]{}. A specification $\sigma$ is a predicate on (i.e., set of) runs and a program satisfies $\sigma$ if each run consistent with the program is in $\sigma$. Although run-based specifications arise often in practice, there are reasonable specifications that are not run based. There are times that it is best to think of a specification as being, not a predicate on runs, but a predicate on entire [*systems*]{}. For example, consider a knowledge base (KB) that responds to queries by users. We can imagine a specification that says “To a query of $\phi$, answer ‘Yes’ if you know $\phi$, answer ‘No’ if you know $\neg \phi$, otherwise answer ‘I don’t know’.” This specification is given in terms of the KB’s knowledge, which depends on the whole system and cannot be determined by considering individual runs in isolation. We call such a specification a [*knowledge-based specification*]{}. Typically, we think of a knowledge-based specification being given as a formula involving operators for knowledge and time. Formally, it is simply a predicate on (set of) systems. (Intuitively, it consists of all the systems where the formula is valid—i.e., true at every point in the system.)[^4] We can think of a run-based specification $\sigma$ as a special case of a knowledge-based specification. It consists of all those systems all of whose runs satisfy $\sigma$. A (standard or knowledge-based) program ${{\sf Pg}}$ satisfies a knowledge-based specification $\sigma$ in context $\gamma$ if every system representing ${{\sf Pg}}$ in $\gamma$ satisfies the specification. 
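In the same executable spirit as the earlier sketch, the difference between the two kinds of specification can be made concrete: a run-based specification is a predicate on runs, lifted to systems by quantifying over all runs, whereas a knowledge-based specification is a predicate on the system as a whole. The properties below are the ones discussed in the surrounding text plus one made-up knowledge-based requirement, runs are represented as tuples of $(x,y)$ states as before, and none of this is meant as more than an illustration.

```python
def safety_y_never_1(run):                 # "y is never equal to 1"
    return all(state[1] != 1 for state in run)

def liveness_y_eventually_1(run):          # "y is eventually equal to 1"
    return any(state[1] == 1 for state in run)

def run_based(spec_on_runs):
    """Lift a predicate on runs to a predicate on systems: every run must satisfy it."""
    return lambda system: all(spec_on_runs(run) for run in system)

def knows(system, view, local_state, fact):
    points = [s for run in system for s in run if view(s) == local_state]
    return bool(points) and all(fact(s) for s in points)

# a (made-up) knowledge-based specification: in every initial state the agent knows y = 0;
# it quantifies over the whole system and cannot be checked one run at a time
def kb_spec_initially_knows_y_is_0(system, view=lambda s: s):
    return all(knows(system, view, view(run[0]), lambda s: s[1] == 0) for run in system)

def satisfies(representing_systems, spec_on_systems):
    """A program satisfies a specification iff every system representing it does."""
    return all(spec_on_systems(system) for system in representing_systems)
```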
Notice that knowledge-based specifications bear the same relationship to (standard) specifications as knowledge-based programs bear to standard programs. A knowledge-based specification/program in general defines a set of systems; a standard specification/program defines a set of runs (i.e., a single system). Monotonicity ============ Sanders [-@Sanders] focuses on a particular monotonicity property of specifications. To understand this property, and Sanders’ concerns, we first need some definitions. Given contexts $\gamma = (P_e,\Gz,\tau,\Psi)$ and $\gamma' = (P_e',\Gz',\tau',\Psi')$, we write $\gamma' \sqsubseteq \gamma$ if $P_e = P_e'$, $\Gz' \subseteq \Gz$, $\tau = \tau'$, and $\Psi' \subseteq \Psi$. That is, in $\gamma'$ there may be fewer initial states and fewer admissible runs, but otherwise $\gamma$ and $\gamma'$ are the same. The following lemma is almost immediate from the definitions. \[subset\] If $\gamma' \sqsubseteq \gamma$, then for all protocols $P$, every run consistent with $P$ in $\gamma'$ is also consistent with $P$ in $\gamma$, so ${{\bf R}}(P,\gamma') \subseteq {{\bf R}}(P,\gamma)$. Similarly, for every standard program ${{\sf Pg}}$, we have ${{\bf R}}({{\sf Pg}},\gamma') \subseteq {{\bf R}}({{\sf Pg}},\gamma)$. The restriction in Lemma \[subset\] to [*standard*]{} programs is necessary. It is not true for knowledge-based programs. The set of systems consistent with a knowledge-based program can be rather arbitrary, as Example \[pgxam2\] shows. This example also shows that safety and liveness properties need not be preserved when we restrict the context. The safety property “$y$ is never equal to 1” is satisfied by ${{\sf Pg}}_2$ in context $\gamma'$ but not in context $\gamma''$. On the other hand, the liveness property “$y$ is eventually equal to 1” is satisfied by ${{\sf Pg}}_2$ in context $\gamma''$ but not $\gamma'$. Sanders suggests that this behavior is somewhat counterintuitive. To quote [@Sanders]: > \[A\] [*knowledge-based protocol need not be monotonic with respect to the initial conditions*]{} …\[In particular,\] [*safety and liveness properties of knowledge-based protocols need not be preserved by strengthening the initial conditions*]{}, thus violating one of the most intuitive and fundamental properties of standard programs \[italics Sanders’\].[^5] It is certainly true that the system representing a knowledge-based program in a restricted context is not necessarily a subset of the system representing it in the original context. However, under what is arguably the most natural interpretation of what it means for a program to satisfy a specification with respect to an initial condition, a knowledge-based program [*is*]{} monotonic with respect to initial conditions. To understand why this should be so, we need to make precise what it means for a (knowledge-based) program to satisfy a specification with respect to an initial condition. Formally, we can take an initial condition to be a predicate on global states (so that an initial condition corresponds to a set of global states). An initial condition ${\mbox{{\em INIT}$\,'$}}$ is a [*strengthening*]{} of ${{\em INIT}}$ if ${\mbox{{\em INIT}$\,'$}}$ is a subset of ${{\em INIT}}$. (In logical terms, this means that ${\mbox{{\em INIT}$\,'$}}$ can be thought of as implying ${{\em INIT}}$.) A set $G$ of global states satisfies an initial condition ${{\em INIT}}$ if $G \subseteq {{\em INIT}}$. 
Suppose that we fix $P_e$, $\tau$, and $\Psi$, that is, all the components of a context except the set of initial global states, and consider the family $\Gamma = \Gamma(P_e,\tau,\Psi)$ of contexts of the form $(P_e,\Gz,\tau,\Psi)$, where the set $\Gz$ varies over all subsets of global states. Now it seems reasonable to say that program ${{\sf Pg}}$ [*satisfies specification $\sigma$ (with respect to $\Gamma$) given initial condition INIT*]{} if ${{\sf Pg}}$ satisfies $\sigma$ in every context in $\Gamma$ whose initial global states satisfy ${{\em INIT}}$. With this definition, it is clear that if ${{\sf Pg}}$ satisfies $\sigma$ given ${{\em INIT}}$, and ${\mbox{{\em INIT}$\,'$}}$ is a strengthening of ${{\em INIT}}$, then ${{\sf Pg}}$ must also satisfy $\sigma$ with respect to ${\mbox{{\em INIT}$\,'$}}$, since every context whose initial global states are in ${\mbox{{\em INIT}$\,'$}}$ also has its initial global states in ${{\em INIT}}$. Thus, under this definition of what it means for a program to satisfy a specification, Sanders’ observation is incorrect. However, Sanders used a somewhat different definition. Suppose that rather than considering all contexts in $\Gamma$ whose initial global states satisfy ${{\em INIT}}$, we consider the maximal one, that is, the one whose set of initial global states consists of all global states in $\Sigma$ that satisfy ${{\em INIT}}$. We say that ${{\sf Pg}}$ [*maximally*]{} satisfies specification $\sigma$ (with respect to $\Gamma$) given ${{\em INIT}}$ if ${{\sf Pg}}$ satisfies $\sigma$ in the context in $\Gamma$ whose set of initial global states consists of all global states satisfying ${{\em INIT}}$. It is almost immediate from Lemma \[subset\] and the definitions that for standard programs and standard specifications, “satisfaction with respect to $\Gamma$” coincides with “maximal satisfaction with respect to $\Gamma$”. On the other hand, they can be quite different for knowledge-based programs and knowledge-based specifications, as the following examples show. \[differ1\] For the knowledge-based program ${{\sf Pg}}_2$, if we take $\Gamma$ to consist of all contexts $(P_e,\Gz,\tau,\Psi)$, where $P_e$, $\tau$, and $\Psi$ are as discussed in Example \[pgxam2\] and $\Gz$ is some subset of the global states, then, as we observed above, ${{\sf Pg}}_2$ satisfies the specification “$y$ is never equal to 1” for the initial condition ${{\em INIT}}_1$ which can be characterized by the formula $y=0$ but not for the initial condition ${{\em INIT}}_2$ characterized by $x=0 \land y=0$. Similarly, if ${{\sf Pg}}_3$ is the result of replacing the test $K(x=0)$ in ${{\sf Pg}}_2$ by $\neg K(x=0)$, then ${{\sf Pg}}_3$ satisfies the liveness condition “$y$ is eventually equal to 1” for ${{\em INIT}}_1$ but not for ${{\em INIT}}_2$. This shows that a standard specification (in particular, one involving safety or liveness) may not be monotonic with respect to maximal satisfaction for a knowledge-based program. \[differ2\] Consider the standard program ${{\sf Pg}}_1$ again, but now consider a context where there are two agents. Intuitively, the second agent never learns anything and plays no role. Formally, this is captured by taking the second agent’s local state to always be $\lambda$. Thus, a global state now has the form $(\<a,b\>,\lambda)$. We can again identify the global state with the local state of the first agent (the one performing all the actions). Thus, abusing notation somewhat, we can consider the same set of contexts as in Example \[differ1\]. 
Now consider the knowledge-based specification $K_2 (y = 0)$. This is true with respect to $\Gamma$ for the initial condition ${{\em INIT}}_1$ but not for ${{\em INIT}}_2$. This shows that even for a standard program, a knowledge-based specification may not be monotonic with respect to maximal satisfaction. \[differ3\] In the muddy children problem discussed in [@HM1], the father of the children says “Some \[i.e., one or more\] of you have mud on your forehead.” The father then repeatedly asks the children “Do you know that you have mud on your own forehead?” Thus, the children can be viewed as running a knowledge-based program according to which a child answers “Yes” iff she knows that she has mud on her forehead. The father’s initial statement is taken to restrict the possible initial global states to those where one or more children have mud on their foreheads. It is well known that, under this initial condition, the knowledge-based program satisfies the liveness property “all the children with mud on their foreheads eventually know it”. On the other hand, if the father instead gives the children more initial information, by saying “Child 1 has mud on his forehead” (thus restricting the set of initial global states to those where child 1 has mud on his forehead), none of the children that have mud on their forehead besides child 1 will be able to figure out that they have mud on their forehead. Roughly speaking, this is because the information available to the children from child 1’s “No” answer in the original version of the story is no longer available once the father gives the extra information. (See [@FHMV Example 7.25].) This problem is not an artifact of using knowledge-based programs or specifications. Rather, it is really the case in the original puzzle that if the father had said “Child 1 has mud on his forehead” rather than “Some of you have mud on your foreheads”, the children with mud on their foreheads would never be able to figure out that they had mud on their foreheads. Sometimes extra knowledge can be harmful![^6] As should be clear from the preceding discussion, there are two notions of monotonicity, which happen to coincide (and hold) for standard programs and specifications, but differ if we consider knowledge-based programs or knowledge-based specifications. For knowledge-based programs and specifications, the first notion of monotonicity holds, while the second (monotonicity with respect to maximal satisfaction) does not. Monotonicity is certainly a desirable property—for a monotonic specification and program, once we prove that the specification holds for the program for a given initial condition, then we can immediately conclude that it holds for all stronger initial conditions. Without monotonicity, one may have to reprove the property for all stronger initial conditions. Maximal satisfaction also certainly seems like a reasonable generalization from the standard case. Thus, we should consider to what extent it is a problem that we lose monotonicity for maximal satisfaction when we consider knowledge-based programs and specifications. Of course, whether something is problematic is, in great measure, in the eye of the beholder. Nevertheless, I would claim that, in the case of maximal satisfaction, the only properties that are lost when the initial condition is strengthened are either unimportant properties, or properties that, roughly speaking, [*ought*]{} to be lost. 
More precisely, they are properties that happen to be true of a particular context, but are not intrinsic properties of the program. The examples and the technical discussion below should help to make the point clearer. Thus, this lack of monotonicity should not be viewed as a defect of knowledge-based programs and specifications. Rather, it correctly captures the subtleties of knowledge acquisition in certain circumstances. Some examples {#examples} ============= Consider again the program ${{\sf Pg}}_2$. It can be viewed as saying “perform a sequence of actions (continually increasing $y$) if you know that $x=0$”. In the system ${{\bf R}}({{\sf Pg}}_2,\gamma')$, the initial condition guarantees that the agent does not know the value of $x$, and thus nothing is done. The strengthening of the initial condition to $x=0 \land y=0$ described by $\gamma''$ guarantees that the agent does know that $x=0$, and thus actions are performed. In this case, we surely do not want a safety condition like “$y$ is never equal to 1”, which holds if the sequence of actions is not performed, to be preserved when we strengthen the initial condition in this way. Similarly, for the program ${{\sf Pg}}_3$ defined in Example \[differ1\], where the action is performed if the agent does not know that $x=0$, we would not expect a liveness property like “$y$ is eventually equal to 1” to be preserved. Clearly, there are times when we would like a safety or a liveness property to be preserved when we strengthen initial conditions. But these safety or liveness properties are typically ones that we want to hold of [*all*]{} systems consistent with the knowledge-based program, not just the ones representing the program in certain maximal contexts. The tests in a well-designed knowledge-based program are often there precisely to ensure that desired safety properties do hold in all systems consistent with the program. For example, there may be a test for knowledge to ensure that an action is performed only if it is known to be safe (i.e., it does not violate the safety property). It is often possible to prove that such safety properties hold in all systems consistent with the knowledge-based program; thus, the issue of needing to reprove the property if we strengthen the initial conditions does not arise. (See [@FHMV pp. 259–270] for further discussion of this issue.) In the case of liveness properties, we often want to ensure that a given action is eventually performed. It is typically the case that an action in a knowledge-based program is performed when a given fact is known to be true. Thus, the problem reduces to ensuring that the knowledge is eventually obtained. As a consequence, the knowledge-based approach often makes it clearer what is required for the liveness property to hold. One example of how safety properties can be ensured by appropriate tests for knowledge and how liveness properties reduce to showing that a certain piece of knowledge is eventually obtained is given by the knowledge-based programs of [@HZ]. I illustrate these points here using a simpler example. Suppose we have a network of $n$ processes, connected via a communication network. The network is connected, but not necessarily completely connected. For simplicity, assume each communication link is bidirectional. We assume that all messages arrive within one time unit. 
Each process knows which processes it is connected to; formally, this means that the local state of each process includes a mapping associating each outgoing link with the identity of the neighbor at the other end. We also assume that each process records in its local state the messages it has sent and received. We want a program for process 1 to broadcast a binary value to all the processes in the network. Formally, we assume that each process $i$ has a local variable, say $x_i$, which is intended to store the value. The specification that the program must satisfy consists of three properties. For every run, and for all $i = 1, \ldots, n$, we require the following: 1. $x_i$ changes value at most once, 2. $x_1$ never changes value, and 3. eventually the value of $x_i$ is equal to that of $x_1$. Note that the first two properties are safety properties, and the last is a liveness property. A simple standard program that satisfies this specification is for process 1 to send $v$, the value of $x_1$, to all its neighbors; then the first time process $i$ ($i \ne 1$) gets the value $v$, it sets $x_i$ to $v$ and sends $v$ to all its neighbors except the one from which it received the message. Process $i$ does nothing if it later gets the value $v$ again. This program is easily seen to satisfy the specification in the context implicitly described above. We remark that, in principle, we could modify the first property to allow $x_1$ to change value a number of times before finally “stabilizing” on a final value. However, allowing this would only complicate the description of the property, since we would have to modify the third property to guarantee that the value of $x_i$ after stabilizing is equal to that of $x_1$. We return to this point below. The behavior of each process can easily be captured in terms of knowledge: When a process knows the value of $x_1$, it sends the value to all its neighbors except those that it knows already know the value of $x_1$. Let $K_i(x_1)$ be an abbreviation for “process $i$ knows the value of $x_1$”. (Thus, $K_i(x_1)$ is an abbreviation for $K_i(x_1 = 0) \lor K_i(x_1 = 1)$.) Similarly, let $K_i K_j (x_1)$ be an abbreviation for “process $i$ knows that process $j$ knows the value of $x_1$.” Then we have the joint knowledge-based program ${{\sf DIFFUSE}}= ({{\sf DIFFUSE}}_1, \ldots, {{\sf DIFFUSE}}_n)$, where ${{\sf DIFFUSE}}_i$, the program followed by process $i$, is $$\begin{array}{l} {\bf do\ forever}\\ \ \ \ \ \ {\bf if}\ K_i(x_1) \\ \ \ \ \ \ {\bf then} \\ \ \ \ \ \ \ \ \ \ \ x_i := x_1;\\ \ \ \ \ \ \ \ \ \ \ \mbox{{\bf for} each neighbor $j$ of $i$}\\ \ \ \ \ \ \ \ \ \ \ {\bf do}\\ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ {\bf if} \ \neg K_i K_j (x_1) \ {\bf then} \mbox{ send the value of $x_1$ to $j$ {\bf end}}\\ \ \ \ \ \ \ \ \ \ \ {\bf end}\\ \ \ \ \ \ {\bf end}\\ {\bf end}. \end{array}$$ By considering this knowledge-based program, we abstract away from the details of how $i$ gains knowledge of the value of $x_1$. If $i=1$, then presumably the value was known all along; otherwise it was perhaps acquired through the receipt of a message. 
Similarly, the fact that $i$ sends the value of $x_1$ to a neighbor $j$ only if $i$ doesn’t know that $j$ knows the value of $x_1$ handles two of the details of the standard program: (1) it guarantees that $i$ does not send the value of $x_1$ to $j$ if $i$ received the value of $x_1$ from $j$, and (2) it guarantees that $i$ does not send the value of $x_1$ to its neighbors more than once.[^7] Finally, observe that ${{\sf DIFFUSE}}$ is correct even if messages can be lost, as long as the system satisfies an appropriate fairness assumption (if a message is sent infinitely often, it will eventually be delivered).[^8] In this case process $i$ would keep sending the value of $x_1$ to $j$ until $i$ knows (perhaps by receiving an acknowledgment from $j$) that $j$ knows the value of $x_1$. The fact that ${{\sf DIFFUSE}}$ is correct “even if messages can be lost” or “no matter what the network topology” means that the program meets its specification in a number of different contexts. This knowledge-based program has another advantage: it suggests ways to design more efficient standard programs. For example, process $i$ does not have to send the value of $x_1$ to all its neighbors (except the one from which it received the value of $x_1$) if it has some other way of knowing that a neighbor already knows the value of $x_1$. This may happen if the value of $x_1$ has a header describing to which processes it has already been sent. It might also happen if the receiving process has some knowledge of the network topology (for example, there is no need to rebroadcast the value of $x_1$ if communication is reliable and all processes are neighbors of process 1). Returning to our main theme, notice that in every context $\gamma$ consistent with our assumptions, in the system(s) representing ${{\sf DIFFUSE}}$ in $\gamma$, the three properties described above are satisfied: $x_i$ changes value at most once in any run, $x_1$ never changes value, and eventually the value of $x_i$ is equal to that of $x_1$. Notice also the role of the test $K_i(x_1)$ in ensuring that the safety properties hold. As a result of the test, we know that $x_i$ is not updated until the value of $x_1$ is known; when it is updated, it is set to $x_1$. This guarantees that $x_1$ never changes value, and that $x_i$ changes value at most once and, when it does, it is set to $x_1$. All that remains is to guarantee that $x_i$ is eventually set to $x_1$. What the knowledge-based program makes clear is that this amounts to ensuring that all processes eventually know the value of $x_1$. It is easy to prove that this is indeed the case. It is also easy to see that there are other properties that do not hold in all contexts. For a simple example, suppose that $n=3$, so there are three processes in the network. Suppose that there is a link from process 1 to process 2, and a link from process 2 to process 3, and that these are the only links in the network. Moreover, suppose that the network topology is common knowledge. Given these simplifying assumptions, a process $i$’s initial state consists of an encoding of the network topology, its name, and the value of $x_i$. Now consider two contexts: in context $\gamma_1$, there are 8 initial global states, in which $(x_1,x_2,x_3)$ take on all values in $\{0,1\}^3$; in $\gamma_2$, there are 4 initial global states, in which $(x_1,x_2,x_3)$ take on all values in $\{0,1\}^3$ such that $x_1 = x_3$. 
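The two contexts just described are small enough to simulate. The sketch below mechanizes ${{\sf DIFFUSE}}$ on the line $1$–$2$–$3$. It is not the official fixed-point semantics: the knowledge tests are evaluated round by round over the same-round points of all runs of the context, which, with the messages sent and received recorded in the local states, agrees here with the analysis carried out in prose next. The number of rounds is capped, the assignments $x_i := x_1$ are omitted (they are determined by the messages), and all names are illustrative only.

```python
from itertools import product

NEIGHBORS = {1: (2,), 2: (1, 3), 3: (2,)}
ROUNDS = 3

def initial_runs(initial_states):
    # local state of process i: (initial value of x_i, messages received, messages sent)
    return {init: {i: (init[i - 1], (), ()) for i in (1, 2, 3)} for init in initial_states}

def known_x1(i, states, runs):
    """The value of x_1 that process i knows in the run with local states `states',
    or None: x_1 must be the same in every run in which i has the same local state."""
    vals = {init[0] for init, st in runs.items() if st[i] == states[i]}
    return vals.pop() if len(vals) == 1 else None

def simulate(initial_states):
    runs, sends = initial_runs(initial_states), []
    for rnd in range(1, ROUNDS + 1):
        messages = []
        for init, st in runs.items():
            for i in (1, 2, 3):
                if known_x1(i, st, runs) is None:
                    continue                     # the outer test K_i(x_1) fails
                for j in NEIGHBORS[i]:
                    # inner test: does i know that j knows the value of x_1?
                    if not all(known_x1(j, other, runs) is not None
                               for other in runs.values() if other[i] == st[i]):
                        messages.append((init, i, j, init[0]))
                        sends.append((rnd, i, j))
        new_runs = {init: dict(st) for init, st in runs.items()}
        for init, i, j, v in messages:           # messages arrive within one round
            xi, rec, snt = new_runs[init][i]
            new_runs[init][i] = (xi, rec, snt + ((rnd, j, v),))
            xj, rec_j, snt_j = new_runs[init][j]
            new_runs[init][j] = (xj, rec_j + ((rnd, i, v),), snt_j)
        runs = new_runs
    return sends

gamma1 = list(product((0, 1), repeat=3))                 # 8 initial global states
gamma2 = [s for s in gamma1 if s[0] == s[2]]             # the 4 states with x_1 = x_3
for name, ctx in (("gamma_1", gamma1), ("gamma_2", gamma2)):
    sends = simulate(ctx)
    print(name, "- process 2 ever sends to process 3:",
          any(i == 2 and j == 3 for _, i, j in sends))
```

Running it reports that process 2 sends to process 3 in $\gamma_1$ but never does so in $\gamma_2$, which is exactly the difference analyzed below.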
Intuitively, in context $\gamma_2$, process 3 knows the value of $x_1$ (since it is the same as the value of $x_3$, which is part of process 3’s initial state), while in $\gamma_1$, neither process 2 nor process 3 knows the value of $x_1$. Let $\R_1 = {{\bf R}}({{\sf DIFFUSE}},\gamma_1)$ and let $\R_2 = {{\bf R}}({{\sf DIFFUSE}},\gamma_2)$. It is not hard to see that $\R_1$ has eight runs, one corresponding to each initial global state. In each of these runs, process 1 sends the value of $x_1$ to process 2 in round 1; process 2 sets $x_2$ to this value in round 2 and forwards the value to process 3; in round 3, process 3 sets $x_3$ to this value (and sends no messages). (Note that, formally, [*round*]{} $k$ takes place between times $k-1$ and $k$.) Similarly, $\R_2$ has four runs, one corresponding to each initial global state. In these runs, process 3 initially knows the value of $x_1$, although process 2 does not. Moreover, process 2 knows this. Thus, in the first round of the runs in $\R_2$, both process 1 and process 3 send the value of $x_1$ to process 2. But now, process 2 does not send a message to process 3 in the second round. As expected, we can observe that not all liveness properties are preserved as we move from $\R_1$ to $\R_2$. For example, the runs in $\R_1$ all satisfy the liveness property “eventually process 2 sends a message to process 3”. Clearly the runs in $\R_2$ do not satisfy this liveness property. This should be seen as a feature, not a bug! There is no reason to preserve the sending of unnecessary messages. The extra knowledge obtained when the initial conditions are strengthened may render sending the message unnecessary. Discussion {#conclusion} ========== When designing programs, we often start with a specification and try to find an (easily-implementable) standard program that satisfies it. The process of going from a specification to an implementation is often a difficult one. I would argue that quite often it is useful to express the properties we desire using a knowledge-based specification, proceed from there to construct a knowledge-based program, and then go from the knowledge-based program to a standard program. While this approach may not always be helpful (indeed, if a badly designed knowledge-based program is used, then it may actually be harmful), there is some evidence showing that it can help. The first examples of going from knowledge-based specifications to (standard) programs can be found in [@APP; @DM; @Kurki] (although the formal model used in [@APP; @Kurki] is somewhat different from that described here). The approach described here was used in [@HZ] to derive solutions to the [*sequence transmission problem*]{} (the problem of transmitting a sequence of bits reliably over a possibly faulty communication channel). All the programs derived in [@HZ] are (variants of) well-known programs that solved the problem. While I would argue that the knowledge-based approach shows the commonality in the approaches used to solve the problem, and allows for easier and more uniform proofs of correctness, certainly this example by itself is not convincing evidence of the power of the knowledge-based approach. Perhaps more convincing evidence is provided by the results of [@DM; @HMW; @MT], where this approach is used to derive programs that are optimal (in terms of number of rounds required) for Byzantine Agreement and Eventual Byzantine Agreement. 
In this case, the programs derived were new, and it seems that it would have been quite difficult to derive them directly from the original specifications. Knowledge-based specifications are more prevalent than it might at first seem. We are often interested in constructing programs that not only satisfy some safety and liveness conditions, but also use a minimal number of messages or rounds. As we have already observed, specifications of the form “do not send unnecessary messages” are not standard specifications; the same is true for a specification of the form “halt as soon as possible”. Such specifications can be viewed as knowledge-based specifications. The results of [@DM; @HMW; @MT] can be viewed as showing how knowledge-based specifications arise in the construction of round-efficient programs. The tests for knowledge in the knowledge-based programs described in these papers explicitly embody the intuition that a process decides as soon as it is safe to do so. Similar sentiments about the importance of knowledge-based specifications are expressed by Mazer [-@Maz91] (although the analogy between knowledge-based programs and knowledge-based specifications is not made in that paper): > Epistemic \[i.e., knowledge-based\] specifications are surprisingly common: a problem specification that asserts that a property or value is private to some process [*is*]{} an epistemic specification (e.g., “each database site knows whether it has committed the transaction”). We are also interested in epistemic properties to capture assertions on the extent to which a process’s local state accurately reflects aspects of the system state, such as “each database site knows whether the others have committed the transaction”. For another example of the usefulness of knowledge-based specifications, recall our earlier discussion of the specification of the program for broadcasting a message through a network. If we replace the liveness requirements by the simple knowledge-based requirement “eventually process $i$ knows the value of $x_1$”, we can drop the first property (that $x_i$ changes value at most once) altogether. Indeed, we do not have to mention $x_i$, $i \ne 1$, at all. The knowledge-based specification thus seems to capture our intuitive requirements for the program more directly and elegantly than the standard specification given. A standard specification can be viewed as a special case of a knowledge-based specification, one in which the set of systems satisfying it is closed under unions and subsets. It is because of these closure properties that we have the property that if a standard program satisfies a standard specification $\sigma$ in a context $\gamma$, then it satisfies it in any restriction of $\gamma$. Clearly, this is not a property that holds of standard programs once we allow knowledge-based specifications. Nevertheless, as the examples above suggest, there is something to be gained—and little to be lost—by allowing the greater generality of knowledge-based specifications. In particular, although we do lose monotonicity, there are other ways of ensuring that safety and liveness properties do hold in the systems of interest. By forcing us to think in terms of systems, rather than of individual runs, both knowledge-based programs and knowledge-based specifications can be viewed as requiring more “global” thinking than their standard counterparts. The hope is that thinking at this level of abstraction makes the design and specification of programs easier to carry out. 
We still need more experience using this framework before we can decide whether this hope will be borne out and whether the knowledge-based approach as described here is really useful. Sanders has other criticisms of the use of knowledge-based programs that I have not addressed here. Very roughly, she provides pragmatic arguments that suggest that we use predicates that have some of the properties of knowledge (for example $K \phi \rimp \phi$), but not necessarily all of them. This theme is further pursued in [@EMM98]. While I believe that using predicates that satisfy some of the properties of knowledge will not prove to be as useful as sticking to the original notion of knowledge, we clearly need more examples to better understand the issues. Besides more examples, as pointed out by Sanders [-@Sanders], it would also be useful to have techniques for reasoning about knowledge-based programs without having to construct the set of runs generated by the program. In [@FHMV], a simple knowledge-based programming language is proposed. Perhaps standard techniques for proving program correctness can be applied to it (or some variant of it). A first step along these lines was taken by Sanders [-@Sanders], who extended [UNITY]{} [@CM88] in such a way as to allow the definition of knowledge predicates (although it appears that the resulting knowledge-based programs are somewhat less general than those described here), and then used proof techniques developed for [UNITY]{} to prove the correctness of another knowledge-based protocol for the sequence transmission problem. (We remark that techniques for reasoning about knowledge obtained in CSP programs, but not for knowledge-based programs, were given in [@KT].) Once we have a number of examples and better techniques in hand, we shall need to carry out a careful evaluation of the knowledge-based approach, and a comparison of it and other approaches. I believe that once the evidence is in, it will show that there are indeed significant advantages that can be gained by thinking at the knowledge level. [**Acknowledgments:**]{} I would like to thank Ron Fagin, Yoram Moses, Beverly Sanders, and particularly Vassos Hadzilacos, Murray Mazer, Moshe Vardi, and Lenore Zuck for their helpful comments on earlier drafts of the paper. Moshe gets the credit for the observation that knowledge-based protocols do satisfy monotonicity. Finally, I would like to thank Karen Seidel for asking a question at PODC ’91 that inspired this paper. [^1]: Much of this work was carried out while the author was at the IBM Almaden Research Center. IBM’s support is gratefully acknowledged. The work was also supported in part by NSF under grant IRI-96-25901, and by the Air Force Office of Scientific Research under contract F49620-91-C-0080 and grant F49620-96-1-0323. [^2]: Strictly speaking, to evaluate the tests, we need an [*interpretation*]{} that assigns truth values to formulas in each global state. For the programs considered here, the appropriate interpretation will be immediate from the description of the system, so I ignore interpretations here for ease of exposition. [^3]: Of course, there are useful specifications that cannot be viewed as predicates on runs. While [*linear time*]{} temporal logic assertions are predicates on runs, [*branching time*]{} temporal logic assertions are best viewed as predicates on trees. (See [@EH2; @Lam80] for a discussion of the differences between linear time and branching time.) 
For example, Koo and Toueg’s notion of [*weak termination*]{} [@KooToueg] requires that at every point there is a possible future where everyone terminates. In the notation used in this paper, this would mean that for every point $(r,m)$, there must be another point $(r',m)$ such that $r$ and $r'$ are identical up to time $m$, and at some point $(r',m')$ with $m' \ge m$, every process terminates. This assertion is easily expressed in branching time logic. Probabilistic assertions such as “all processes terminate with probability .99” also cannot be viewed as predicates on individual runs. Other examples of specifications that cannot be viewed as a predicate on runs are discussed later in this section. Nevertheless, specifications that are predicates on runs are sufficiently prevalent that it seems reasonable to give them special attention. [^4]: As the examples discussed in Footnote 2 show, not all predicates on systems can be expressed in terms of formulas involving knowledge and time. I will not attempt to characterize here the ones that can be so expressed. It is not even clear that such a characterization is either feasible or useful. [^5]: In [@HF87], a notion of knowledge-based [*protocol*]{} was introduced, and Sanders is referring to that notion, rather than the notion of knowledge-based [*program*]{} that I am using here. See [@FHMV94] for a discussion of the difference between the two notions. Sanders’ comments apply without change to knowledge-based programs as defined here. [^6]: Another example of the phenomenon that extra knowledge can be harmful can be found in [@MDH]. This is also a well-known phenomenon in the economics/game theory literature [@Neyman]. [^7]: This argument depends in part on our assumption that process $i$ is keeping track of the messages it sends and receives. If $i$ forgets the fact that it received the value of $x_1$ from $j$ then (if $i$ follows ${{\sf DIFFUSE}}_i$), it would send the value of $x_1$ back to $j$. Similarly, if $i$ receives the value of $x_1$ a second time and forgets that it has already sent it once to its neighbors, then according to ${{\sf DIFFUSE}}_i$, it would send it again. In addition, the assumption that there are no process failures is crucial. [^8]: Note that this fairness assumption can be captured by using an appropriate set $\Psi$ (consisting only of runs where the fairness condition is satisfied) in the context.
{ "pile_set_name": "ArXiv" }
--- abstract: 'If $b$ is an inner function, then composition with $b$ induces an endomorphism, $\beta$, of $L^\infty({\mathbb{T}})$ that leaves $H^\infty({\mathbb{T}})$ invariant. We investigate the structure of the endomorphisms of $B(L^2({\mathbb{T}}))$ and $B(H^2({\mathbb{T}}))$ that implement $\beta$ through the representations of $L^\infty({\mathbb{T}})$ and $H^\infty({\mathbb{T}})$ in terms of multiplication operators on $L^2({\mathbb{T}})$ and $H^2({\mathbb{T}})$. Our analysis, which is based on work of R. Rochberg and J. McDonald, will wind its way through the theory of composition operators on spaces of analytic functions to recent work on Cuntz families of isometries and Hilbert $C^*$-modules.' address: | Department of Mathematics\ University of Iowa\ Iowa City, IA 52242 author: - Dennis Courtney - 'Paul S. Muhly' - 'Samuel W. Schmidt' title: Composition Operators and Endomorphisms --- [^1] Introduction ============ Our objective in this note is to link the venerable theory of composition operators on spaces of analytic functions to the representation theory of $C^{*}$-algebras. The theory of composition operators is full of equations that involve operators that intertwine various types of representations. In certain situations the equations can be recast in terms of “covariance equations” that are familiar from the theory of $C^{*}$-algebras, their endomorphisms and their representations; doing this yields both new theorems and new understanding of known results. We are inspired in particular by papers by Richard Rochberg [@rR73] and John McDonald [@jMcD03]. In [@rR73 Theorem 1], Rochberg performs calculations which may be seen from a more contemporary perspective as identifying certain Cuntz families of isometries and Hilbert $C^{*}$-modules at the heart of what he is studying. In [@jMcD03], McDonald built upon Rochberg’s work and proved, among other things, that the canonical transfer operator associated to composition with a finite Blaschke product leaves the Hardy space $H^{2}({\mathbb{T}})$ invariant. This note is in large part the result of trying to recast [@rR73 Theorem 1] in the setting of $C^{*}$-algebras and endomorphisms using McDonald’s observation on transfer operators [@jMcD03 Lemma 2]. The classical Lebesgue and Hardy spaces on the unit circle $\mathbb{T}$ will be denoted by $L^{p}(\mathbb{T})$ and $H^{p}(\mathbb{T})$ respectively. Normalized Lebesgue measure on ${\mathbb{T}}$ will be denoted $m$. The orthogonal projection from $L^{2}(\mathbb{T})$ onto $H^{2}(\mathbb{T})$ will be denoted by $P$. The usual exponential orthonormal basis for $L^{2}(\mathbb{T})$ will be denoted by $\{e_{n}\}_{n\in\mathbb{Z}}$, i.e., $e_{n}(z):=z^{n}$. We write $(\cdot, \cdot)$ for the inner product of $L^2(\mathbb{T})$. The multiplication operator on $L^{2}(\mathbb{T})$ determined by a function $\varphi\in L^{\infty}(\mathbb{T})$ will be denoted $\pi(\varphi)$ and the Toeplitz operator on $H^{2}(\mathbb{T})$ determined by $\varphi$ will be denoted by $\tau(\varphi)$, i.e., $\tau(\varphi)$ is the restriction of $P\pi(\varphi)P$ to $H^{2}(\mathbb{T})$. Our use of the notation $\pi$ and $\tau$ is nonstandard. More commonly, one writes $M_{f}$ for the multiplication operator determined by $f$ and $T_{f}$ for the Toeplitz operator determined by $f$, but for the purposes of this note, we have found the standard notation to be a bit awkward. 
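To make the notation concrete, the following small sketch builds finite sections of $\pi(\varphi)$ and $\tau(\varphi)$ in the exponential basis; in that basis the $(m,n)$ entry of either matrix is the Fourier coefficient $\hat{\varphi}(m-n)$. The sketch is not taken from the paper: the symbol $\varphi$ and the truncation sizes are arbitrary illustrative choices.

```python
import numpy as np

# A sketch, not from the paper: finite sections of pi(phi) and tau(phi) in the
# exponential basis e_n(z) = z^n.  The symbol phi and the truncation sizes are
# arbitrary illustrative choices.

M = 64                                    # sample points on the circle
z = np.exp(2j * np.pi * np.arange(M) / M)

phi = 3 + z + 0.5 * z**2 + 2 * np.conj(z)          # a trigonometric polynomial

def fourier_coeff(f, k):
    """k-th Fourier coefficient of f, computed by quadrature on the M points."""
    return np.mean(f * z**(-k))

K = 8                                     # truncation size
# (pi(phi) e_n, e_m) = phi_hat(m - n), with m, n ranging over -K, ..., K-1.
pi_phi = np.array([[fourier_coeff(phi, m - n) for n in range(-K, K)]
                   for m in range(-K, K)])
# tau(phi) keeps only m, n >= 0, giving a Toeplitz matrix.
tau_phi = np.array([[fourier_coeff(phi, m - n) for n in range(K)]
                    for m in range(K)])

# Sanity check: the finite section of tau(e_1) is the unilateral shift.
tau_shift = np.array([[fourier_coeff(z, m - n) for n in range(K)]
                      for m in range(K)])
print(np.allclose(tau_shift, np.eye(K, k=-1)))     # expected: True
```

In particular, $\tau(e_{1})$ is the unilateral shift, which reappears below in the proof of Lemma \[Rochbergs Lemma\].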
In any case, the map $\pi$ is a $C^{*}$-representation of $L^{\infty}(\mathbb{T})$ on $L^{2}(\mathbb{T})$ that is continuous with respect to the weak-$*$ topology on $L^{\infty}(\mathbb{T})$ and the weak operator topology on $B(L^{2}(\mathbb{T}))$, and $\tau$ is a (completely) positive linear map from $L^{\infty}(\mathbb{T})$ to $B(H^{2}(\mathbb{T}))$ with similar continuity properties. We fix throughout an inner function $b$ which at times will further be assumed to be a finite Blaschke product. Composition with $b$, that is, the map $\varphi \mapsto \varphi \circ b$, is known to induce a $*$-endomorphism $\beta$ of $L^{\infty}(\mathbb{T})$ that is continuous with respect to the weak-$*$ topology on $L^{\infty}({\mathbb{T}})$. When $b$ is a finite Blaschke product this statement is fairly elementary; if $b$ is an arbitrary inner function, it is somewhat more substantial. We give an operator-theoretic proof in Corollary \[Cor:Well defined beta\]. When $\beta$ leaves a subspace of $L^{\infty}(\mathbb{T})$ invariant, we will continue to use the notation $\beta$ for its restriction to the subspace. The central focus of our analysis is \[Problem: Central problem\] Describe all $*$-endomorphisms $\alpha$ of $B(L^{2}(\mathbb{T}))$ such that $$\alpha\circ\pi=\pi\circ\beta\label{eq:cov1}$$ and describe all $*$-endomorphisms $\alpha_{+}$ of $B(H^{2}(\mathbb{T}))$ such that $$\alpha_{+}\circ\tau=\tau\circ\beta. \label{eq:cov2}$$ If an endomorphism $\alpha$ of $B(L^{2}({\mathbb{T}}))$ satisfies , the pair $(\pi,\alpha)$ is called a *covariant representation* of the pair $(L^{\infty}({\mathbb{T}}),\beta)$. As $\pi$ will be fixed throughout this note, the first part of our problem is thus to identify all endomorphisms $\alpha$ of $B(L^{2}({\mathbb{T}}))$ that yield a covariant representation $(\pi,\alpha)$ of $(L^{\infty}({\mathbb{T}}),\beta)$. Equation  is a hybrid version of , but as we shall see, it may be interpreted as describing certain covariant representations of the Toeplitz algebra, i.e., of the $C^{*}$-algebra $\mathfrak{T}$ generated by all the Toeplitz operators $\tau(\varphi)$, $\varphi\in L^{\infty}({\mathbb{T}})$. It is not clear *a priori* that *any* endomorphisms satisfying or exist. They do, however, as we shall show in Theorem \[Thm: Main1\], where Rochberg’s work plays a central role. Then, in Corollary \[Thm: Solution2\], we show how Rochberg’s analysis yields a complete description of all solutions to . Identifying all solutions to is more complicated, and it is here that we must assume that $b$ is a finite Blaschke product. The set of solutions to is described in Theorem \[Thm:Solution 1\] under this restriction. In solving Problem \[Problem: Central problem\] we obtain many new proofs of known results. We do not take any position on the matter of which proofs are simpler or more elementary. Our more modest goal is to separate what can be derived through elementary Hilbert space considerations from what requires more specific function-theoretic analysis. In this respect, we were inspired by the work of Helson and Lowdenslager [@HL61], Halmos and others who cast Hardy space theory in Hilbert space terms and, in particular, showed that Beurling’s theorem about invariant subspaces of the shift operator can be proved with elementary Hilbert space methods. Indeed, as we shall see, our main Theorem \[Thm: Main1\] is a straightforward corollary of Beurling’s theorem and requires no more technology than Helson and Lowdenslager’s approach to that result. 
This paper, therefore, has something of a didactic component. When we reprove or reinterpret a known result, we call attention to it and give references to alternative approaches. Preliminaries and Background {#sec:preliminaries} ============================ It is well known that when $H$ is a Hilbert space, $B(H)$ is the dual space of the space of trace class operators on $H$. The weak-$*$ topology on $B(H)$ is often called the *ultraweak* topology. We adopt that terminology here. The ultraweak topology is different from the weak operator topology, but the two coincide on bounded subsets of $B(H)$. It follows that our representation $\pi$ is continuous with respect to the weak-$*$ topology on $L^{\infty}({\mathbb{T}})$ and either the weak operator topology or the ultraweak topology on $B(L^{2}({\mathbb{T}}))$. As indicated earlier, it is straightforward to see that composition with a *finite* Blaschke product induces an endomorphism of $L^{\infty}({\mathbb{T}})$. It is less clear that composition with an arbitrary inner function has this property. There are two reasons for this. The first is that the boundary values of a general inner function $b$ are only defined on a set $F\subseteq{\mathbb{T}}$ with $m({\mathbb{T}}\backslash F)=0$. The second is that an element of $L^{\infty}({\mathbb{T}})$ is an *equivalence class* of measurable functions containing a bounded representative, where two functions are equivalent if and only if they differ on a null set. Thus we want to know that if we extend $b$ arbitrarily on ${\mathbb{T}}\backslash F$, mapping to ${\mathbb{T}}$, and if $\varphi$ and $\psi$ differ at most on a null set, then so do $\varphi\circ b$ and $\psi\circ b$. A little reflection reveals that for this to happen, it is necessary and sufficient that the following assertion be true: > If $b$ is an inner function whose domain on ${\mathbb{T}}$ is the measurable set $F$, then for every null set $E$ of ${\mathbb{T}}$, $b^{-1}(E)$ is a null set of $F$. This fact is well known, but exactly who deserves credit for first proving it is unclear to us. The short note by Kametani and Ugaheri [@KU42] proves it in the case that $b(0)=0$. This implies the general case, as Lebesgue null sets of ${\mathbb{T}}$ are preserved by conformal maps of the disc, and every inner function $b$ can be written $b=\alpha\circ b_{1}$ with $b_{1}$ an inner function fixing the origin and $\alpha$ a conformal map of the disc. In Corollary \[Cor:Well defined beta\], we will give a proof of this assertion from the abstract Hilbert space perspective that we are promoting. We will need the following lemma. To emphasize the distinction between a measurable function $f$ and its equivalence class modulo the relation of being equal almost everywhere, we *temporarily* write $[f]$ for the latter. \[lem: Null set\] Let $\theta$ be a Lebesgue measurable function from ${\mathbb{T}}$ to ${\mathbb{T}}$. Suppose $\Theta$ is defined on trigonometric polynomials $p$ by the formula $\Theta(p)=p\circ\theta$. Then 1. \[nullsetone\] $\Theta$ has a unique extension to a $*$-homomorphism from $C({\mathbb{T}})$ into $L^{\infty}({\mathbb{T}})$, and it is given by the formula $\Theta(\varphi)=[\varphi\circ\theta]$, $\varphi\in C({\mathbb{T}})$. 2. 
\[nullsettwo\] If $\Theta$ is continuous with respect to the weak-$*$ topology of $L^{\infty}({\mathbb{T}})$ restricted to $C({\mathbb{T}})$ and the weak-$*$topology on $L^{\infty}({\mathbb{T}})$, then for each Lebesgue null set $E$ of ${\mathbb{T}}$, $m(\theta^{-1}(E))=0$, and thus $\Theta$ extends uniquely to a $*$-endomorphism of $L^{\infty}({\mathbb{T}})$ satisfying $\Theta([\varphi])=[\varphi\circ\theta]$ for all $[\varphi]\in L^{\infty}({\mathbb{T}})$. The map $\Theta$ is completely determined by $[\theta]$. For the first assertion it suffices to note that if $p$ is a trigonometric polynomial, then since $\theta$ is assumed to map ${\mathbb{T}}$ to ${\mathbb{T}}$, $\Vert[p\circ\theta]\Vert_{L^{\infty}}=\sup_{z\in{\mathbb{T}}}\vert p(\theta(z))\vert\leq\sup_{z\in{\mathbb{T}}}\vert p(z)\vert=\Vert p\Vert_{C({\mathbb{T}})}$. For the second assertion, fix the Lebesgue null set $E$, and choose a $G_{\delta}$ set $E_{0}$ containing $E$ such that $E_{0}\backslash E$ has measure zero. So, if $\{f_{n}\}_{n\geq0}$ is a sequence in $C({\mathbb{T}})$ such that $f_{n}\downarrow1_{E_{0}}$[^2] pointwise, then $[f_{n}]$ converges to $[1_{E_{0}}]=[1_{E}]$ weak-$*$. But also, $f_{n}\circ\theta\downarrow1_{E_{0}}\circ\theta=1_{\theta^{-1}(E_{0})}$ pointwise. Therefore, $[f_{n}\circ\theta]$ converges to $[1_{E_{0}}\circ\theta]=[1_{\theta^{-1}(E_{0})}]$ weak-$*$. As $E$ is a null set, so is $E_{0}$, and the $[f_{n}]$ converge to $0$ weak-$*$. Our hypothesis then implies that the $[\Theta(f_{n})]=[f_{n}\circ\theta]$ converge to $0$ weak-$*$, proving that $m(\theta^{-1}(E_{0}))=0$. As $\theta^{-1}(E)\subseteq\theta^{-1}(E_{0})$ it follows that $\theta^{-1}(E)$ is also a null set, as desired. The remaining assertions are immediate. Because of this lemma, if $b$ is an inner function that may be defined only on a subset $F$ of ${\mathbb{T}}$ with $m({\mathbb{T}}\backslash F)=0$, it does no harm to extend $b$ to all of ${\mathbb{T}}$ by setting $b(z)=1$ for all $z\in{\mathbb{T}}\backslash F$. Next, we want to say a few words about $*$-endomorphisms of $B(H)$, where $H$ is a separable Hilbert space. Our discussion largely follows Section 2 of [@wA89]. A *Cuntz family* on $H$ is an $N$-tuple of isometries $\{S_{i}\}_{i=1}^{N}$ on $H$ with mutually orthogonal ranges that together span $H$; here the number $N$ may be a positive integer or $\infty$. A Cuntz family $S=\{S_{i}\}_{i=1}^{N}$ on $H$ determines a map $\alpha_{S}:B(H)\to B(H)$ via $$\label{cuntzinduce} \alpha_{S}(T)=\sum_{i=1}^{N}S_{i}TS_{i}^{*},\qquad T\in B(H).$$ (If $N=\infty$, this sum is convergent in the strong operator topology.) The map $\alpha_{S}$ is readily seen to be a $*$-endomorphism of $B(H)$; multiplicativity is deduced from the fact that a tuple $S=\{S_{i}\}_{i=1}^{N}$ is a Cuntz family if and only if the *Cuntz relations* $$S_{i}^{*}S_{j}=\delta_{ij}I,\qquad1\leq i,j\leq N,\label{eq:Cuntz 1}$$ and $$\sum_{i=1}^{N}S_{i}S_{i}^{*}=I\label{eq:Cuntz 2}$$ are satisfied. (These relations are named after J. Cuntz, who made a penetrating analysis of them in [@jC77].) Significantly, *every* $*$-endomorphism $\alpha$ of $B(H)$, with $H$ separable, is of the form $\alpha_{S}$ for some Cuntz family $S$. We recall the details. Fixing a $*$-endomorphism $\alpha$, define $E=\{S\in B(H)\mid ST=\alpha(T)S,\, T\in B(H)\}$. A short calculation shows that for any $S_{1}$ and $S_{2}$ in $E$ the product $S_{2}^{*}S_{1}$ commutes with all elements of $B(H)$, and is hence a scalar. 
We may thus define an inner product ${\langle}\cdot,\cdot{\rangle}$ on $E$ by the formula $${\langle}S_{1},S_{2}{\rangle}I=S_{2}^{*}S_{1},\qquad S_{1},S_{2}\in E,$$ and $E$ with this inner product is a Hilbert space. It is readily checked that any orthonormal basis $S=\{S_{i}\}_{i=1}^{N}$ for $E$ is a Cuntz family satisfying $\alpha=\alpha_{S}$, so it is enough to know that $E$ *has* an orthonormal basis - that is, that $E\neq\{0\}$. This follows from the fact that a $*$-endomorphism of $B(H)$, when $H$ is separable, is necessarily ultraweakly continuous[^3], and that an ultraweakly continuous unital representation of $B(H)$ is necessarily unitarily equivalent to a multiple of the identity representation of $B(H)$. That multiple is the dimension of $E$. The correspondence between endomorphisms and Cuntz families is not quite one-to-one. However, as Laca observed [@mL93 Proposition 2.2], if $S=\{S_{i}\}_{i=1}^{N}$ and $\tilde{S}=\{\tilde{S}_{i}\}_{i=1}^{N}$ are two Cuntz families such that $\alpha_{S}=\alpha_{\tilde{S}}$, then there is a unitary matrix $(u_{ij})$ so that $\widetilde{S}_{i}=\sum_{j}u_{ij}S_{j}$, and conversely. The reason is that $S$ and $\tilde{S}$ are both orthonormal bases for the same Hilbert space $E$. (More concretely, one may just check that the scalars $u_{ij}=S_{j}^{*}\widetilde{S}_{i}$ have the desired properties.) Our goal, then, is to describe the collection of Cuntz families $S=\{S_{i}\}_{i=1}^{N}$ on $L^{2}(\mathbb{T})$ and $R=\{R_{i}\}_{i=1}^{N}$ on $H^{2}(\mathbb{T})$ such that $(\pi,\alpha_{S})$ is a covariant representation of $(L^{\infty}({\mathbb{T}}),\beta)$ and $(\tau,\alpha_{R})$ is a covariant representation of $(\mathfrak{T},\beta)$ in the sense of equations  and : $$\sum_{i=1}^{N}S_{i}\pi(\varphi)S_{i}^{*}=\pi(\beta(\varphi))\label{eq:Cuntz1a}, \qquad \varphi \in L^{\infty}(\mathbb{T}),$$ and $$\sum_{i=1}^{N}R_{i}\tau(\varphi)R_{i}^{*}=\tau(\beta(\varphi))\label{eq:Cuntz2a}, \qquad \varphi \in L^{\infty}(\mathbb{T}).$$ Finally, we adopt the following notation for Blaschke products. If $w$ is a nonzero point of the open unit disc $\mathbb{D}$ then $b_{w}$ will denote the function $$b_{w}(z):=\frac{|w|}{w}\frac{w-z}{1-\overline{w}z},$$ and $b_{0}(z):=z$. If $a_{1},a_{2},\ldots,a_{N}$ is a finite list of not-necessarily-distinct numbers in $\mathbb{D}$, then we will write $b=\Pi_{j=1}^{N}b_{a_{j}}$ for the Blaschke product with zeros at $a_{1},a_{2},\ldots,a_{N}$, i.e., multiplicity will be taken into account. Rochberg’s Observation ====================== Our analysis hinges on an observation that we learned from R. Rochberg’s paper [@rR73]. A preliminary remark on isometries in abstract Hilbert space is useful. If $V$ is an isometry on a Hilbert space $H$, and $D$ is the subspace $H \ominus VH$, it is easy to check that the spaces $D, VD, V^2 D, \dots$ are mutually orthogonal, and that $(\bigoplus_{k \geq 0} V^k D)^{\perp} = \bigcap_{j \geq 0} V^j H$, so that $H = \bigoplus_{k \geq 0} V^k D$ if and only if $V$ is *pure* in the sense that $\bigcap_{j \geq 0} V^j H = \{0\}$. If $H = H^2({\mathbb{T}})$ and $V$ is the isometry $\tau(b) = \pi(b)|_{H^2({\mathbb{T}})}$ induced by a nonconstant inner function $b$, it turns out that $V$ is pure, and that in fact $D$ is a complete wandering subspace for the unitary $\pi(b)$ in the sense of below. This is a minor modification of a point made in [@rR73 Theorem 1]. \[Rochbergs Lemma\] Let $b$ be a nonconstant inner function, and let $\mathcal{D}:=H^{2}(\mathbb{T})\ominus\pi(b)H^{2}(\mathbb{T})$. 
Then $$\label{h2span} H^2({\mathbb{T}}) = \bigoplus_{k \geq 0} \pi(b)^k \mathcal{D},$$ and $$\label{l2span} L^{2}(\mathbb{T})=\bigoplus_{n \in {\mathbb{Z}}} \pi(b)^{n}\mathcal{D}.$$ As we have just observed, equation follows once we know that the space $K:=\bigcap_{n=0}^{\infty}\pi(b)^{n}H^{2}(\mathbb{T})$ is the zero subspace. But as $\pi(b)$ commutes with $\pi(z)$, the space $K$ is invariant for the unilateral shift $\tau(z) = \pi(z)|_{H^2({\mathbb{T}})}$. If $K \neq \{0\}$, by Beurling’s theorem there is an inner function $\theta$ with $K = \pi(\theta) H^2({\mathbb{T}})$. As $\pi(b) K = K$ by definition, we see that $\pi(b) \pi(\theta) H^2({\mathbb{T}}) = \pi(\theta) H^2({\mathbb{T}})$, and applying $\pi(\theta^{-1})$ to both sides we conclude that $\pi(b) H^2({\mathbb{T}}) = H^2({\mathbb{T}})$. But $b$ is nonconstant, so by the uniqueness assertion in Beurling’s theorem (see [@hH64 Theorem 3]), $\pi(b) H^2({\mathbb{T}})$ is a proper subspace of $H^2({\mathbb{T}})$. This contradiction shows that $K = \{0\}$, and follows. Since $\pi(b)$ is a unitary on $L^2({\mathbb{T}})$, it is immediate from that the spaces $\pi(b)^n \mathcal{D}$, $n \in {\mathbb{Z}}$, are mutually orthogonal. Letting $L = \bigvee_{k \in {\mathbb{Z}}} \pi(b)^k \mathcal{D}$, it is clear from that $L = \bigvee_{k \geq 0} \pi(b)^{-k} H^2({\mathbb{T}})$, and thus that $L$ is invariant under $\pi(z)$. By Helson and Lowdenslager’s generalization of Beurling’s theorem (see [@HL61 Section 1] or [@hH64 Theorem 3]), either there is a unimodular $\theta \in L^{\infty}({\mathbb{T}})$ with $L =\pi(\theta)H^{2}(\mathbb{T})$ or there is a measurable $E \subseteq {\mathbb{T}}$ satisfying $L = \pi(1_{E})L^{2}(\mathbb{T})$. In the first case, as clearly $\pi(b) L = L$, we conclude that $\pi(\theta) \pi(b) H^2({\mathbb{T}}) = \pi(\theta) H^2({\mathbb{T}})$, and applying $\pi(\theta^{-1})$ to both sides we conclude that $\pi(b) H^2({\mathbb{T}}) = H^2({\mathbb{T}})$, which contradicts the fact that $b$ is not constant. Thus there is $E \subseteq {\mathbb{T}}$ with $L = \pi(1_E) L^2({\mathbb{T}})$, and the fact that $L$ contains $H^2({\mathbb{T}})$ implies $E = {\mathbb{T}}$, so $L = L^2({\mathbb{T}})$ as desired. \[Cor:Well defined beta\] If $b$ is an arbitrary inner function and if $\beta$ is defined on trigonometric polynomials $p$ by the formula $\beta(p):=p\circ b$, then $\beta$ extends to a $*$-endomorphism of $L^{\infty}({\mathbb{T}})$ that is continuous with respect to the weak-$*$ topology. Lemma \[Rochbergs Lemma\] implies that $\pi(b)$ is unitarily equivalent to a multiple of the bilateral shift - the multiple being $\dim(\mathcal{D})$. Thus there is a Hilbert space isomorphism $W$ from $L^{2}({\mathbb{T}})$ to $L^{2}({\mathbb{T}})\otimes\mathcal{D}$ such that $\pi(b)=W^{-1}(\pi(z)\otimes I_{\mathcal{D}})W$. So, for every trigonometric polynomial $p$, $$\pi(\beta(p))=p(\pi(b))=W^{-1}p(\pi(z)\otimes I_{\mathcal{D}})W.$$ Since $\pi$ is a homeomorphism with respect to the weak-$*$ topology on $L^{\infty}({\mathbb{T}})$ and the ultraweak topology restricted to the range of $\pi$, it is evident that $b$ and $\beta$ satisfy the hypotheses of Lemma \[lem: Null set\], and the desired result follows. Of course, the proof just given recapitulates parts of the well-known theory of the functional calculus for unitary operators. \[Thm: Main1\] Let $b$ be a non-constant inner function. 
If $\{v_{i}\}_{i=1}^{N}$ is an orthonormal basis for $\mathcal{D}=H^{2}(\mathbb{T})\ominus\pi(b)H^{2}(\mathbb{T})$, then there is a unique Cuntz family $S = \{S_i\}_{i=1}^N$ on $L^2({\mathbb{T}})$ satisfying $$\label{cuntzexist} S_i(e_n) = v_i b^n, \qquad 1 \leq i \leq N, \quad n \in {\mathbb{Z}}.$$ The endomorphism $\alpha_{S}$ determined by $S$ as in satisfies $ \alpha_S \circ\pi=\pi\circ\beta$, where $\beta$ is the endomorphism $\varphi \mapsto \varphi \circ b$ of $L^{\infty}({\mathbb{T}})$. Each $S_{i}$ is reduced by $H^{2}(\mathbb{T})$, and if $R_{i}$ is the restriction of $S_{i}$ to $H^{2}(\mathbb{T})$, then $R = \{R_{i}\}_{i=1}^{N}$ is a Cuntz family on $H^{2}(\mathbb{T})$ with the property that $\alpha_{R}\circ\tau=\tau\circ\beta$. The proof of Lemma \[Rochbergs Lemma\] showed that $\mathcal{D} = H^2({\mathbb{T}}) \ominus \pi(b) H^2({\mathbb{T}})$ is nonzero, so it has an orthonormal basis; its dimension $N$ may be finite or infinite. It is well known that $N$ is finite if and only if $b$ is a finite Blaschke product. (See Remark \[Canonical Basis\].) Lemma \[Rochbergs Lemma\] implies that if $v$ is any unit vector in $\mathcal{D}$, the set $\{v b^n: n \in {\mathbb{Z}}\}$ is an orthonormal set of vectors in $L^2({\mathbb{T}})$. It follows that for any $1 \leq i \leq N$, there is a unique isometry $S_i$ on $L^2$ satisfying $S_i(e_n) = v_i b^n$ for all $n \in {\mathbb{Z}}$. Lemma \[Rochbergs Lemma\] also implies that if $v$ and $w$ are any orthogonal unit vectors in $\mathcal{D}$, the closed linear spans of $\{v b^n: n \in {\mathbb{Z}}\}$ and $\{w b^n: n \in {\mathbb{Z}}\}$ are orthogonal. It follows that the isometries in the tuple $S = \{S_i\}_{i=1}^N$ just defined have orthogonal ranges. Let $K$ denote the closed linear span of the ranges of the operators $\{S_i\}_{i=1}^N$. By construction, for all $n \in {\mathbb{Z}}$ we have $v_i b^n \in K$ for all $1 \leq i \leq N$, and thus $K \supseteq \pi(b)^n \mathcal{D}$ for all $n \in {\mathbb{Z}}$. By Lemma \[Rochbergs Lemma\] we conclude that $K = L^2({\mathbb{T}})$ and $S$ is a Cuntz family of isometries. Viewing each $e_{n}$ as an element of $L^{\infty}(\mathbb{T})$, it is evident that $$S_{i}\pi(e_{n})=\pi(b^{n})S_{i}=\pi(\beta(e_{n}))S_{i}, \qquad 1 \leq i \leq N, \quad n \in {\mathbb{Z}}. \label{eq:PrimitiveCovariant}$$ Since this equation is linear in the $e_{n}$, we conclude that $S_{i}\pi(p)=\pi(\beta(p))S_{i}$ for every $i$ and every trigonometric polynomial $p$. Consequently, $$\pi(\beta(p)) = \pi(\beta(p))\sum_{i=1}^{N}S_{i}S_{i}^{*} = \sum_{i=1}^{N}S_{i}\pi(p)S_{i}^{*} = \alpha_S(\pi(p))$$ is satisfied for every trigonometric polynomial $p$. It follows from Corollary \[Cor:Well defined beta\] that equation  is satisfied for all $\varphi\in L^{\infty}(\mathbb{T})$. The fact that $H^2({\mathbb{T}})$ is invariant under each $S_i$ is immediate from the definition . As Lemma \[Rochbergs Lemma\] implies that $\{v_i b^n: 1 \leq i \leq N, n < 0\}$ is an orthonormal basis of $H^2({\mathbb{T}})^{\perp}$, it is also clear from that $H^2({\mathbb{T}})^{\perp}$ is invariant under each $S_i$, so each $S_i$ is reduced by $H^2({\mathbb{T}})$. The fact that $R$ is a Cuntz family on $H^{2}(\mathbb{T})$ satisfying $\alpha_{R}\circ\tau=\tau\circ\beta$ is then immediate. Recall that $\mathfrak{T}$ is the $C^{*}$-algebra generated by all the Toeplitz operators $\{\tau(\varphi)\mid\varphi\in L^{\infty}({\mathbb{T}})\}$. 
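Before developing the Toeplitz picture further, here is a small numerical illustration of the orthogonality underlying Lemma \[Rochbergs Lemma\] and Theorem \[Thm: Main1\]. It is only a sketch, not part of the paper's arguments: the zeros of $b$, the truncation ranges, and the choice of a particular unit vector in $\mathcal{D}$ (a normalized Cauchy kernel, a standard description of an element of $\mathcal{D}$, which also appears as the first element of the canonical basis in Remark \[Canonical Basis\] below) are illustrative assumptions.

```python
import numpy as np

# A numerical illustration of the orthonormal family {v b^n} behind Theorem
# [Thm: Main1] -- a sketch only; the zeros and ranges below are arbitrary.
a = [0.5, -0.3 + 0.4j]                    # zeros of b (taken distinct, nonzero)

M = 2048
z = np.exp(2j * np.pi * np.arange(M) / M)
inner = lambda f, g: np.mean(f * np.conj(g))          # the L^2(T) inner product

b = np.ones_like(z)
for aj in a:
    b = b * (abs(aj) / aj) * (aj - z) / (1 - np.conj(aj) * z)

# A unit vector in D = H^2(T) minus pi(b)H^2(T): the normalized Cauchy kernel
# at the zero a[0].  (That it lies in D is a standard fact, assumed here; it is
# also the first element of the canonical basis of Remark [Canonical Basis].)
v = np.sqrt(1 - abs(a[0]) ** 2) / (1 - np.conj(a[0]) * z)

# {v b^n : n in Z} should be orthonormal in L^2(T).
ns = list(range(-3, 4))
gram = np.array([[inner(v * b ** n, v * b ** m) for m in ns] for n in ns])
print(np.allclose(gram, np.eye(len(ns))))             # expected: True

# v is orthogonal to pi(b)H^2(T): test against b, b*z, b*z^2, ...
print(max(abs(inner(v, b * z ** k)) for k in range(6)))   # expected: ~ 0
```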
We shall write $\mathfrak{T}(C({\mathbb{T}}))$ for $C^{*}$-subalgebra generated by the Toeplitz operators with continuous symbols, i.e., $\mathfrak{T}(C({\mathbb{T}}))$ is the $C^{*}$-subalgebra of $B(H^{2}(\mathbb{T}))$ generated by $\{\tau(\varphi)\mid\varphi\in C({\mathbb{T}})\}$. It is well known that $\mathfrak{T}(C(\mathbb{T}))=\{\tau(\varphi)+k\mid\varphi\in C(\mathbb{T}),\, k\in\mathfrak{K}\}$, where $\mathfrak{K}$ denotes the algebra of compact operators on $H^{2}(\mathbb{T})$ [@rD98 7.11 and 7.12]. \[Cor:Extend to Toeplitz Algebra\] If $b$ is an inner function, then the map $\tau(\varphi)\to\tau(\varphi\circ b)$, $\varphi\in L^{\infty}({\mathbb{T}})$, extends to a $*$-endomorphism of $\mathfrak{T}$ that we will continue to denote by $\beta$. Further, $\beta$ leaves $\mathfrak{T}(C({\mathbb{T}}))$ invariant if and only if $b$ is a finite Blaschke product. Thus, if $\iota$ denotes the identity representation of $\mathfrak{T}$ on $H^{2}({\mathbb{T}})$, then any solution $\alpha_{+}$ of equation  (equivalently, any solution $R:=\{R_{i}\}_{i=1}^{N}$ to equation ) yields a covariant representation $(\iota,\alpha_{+})$ of $(\mathfrak{T},\beta)$ and $(\iota,\alpha_{+})$ preserves $(\mathfrak{T}(C({\mathbb{T}})),\beta)$ if and only if $b$ is a finite Blaschke product. As elementary as this result seems to be, we do not know how to prove it without recourse to Theorem \[Thm: Main1\]. The existence of a solution $\alpha_{+}$ to equation guarantees that the map $\tau(\varphi)\to\tau(\varphi\circ b)$, $\varphi\in L^{\infty}({\mathbb{T}})$, extends to a $*$-endomorphism of $\mathfrak{T}$, because $\alpha_{+}$ is a $C^{*}$-endomorphism of a larger $C^{*}$-algebra, namely $B(H^{2}({\mathbb{T}}))$. Thus Theorem \[Thm: Main1\] shows that composition with $b$ extends to $\mathfrak{T}$. If $b$ is a finite Blaschke product then composition with $b$ leaves $C({\mathbb{T}})$ invariant, i.e., $\beta$ leaves $C({\mathbb{T}})$ invariant. Since the solution $\alpha_{+}$ to equation  is of the form $\alpha_{R}$ where the Cuntz family $R$ is *finite*, $\alpha_{+}$ leaves $\mathfrak{K}$ invariant and, therefore, it leaves $\mathfrak{T}(C({\mathbb{T}}))$ invariant when $b$ is a finite Blaschke product. Conversely, if $\beta$ leaves $\mathfrak{T}(C(\mathbb{T}))$ invariant, then letting $\varphi(z)=z$, we see that $\tau(b)=\alpha_{+}(\tau(\varphi))=\tau\circ\beta(\varphi)$ must be of the form $\tau(f)+k$, for some compact operator $k$ and some continuous function $f$. But then $\tau(b-f)=k$, and so, by [@rD98 7.15], $b=f$ is continuous, and hence a finite Blaschke product. Rochberg’s analysis and Laca’s result [@mL93 Proposition 2.2] together yield the following. \[Thm: Solution2\] A Cuntz family $R=\{R_{i}\}_{i=1}^{N}$ in $B(H^{2}({\mathbb{T}}))$ satisfies the equation $\alpha_{R}\circ\tau=\tau\circ\beta$ if and only if there is an orthonormal basis $\{v_{i}\}_{i=1}^{N}$ for $\mathcal{D} = H^{2}({\mathbb{T}})\ominus\pi(b)H^{2}({\mathbb{T}})$ so that the $R_{i}$ may be expressed in terms of it as in Theorem \[Thm: Main1\]. Theorem \[Thm: Main1\] asserts that if $R$ is a Cuntz family in $B(H^{2}({\mathbb{T}}))$ of the indicated form, then $\alpha_{R}\circ\tau=\tau\circ\beta$. For the converse, suppose $R:=\{R_{i}\}_{i=1}^{N}$ is a Cuntz family in $B(H^{2}({\mathbb{T}}))$ so that $\alpha_{R}\circ\tau=\tau\circ\beta$. Then, as we saw in Corollary \[Cor:Extend to Toeplitz Algebra\], $\alpha_{R}$ leaves the Toeplitz algebra $\mathfrak{T}$ invariant. 
Choose any orthonormal basis $\{v_{i}\}_{i=1}^{N}$ for $\mathcal{D}$ and let $\widetilde{R}=\{\widetilde{R}_{i}\}_{i=1}^{N}$ be the corresponding Cuntz family on $H^{2}({\mathbb{T}})$ obtained from Theorem \[Thm: Main1\]. Then the equation $\alpha_{\widetilde{R}}\circ\tau=\tau\circ\beta$ is also satisfied, by Theorem \[Thm: Main1\]. It follows that $\alpha_{R}$ and $\alpha_{\widetilde{R}}$ agree on $\mathfrak{T}$. Since $\mathfrak{T}$ is ultraweakly dense in $B(H^{2}({\mathbb{T}}))$ (because $\mathfrak{T}$ contains $\mathfrak{K}$) and since $\alpha_{R}$ and $\alpha_{\widetilde{R}}$ are ultraweakly continuous maps of $B(H^{2}({\mathbb{T}}))$, $\alpha_{R}=\alpha_{\widetilde{R}}$ on all of $B(H^{2}({\mathbb{T}}))$. Thus by [@mL93 Proposition 2.2], there is a unitary $N\times N$ *scalar* matrix $\left(u_{ij}\right)$ such that $R_{i}=\sum_{j}u_{ij}\widetilde{R}_{j}$. But $\{(\sum_{j}u_{ij}v_{j})\}_{i=1}^{N}$ is also an orthonormal basis of $\mathcal{D}$, and so the $R_{i}$’s have the desired form. \[Canonical Basis\] It was previously remarked that the nonconstant inner function $b$ is a finite Blaschke product if and only if the space $\mathcal{D}=H^{2}(\mathbb{T})\ominus\pi(b)H^{2}(\mathbb{T})$ has finite dimension. In fact, if $b$ is a finite Blaschke product, then $\mathcal{D}$ has dimension equal to the number of zeros of $b$ and its elements are rational functions with poles located in a finite set outside the closed unit disc. This may be seen by writing $$b(z)=\prod_{j=1}^{N}b_{\alpha_{j}},\label{eq:Blaschke Product}$$ where the $\alpha_{j}$ are the not-necessarily-distinct zeros of $b$. One can check that the functions $\{w_{i}\}_{i=1}^{N}$ constructed from partial products of $b$ by way of $$w_{j}(z)=\frac{(1-\vert\alpha_{j}\vert^{2})^{1/2}}{1-\overline{\alpha_{j}}z}\prod_{k=1}^{j-1}b_{\alpha_{k}}, \qquad 1 \leq j \leq N,$$ (the product $\prod_{k=1}^{j-1}b_{\alpha_{k}}$ is interpreted as $1$ when $j=1$), form an orthonormal basis for $\mathcal{D}$ (see [@jW56 p. 305]). We call this the *canonical* orthonormal basis for $\mathcal{D}$. Note that the elements of the canonical basis are nonzero on ${\mathbb{T}}$ and hence invertible elements of $C({\mathbb{T}})$. The analysis in [@jW56] shows that if $b$ is not a finite Blaschke product, then $\mathcal{D}$ is infinite dimensional. Alternatively, one may use the simple corollary of Beurling’s theorem that asserts that $\pi(\theta_1)H^2({\mathbb{T}}) \subseteq \pi(\theta_2)H^2({\mathbb{T}})$ if and only if the quotient $\theta_1/\theta_2$ is an inner function. (See [@hH64 page 11 ff.].) The point from this perspective is: if $b$ is not a finite Blaschke product, then $b$ has infinitely many inner factors, say $b=\Pi_{n=1}^{\infty} b_n$, and from these, one can construct an infinite increasing sequence of closed subspaces of $\mathcal{D}$. To identify all the solutions to equation \[eq:cov1\] in Problem \[Problem: Central problem\], we need to restrict attention to finite Blaschke products. For this reason and to get a clearer picture of the Cuntz isometries implementing $\alpha$ and $\alpha_{+}$ we emphasize: > *From now on, $b$ will denote a* **finite** *Blaschke product.* In [@jR66 Theorem 1], Ryff shows that if $\varphi$ is analytic on the disc $\mathbb{D}$ and maps $\mathbb{D}$ into $\mathbb{D}$, then composition with $\varphi$ induces a bounded operator on all the $H^{p}$ spaces. The principal ingredient in his proof is Littlewood’s subordination theorem. 
In [@jR66 Theorem 3], Ryff shows further that composition with $\varphi$ is an isometry on $H^{p}$ if and only if $\varphi$ is an inner function that vanishes at the origin. The following consequence of Theorem \[Thm: Main1\] is a variation on this theme with a very elementary proof. \[cor: boundedness gamma-b\]Let $b$ be a finite Blaschke product and define $\Gamma_{b}$ on trigonometric polynomials $p$ by $\Gamma_{b}(p):=p\circ b$. Then $\Gamma_{b}$ extends in a unique way to a bounded operator on $L^{2}(\mathbb{T})$ that leaves $H^{2}(\mathbb{T})$ invariant. Moreover, letting $\Gamma_{b}$ now denote the extension, the following are equivalent: 1. \[gammaiso\] $\Gamma_b$ is an isometry. 2. \[bzero\] $b(0)=0$. 3. \[gammareduce\] $\Gamma_b$ is reduced by $H^2({\mathbb{T}})$. Fix an element $w$ of the canonical basis for $\mathcal{D} = H^2({\mathbb{T}}) \ominus \pi(b) H^2({\mathbb{T}})$. By Theorem \[Thm: Main1\] there is a unique isometry $S$ on $L^2({\mathbb{T}})$ satisfying $$\label{cuntzprop} S(e_n) = w b^n, \qquad n \in {\mathbb{Z}}.$$ As observed in Remark \[Canonical Basis\], $w$ is an invertible element of $C({\mathbb{T}})$, so the operator $\pi(w)$ is invertible. The relation then implies that for any trigonometric polynomial $p$ we have $$\pi(w^{-1}) S (p) = \Gamma_b (p),$$ so the bounded operator $\pi(w^{-1}) S$ is an extension of $\Gamma_b$ to all of $L^2({\mathbb{T}})$. Uniqueness of the extension follows from the density of the trigonometric polynomials in $L^2({\mathbb{T}})$. The fact that $\Gamma_b(e_n) = b^n$ is in $H^2({\mathbb{T}})$ for every $n \geq 0$ implies that this extension leaves $H^2({\mathbb{T}})$ invariant. If $b(0)=0$, then $b(z)=zb_{1}(z)$, where $b_{1}$ is in $H^2({\mathbb{T}})$. It follows that for any $n > m$ we have $(b^{n},b^{m})=(z^{n-m}b_{1}^{n-m},1)=0$, so that the family $\{b^n\}_{n \in {\mathbb{Z}}}$ is orthonormal. Since $\Gamma_b(e_{n})=b^n$ for all $n \in {\mathbb{Z}}$, we conclude that $\Gamma_{b}$ is an isometry. Conversely, if $\Gamma_{b}$ is an isometry, $$b(0)=(b,e_{0})=(\Gamma_{b}(e_{1}),\Gamma_{b}(e_{0}))=(e_{1},e_{0})=0.$$ This establishes the equivalence of and . It will be useful later to deduce the equivalence of and from the assertion that if a vector $\xi \in L^2({\mathbb{T}})$ has the property that the pointwise product $\xi b$ is in $H^2({\mathbb{T}})$, then $\Gamma_b^* \xi$ is in $H^2({\mathbb{T}})$ if and only if $(\xi b)(0) = 0$. To prove this assertion, note that $\Gamma_b^* \xi$ is in $H^2({\mathbb{T}})$ if and only if $(\Gamma_b^* \xi, z^{-n}) = 0$ for all $n > 0$, and this is equivalent to $$0 = (\xi, \Gamma_b(z^{-n})) = (\xi, b^{-n}) = (\xi b^n, 1) = ((\xi b) b^{n-1}, 1) = (\xi b)(0) b^{n-1}(0), \qquad n > 0,$$ which is equivalent to $(\xi b)(0) = 0$. It follows from this assertion that $\Gamma_b^* \xi \in H^2({\mathbb{T}})$ for all $\xi \in H^2({\mathbb{T}})$ if and only if $b(0) = 0$. All of our proofs to this point have used only elementary operator theory. To go further, we require more detailed information about finite Blaschke products. The Master Isometry =================== The zeros of our finite Blaschke product $b$ will be written $\alpha_{1},\alpha_{2},\ldots,\alpha_{N}$, and we abbreviate $b_{\alpha_{j}}$ by $b_{j}$. As it was in Corollary \[cor: boundedness gamma-b\], the bounded operator of composition by $b$ on $L^2({\mathbb{T}})$ is denoted $\Gamma_b$. 
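Two facts recorded above — that the canonical basis of Remark \[Canonical Basis\] really is orthonormal, and that $\Gamma_{b}$ is an isometry exactly when $b(0)=0$ (Corollary \[cor: boundedness gamma-b\]) — are easy to test numerically. The sketch below is not part of the paper's arguments; the zeros of $b$, the sample size, and the truncation ranges are arbitrary choices.

```python
import numpy as np

# A numerical sketch (not part of the paper's arguments); the zeros, the sample
# size and the ranges below are arbitrary illustrative choices.
M = 2048
z = np.exp(2j * np.pi * np.arange(M) / M)
inner = lambda f, g: np.mean(f * np.conj(g))            # the L^2(T) inner product

def moebius(a, w):                                      # the factor b_a
    return w if a == 0 else (abs(a) / a) * (a - w) / (1 - np.conj(a) * w)

def blaschke(zeros, w):
    out = np.ones_like(w)
    for a in zeros:
        out = out * moebius(a, w)
    return out

def canonical_basis(zeros):
    """The functions w_j of Remark [Canonical Basis], sampled on the circle."""
    ws, partial = [], np.ones_like(z)
    for a in zeros:
        ws.append(np.sqrt(1 - abs(a) ** 2) / (1 - np.conj(a) * z) * partial)
        partial = partial * moebius(a, z)
    return ws

zeros = [0.5, -0.3 + 0.4j, 0.2j]                        # here b(0) != 0
W = canonical_basis(zeros)
gram_W = np.array([[inner(wi, wj) for wj in W] for wi in W])
print(np.allclose(gram_W, np.eye(len(W))))              # expected: True

# Gamma_b is an isometry iff b(0) = 0; equivalently, the Gram matrix of the
# powers b^n is the identity, since (b^n, b^m) = b(0)^(n-m) for n >= m.
def power_gram(zeros, K=4):
    b = blaschke(zeros, z)
    return np.array([[inner(b ** n, b ** m) for m in range(K)] for n in range(K)])

print(np.allclose(power_gram(zeros), np.eye(4)))        # False: b(0) != 0 here
print(np.allclose(power_gram([0.0, 0.5]), np.eye(4)))   # True:  b(0) = 0
```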
Although all Cuntz families $\{S_{i}\}_{i=1}^{N}$ that we constructed in Theorem \[Thm: Main1\] are closely linked to $\Gamma_{b}$, $\Gamma_{b}$ is not quite the operator we want to work with. It turns out that there is a single *isometry* $C_{b}$, built canonically from $\Gamma_{b}$, that has the property that *every* Cuntz family $S=\{S_{i}\}_{i=1}^{N}$ satisfying equation  can be expressed in terms of $C_{b}$. More remarkably, $C_{b}$ is *reduced* by $H^{2}({\mathbb{T}})$. We call this isometry the *master isometry determined by* $b$ (or by the endomorphism $\beta$ induced by $b$.) Much of the material below is contained in results already in the literature (see in particular [@jMcD03] and [@HW08]). But many calculations are done under the additional hypothesis that $b(0)=0$, which we want specifically to avoid. In the interest of keeping our treatment self-contained, we present all of the details. \[Lem: Postive\] Define $$J_{0}(z):=\frac{b^{\prime}(z)z}{Nb(z)}.\label{eq:J-zero}$$ Then the restriction of $J_{0}$ to $\mathbb{T}$ is a positive continuous function and, in particular, is bounded away from zero. Of course, $J_{0}$ is a rational function. What needs proof is that on ${\mathbb{T}}$, $J_{0}$ is positive, non-vanishing and has no poles. If $\alpha_{j}\neq0$, then $$\frac{b_{j}'(z)}{b_{j}(z)}=\frac{1}{z}\frac{1-|\alpha_{j}|^{2}}{|\alpha_{j}-z|^{2}},$$ while if $\alpha_{j}=0$, $\frac{b_{j}'}{b_{j}}(z)=\frac{1}{z}$. In either case, $\frac{zb_{j}'(z)}{b_{j}(z)}$ is strictly positive on ${\mathbb{T}}$. A short calculation shows that $J_{0}(z)=\frac{1}{N}\sum_{j=1}^{N}\frac{zb_{j}^{'}(z)}{b_{j}(z)}$, so the result follows. The next lemma follows [@jMcD03 Lemma 1] closely. \[Lem: Local homeo\]There is an increasing homeomorphism $\theta:[0,2\pi]\to[\theta(0),\theta(0)+N\cdot2\pi]$, where $e^{i\theta(0)}=b(1)$, such that 1. $b(e^{it})=e^{i\theta(t)}$. 2. The derivative of $\theta$ on $(0,2\pi)$ is $\frac{b'(e^{it})}{b(e^{it})}e^{it}\gneq0$. 3. If $(t_{j-1},t_{j})=\theta^{-1}(\theta(0)+(j-1)\cdot2\pi,\theta(0)+j\cdot2\pi)$, $j=1,2,\ldots,N$, and if $A_{j}:=\{e^{it}\mid t_{j-1}<t<t_{j}\}$, then $\cup_{j=1}^{N}A_{j}={\mathbb{T}}$, except for a finite set of points, and $b$ maps each $A_{j}$ diffeomorphically onto $\mathbb{T}\backslash\{b(1)\}$. 4. If $\sigma_{j}:{\mathbb{T}}\backslash\{b(1)\}\to A_{j}$ denotes the inverse of the restriction of $b$ to $A_{j}$, then as $s$ ranges over $(\theta(0)+2\pi(j-1),\theta(0)+2\pi j)$, $e^{is}$ ranges over ${\mathbb{T}}\backslash\{b(1)\}$ and $$\sigma_{j}(e^{is})=e^{i\theta^{-1}(s)}.$$ Each $b_{j}$ is analytic in a neighborhood of the closed unit disc and maps $\mathbb{T}$ homeomorphically onto $\mathbb{T}$ in an orientation preserving fashion. If the plane is slit along the ray through the origin and $b_{j}(1)$, then one can define an analytic branch of $\log z$ in the resulting region. On $\mathbb{T}\backslash\{b_{j}(1)\}$, $\log b_{j}(e^{it})=i\theta_{j}(t)$ for a smooth function $\theta_{j}(t)$ defined initially on $(0,2\pi)$, and mapping to $(\theta_{j}(0),\theta_{j}(0)+2\pi)$. Further, if one differentiates the defining equation for $\theta_{j}$, one finds that $i\theta_{j}'(t)=\frac{b_{j}'(e^{it})}{b_{j}(e^{it})}e^{it}i$, so $\theta_{j}^{'}$ is strictly positive, as was shown in the preceding lemma. Hence $\theta_{j}$ is strictly increasing. Since $b_{j}(e^{i0})=b_{j}(1)=b_{j}(e^{i2\pi})$, $\theta_{j}$ extends to a homeomorphism from $[0,2\pi]$ *onto* $[\theta_{j}(0),\theta_{j}(0)+2\pi]$. 
If $\theta$ is defined on $[0,2\pi]$ by the formula $\theta(t):=\sum_{j=1}^{N}\theta_{j}(t)$, then $\theta$ is a strictly increasing homeomorphism from $[0,2\pi]$ onto $[\theta(0),\theta(0)+N\cdot2\pi]$ such that $b(e^{it})=e^{i\theta(t)}$. The remaining assertions are now clear. \[definition: canonical transfer operator\] The *(canonical) transfer operator* determined by the Blaschke product $b$ is defined on measurable functions $\xi$ by the formula $${\mathcal{L}}(\xi)(z):=\frac{1}{N}\sum_{b(w)=z}\xi(w).$$ Of course, an alternate formula for ${\mathcal{L}}$ is ${\mathcal{L}}(\xi)(z)=\frac{1}{N}\sum_{j=1}^{N}\xi(\sigma_{j}(z))$, when $z\in{\mathbb{T}}\backslash\{b(1)\}$. It is clear that ${\mathcal{L}}$ carries measurable functions to measurable functions, preserves order, and is unital. Because $b$ is a local homeomorphism, ${\mathcal{L}}$ carries $C({\mathbb{T}})$ into itself. It is not difficult to see that ${\mathcal{L}}$ is a bounded linear operator on $L^{2}({\mathbb{T}})$. However, we present a proof of this that connects ${\mathcal{L}}$ with the adjoint of $\Gamma_{b}$. For this purpose, note that by Lemma \[Lem: Postive\], $\pi(J_{0})$ is a bounded, positive, invertible operator on $L^{2}({\mathbb{T}})$ with inverse $\pi(J_{0}^{-1})$. \[Theorem: transfer\]$${\mathcal{L}}\pi(J_{0})^{-1}=\Gamma_{b}^{*}\label{eq:Transfer}$$ For $\xi$ and $\eta$ in $L^{2}({\mathbb{T}})$, $$(\Gamma_{b}^{*}\xi,\eta)=(\xi,\Gamma_{b}\eta)=\int_{{\mathbb{T}}}\xi(z)\overline{\eta(b(z))}\, dm(z)=\sum_{j=1}^{N}\int_{A_{j}}\xi(z)\overline{\eta(b(z))}\, dm(z).$$ From the first and third assertions of Lemma \[Lem: Local homeo\], $$\int_{A_{j}}\xi(z)\overline{\eta(b(z))}\, dm(z)=\frac{1}{2\pi}\int_{t_{j-1}}^{t_{j}}\xi(e^{it})\overline{\eta(e^{i\theta(t)})}\, dt.$$ Changing the variable to $s=\theta(t)$, the third and fourth assertions of Lemma \[Lem: Local homeo\] imply $$\frac{1}{2\pi}\int_{t_{j-1}}^{t_{j}}\xi(e^{it})\overline{\eta(e^{i\theta(t)})}\, dt=\frac{1}{2\pi}\int_{\theta(t_{j-1})}^{\theta(t_{j})}\xi(e^{i\theta^{-1}(s)})\overline{\eta(e^{is})}(\theta^{-1})'(s)\, ds.$$ Calculating $(\theta^{-1})'(s)=(\theta'(t))^{-1}=(\theta'(\theta^{-1}(s)))^{-1}$ and using the second assertion of Lemma \[Lem: Local homeo\], we deduce $$\frac{1}{2\pi}\int_{\theta(t_{j-1})}^{\theta(t_{j})}\xi(e^{i\theta^{-1}(s)})\overline{\eta(e^{is})}(\theta^{-1})'(s)\, ds=\frac{1}{2\pi}\int_{\theta(t_{j-1})}^{\theta(t_{j})}\xi(e^{i\theta^{-1}(s)})\overline{\eta(e^{is})}\frac{b(e^{i\theta^{-1}(s)})}{b'(e^{i\theta^{-1}(s)})e^{i\theta^{-1}(s)}}\, ds.$$ But by the fourth statement of Lemma \[Lem: Local homeo\] $e^{i\theta^{-1}(s)}=\sigma_{j}(e^{is})$, when $s\in(\theta(0)+2\pi(j-1),\theta(0)+2\pi j)$. So the last integral is $$\frac{1}{2\pi}\int_{\theta(t_{j-1})}^{\theta(t_{j})}\xi(\sigma_{j}(e^{is}))\overline{\eta(e^{is})}\frac{b(\sigma_{j}(e^{is}))}{b'(\sigma_{j}(e^{is}))\sigma_{j}(e^{is})}\, ds.$$ As $e^{is}$ sweeps out $\mathbb{T}\backslash\{b(1)\}$ as $s$ ranges over each interval $(\theta(t_{j-1}), \theta(t_{j})) = (\theta(0)+2\pi(j-1),\theta(0)+2\pi j)$, we conclude that $$\begin{aligned} (\Gamma_{b}^{*}\xi,\eta) & = & \sum_{j=1}^{N}\frac{1}{2\pi}\int_{\theta(0)+2\pi(j-1)}^{\theta(0)+2\pi j}\xi(\sigma_{j}(e^{is}))\overline{\eta(e^{is})}\frac{b(\sigma_{j}(e^{is}))}{b'(\sigma_{j}(e^{is}))\sigma_{j}(e^{is})}\, ds\\ & = & \frac{1}{N}\sum_{j=1}^{N}\int_{{\mathbb{T}}}\xi(\sigma_{j}(z))\overline{\eta(z)}\frac{Nb(\sigma_{j}(z))}{b'(\sigma_{j}(z))\sigma_{j}(z)}\, dm(z)\\ & = & \frac{1}{N}\sum_{j=1}^{N}\int_{{\mathbb{T}}}\xi(\sigma_{j}(z))(J_{0}(\sigma_{j}(z)))^{-1}\overline{\eta(z)}\, dm(z)\\ & = & ({\mathcal{L}}(\pi(J_{0})^{-1}\xi),\eta),\end{aligned}$$ showing that $\Gamma_{b}^{*}={\mathcal{L}}\pi(J_{0})^{-1}$. 
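The identity $\Gamma_{b}^{*}={\mathcal{L}}\pi(J_{0})^{-1}$ can also be tested numerically. The sketch below is not part of the proof: the zeros of $b$ and the test functions are arbitrary choices (the zeros are taken nonzero so that each factor $|\alpha_{j}|/\alpha_{j}$ is defined), the preimages $b^{-1}(z)$ are computed by solving the degree-$N$ polynomial equation $P(w)-zQ(w)=0$ obtained from writing $b=P/Q$, and $J_{0}$ is evaluated on ${\mathbb{T}}$ through the formula in the proof of Lemma \[Lem: Postive\].

```python
import numpy as np

# A numerical check of Theorem [Theorem: transfer] -- a sketch only.  The zeros
# and the test functions are arbitrary; the zeros are taken nonzero so that the
# factor |a|/a in each Blaschke factor is defined (a zero at 0 contributes z).
a = [0.5, -0.3 + 0.4j]
N = len(a)

def blaschke(w):
    out = np.ones_like(w)
    for aj in a:
        out = out * (abs(aj) / aj) * (aj - w) / (1 - np.conj(aj) * w)
    return out

def preimages(zk):
    """The N solutions w of b(w) = zk, via the polynomial P(w) - zk*Q(w) = 0."""
    P, Q = np.poly1d([1.0]), np.poly1d([1.0])
    for aj in a:
        P = P * np.poly1d([-abs(aj) / aj, abs(aj)])     # (|a|/a)(a - w)
        Q = Q * np.poly1d([-np.conj(aj), 1.0])          # 1 - conj(a) w
    return np.roots((P - zk * Q).coeffs)

def J0(w):                   # J_0 = z b'/(N b); positive on T by Lemma [Lem: Postive]
    return sum((1 - abs(aj) ** 2) / abs(aj - w) ** 2 for aj in a) / N

def transfer(f, zk):         # (L f)(zk) = (1/N) * sum of f over b^{-1}(zk)
    return np.mean([f(w) for w in preimages(zk)])

xi  = lambda w: 2 + w + 0.3 * w ** 2        # arbitrary trigonometric polynomials
eta = lambda w: 1 - 2 * w

M = 400
z = np.exp(2j * np.pi * np.arange(M) / M)

lhs = np.mean(xi(z) * np.conj(eta(blaschke(z))))            # (Gamma_b^* xi, eta)
rhs = np.mean(np.array([transfer(lambda w: xi(w) / J0(w), zk) for zk in z])
              * np.conj(eta(z)))                            # (L pi(J_0)^{-1} xi, eta)
print(abs(lhs - rhs))                       # expected: essentially zero
```

The same device — computing $b^{-1}(z)$ by solving a polynomial equation — gives a concrete handle on ${\mathcal{L}}$ wherever it appears below.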
\[notation: J\]$$J(z):=\exp\left[\frac{1}{2\pi}\int_{-\pi}^{\pi}\frac{e^{it}+z}{e^{it}-z}\ln(J_{0}(e^{it}))dt\right]$$ Of course $J$ is the unique outer function that is positive at $0$ and satisfies the equation $\vert J(z)\vert=J_{0}(z)$ for all $z\in{\mathbb{T}}$. (See [@hH64 Theorem 5] and the surrounding discussion.) Significantly, $J$ does not vanish on ${\mathbb{D}}$ and $J$ is in $H^{\infty}({\mathbb{T}})$; note that $J_0$ is not even in $H^2({\mathbb{T}})$ except in trivial cases. We will work primarily with $J^{\frac{1}{2}}$, which is $\exp\left[\frac{1}{4\pi}\int_{-\pi}^{\pi}\frac{e^{it}+z}{e^{it}-z}\ln(J_{0}(e^{it}))dt\right]$. Note that $J^{1/2}$ and $J^{-1/2}$ are both in $H^{\infty}({\mathbb{T}})$. \[lemma:Proto-Conjugate\]For all $\varphi\in L^{\infty}({\mathbb{T}})$,$${\mathcal{L}}\pi(\varphi)\Gamma_{b}=\pi({\mathcal{L}}(\varphi)).$$ In particular, ${\mathcal{L}}$ is a left inverse for $\Gamma_{b}$. Take $\xi\in L^{2}({\mathbb{T}})$ and $\varphi\in L^{\infty}({\mathbb{T}})$ and calculate: $$\begin{aligned} {\mathcal{L}}(\pi(\varphi)\Gamma_{b}(\xi))(z)& =\frac{1}{N}\sum_{b(w)=z}(\pi(\varphi)\Gamma_{b}(\xi))(w)=\frac{1}{N}\sum_{b(w)=z}\varphi(w)\xi(b(w))\\ & =\frac{1}{N}\sum_{b(w)=z}\varphi(w)\xi(z)={\mathcal{L}}(\varphi)(z)\xi(z)\\ & =(\pi({\mathcal{L}}(\varphi))\xi)(z).\end{aligned}$$ \[lemma:Cbiso\]Set $$C_{b}:=\pi(J^{\frac{1}{2}})\Gamma_{b}.$$ Then $C_{b}$ is an isometry on $L^{2}({\mathbb{T}})$ and $$C_{b}^{*}={\mathcal{L}}\pi(J^{-\frac{1}{2}}).$$ Further, if $\{S_{i}\}_{i=1}^{N}$ is the Cuntz family constructed in Theorem \[Thm: Main1\] using an orthonormal basis $\{v_{i}\}_{i=1}^{N}$ for $H^{2}({\mathbb{T}})\ominus\pi(b)H^{2}({\mathbb{T}})$, then $S_{i}=\pi(v_{i}J^{-\frac{1}{2}})C_{b}$ for all $1 \leq i \leq N$. The key is the relation ${\mathcal{L}}\pi(J_0^{-1}) = \Gamma_b^*$ from Theorem \[Theorem: transfer\]. We just compute: $$C_{b}^{*}C_{b} = \Gamma_{b}^{*}\pi(\overline{J^{\frac{1}{2}}})\pi(J^{\frac{1}{2}})\Gamma_{b} = \Gamma_{b}^{*}\pi(\vert J\vert)\Gamma_{b} = \Gamma_{b}^{*}\pi(J_{0})\Gamma_{b} = {\mathcal{L}}\Gamma_{b}=I$$ and $$C_{b}^{*} = \Gamma_{b}^{*}\pi(\overline{J^{\frac{1}{2}}}) = {\mathcal{L}}\pi(J_{0})^{-1}\pi(\overline{J^{\frac{1}{2}}}) = {\mathcal{L}}\pi(J^{-\frac{1}{2}}).$$ For the final assertion, simply observe that the definition of $S_{i}$ (using $\{v_{i}\}_{i=1}^{N}$) shows that $S_{i}=\pi(v_{i})\Gamma_{b}$. As $C_{b}=\pi(J^{\frac{1}{2}})\Gamma_{b}$, we conclude $$\begin{aligned} S_{i} & = & \pi(v_{i})\pi(J^{-\frac{1}{2}})\pi(J^{\frac{1}{2}})\Gamma_{b}=\pi(v_{i}J^{-\frac{1}{2}})C_{b}.\end{aligned}$$ \[Reducing and implementing L\]$H^{2}({\mathbb{T}})$ reduces $C_{b}$ and $C_{b}$ implements ${\mathcal{L}}$ in the sense that $$C_{b}^{*}\pi(\varphi)C_{b}=\pi({\mathcal{L}}(\varphi)),\label{eq: Implement L}$$ for all $\varphi\in L^{\infty}({\mathbb{T}})$. Since $\Gamma_{b}$ and $\pi(J^{\frac{1}{2}})$ leave $H^{2}({\mathbb{T}})$ invariant, so does $C_{b}=\pi(J^{\frac{1}{2}})\Gamma_{b}$. On the other hand, $C_{b}^{*}={\mathcal{L}}\pi(J^{-\frac{1}{2}})$ by Lemma \[lemma:Cbiso\], so one way to show that $H^2({\mathbb{T}})$ reduces $C_b$ is to show that ${\mathcal{L}}$ leaves $H^{2}({\mathbb{T}})$ invariant. McDonald did this in [@jMcD03 Lemma 2]. We can also prove this directly: fixing $\eta \in H^2({\mathbb{T}})$, we must show that ${\mathcal{L}}\eta \in H^2({\mathbb{T}})$. 
By Theorem \[Theorem: transfer\] we have that ${\mathcal{L}}= \Gamma_b^* \pi(J_0)$, so it suffices to show that the vector $\xi = \pi(J_0) \eta \in L^2({\mathbb{T}})$ is mapped into $H^2({\mathbb{T}})$ by $\Gamma_b^*$. By the definition of $J_0$ we have $$b(z) \xi(z) = b(z) J_0(z) \eta(z) = \frac{1}{N} z b'(z) \eta(z),$$ showing that $b \xi$ is in $H^2({\mathbb{T}})$ and that $(b \xi)(0) = 0$. Thus $\Gamma_b^* \xi \in H^2({\mathbb{T}})$ by the argument given in the proof of Corollary \[cor: boundedness gamma-b\]. Equation \[eq: Implement L\] follows from Lemmas \[lemma:Cbiso\] and \[lemma:Proto-Conjugate\] because $$C_{b}^{*}\pi(\varphi)C_{b}={\mathcal{L}}\pi(J^{-\frac{1}{2}})\pi(\varphi)\pi(J^{\frac{1}{2}})\Gamma_{b}={\mathcal{L}}\pi(\varphi)\Gamma_{b}=\pi({\mathcal{L}}(\varphi)).$$ We shall denote the restriction of $C_{b}$ to $H^{2}({\mathbb{T}})$ by $C_{b+}$. \[Thm: Intertwine\] 1. \[first\] If $T$ is a bounded operator on $L^{2}({\mathbb{T}})$, then $T$ satisfies $$T\pi(\varphi)=\pi(\beta(\varphi))T\label{eq:Intertwine V}, \qquad \varphi \in L^{\infty}({\mathbb{T}}),$$ if and only if $T=\pi(m)C_{b}$ for some function $m\in L^{\infty}({\mathbb{T}})$. 2. \[second\] If $T$ is a bounded operator on $H^{2}({\mathbb{T}})$, then $$T\tau(\varphi)=\tau(\beta(\varphi))T\label{eq: Intertwine+}, \qquad \varphi \in H^{\infty}({\mathbb{T}}),$$ if and only if $T=\tau(m)C_{b+}$ for some function $m\in H^{\infty}({\mathbb{T}})$. Further, if $T=\pi(m)C_{b}$, $m\in L^{\infty}({\mathbb{T}})$, (resp. if $T=\tau(m)C_{b+}$, $m\in H^{\infty}({\mathbb{T}})$) then $\Vert T\Vert=\left(\Vert\mathcal{L}(|m|^{2})\Vert_{\infty}\right)^{\frac{1}{2}}$, and $T$ is an isometry if and only if $\mathcal{L}(|m|^{2})=1$ a.e. We begin by proving assertion \[first\]. If $T=\pi(m)C_{b}$ for some $m\in L^{\infty}({\mathbb{T}})$, a short calculation shows that $T$ satisfies \[eq:Intertwine V\]. The formula \[eq: Implement L\] then implies $$T^{*}T=C_{b}^{*}\pi(\overline{m})\pi(m)C_{b}=\pi({\mathcal{L}}(|m|^{2})),$$ and $\Vert T\Vert=\left(\Vert\mathcal{L}(|m|^{2})\Vert_{\infty}\right)^{\frac{1}{2}}$ follows as $\pi$ is faithful. The fact that $T$ is isometric if and only if $\mathcal{L}(|m|^{2})=1$ a.e. is immediate. Suppose conversely that $T$ is an operator on $L^2({\mathbb{T}})$ satisfying \[eq:Intertwine V\]. Define $m:=\pi(J^{-1/2})T(\mathbf{1})$, where $\mathbf{1}$ is the constant function that is identically equal to $1$. Note that a priori we have that $m \in L^2({\mathbb{T}})$, but not that $m \in L^{\infty}({\mathbb{T}})$. The hypothesis and the definition of $m$ imply that $$T\varphi=T\pi(\varphi)\mathbf{1}=\pi(\beta(\varphi))\pi(J^{1/2})m=mC_{b}(\varphi),\label{eq:InitPropsofm} \qquad \varphi \in L^{\infty}({\mathbb{T}}).$$ If more generally $\varphi \in L^2({\mathbb{T}})$, there is a sequence $\varphi_n$ in $L^{\infty}({\mathbb{T}})$ such that $\varphi_n \to \varphi$ in $L^2({\mathbb{T}})$. Boundedness of $C_b$ implies that the sequence of vectors $C_b \varphi_n$ is convergent in $L^2({\mathbb{T}})$ with limit $C_b \varphi$, and boundedness of $T$ together with \[eq:InitPropsofm\] implies that the sequence $m C_b \varphi_n$ is convergent in $L^2({\mathbb{T}})$ with limit $T \varphi$. By passing to a subsequence as necessary we may assume that $C_b \varphi_n \to C_b \varphi$ pointwise a.e., and $m C_b \varphi_n \to T \varphi$ pointwise a.e., and deduce $T \varphi = m C_b \varphi$. We conclude that $$T\varphi=mC_{b}(\varphi),\label{eq:l2holds} \qquad \varphi \in L^2({\mathbb{T}}).$$ Fix an orthonormal basis $\{v_{i}\}_{i=1}^{N}$ for $H^{2}({\mathbb{T}})\ominus\pi(b)H^{2}({\mathbb{T}})$ and set $S_{i}=\pi(v_{i}J^{-\frac{1}{2}})C_{b}$. 
By Lemma \[lemma:Cbiso\] and Theorem \[Thm: Main1\], $\{S_{i}\}_{i=1}^{N}$ is a Cuntz family, so for any $\xi \in L^2({\mathbb{T}})$ we have $$\begin{aligned} m\xi = m\sum_{j=1}^{N}S_{j}S_{j}^{*}\xi & = m\sum_{j=1}^{N}\pi(v_{j}J^{-1/2})C_{b} S_j^{*}\xi \\ & = \sum_{j=1}^{N} v_{j}J^{-1/2} m C_{b} S_j^{*}\xi \\ & = \sum_{j=1}^N v_j J^{-1/2} T S_j^* \xi & \text{by \eqref{eq:l2holds}.}\end{aligned}$$ Thus multiplication by $m$ is the operator $\sum_{j=1}^N \pi(v_j J^{-1/2}) T S_j^*$ on $L^2({\mathbb{T}})$. As this operator is bounded we deduce that $m \in L^{\infty}({\mathbb{T}})$ as desired. The proof of assertion is similar, but it is important to keep track of the differences. If $T=\tau(m)C_{b+}$ for some $m\in H^{\infty}({\mathbb{T}})$, it is easily seen that is satisfied, since $\tau(m)$ and $\tau(\varphi)$ commute when $m$ and $\varphi$ are in $H^{\infty}({\mathbb{T}})$, and since $C_{b+}$ is the restriction of $C_{b}$ to a reducing subspace. Furthermore, $$T^{*}T=C_{b+}^{*}\tau(m)^{*}\tau(m)C_{b+}=PC_{b}^{*}P\pi(\overline{m})P\pi(m)PC_{b}P\vert_{H^{2}({\mathbb{T}})}.$$ Since $H^{2}({\mathbb{T}})$ is invariant under $\pi(m)$ and reduces $C_{b}$, we deduce $$T^* T = PC_{b}^{*}\pi(\overline{m})\pi(m)C_{b}P\vert_{H^{2}({\mathbb{T}})}=P\pi({\mathcal{L}}(\vert m\vert^{2}))P\vert_{H^{2}({\mathbb{T}})}=\tau({\mathcal{L}}(\vert m\vert^{2})).$$ Thus $\Vert T^{*}T\Vert=\Vert\tau({\mathcal{L}}(\vert m\vert^{2}))\Vert=\Vert{\mathcal{L}}(\vert m\vert^{2})\Vert_{\infty}$, which proves the formula for the norm of $T$. Also, it shows that $T$ is an isometry if and only if $\mathcal{L}(|m|^{2})=1$ a.e. Suppose conversely that $T$ on $H^{2}({\mathbb{T}})$ satisfies equation  and set $m:=\tau(J^{-1/2})T(\mathbf{1})$; we know $m \in H^2({\mathbb{T}})$ and wish to deduce that $m \in H^{\infty}({\mathbb{T}})$. The fact that $J^{-\frac{1}{2}}\in H^{\infty}({\mathbb{T}})$ and the properties of $C_{b+}$ show that $$T\varphi=T\tau(\varphi)\mathbf{1}=\tau(\beta(\varphi))\tau(J^{1/2})m=mC_{b+}(\varphi)$$ for all $\varphi\in H^{\infty}({\mathbb{T}})$ and hence all $\varphi \in H^2({\mathbb{T}})$. With $S_{i}=\pi(v_{i}J^{-\frac{1}{2}})C_{b}$ as before, we note that $H^{2}({\mathbb{T}})$ reduces $S_{i}$, by Theorem \[Thm: Main1\], and we set $R_{i}:=S_{i}\vert_{H^{2}({\mathbb{T}})}$. Theorem \[Thm: Main1\] asserts that $\{R_{i}\}_{i=1}^{N}$ is a Cuntz family of isometries on $H^{2}({\mathbb{T}})$. Since $H^{2}({\mathbb{T}})$ reduces $C_{b}$, we have for any $\xi \in H^2({\mathbb{T}})$ $$\begin{aligned} m\xi & = m\sum_{j=1}^{N}R_{j}R_{j}^{*}\xi = m\sum_{j=1}^{N} \pi(v_{j}J^{-1/2})C_{b}R_j^* \xi = \sum_{j=1}^{N} v_{j}J^{-1/2} mC_{b+}R_{j}^{*}\xi \\ & = \sum_{j=1}^{N} \pi(v_{j}J^{-1/2}) T R_{j}^{*}\xi. \end{aligned}$$ As $v_j J^{-1/2} \in H^{\infty}$ for each $j$, the conclusion is that multiplication by $m$ is the bounded operator $\sum_{j=1}^n \tau(v_j J^{-1/2}) T R_j^*$ on $H^2({\mathbb{T}})$. Thus $m\in H^{\infty}({\mathbb{T}})$. We have called $C_{b}$ *the* master isometry. One explanation of our use of the definite article is that when one builds the Deaconu-Renault groupoid $G$ determined by $b$, viewed as a local homeomorphism of ${\mathbb{T}}$, then $C_{b}$ appears as the image of a special isometry $S$ in the groupoid $C^{*}$-algebra $C^{*}(G)$ under a representation that gives rise to the Cuntz familes we consider here. We have not seen any compelling reason to bring this technology into this note - nevertheless, $C^{*}(G)$ and $S$ are lying in the background and may prove useful in the future. 
For further information about the use of groupoids and $C^{*}$-algebras generated by local homeomorphisms, see [@IM08]. One should not infer from our use of the definite article that $C_b$ is uniquely determined by the abstract properties that we have shown it has. More precisely, suppose that $V$ is an isometry on $L^2({\mathbb{T}})$ that implements ${\mathcal{L}}$ in the sense of , interacts with $\pi$ as in , and is reduced by $H^2({\mathbb{T}})$. By Theorem \[Thm: Intertwine\], $V$ must be of the form $V=\pi(m)C_{b}$ for some $m\in L^{\infty}({\mathbb{T}})$ satisfying ${\mathcal{L}}(|m|^{2})=1$. The assumption that $V$ is reduced by $H^{2}({\mathbb{T}})$, together with the assumption that $V$ implements ${\mathcal{L}}$ imply that $|m|=1$ a.e.; it may further be shown that $m$ is an inner function with the property that ${\mathcal{L}}(\overline{m})$ is constant. But in general we have been unable to deduce more about $m$ than this. (We note that $m$ need not be constant. If $b(z)=z^{2}$, then $V=\pi(m)C_{b}$ will have all of the indicated properties if $m(z) = z^k$ for any odd positive integer $k$. What happens when $b$ is a more general Blaschke product remains to be investigated.) Hilbert modules and orthonormal bases ===================================== The endomorphism $\beta$ of $L^{\infty}({\mathbb{T}})$ and the transfer operator ${\mathcal{L}}$ may be used to endow $L^{\infty}({\mathbb{T}})$ with the structure of a Hilbert $C^{*}$-module over $L^{\infty}({\mathbb{T}})$. We will exploit this structure in order to solve Problem \[Problem: Central problem\]. We do not need much of the general theory about these modules. Rather, we only need to expose enough so that the formulas we use make good sense. Excellent references for the basics of the theory are [@cL95; @MT05]. Suppose $A$ is a $C^{*}$-algebra and that $E$ is a right $A$-module. Then $E$ is called a *Hilbert $C^{*}$-module* over $A$ in case $E$ is endowed with an $A$-valued sesquilinear form ${\langle}\cdot,\cdot{\rangle}:E\times E\to A$ that is subject to the following conditions. 1. ${\langle}\cdot,\cdot{\rangle}$ is conjugate linear in the first variable, so ${\langle}\xi\cdot a,\eta\cdot b{\rangle}=a^{*}{\langle}\xi,\eta{\rangle}b$. 2. For all $\xi\in E$, ${\langle}\xi,\xi{\rangle}$ is a positive element in $A$ that is $0$ if and only if $\xi=0$. 3. $E$ is complete in the norm defined by the formula $\Vert\xi\Vert_{E}:=\Vert{\langle}\xi,\xi{\rangle}\Vert_{A}^{\frac{1}{2}}$. Of course, it takes a little argument to prove that $\Vert\cdot\Vert_{E}$ is a norm on $E$. In the application of Hilbert modules that we have in mind, our $C^{*}$-algebra $A$ will be unital, and we will denote the unit by $\mathbf{1}$. A vector $v\in E$ is called a *unit vector* if ${\langle}v,v{\rangle}=\mathbf{1}$. Note that this says more than simply $\Vert v\Vert=1$. A family $\{v_{i}\}_{i\in I}$ of vectors in $E$ is called is called an *orthonormal set* if ${\langle}v_{i},v_{j}{\rangle}=\delta_{ij}\mathbf{1}$. Further, if linear combinations of vectors from $\{v_{i}\}_{i\in I}$ (where the coefficients are from $A$) are dense in $E$ then we say that $\{v_{i}\}_{i\in I}$ is an *orthonormal basis* for $E$. In this event, every vector $\xi\in E$ has the representation $$\xi=\sum_{i\in I}v_{i}\cdot{\langle}v_{i},\xi{\rangle},\label{eq:FourierExpansion}$$ where the sum converges in the norm of $E$. In general, a Hilbert $C^{*}$-module need not have an orthonormal basis. Also, in general, two orthonormal bases need not have the same cardinal number. 
Nevertheless, two orthonormal bases $\{v_{i}\}_{i\in I}$ and $\{w_{j}\}_{j\in J}$ are linked by a unitary matrix over $A$ in the usual way:$$w_{j}=\sum_{i\in I}v_{i}\cdot{\langle}v_{i},w_{j}{\rangle}=\sum_{i\in I}v_{i}\cdot u_{ij}.$$ So if the cardinality of $I$ is $n$ and the cardinality of $J$ is $m$, then $U=(u_{ij})$ is a unitary matrix in $M_{nm}(A)$, i.e., $UU^{*}=\mbox{\textbf{1}}_{n}$ in $M_{n}(A)$, while $U^{*}U=\mathbf{1}_{m}$ in $M_{m}(A)$. And conversely, any such matrix transforms the orthonormal basis $\{v_{i}\}_{i\in I}$ for $E$ into an orthonormal basis $\{w_{j}\}_{j\in J}$ for $E$ via this formula. In our application of these notions, the coefficient algebra $A$ will be commutative so, as is well known, all unitary matrices are square and, therefore, any two orthonormal bases have the same number of elements. We shall view $L^{\infty}({\mathbb{T}})$ as right module over $L^{\infty}({\mathbb{T}})$ via the formula $$\xi\cdot a:=\xi\beta(a),\label{eq:module} \qquad a, \xi \in L^{\infty}({\mathbb{T}}),$$ where the product on the right hand side is the usual pointwise product in $L^{\infty}({\mathbb{T}})$. Also, we shall use ${\mathcal{L}}$ to endow $L^{\infty}({\mathbb{T}})$ with the $L^{\infty}({\mathbb{T}})$-valued inner product defined by the formula $${\langle}\xi,\eta{\rangle}:={\mathcal{L}}(\overline{\xi}\eta),\label{eq:innerproduct} \qquad \xi,\eta\in L^{\infty}({\mathbb{T}}).$$ Using the fact that ${\mathcal{L}}\circ\beta={\operatorname{id}}_{L^{\infty}({\mathbb{T}})}$ (Lemma \[lemma:Proto-Conjugate\]), it is straightforward to see that $L^{\infty}({\mathbb{T}})$ is a Hilbert $C^{*}$-module over $L^{\infty}({\mathbb{T}})$, which we shall denote by ${L^{\infty}(\mathbb{T})_{\mathcal{L}}}$. The only thing that may be seem problematic is the fact that ${L^{\infty}(\mathbb{T})_{\mathcal{L}}}$ is complete in the norm defined by the inner product. However, a moment’s reflection reveals that the norm is equivalent to the $L^{\infty}({\mathbb{T}})$-norm, which is complete. We remark that and make sense when the functions in $L^{\infty}({\mathbb{T}})$ are restricted to lie in $C({\mathbb{T}})$, and $C({\mathbb{T}})$ also is a Hilbert module over $C({\mathbb{T}})$ in this structure, but we will focus on the $L^{\infty}({\mathbb{T}})$ case in what follows. Vectors $\{m_{i}\}_{i=1}^{N}$ in ${L^{\infty}(\mathbb{T})_{\mathcal{L}}}$ form an orthonormal basis for ${L^{\infty}(\mathbb{T})_{\mathcal{L}}}$ if and only if $${\mathcal{L}}(\overline{m_{i}}m_{j})={\langle}m_{i},m_{j}{\rangle}=\delta_{ij}\mathbf{1},$$ where $\mathbf{1}$ is the constant function $1$, and $$f=\sum_{i=1}^{N}m_{i}\cdot{\langle}m_{i},f{\rangle}=\sum_{i=1}^{N}m_{i}\beta({\mathcal{L}}(\overline{m_{i}}f)), \qquad f \in {L^{\infty}(\mathbb{T})_{\mathcal{L}}}.$$ We have intentionally used $N$, the order of the Blaschke product $b$, as the upper limit in these sums because ${L^{\infty}(\mathbb{T})_{\mathcal{L}}}$ has an orthonormal basis with $N$ elements, viz. $\{\sqrt{N}1_{A_{i}}\}_{i=1}^{N}$, where the $A_{i}$’s are the arcs in Lemma \[Lem: Local homeo\], and because any two orthonormal bases for ${L^{\infty}(\mathbb{T})_{\mathcal{L}}}$ have the same number of elements, as we noted above. \[rem:ConditionalExpectations\] As a map on $L^{\infty}({\mathbb{T}})$, $\mathbb{E}:=\beta\circ{\mathcal{L}}$ is the conditional expectation onto the range of $\beta$. Indeed, $\mathbb{E}$ is a weak-$*$ continuous, positivity preserving, idempotent unital linear map on $L^{\infty}({\mathbb{T}})$. 
So it is the restriction to $L^{\infty}({\mathbb{T}})$ of an idempotent and contractive linear map on $L^{1}({\mathbb{T}})$ that preserves the constant functions. Hence $\mathbb{E}$ is a conditional expectation by the corollary to [@rD65 Theorem 1]. Of course, the range of $\mathbb{E}$ consists of functions in the range of $\beta$ by definition. On the other hand, if $f=\beta(g)$ for some function $g\in L^{\infty}({\mathbb{T}})$, then $\mathbb{E}(f)=\beta\circ{\mathcal{L}}\circ\beta(g)=\beta(g)=f$, since ${\mathcal{L}}$ is a left inverse for $\beta$. Thus, to say that $\{m_{i}\}_{i=1}^{N}$ is an orthonormal basis for ${L^{\infty}(\mathbb{T})_{\mathcal{L}}}$ is to say that $$f=\sum_{i=1}^{N}m_{i}\mathbb{E}(\overline{m}_{i}f), \qquad f \in L^{\infty}({\mathbb{T}}).$$ In light of the discussion in Section \[sec:preliminaries\], the following describes all solutions to . \[Thm:Solution 1\] If a Cuntz family $S = \{S_i\}_{i=1}^N$ on $B(L^2({\mathbb{T}}))$ gives rise to a covariant representation $(\pi,\alpha_{S})$ of $(L^{\infty}({\mathbb{T}}),\beta)$, then there is an orthonormal basis $\lbrace m_{i}\rbrace_{i=1}^{N}$ for $L^{\infty}({\mathbb{T}})_{\mathcal{L}}$ such that $$S_{i}=\pi(m_{i})C_{b},\label{eq:DefS_i} \qquad 1 \leq i \leq N.$$ Further, if $\{m_{i}\}_{i=1}^{N}$ is any family of functions in $L^{\infty}({\mathbb{T}})$ such that the operators $S_i$ defined by form a Cuntz family $S$ such that $(\pi, \alpha_S)$ is a covariant representation of $(L^{\infty}({\mathbb{T}}), \beta)$, then $\{m_{i}\}_{i=1}^{N}$ is an orthonormal basis for ${L^{\infty}(\mathbb{T})_{\mathcal{L}}}$. Conversely, if $\{m_{i}\}_{i=1}^{N}$ is any orthonormal basis for ${L^{\infty}(\mathbb{T})_{\mathcal{L}}}$ and $S_i$ is defined by for $1 \leq i \leq N$, then $S=\{S_{i}\}_{i=1}^{N}$ is a Cuntz family and $(\pi, \alpha_S)$ is a covariant representation of $(L^{\infty}({\mathbb{T}}), \beta)$. Suppose $\lbrace S_{i}\rbrace_{i=1}^{N}$ is a Cuntz family on $L^{2}({\mathbb{T}})$ that satisfies equation . If both sides of this equation are multiplied on the right by $S_{j}$, then one finds from equation  that $S_{j}\pi(\cdot)=\pi\circ\beta(\cdot)S_{j}$ for each $j$. By Theorem \[Thm: Intertwine\], for each $j$ there is $m_j \in L^{\infty}({\mathbb{T}})$ satisfying $S_{j}=\pi(m_{j})C_{b}$ and $\mathbf{1}={\mathcal{L}}(|m_{j}|^{2})={\langle}m_{j},m_{j}{\rangle}$. The fact that $S$ satisfies equation  then yields $$\delta_{i,j}I_{L^{2}({\mathbb{T}})} = S_{i}^{*}S_{j} = C_{b}^{*}\pi(\overline{m_{i}}m_{j})C_{b} = \pi(\mathcal{L}(\overline{m_{i}}m_{j})) = \pi({\langle}m_{i},m_{j}{\rangle}).$$ Since $\pi$ is faithful, ${\langle}m_{i},m_{j}{\rangle}=\delta_{i,j}\mathbf{1}$, where $\mathbf{1}$ is the constant function $1$. Thus, $\lbrace m_{i}\rbrace_{i=1}^{N}$ is an orthonormal set in $L^{\infty}({\mathbb{T}})_{\mathcal{L}}$. We now show that the $\{m_i\}_{i=1}^N$ span ${L^{\infty}(\mathbb{T})_{\mathcal{L}}}$. If $f\in{L^{\infty}(\mathbb{T})_{\mathcal{L}}}$ satisfies ${\langle}f,m_{i}{\rangle}=0$ for all $i$, then we have $$\begin{aligned} (\pi(f)C_{b})^{*} = C_{b}^{*}\pi(\overline{f})\left(\sum_{i=1}^{N}S_{i}S_{i}^{*}\right) & = C_{b}^{*}\pi(\overline{f})\left(\sum_{i=1}^{N}\pi(m_{i})C_{b}S_{i}^{*}\right)\\ & = \sum_{i=1}^{N}C_{b}^{*}\pi(\overline{f})\pi(m_{i})C_{b}S_{i}^{*}\\ & = \sum_{i=1}^{N}\pi({\langle}f,m_{i}{\rangle})S_{i}^{*} & \text{by \eqref{eq: Implement L}} \\ & = 0,\end{aligned}$$ and thus $\pi(f)C_{b}=0$, which in turn implies $fJ^{\frac{1}{2}}=\pi(f)J^{\frac{1}{2}}=\pi(f)C_{b}\mathbf{1}=0$, and thus $f=0$. 
This shows that $\{m_{i}\}_{i=1}^{N}$ is an orthonormal basis for ${L^{\infty}(\mathbb{T})_{\mathcal{L}}}$. For the converse assertion, suppose $\lbrace m_{i}\rbrace_{i=1}^{N}$ is any orthonormal basis for ${L^{\infty}(\mathbb{T})_{\mathcal{L}}}$, and set $S_{i}:=\pi(m_{i})C_{b}$. Then from we deduce $$\begin{aligned} S_{i}^{*}S_{j} & = & C_{b}^{*}\pi(\overline{m_{i}}m_{j})C_{b}\\ & = & \pi({\langle}m_{i},m_{j}{\rangle})\\ & = & \delta_{i,j}\pi(\mathbf{1})=\delta_{i,j}I_{L^{2}({\mathbb{T}})}.\end{aligned}$$ So the relations are satisfied. To verify the Cuntz identity , note first that equation  shows that the sum $\sum_{i=1}^{N}S_{i}S_{i}^{*}$ is a projection. To show that $\sum_{i=1}^{N}S_{i}S_{i}^{*}=I$, it suffices to show that $\sum_{i=1}^{N}S_{i}S_{i}^{*}$ acts as the identity operator on a dense subset of $L^{2}({\mathbb{T}})$. So fix $f\in L^{\infty}({\mathbb{T}})$ and observe that we may write $$\sum_{i=1}^{N}S_{i}S_{i}^{*}f=\sum_{i=1}^{N}S_{i}S_{i}^{*}\pi(f)1=\sum_{i=1}^{N}S_{i}S_{i}^{*}\pi(f)\Gamma_{b}1=\sum_{i=1}^{N}S_{i}S_{i}^{*}\pi(fJ^{-\frac{1}{2}})C_{b}1.\label{eq:ONB1}$$ Since $S_{i}=\pi(m_{i})C_{b}$, the last sum in (\[eq:ONB1\]) is $$\sum_{i=1}^{N}\pi(m_{i})C_{b}C_{b}^{*}\pi(\overline{m_{i}})\pi(fJ^{-\frac{1}{2}})C_{b}1=\sum_{i=1}^{N}\pi(m_{i})C_{b}\pi({\mathcal{L}}(\overline{m_{i}}fJ^{-\frac{1}{2}}))1,$$ where we have used (\[eq: Implement L\]). But by Theorem \[Thm: Intertwine\] the right hand side of this equation is $$\sum_{i=1}^{N}\pi(m_{i})\pi(\beta({\mathcal{L}}(\overline{m_{i}}fJ^{-\frac{1}{2}})))C_{b}1=\pi\left(\sum_{i=1}^{N}m_{i}{\langle}m_{i},fJ^{-\frac{1}{2}}{\rangle}\right)C_{b}1=\pi(fJ^{-\frac{1}{2}})C_{b}1,$$ because $\{m_{i}\}_{i=1}^{N}$ is an orthonormal basis for ${L^{\infty}(\mathbb{T})_{\mathcal{L}}}$, by hypothesis. As $C_{b}1=\pi(J^{\frac{1}{2}})\Gamma_{b}1=\pi(J^{\frac{1}{2}})1$ it follows that $\pi(fJ^{-\frac{1}{2}})C_{b}1=f$, and thus $\sum_{i=1}^{N}S_{i}S_{i}^{*}f=f$. We conclude that $S = \{S_i\}_{i=1}^N$ is a Cuntz family. To see that this family implements $\beta$, simply note that $$\pi(\beta(\varphi))=\pi(\beta(\varphi))\sum_{i=1}^{N}S_{i}S_{i}^{*}=\sum_{i=1}^{N}S_{i}\pi(\varphi)S_{i}^{*}$$ since the $S_{i}$ satisfy equation . \[Cor: module basis\] If $\{v_{i}\}_{i=1}^{N}$ is an orthonormal basis for the Hilbert *space* $H^{2}({\mathbb{T}})\ominus\pi(b)H^{2}({\mathbb{T}})$, then the functions $\{v_{i}J^{-\frac{1}{2}}\}_{i=1}^{N}$ form an orthonormal basis for the Hilbert *module* ${L^{\infty}(\mathbb{T})_{\mathcal{L}}}$. By Lemma \[lemma:Cbiso\], the Cuntz isometries coming from $\{v_i\}_{i=1}^N$ via Theorem \[Thm: Main1\] have the form $\pi(v_{i}J^{-\frac{1}{2}})C_{b}$. Therefore by Theorem \[Thm:Solution 1\] the functions $\{v_{i}J^{-\frac{1}{2}}\}_{i=1}^{N}$ form an orthonormal basis for ${L^{\infty}(\mathbb{T})_{\mathcal{L}}}$. \[cor: Uniqueness S\] If $S^{(1)}$ and $S^{(2)}$ are two Cuntz families in $B(L^{2}({\mathbb{T}}))$ satisfying $$\alpha_{S^{(i)}}\circ\pi=\pi\circ\beta, \qquad i = 1, 2,$$ then there is a unitary matrix $(u_{ij})$ in $M_{N}(L^{\infty}({\mathbb{T}}))$ such that $$S_{j}^{(2)}=\sum_{i=1}^{N}S_{i}^{(1)}\pi(u_{ij}),\label{eq:Uniqueness S}$$ $j=1,2,\cdots,N$. Conversely, if $S^{(1)}$ and $S^{(2)}$ are Cuntz families on $L^{2}({\mathbb{T}})$ that are linked by equation (\[eq:Uniqueness S\]), then $\alpha_{S^{(1)}}$ implements $\beta$ if and only if $\alpha_{S^{(2)}}$ implements $\beta$. Further, $\alpha_{S^{(1)}}=\alpha_{S^{(2)}}$ on $B(L^{2}({\mathbb{T}}))$ if and only if $(u_{ij})$ is a unitary matrix of constant functions.
By Theorem \[Thm:Solution 1\], we may suppose there are orthonormal bases $\{m_{i}^{(1)}\}_{i=1}^{N}$ and $\{m_{i}^{(2)}\}_{i=1}^{N}$ for ${L^{\infty}(\mathbb{T})_{\mathcal{L}}}$ that define $S^{(1)}$ and $S^{(2)}$. In this event, there is a unitary matrix $(u_{ij})$ in $M_{N}(L^{\infty}({\mathbb{T}}))$ so that $$m_{j}^{(2)}=\sum_{i=1}^{N}m_{i}^{(1)}\cdot u_{ij}.$$ But then we may use this relation to derive (\[eq:Uniqueness S\]) as follows: $$\begin{aligned} S_{j}^{(2)} & = \pi(m_{j}^{(2)})C_{b} = \sum_{i=1}^{N}\pi(m_{i}^{(1)})\pi(\beta(u_{ij}))C_{b} = \sum_{i=1}^{N}\pi(m_{i}^{(1)})C_{b}\pi(u_{ij})\\ & = \sum_{i=1}^{N}S_{i}^{(1)}\pi(u_{ij}).\end{aligned}$$ The same equation proves the converse assertion and the last assertion follows from Laca’s Proposition 2.2 in [@mL93]. We conclude with a new look at Rochberg’s [@rR73 Theorem 1] and related work of McDonald [@jMcD03]. Because of the complex conjugates that appear in the formula for the inner product on ${L^{\infty}(\mathbb{T})_{\mathcal{L}}}$, it is somewhat surprising that ${\langle}m_{i},f{\rangle}\in H^{\infty}({\mathbb{T}})$ whenever $f\in H^{\infty}({\mathbb{T}})$ and $m_{i}$ comes from an orthonormal basis for $H^{2}({\mathbb{T}})\ominus\pi(b)H^{2}({\mathbb{T}})$. \[Thm:Rochberg’s Theorem 1\]Let $\{v_{i}\}_{i=1}^{N}$ be an orthonormal basis for $\mathcal{D} = H^{2}({\mathbb{T}})\ominus\pi(b)H^{2}({\mathbb{T}})$ and let $m_{i}=v_{i}J^{-\frac{1}{2}}$, so that $\{m_{i}\}_{i=1}^{N}$ is an orthonormal basis for ${L^{\infty}(\mathbb{T})_{\mathcal{L}}}$ by Corollary \[Cor: module basis\]. Then a function $f\in L^{\infty}({\mathbb{T}})$ lies in $H^{\infty}({\mathbb{T}})$ if and only if ${\langle}m_{i},f{\rangle}$ lies in $H^{\infty}({\mathbb{T}})$ for all $i$. Further, $f$ lies in the disc algebra $A(\mathbb{D})$ if and only if ${\langle}m_{i},f{\rangle}$ lies in $A(\mathbb{D})$ for all $i$. By Remark \[Canonical Basis\] we know the functions $m_i$ are in the disc algebra. It is thus immediate from $$\label{expansion} f = \sum_{i=1}^N m_i \cdot {\langle}m_i, f{\rangle}, \qquad f \in L^{\infty}({\mathbb{T}}),$$ and the fact that $\beta$ preserves both $H^{\infty}({\mathbb{T}})$ and $A({\mathbb{D}})$ that if the coefficients ${\langle}m_i, f{\rangle}$ all lie in $H^{\infty}({\mathbb{T}})$ or $A({\mathbb{D}})$ then $f$ will also. Conversely, fix $f \in H^{\infty}({\mathbb{T}})$ and any $v \in \mathcal{D}$. We must show that ${\langle}v J^{-1/2}, f{\rangle}$ is in $H^{\infty}({\mathbb{T}})$. Note that ${\langle}v J^{-1/2}, f{\rangle}$ is in $L^{\infty}$ so it suffices to show that this function is in $H^2({\mathbb{T}})$. To this end, fix a positive integer $k$, and compute $$\begin{aligned} ({\langle}v J^{-1/2}, f{\rangle}, e_{-k}) & = (\mathcal{L}(\overline{v J^{-1/2}} f), e_{-k}) \\ & = (\Gamma_b^* (J_0 \overline{v J^{-1/2}} f), e_{-k}) & \text{by Theorem~\ref{Theorem: transfer}}\\ & = (J_0 \overline{v J^{-1/2}} f, b^{-k}) \\ & = (J^{1/2} \overline{v} f, b^{-k}) & \text{as $J_0 = |J| = J^{1/2} \overline{J^{1/2}}$}\\ & = (J^{1/2} f b^k, v).\end{aligned}$$ Since $J^{1/2} f \in H^{\infty}$ and $k > 0$ the function $J^{1/2} f b^k$ is in $\pi(b) H^2({\mathbb{T}})$, so as $v \in \mathcal{D}$ we conclude $(J^{1/2} f b^k, v) = 0$. As $k > 0$ was arbitrary, ${\langle}v J^{-1/2}, f{\rangle}$ is in $H^2({\mathbb{T}})$, as desired. If $f$ is further assumed to be in $A({\mathbb{D}})$, as ${\mathcal{L}}$ maps $C({\mathbb{T}})$ into itself we conclude ${\langle}v J^{-1/2}, f{\rangle}\in C({\mathbb{T}}) \cap H^2({\mathbb{T}}) = A({\mathbb{D}})$.
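Before comparing with Rochberg’s original formulation, it may be instructive to record how the preceding results read in the simplest case $b(z)=z^{N}$. (The following is only an illustration and is not used elsewhere; it assumes, consistently with the orthonormal basis $\{\sqrt{N}1_{A_{i}}\}_{i=1}^{N}$ exhibited earlier, that for this $b$ the weight $J$ is the constant function $1$ and that ${\mathcal{L}}$ is the fiber average ${\mathcal{L}}(g)(z)=\frac{1}{N}\sum_{w^{N}=z}g(w)$, so that $C_{b}$ reduces to the composition operator $C_{b}f=f\circ b$.) The monomials $v_{i}(z)=z^{i-1}$, $1\leq i\leq N$, form an orthonormal basis for $\mathcal{D}=H^{2}({\mathbb{T}})\ominus\pi(b)H^{2}({\mathbb{T}})$, so by Corollary \[Cor: module basis\] the functions $m_{i}(z)=z^{i-1}$ form an orthonormal basis for ${L^{\infty}(\mathbb{T})_{\mathcal{L}}}$; indeed $${\langle}m_{i},m_{j}{\rangle}(z)=\frac{1}{N}\sum_{w^{N}=z}w^{j-i}=\delta_{ij}, \qquad 1\leq i,j\leq N.$$ The Cuntz family attached to this basis by Theorem \[Thm:Solution 1\] is $$S_{i}f(z)=z^{i-1}f(z^{N}), \qquad f\in L^{2}({\mathbb{T}}),\ 1\leq i\leq N,$$ and the module expansion $f=\sum_{i=1}^{N}m_{i}\cdot{\langle}m_{i},f{\rangle}$ is the familiar splitting of $f$ according to the residues of its Fourier coefficients modulo $N$: $$f(z)=\sum_{i=1}^{N}z^{i-1}f_{i}(z^{N}), \qquad \widehat{f_{i}}(k)=\widehat{f}(Nk+i-1).$$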
In our notation, Rochberg’s Theorem 1 in [@rR73] asserts that if $\{v_i\}_{i=1}^N$ is the *canonical* orthonormal basis for $\mathcal{D}$, then for any $f \in A({\mathbb{D}})$, there are uniquely determined $f_{1},f_{2},\cdots,f_{N}\in A(\mathbb{D})$ satisfying $$f(z)=\sum_{i=1}^{N}v_{i}(z) \beta(f_i)(z),\label{eq:Rogberg's Eq 5A} \qquad z \in \overline{{\mathbb{D}}},$$ and that moreover for each $1 \leq i \leq N$ the linear map $f\to f_{i}$ thus determined on $A({\mathbb{D}})$ is continuous in the norm of $A({\mathbb{D}})$. We recover this theorem by applying Theorem \[Thm:Rochberg’s Theorem 1\] to the canonical basis $\{v_i\}_{i=1}^N$ and the function $J^{-1/2} f \in A({\mathbb{D}})$: it asserts that (\[eq:Rogberg's Eq 5A\]) holds with the functions $f_i = {\langle}m_i, J^{-1/2} f{\rangle}\in A({\mathbb{D}})$. The norm continuity of the $f_i$ in $f$ is immediate from this formula. Our Theorem \[Thm:Rochberg’s Theorem 1\] provides a slightly stronger uniqueness statement: if $f \in A({\mathbb{D}})$, assuming only that the $f_i$ are in $L^{\infty}({\mathbb{T}})$, multiplying both sides of (\[eq:Rogberg's Eq 5A\]) by $J^{-1/2}$, applying ${\langle}m_j, -{\rangle}$, and using the fact that $\{m_i\}_{i=1}^N$ is an orthonormal basis for ${L^{\infty}(\mathbb{T})_{\mathcal{L}}}$, one finds that $f_j$ must be given by the formula above. Rochberg [@rR73] and McDonald [@jMcD03] establish more information about the $f_{i}$ using the special structure of the canonical orthonormal basis of $\mathcal{D}$. Our analysis does not seem to contribute anything new to their refinements. On the other hand, our results are explicitly independent of the choice of basis and connect to the structure of the Hilbert module ${L^{\infty}(\mathbb{T})_{\mathcal{L}}}$. \[remark: Concluding Remark\]The reader may have noticed that if $m\in L^{\infty}({\mathbb{T}})$ and if $T=\pi(m)C_{b}$, then from the calculations in Theorem \[Thm: Intertwine\], the norm of $T$ is the norm of $m$ calculated in ${L^{\infty}(\mathbb{T})_{\mathcal{L}}}$. This is not an accident. The Hilbert module ${L^{\infty}(\mathbb{T})_{\mathcal{L}}}$ becomes a *left* module over $L^{\infty}({\mathbb{T}})$ via the formula $a\cdot\xi:=a\xi$, $a\in L^{\infty}({\mathbb{T}})$, $\xi\in{L^{\infty}(\mathbb{T})_{\mathcal{L}}}$. This makes ${L^{\infty}(\mathbb{T})_{\mathcal{L}}}$ what is known as a $C^{*}$*-correspondence* or *Hilbert bimodule* over $L^{\infty}({\mathbb{T}})$. Further, if $\psi:{L^{\infty}(\mathbb{T})_{\mathcal{L}}}\to B(L^{2}({\mathbb{T}}))$ is defined by the formula$$\psi(m)=\pi(m)C_{b},\qquad m\in{L^{\infty}(\mathbb{T})_{\mathcal{L}}},$$ then the pair $(\pi,\psi)$ turns out to be what is known as a *Cuntz-Pimsner covariant representation* of the pair $(L^{\infty}({\mathbb{T}}),{L^{\infty}(\mathbb{T})_{\mathcal{L}}})$. This means, in particular, that $\psi(m)^{*}\psi(m)=\pi({\langle}m,m{\rangle})$, as we noted in Theorem \[Thm: Intertwine\]. Further, the pair $(\pi,\psi)$ extends to a $C^{*}$-representation of the so-called *Cuntz-Pimsner algebra* of ${L^{\infty}(\mathbb{T})_{\mathcal{L}}}$, $\mathcal{O}({L^{\infty}(\mathbb{T})_{\mathcal{L}}})$. We have not made use of this here, but it strikes us as worthy of further investigation. See [@FMR03] for further information about Cuntz-Pimsner algebras and their representations. [18]{} W. Arveson, *Continuous analogues of Fock space*, Mem. Amer. Math. Soc. v. **80**, **\#409**, 1989. J. Cuntz, *Simple $C^{*}$-algebras generated by isometries*, Comm. Math. Phys. **57** (1977), 173–185. R.
Douglas, *Contractive projections on an $\mathfrak{L}_{1}$ space*, Pac. J. Math. **15** (1965), 443–462. R. Douglas, *Banach Algebra Techniques in Operator Theory*, 2nd ed., Graduate Text in Mathematics **179**, Springer, New York, 1998. N. Fowler, P. Muhly and I. Raeburn, *Representations of Cuntz-Pimsner algebras*, Indiana U. Math. J. **52** (2003), 569–605. H. Hamada and Y. Watatani, *Toeplitz-composition $C^{*}$-algebras for certain finite Blaschke products*, preprint (arXiv: 0809.3061). H. Helson, *Lectures on Invariant Subspaces*, Academic Press, New York and London, 1964. H. Helson and D. Lowdenslager, *Invariant subspaces*, Proc. International Conf. on Linear Spaces, 1960, pp. 251–262, Macmillan (Pergamon) New York, 1961. M. Ionescu and P. Muhly, *Groupoid methods in wavelet analysis*, in *Group Representations, Ergodic Theory, and Mathematical Physics: A Tribute to George W. Mackey*, Contemporary Mathematics **449**, Amer. Math. Soc., Providence, 2008, pp. 193–208. S. Kametani and T. Ugaheri, *A remark on Kawakami’s extension of Löwner’s lemma*, Proc. Imp. Acad. Tokyo **18** (1942), 14–15. M. Laca, *Endomorphisms of* $B(H)$ *and Cuntz algebras*, J. Operator Th. **30** (1993), 85 – 108. E. C. Lance, *Hilbert* $C^{\ast}$-*modules,* London Math. Soc. Lect. Note Series **210**, Cambridge Univ. Press, Cambridge, 1995. V. Manuilov and E. Troitsky, *Hilbert $C^{*}$-modules*, Translation of Mathematical Monographs, Vol. 226, Amer. Math. Soc., 2005. J. McDonald, *Adjoints of a class of composition operators,* Proc. Amer. Math. Soc. **131** (2003), 601 – 606. R. Rochberg, *Linear maps of the disk algebra*, Pac. J. Math. **44** (1973), 337–354. J. Ryff, *Subordinate $H^{p}$ functions*, Duke Math. J. **33** (1966), 347–354. M. Takesaki, *Theory of Operator Algebras I*, Springer-Verlag, New York, 1979. J. Walsh, *Interpolation and Approximation by Rational Functions in the Complex Domain*, American Mathematical Society Colloquium Publications, vol. **20**, American Mathematical Society, Providence, R. I., 1956. [^1]: DC and SS were partially supported by the University of Iowa Department of Mathematics NSF VIGRE grant DMS-0602242. [^2]: We write the characteristic function (or indicator function) of a set $E$ as $1_{E}$. [^3]: This non-trivial fact is discussed in detail in [@mT79 Section V.5].
--- abstract: | In this paper, we consider an arbitrary locally compact abelian group $G$, with an ordered dual group $\Gamma$, acting on a space of measures. Under suitable conditions, we define the notion of analytic measures using the representation of $G$ and the order on $\Gamma$. Our goal is to study analytic measures by applying a new transference principle for subspaces of measures, along with results from probability and Littlewood-Paley theory. As a consequence, we will derive new properties of analytic measures as well as extensions of previous work of Helson and Lowdenslager, de Leeuw and Glicksberg, and Forelli. A.M.S. Subject Classification: 43A17, 43A32. Keywords: orders, transference, measure space, sup path attaining, F.&M. Riesz Theorem author: - | Nakhlé Asmar and Stephen Montgomery-Smith\ Department of Mathematics\ University of Missouri\ Columbia, MO 65211\ [[email protected]]{}\ [[email protected]]{} title: Decomposition of analytic measures on groups and measure spaces --- [*Dedicated to the memory of Edwin Hewitt*]{} \[section\] \[defin\][Theorem]{} \[defin\][Example]{} \[defin\][Proposition]{} \[defin\][Lemma]{} \[defin\][Scholium]{} \[defin\][Remarks]{} \[defin\][Corollary]{} Introduction ============ This paper essentially provides a new approach to generalizations of the F.&M. Riesz Theorems, such as the result of Helson and Lowdenslager [@hl1; @hl2]. They showed that if $G$ is a compact abelian group with ordered dual, and if $\mu$ is an [*analytic*]{} measure (that is, its Fourier transform is supported on the positive elements of the dual), then it follows that the singular and absolutely continuous parts (with respect to the Haar measure) are also analytic. Another direction is that provided by Forelli [@forelli] (itself a generalization of the result of de Leeuw and Glicksberg [@deleeuwglicksberg]), where one has an action of the real numbers $\R$ on a locally compact topological space $\Omega$, and a Baire measure $\mu$ on $\Omega$ that is [*analytic*]{} (in a sense that we make precise below) with respect to the action. Then again, the singular and absolutely continuous parts of $\mu$ (with respect to any so-called quasi-invariant measure) are also analytic. Indeed, common generalizations of both these ideas have been provided, for example, by Yamaguchi [@yama], who considered the action of any locally compact abelian group with ordered dual on a locally compact topological space. For more generalizations we refer the reader to Hewitt, Koshi, and Takahashi [@hkt]. In the paper [@amss], a new approach to proving these kinds of results was given, providing a transference principle for spaces of measures. In that paper, the action was from a locally compact abelian group into a space of isomorphisms on the space of measures of a sigma algebra. A primary requirement that the action had to satisfy was what was called [*sup path attaining*]{}, a property that was satisfied, for example, by the setting of Forelli (Baire measures on a locally compact topological space). Using this transference principle, the authors were able to give an extension and a new proof of Forelli’s result. This was obtained by using a Littlewood-Paley decomposition of an analytic measure. In this paper we wish to continue this process, applying this same transference principle to provide the common generalizations of the results of Forelli and Helson and Lowdenslager.
What we provide in this paper is essentially a decomposition of an analytic measure as a sum of martingale differences with respect to a filtration defined by the order. For each martingale difference, the action of the group can be described precisely by a certain action of the group of real numbers, and so we can appeal to the results of [@amss]. In this way, we can reach the following generalization (see Theorem \[application1\] below): if $\cal P$ is any bounded operator on the space of measures that commutes with the action (as does, for example, taking the singular part), and if $\mu$ is an analytic measure, then ${\cal P}\mu$ is also an analytic measure. In the remainder of the introduction, we will establish our notation, including the notion of sup path attaining, and recall the transference principle from [@amss]. In Section 2, we will describe orders on locally compact abelian groups, including the extension of Hahn’s Embedding Theorem provided in [@ams]. In Section 3, we define the notions of analyticity. This somewhat technical section continues into Section 4, which examines the role of homomorphism with respect to analyticity. The technical results basically provide proofs of what is believable, and so may be skipped on first reading. It will be seen that the concept of sup path attaining comes up again and again, and may be seen to be an integral part of all our proofs. In Section 5, we are ready to present the decomposition of analytic measures. This depends heavily on transference of martingale inequalities of Burkholder and Garling, and then using the fact that weakly unconditionally summing series are unconditionally summing in norm for any series in a space of measures [@bp]. In Section 6, we then give applications of this decomposition, giving the generalizations that we alluded to above. Throughout $G$ will denote a locally compact abelian group with dual group $\Gamma$. The symbols $\Z$, $\R$ and $\C$ denote the integers, the real and complex numbers, respectively. If $A$ is a set, we denote the indicator function of $A$ by $1_A$. For $1\leq p<\infty$, the space of Haar measurable functions $f$ on $G$ with $\int_G|f|^p dx<\infty$ will be denoted by $L^p(G)$. The space of essentially bounded functions on $G$ will be denoted by $L^\infty(G)$. The expressions “locally null” and “locally almost everywhere” will have the same meanings as in [@hr1 Definition (11.26)]. Let $\cC_0(G)$ denote the Banach space of continuous functions on $G$ vanishing at infinity. The space of all complex regular Borel measures on $G$, denoted by $M(G)$, consists of all complex measures arising from bounded linear functionals on $\cC_0(G)$. Let $(\O, \Sigma)$ denote a measurable space, where $\O$ is a set and $\Sigma$ is a sigma algebra of subsets of $\O$. Let $M(\Sigma)$ denote the Banach space of complex measures on $\Sigma$ with the total variation norm, and let $\cL^\infty(\Sigma)$ denote the space of measurable bounded functions on $\Omega$. Let $T:\ t\mapsto T_t$ denote a representation of $G$ by isomorphisms of $M(\Sigma)$. We suppose that $T$ is uniformly bounded, i.e., there is a positive constant $c$ such that for all $t\in G$, we have $$\|T_t\|\leq c . \label{uniformlybded}$$ A measure $\mu\in M(\Sigma)$ is called weakly measurable (in symbols, $\mu\in{\cal M}_T(\Sigma)$) if for every $A\in \Sigma$ the mapping $t\mapsto T_t\mu(A)$ is Borel measurable on $G$. 
\[weakmble\] Given a measure $\mu\in \cMT$ and a Borel measure $\nu \in M(G)$, we define the ‘convolution’ $\nu*_T\mu$ on $\Sigma$ by $$\nu*_T\mu (A)=\int_G T_{-t}\mu(A) d\nu(t) \label{Tconv}$$ for all $A\in\Sigma$. We will assume throughout this paper that the representation $T$ commutes with the convolution (\[Tconv\]) in the following sense: for each $t\in G$, $$T_t(\nu*_T\mu)=\nu*_T(T_t\mu). \label{commut}$$ Condition (\[commut\]) holds if, for example, for all $t\in G$, the adjoint of $T_t$ maps $\cL^\infty(\Sigma)$ into itself. In symbols, $$T_t^*: \cL^\infty(\Sigma) \rightarrow \cL^\infty(\Sigma). \label{adjointT}$$ For proofs we refer the reader to [@ams2]. Using (\[uniformlybded\]) and (\[commut\]), it can be shown that $\nu*_T\mu$ is a measure in $\cMT$, $$\|\nu*_T\mu\|\leq c\|\nu\|\|\mu\|, \label{normofconv}$$ where $c$ is as in (\[uniformlybded\]), and $$\sigma*_T(\nu*_T\mu)=(\sigma*\nu)*_T\mu, \label{associative}$$ for all $\sigma , \nu \in M(G)$ and $\mu \in \cMT$ (see [@ams2]). A representation $T=(T_t)_{t\in G}$ of a locally compact abelian group $G$ in $M(\Sigma)$ is said to be sup path attaining if it is uniformly bounded, satisfies property (\[commut\]), and if there is a constant $C$ such that for every weakly measurable $\mu\in {\cal M}_T(\Sigma)$ we have $$\| \mu\| \leq C\sup \left\{ {\rm ess\ sup}_{t\in G} \left| \int_\O h d (T_t\mu) \right| :\ \ h\in \cL^\infty(\Sigma),\ \|h\|_\infty\leq 1 \right\}. \label{ineqhypa}$$ \[def hypa\] The fact that the mapping $t\mapsto \int_\O h d (T_t\mu)$ is measurable is a simple consequence of the measurability of the mapping $t\mapsto T_t\mu(A)$ for every $A\in\Sigma$. In [@amss] were provided many examples of sup path attaining representations. Rather than give this same list again, we give a couple of examples of particular interest. \(a) (This is the setting of Forelli’s Theorem.) Let $G$ be a locally compact abelian group, and $\Omega$ be a locally compact topological space. Suppose that $\left( T_t\right)_{t\in G}$ is a group of homeomorphisms of $\Omega$ onto itself such that the mapping $$(t,\omega)\mapsto T_t\omega$$ is jointly continuous. Then the space of Baire measures on $\Omega$, that is, the minimal sigma algebra such that compactly supported continuous functions are measurable, is sup path attaining under the action $T_t\mu(A)=\mu(T_t(A))$, where $T_t(A)=\{T_t\omega:\ \omega\in A\}$. (Note that all Baire measures are weakly measurable.) \(b) Suppose that $G_1$ and $G_2$ are locally compact abelian groups and that $\phi:\ G_2\rightarrow G_1$ is a continuous homomorphism. Define an action of $G_2$ on $M(G_1)$ (the regular Borel measures on $G_1$) by translation by $\phi$. Hence, for $x\in G_2, \mu\in M(G_1)$, and any Borel subset $A\subset G_1$, let $T_x\mu(A)=\mu(A+\phi(x))$. Then every $\mu\in M(G_1)$ is weakly measurable, and the representation is sup path attaining with constants $c = 1$ and $C = 1$. \[exhypa\] Suppose that $T$ is sup path attaining and $\mu$ is weakly measurable such that for every $A\in \Sigma$ we have $$T_t\mu(A)=0$$ for locally almost all $t\in G$. Then $\mu=0$. \[prop hypa\] The proof is immediate (see [@ams2]).\ We now recall some basic definitions from spectral theory. If $I$ is an ideal in $L^1(G)$, let $$Z(I)=\bigcap_{f\in I} \left\{ \chi\in\Gamma:\ \ \widehat{f}(\chi)=0 \right\}.$$ The set $Z(I)$ is called the zero set of $I$. 
For a weakly measurable $\mu\in M(\Sigma)$, let $$\cI (\mu)=\{f\in L^1(G):\ \ f*_T\mu =0\}.$$ When we need to be specific about the representation, we will use the symbol $\cI_T (\mu)$ instead of $\cI (\mu)$. Using properties of the convolution $*_T$, it is straightforward to show that $\cI(\mu)$ is a closed ideal in $L^1(G)$. The $T$-spectrum of a weakly measurable $\mu\in \cMT$ is defined by $${\rm spec}_T (\mu)= \bigcap_{f\in \cI(\mu)} \left\{ \chi\in\Gamma:\ \ \widehat{f}(\chi)=0 \right\}=Z(\cI(\mu)). \label{specsbt}$$ \[Tspectrum\] If $S\subset \Gamma$, let $$L_S^1=L_S^1(G)=\left\{f\in L^1(G):\ \widehat{f}=0\ \mbox{outside of}\ S\right\}\,.$$ In order to state the main transference result, we introduce one more definition. A subset $S\subset\Gamma$ is a $\cT$-set if, given any compact $K\subset S$, each neighborhood of $0\in\Gamma$ contains a nonempty open set $W$ such that $W+K\subset S$. \[t-set\] \[s-set\] [(a) If $\Gamma$ is a locally compact abelian group, then any open subset of $\Gamma$ is a $\cT$-set. In particular, if $\Gamma$ is discrete then every subset of $\Gamma$ is a $\cT$-set.\ (b) The set $[ a,\infty )$ is a $\cT$-subset of $\R$, for all $a\in\R$.\ (c) Let $a\in\R$ and $\psi:\ \Gamma \rightarrow \R$ be a continuous homomorphism. Then $S=\psi^{-1}([a,\infty))$ is a $\cT$-set.\ (d) Let $\Gamma=\R^2$ and $S=\{(x,y):\ y^2\leq x\}$. Then $S$ is a $\cT$-subset of $\R^2$ such that there is no nonempty open set $W\subset \R^2$ such that $W+S\subset S$. ]{} The main result of [@amss] is the following transference theorem. Let $T$ be a sup path attaining representation of a locally compact abelian group $G$ by isomorphisms of $M(\Sigma)$ and let $S$ be a $\cT$-subset of $\Gamma$. Suppose that $\nu$ is a measure in $M(G)$ such that $$\|\nu*f\|_1\leq \|f\|_1 \label{hyptransference2}$$ for all $f$ in $L_S^1(G)$. Then for every weakly measurable $\mu \in M(\Sigma)$ with ${\rm spec}_T ( \mu )\subset S$ we have $$\|\nu*_T\mu\|\leq c^3 C \|\mu\|,$$ where $c$ is as in (\[uniformlybded\]) and $C$ is as in (\[ineqhypa\]). \[trans-thm\] Orders on locally compact abelian groups ======================================== An order $P$ on $\Gamma$ is a subset that satisfies the three axioms: $P+P\subset P$; $P\cup (-P)=\Gamma$; and $P\cap (-P)=\{0\}$. We recall from [@ams] the following property of orders. Let $P$ be a measurable order on $\Gamma$. There are a totally ordered set $\Pi$ with largest element $\alpha_0$; a chain of subgroups $\{C_\alpha\}_{\alpha\in\Pi}$ of $\Gamma$; and a collection of continuous real-valued homomorphisms $\{\psi_\alpha\}_{\alpha\in\Pi}$ on $\Gamma$ such that:\ (i)  for each $\alpha\in\Pi$, $C_\alpha$ is an open subgroup of $\Gamma$;\ (ii)  $C_\alpha\subset C_\beta$ if $\alpha > \beta$.\ Let $D_\alpha=\{\chi\in C_\alpha:\ \psi_\alpha(\chi)=0\}$. Then, for every $\alpha\in \Pi$,\ (iii) $\psi_\alpha(\chi)>0$ for every $\chi\in P\cap (C_\alpha\setminus D_\alpha)$,\ (iv) $\psi_\alpha(\chi)<0$ for every $\chi\in (-P)\cap (C_\alpha\setminus D_\alpha).$\ (v)  When $\Gamma$ is discrete, $C_{\alpha_0}=\{0\}$; and when $\Gamma$ is not discrete, $D_{\alpha_0}$ has empty interior and is locally null. \[structureorder\] When $\Gamma$ is discrete, Theorem \[structureorder\] can be deduced from the proof of Hahn’s Embedding Theorem for orders (see [@fu Theorem 16, p.59]). The general case treated in Theorem \[structureorder\] accounts for the measure theoretic aspect of orders. The proof is based on the study of orders of Hewitt and Koshi [@hk].
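The following two simple illustrations of Theorem \[structureorder\] are not needed in the sequel, but they may help to fix ideas; they involve no data beyond that appearing in the theorem. For $\Gamma=\R$ with the order $P=[0,\infty[$, one may take $\Pi=\{\alpha_0\}$, $C_{\alpha_0}=\R$ and $\psi_{\alpha_0}(\chi)=\chi$, so that $D_{\alpha_0}=\{0\}$, which indeed has empty interior and is locally null. For the discrete group $\Gamma=\Z^2$ with the lexicographic order $$P=\{(m,n):\ m>0\}\cup\{(0,n):\ n\geq 0\},$$ one may take $\Pi=\{\alpha_0>\alpha_1>\alpha_2\}$ with $$C_{\alpha_0}=\{0\},\qquad C_{\alpha_1}=\{0\}\times\Z,\qquad C_{\alpha_2}=\Z^2,$$ and $\psi_{\alpha_0}=0$, $\psi_{\alpha_1}(m,n)=n$, $\psi_{\alpha_2}(m,n)=m$. Then $D_{\alpha_2}=\{0\}\times\Z=C_{\alpha_1}$ and $D_{\alpha_1}=\{0\}=C_{\alpha_0}$, and properties (i)–(v) are readily verified.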
For $\alpha\in \Pi$ with $\alpha\neq\alpha_0$, let $$\begin{aligned} S_\alpha\equiv P\cap (C_\alpha \setminus D_\alpha) &=& \left\{ \chi\in C_\alpha\setminus D_\alpha:\ \ \psi_\alpha(\chi)\geq 0\right\} \label{alphaslice}\\ &=&\left\{ \chi\in C_\alpha :\ \ \psi_\alpha(\chi)> 0\right\}. \label{alphaslice2}\end{aligned}$$ For $\alpha=\alpha_0$, set $$S_{\alpha_0}=\left\{ \chi\in C_{\alpha_0}:\ \ \psi_{\alpha_0}(\chi)\geq 0\right\}. \label{alpha0slice}$$ Note that when $\Gamma$ is discrete, $C_{\alpha_0}=\{0\}$, and so $S_{\alpha_0}=\{0\}$ in this case. If $A$ is a subset of a topological space, we will use $\overline{A}$ and $A^\circ$ to denote the closure, respectively, the interior of $A$. \(a) It is a classical fact that a group $\Gamma$ can be ordered if and only if it is torsion-free. Also, an order on $\Gamma$ is any maximal positively linearly independent set. Thus, orders abound in torsion-free abelian groups, as they can be constructed using Zorn’s Lemma to obtain a maximal positively linearly independent set. (See [@hk Section 2].) However, if we ask for measurable orders, then we are restricted in many ways in the choices of $P$ and also the topology on $\Gamma$. As shown in [@hk], any measurable order on $\Gamma$ has nonempty interior. Thus, for example, while there are infinitely many orders on $\R$, only two are Lebesgue measurable: $P=[0,\infty [$, and $P=]-\infty,0]$. It is also shown in [@hk Theorem (3.2)] that any order on an infinite compact torsion-free abelian group is non-Haar measurable. This effectively shows that if $\Gamma$ contains a Haar-measurable order $P$, and we use the structure theorem for locally compact abelian groups to write $\Gamma$ as $\R^a\times \Delta$, where $\Delta$ contains a compact open subgroup [@hr1 Theorem (24.30)], then either $a$ is a positive integer, or $\Gamma$ is discrete. (See [@ams].)\ (b) The subgroups $(C_\alpha)$ are characterized as being the principal convex subgroups in $\Gamma$ and for each $\alpha\in \Pi$, we have $$D_\alpha=\bigcup_{\beta>\alpha}C_\beta.$$ Consequently, we have $C_\alpha\subset D_\beta$ if $\beta <\alpha$. By construction, the sets $C_\alpha$ are open. For $\alpha< \alpha_0$, the subgroup $D_\alpha$ has nonempty interior, since it contains $C_\beta$, with $\alpha<\beta$. Hence for $\alpha\neq \alpha_0$, $D_\alpha$ is open and closed. Consequently, for $\alpha\neq \alpha_0$, $C_\alpha\setminus D_\alpha$ is open and closed. \(c) Let $\psi:\ \Gamma_1\rightarrow \Gamma_2$ be a continuous homomorphism between two ordered groups. We say that $\psi$ is order-preserving if $\psi(P_1)\subset P_2$. Consequently, if $\psi$ is continuous and order preserving, then $\psi(\overline{P_1})\subset \overline{P_2}$. For each $\alpha\in \Pi$, let $\pi_\alpha$ denote the quotient homomorphism $\Gamma\rightarrow \Gamma/C_\alpha$. Because $C_\alpha$ is a principal subgroup, we can define an order on $ \Gamma/C_\alpha$ by setting $\psi_\alpha(\chi)\geq 0\Longleftrightarrow \chi\geq 0$. Moreover, the principal convex subgroups in $\Gamma/C_\alpha$ are precisely the images by $\pi_\alpha$ of the principal convex subgroups of $\Gamma$ containing $C_\alpha$. (See [@ams Section 2].) \[remarkstructureorder\] We end this section with a useful property of orders. Let $P$ be a measurable order on $\Gamma$. Then $\overline{P}$ is a ${\cal T}$-set. \[ptset\] [**Proof.**]{}If $\Gamma$ is discrete, there is nothing to prove. If $\Gamma$ is not discrete, the subgroup $C_{\alpha_0}$ is open and nonempty. 
Hence the set $ C_{\alpha_0}\cap \{\chi\in \Gamma:\psi_{\alpha_0} (\chi)>0\}$ is nonempty, with $0$ as a limit point. Given an open nonempty neighborhood $U$ of $0$, let $$W=U\cap C_{\alpha_0}\cap \{\chi\in \Gamma:\psi_{\alpha_0} (\chi)>0\}.$$ Then $W$ is a nonempty subset of $U\cap P$. Moreover, it is easy to see that $W+\overline{P}\subset P\subset \overline{P}$, and hence $\overline{P}$ is a ${\cal T}$-set. Analyticity =========== We continue with the notation of the previous section. Using the order structure on $\Gamma$ we define some classes of analytic functions on $G$: $$\begin{aligned} H^1(G)&=&\left\{ f\in L^1(G): \widehat{f}=0\ {\rm on}\ (-P)\setminus\{0\} \right\}; \label{h1g}\\ H^1_0(G)&=&\left\{ f\in L^1(G): \widehat{f}=0\ {\rm on}\ -P \right\};\label{h10g}\end{aligned}$$ and $$H^\infty(G)=\left\{ f\in L^\infty(G): \int_G f(x)g(x)dx=0 \ {\rm for\ all}\ g\in H^1_0(G) \right\}. \label{hinfg}$$ We clearly have $$H^1(G)=\left\{ f\in L^1(G): \widehat{f}=0\ {\rm on}\ \overline{(-P)\setminus\{0\}} \right\}.$$ We can now give the definition of analytic measures in $\cMT$. Let $T$ be a sup path attaining representation of $G$ by isomorphisms of $M(\Sigma)$. A measure $\mu\in \cMT$ is called weakly analytic if the mapping $t\mapsto T_t\mu(A)$ is in $H^\infty(G)$ for every $A\in\Sigma$. Recall the $T$-spectrum of a weakly measurable $\mu\in \cMT$, $${\rm spec}_T (\mu)= \bigcap_{f\in \cI(\mu)} \left\{ \chi\in\Gamma:\ \ \widehat{f}(\chi)=0 \right\}. \label{spect}$$ A measure $\mu$ in $\cMT$ is called $T$-analytic if ${\rm spec}_T (\mu)\subset \overline{P}$. \[weakly-analytic\] That the two definitions of analyticity are equivalent will be shown later in this section. Since $\cI(\mu)$ is translation-invariant, it follows readily that for all $t\in G$, $$\cI(T_t\mu)=\cI(\mu),$$ and hence $${\rm spec}_T (T_t(\mu))={\rm spec}_T (\mu). \label{feb.4.95}$$ We now recall several basic results from spectral theory of bounded functions that will be needed in the sequel. Our reference is [@hr2 Section 40]. If $\phi$ is in $\Linfg$, write $\left[ \phi\right]$ for the smallest weak-\* closed translation-invariant subspace of $\Linfg$ containing $\phi$, and let $\cI([\phi])=\cI (\phi)$ denote the closed translation-invariant ideal in $L^1(G)$:\ $$\cI (\phi)=\{f\in L^1(G): f*\phi=0\}.$$ It is clear that $ \cI (\phi)=\{f\in L^1(G): f*g=0, \forall g\in \left[\phi\right]\}$. The spectrum of $\phi$, denoted by $\sigma \left[\phi\right]$, is the set of all continuous characters of $G$ that belong to $\left[\phi\right]$. This closed subset of $\Gamma$ is also given by $$\sigma \left[\phi\right]=Z(\cI(\phi)). \label{spec}$$ (See [@hr2 Theorem (40.5)].) Recall that a closed subset $E$ of $\Gamma$ is a set of spectral synthesis for $L^1(G)$, or an $S$-set, if and only if $\cI([E])$ is the only ideal in $L^1(G)$ whose zero set is $E$. There are various equivalent definitions of $S$-sets. Here is one that we will use at several occasions.\ [*A set $E\subset \Gamma$ is an $S$-set if and only if every essentially bounded function $g$ in $L^\infty(G)$ with $\sigma[g]\subset E$ is the weak-\* limit of linear combinations of characters from $E$.*]{}\ (See [@hr2 (40.23) (a)].) This has the following immediate consequence. Suppose that $B$ is an $S$-set, $g\in L^\infty(G)$, and $\spec (g)\subset B$. (i) If $f$ is in $L^1(G)$ and $\widehat{f}=0$ on $B$, then $f*g(x)=0$ for all $x$ in $G$. 
In particular, $$\int_G f(x) g(-x)\, d\, x =0.$$ (ii) If $\mu$ is a measure in $M(G)$ with $\widehat{\mu}=0$ on $B$, then $\mu*g(x)=0$ for almost all $x$ in $G$. \[annihilate\] [**Proof.**]{}Part (i) is a simple consequence of [@hr2 Theorems (40.8) and (40.10)]. We give a proof for the sake of completeness. Write $g$ as the weak-\* limit of trigonometric polynomials, $\sum_{\chi\in E} a_\chi \chi(x)$, with characters in $E$. Then $$\begin{aligned} \int_G f(x)g(y-x)\, d\,x &=& \lim \int_G \sum_{\chi\in E} a_\chi \chi(y)f(x)\chi(-x)\, d\,x\\ &=&\lim \sum_{\chi\in E} a_\chi \chi(y) \widehat{f}(\chi)=0\end{aligned}$$ since $\widehat{f}$ vanishes on $E$. To prove (ii), assume that $\mu*g$ is not 0 a.e. Then, there is $f$ in $L^1(G)$ such that $f*(\mu*g)$ is not 0 a.e. But this contradicts (i), since $f*(\mu*g)=(f*\mu)*g$, $f*\mu$ is in $L^1(G)$, and $\widehat{f*\mu}=0$ on $B$. The following is a converse of sorts of Proposition \[annihilate\] and follows easily from definitions. Let $B$ be a nonvoid closed subset of $\Gamma$. Suppose that $f$ is in $L^\infty(G)$ and $$\int_G f(x)g(x)dx=0 \label{6.1}$$ for all $g$ in $L^1(G)$ such that $\widehat{g}= 0$ on $-B$. Then $\sigma[f]\subset B$. \[prop5.2\] [**Proof.**]{}Let $\chi_0$ be any element in $\Gamma\setminus B$. We will show that $\chi_0$ is not in the spectrum of $f$ by constructing a function $h$ in $L^1(G)$ with $\widehat{h}(\chi_0)\neq 0$ and $h*f= 0$. Let $U$ be an open neighborhood of $\chi_0$ not intersecting $B$, and let $h$ be in $L^1(G)$ such that $\widehat{h}$ is equal to 1 at $\chi_0$ and to 0 outside $U$. Direct computations show that the Fourier transform of the function $g:\ \ t\mapsto h(x-t)$, when evaluated at $\chi\in\Gamma$, gives $\overline{\chi(x)}\widehat{h}(-\chi)$, and hence it vanishes on $-B$. It follows from (\[6.1\]) that $h*f= 0$, which completes the proof.\ A certain class of $S$-sets, known as the Calderón sets, or $C$-sets, is particularly useful to us. These are defined as follows. A subset $E$ of $\Gamma$ is called a $C$-set if every $f$ in $L^1(G)$ with Fourier transform vanishing on $E$ can be approximated in the $L^1$-norm by functions of the form $h*f$ where $h\in L^1(G)$ and $\widehat{h}$ vanishes on an open set containing $E$.\ $C$-sets enjoy the following properties (see [@hr2 (39.39)] or [@rudin Section 7.5]). - Every $C$-set is an $S$-set. - Every closed subgroup of $\Gamma$ is a $C$-set. - The empty set is a $C$-set. - If the boundary of a set $A$ is a $C$-set, then $A$ is a $C$-set. - Finite unions of $C$-sets are $C$-sets. Since closed subgroups are $C$-sets, we conclude that $\overline{P} \cap \overline{(-P)}$, and $C_\alpha$, for all $\alpha$, are $C$-sets. From the definition of $S_{\alpha_0}$, (\[alpha0slice\]), and the fact that $C_{\alpha_0}$ is open and closed, it follows that the boundary of $S_{\alpha_0}$ is the closed subgroup $\psi_{\alpha_0}^{-1}(0)\cap C_{\alpha_0}$. Hence $S_{\alpha_0}$ is a $C$-set. For $\alpha\neq \alpha_0$, the set $S_\alpha$ is open and closed, and so it has empty boundary, and thus it is a $C$-set. Likewise $C_\alpha\setminus D_\alpha$ is a $C$-set for all $\alpha\neq \alpha_0$. From this we conclude that arbitrary unions of $S_\alpha$ and $C_\alpha\setminus D_\alpha$ are $C$-sets, because an arbitrary union of such sets, not including the index $\alpha_0$, is open and closed, and so it is a $C$-set.\ We summarize our findings as follows. Suppose that $P$ is a measurable order on $\Gamma$.
We have:\ (i) $\overline{P}$ and $\overline{(-P)}$ are $C$-sets;\ (ii) $S_\alpha$ is a $C$-set for all $\alpha$;\ (iii) arbitrary unions of $S_\alpha$ and $C_\alpha\setminus D_\alpha$ are $C$-sets. \[prop5.1\] As an immediate application, we have the following characterizations. Suppose that $f$ is in $L^\infty(G)$, then\ (i)  $\sigma[f] \subset S_\alpha$ if and only if $\int_G f(x) g(x) d x =0$ for all $g\in L^1(G)$ such that $\widehat{g}=0$ on $-S_\alpha$;\ (ii)  $\sigma[f] \subset \Gamma\setminus C_\alpha$ if and only if $\mu_\alpha *f=0$;\ (iii) $\sigma[f] \subset \overline{P}$ if and only if $f\in H^\infty(G)$. \[cor5.3\] [**Proof.**]{} Assertions (i) and (iii) are clear from Propositions \[prop5.1\] and \[prop5.2\]. To prove (ii), use Fubini’s Theorem to first establish that for any $g\in L^1(G)$, and any $\mu\in M(G)$, we have $$\int_G (\mu*f)(t) g(t)dt=\int_G f(t)(\mu*g)(t)dt.$$ Now suppose that $\sigma[f] \subset \Gamma\setminus C_\alpha$, and let $g$ be any function in $L^1(G)$. From Propositions \[prop5.1\] and \[prop5.2\], we have that $\int_G f g dt = 0$ for all $g$ with Fourier transform vanishing on $\Gamma\setminus C_\alpha$, equivalently, for all $g=\mu_\alpha*g$. Hence, $\int_G f (\mu_\alpha*g) dt = \int_G (\mu_\alpha*f) g dt=0$ for all $g$ in $L^1(G)$, from which it follows that $\mu_\alpha * f=0$. The converse is proved similarly, and we omit the details. Aiming for a characterization of weakly analytic measures in terms of their spectra, we present one more result. Let $\mu$ be weakly measurable in $M(\Sigma)$.\ (i) Suppose that $B$ is a nonvoid closed subset of $\Gamma$ and ${\rm spec}_T\mu\subset B$. Then $\sigma[t\mapsto T_t\mu(A)]\subset B$ for all $A\in\Sigma$.\ (ii) Conversely, suppose that $B$ is an $S$-set in $\Gamma$ and that $\sigma[t\mapsto T_t\mu(A)] \subset B$ for all $A\in\Sigma$, then ${\rm spec}_T\mu \subset B$. \[prop5.5\] [**Proof.**]{} We clearly have $\cI (\mu)\subset \cI ([t\mapsto T_t\mu(A)])$. Hence, ${\rm spec}_T \mu=Z(\cI (\mu))\supset Z(\cI([t\mapsto T_t\mu(A)]))=\sigma[t\mapsto T_t\mu(A)],$ and (i) follows.\ Now suppose that $B$ is an $S$-set and let $g\in L^1(G)$ be such that $\widehat{g}= 0$ on $-B$. Then, for all $A\in\Sigma$, we have from Proposition \[prop5.2\] (ii): $$\int_G g (t) T_t\mu (A)d t = 0.$$ Equivalently, we have that $$\int_G g(-t) T_{-t}\mu(A) dt =0.$$ Since the Fourier transform of the function $t\mapsto g(-t)$ vanishes on $B$, we see that $\cI (\mu) \supset \{f:\ \ \widehat{f}=0\ {\rm on}\ B\}$. Thus $Z(\cI (\mu))\subset Z(\{f:\ \ \widehat{f}=0\ {\rm on} \ B\})=B$, which completes the proof. Straightforward applications of Propositions \[prop5.1\] and \[prop5.5\] yield the desired characterization of weakly analytic measures. Suppose that $\mu\in \cMT$. Then,\ (i) $\mu$ is weakly $T-$analytic if and only if ${\rm spec}_T\mu\subset \overline{P}$ if and only if $\sigma [t\mapsto T_t\mu (A)]\subset \overline{P}$, for every $A\in \Sigma$;\ (ii) $ {\rm spec}_T\mu\subset S_\alpha $ if and only if $\sigma[t\mapsto T_t\mu(A)]\subset S_\alpha$ for every $A\in \Sigma$.\ (iii) $ {\rm spec}_T\mu\subset C_\alpha $ if and only if $\sigma[t\mapsto T_t\mu(A)]\subset C_\alpha$ for every $A\in \Sigma$.\ (iv) $ {\rm spec}_T\mu\subset \Gamma\setminus C_\alpha $ if and only if $\sigma[t\mapsto T_t\mu(A)]\subset \Gamma\setminus C_\alpha$ for every $A\in \Sigma$. \[cor5.7\] The remaining results of this section are simple properties of measures in $\cMT$ that will be needed later. 
Although the statements are direct analogues of classical facts about measures on groups, these generalizations require in some places the sup path attaining property of $T$. Suppose that $\mu\in \cMT$ and $\nu\in M(G)$. Then ${\rm spec}_T \nu*_T\mu$ is contained in the support of $\widehat{\nu}$, and ${\rm spec}_T \nu*_T\mu\subset {\rm spec}_T \mu$. \[proposition5.8\] [**Proof.**]{} Given $\chi_0$ not in the support of $\widehat{\nu}$, to conclude that it is also not in the spectrum of $\nu*_T\mu$ it is enough to find a function $f$ in $L^1(G)$ with $\widehat{f}(\chi_0)=1$ and $f*_T(\nu*_T\mu)=0$. Simply choose $f$ with Fourier transform vanishing on the support of $\widehat{\nu}$ and taking value 1 at $\chi_0$. By Fourier inversion, we have $f*\nu=0$, and since $f*_T(\nu*_T\mu)=(f*\nu)*_T\mu$, the first part of the proposition follows. For the second part, we have $\cI (\mu)\subset \cI (\nu*_T\mu)$, which implies the desired inclusion.\ We next prove a property of $L^\infty(G)$ functions similar to the characterization of $L^1$ functions which are constant on cosets of a subgroup [@hr2 Theorem (28.55)]. \[proposition5.9\] Suppose that $f$ is in $L^\infty(G)$ and that $\Lambda$ is an open subgroup of $\Gamma$. Let $\lambda_0$ denote the normalized Haar measure on the compact group $A(G,\Lambda)$, the annihilator in $G$ of $\Lambda$ (see [@hr1 (23.23)]). Then, $\sigma[f]\subset \Lambda$ if and only if $f=f*\lambda_0$ a.e. This is also the case if and only if $f$ is constant on cosets of $A(G,\Lambda)$. [**Proof.**]{} Suppose that the spectrum of $f$ is contained in $\Lambda$. Since $\Lambda$ is an $S$-set, it follows that $f$ is the weak-\* limit of trigonometric polynomials with spectra contained in $\Lambda$. Let $\{f_\alpha\}$ be a net of such trigonometric polynomials converging to $f$ weak-\*. Note that, for any $\alpha$, we have $\lambda_0*f_\alpha=f_\alpha$. For $g$ in $L^1(G)$, we have $$\lim_\alpha \int_G f_\alpha \overline{g}dx= \int_G f\overline{g}dx.$$ In particular, we have $$\lim_\alpha \int_G f_\alpha (\lambda_0*\overline{g})dx= \int_G f (\lambda_0*\overline{g})dx,$$ and so $$\lim_\alpha \int_G (f_\alpha *\lambda_0) \overline{g}dx= \int_G (f*\lambda_0) \overline{g}dx.$$ Since this holds for any $g$ in $L^1(G)$, we conclude that $\lambda_0 *f_\alpha$ converges weak-\* to $\lambda_0*f$. But $\lambda_0*f_\alpha=f_\alpha$, and $f_\alpha$ converges weak-\* to $f$, hence $f*\lambda_0=f$. The remaining assertions of the lemma are easy to prove. We omit the details. In what follows, we use the symbol $\mu_\alpha$ to denote the normalized Haar measure on the compact subgroup $A(G,C_\alpha)$, the annihilator in $G$ of $C_\alpha$. This measure is also characterized by its Fourier transform: $$\widehat{\mu_\alpha}=1_{C_\alpha}$$ (see [@hr1 (23.19)]). Suppose that $\mu\in\cMT$. Then,\ (i) ${\rm spec}_T \mu\subset C_\alpha$ if and only if $\mu=\mu_\alpha *_T \mu$;\ (ii)  ${\rm spec}_T \mu\subset \Gamma\setminus C_\alpha$ if and only if $\mu_\alpha *_T \mu=0$. \[corollary5.10\] [**Proof.**]{} (i) If $\mu=\mu_\alpha*_T\mu$, then, by Proposition \[proposition5.9\], $\sigma[t\mapsto \mu_\alpha*_T T_t\mu(A)]\subset C_\alpha$. Hence by Corollary \[cor5.7\], ${\rm spec}_T \mu\subset C_\alpha$. For the other direction, suppose that ${\rm spec}_T \mu\subset C_\alpha$. Then by Corollary \[cor5.7\] we have that the spectrum of the function $t\mapsto T_t\mu (A)$ is contained in $C_\alpha$ for every $A\in \Sigma$.
By Proposition \[proposition5.9\], we have that $$T_t\mu(A)=\int_{G_\alpha}T_{t-y}\mu (A) d\mu_\alpha(y)= T_t(\mu_\alpha*\mu)(A)$$ for almost all $t\in G$. Since this holds for all $A\in\Sigma$, the desired conclusion follows from Proposition \[prop hypa\].\ Part (ii) follows from Corollary \[cor5.3\] (ii), Proposition \[prop5.5\](ii), and the fact that $\Gamma\setminus C_\alpha$ is an $S$-set. Suppose that $\mu\in \cMT$ and ${\rm spec}_T\mu \subset C_\alpha$, and let $y\in G_\alpha=A(G,C_\alpha)$. Then $T_y\mu=\mu$. \[cor5.11\] [**Proof.**]{} For any $A\in \Sigma$, we have from Corollary \[corollary5.10\] $$\begin{aligned} T_y\mu (A) &=& T_y(\mu_\alpha *\mu)(A)=\mu_\alpha*T_y\mu(A)\\ &=& \int_{G_\alpha} T_{y-x} \mu (A) d\mu_\alpha (x)\\ &=& \int_{G_\alpha}T_{-x}\mu(A) d\mu_\alpha (x) =\mu_\alpha *\mu(A)=\mu(A).\end{aligned}$$ Homomorphism theorems ===================== We continue with the notation of the previous section: $G$ is a locally compact abelian group, $\Gamma$ the dual group of $G$, $P$ is a measurable order on $\Gamma$, $T$ is a sup path attaining representation of $G$ acting on $M(\Sigma)$. Associated with $P$ is a collection of homomorphisms $\psi_\alpha$, as described by Theorem \[structureorder\]. Let $\phi_\alpha$ denote the adjoint of $\psi_\alpha$. Thus, $\phi_\alpha$ is a continuous homomorphism of $\R$ into $G$. By composing the representation $T$ with the $\phi_\alpha$, we define a new representation $T_{\phi_\alpha}$ of $\R$ acting on $M(\Sigma)$ by: $t\in \R \mapsto T_{\phi_\alpha(t)}$. If $\mu$ in $M(\Sigma)$ is weakly measurable with respect to $T$ then $\mu$ is also weakly measurable with respect to $T_{\phi_\alpha}$. We will further suppose that $T_{\phi_\alpha}$ is sup path attaining for each $\alpha$. This is the case with the representations of Example \[exhypa\]. Our goal in this section is to relate the notion of analyticity with respect to $T$ to the notion of analyticity with respect to $T_{\phi_\alpha}$. More generally, suppose that $G_1$ and $G_2$ are two locally compact abelian groups with dual groups $\Gamma_1$ and $\Gamma_2$, respectively. Let $$\psi: \Gamma_1\rightarrow \Gamma_2$$ be a continuous homomorphism, and let $\phi:G_2\rightarrow G_1$ denote its adjoint homomorphism. Suppose $\nu$ is in $M(G_2)$. We define a Borel measure $\Phi(\nu)$ in $M(G_1)$ on the Borel subsets $A$ of $G_1$ by: $$\Phi(\nu)(A)=\int_{G_2} 1_A\circ \phi(t)\,d\nu(t)= \int_{G_1} 1_A d\Phi(\nu), \label{continuous-image1}$$ where $1_A$ is the indicator function of $A$. We have $\|\Phi(\nu)\|_{M(G_1)}=\|\nu\|_{M(G_2)}$ and, for every Borel measurable bounded function $f$ on $G_1$, we have $$\int_{G_1} f d\Phi(\nu)= \int_{G_2} f\circ \phi(t)\,d \nu (t). \label{continuous-image2}$$ In particular, if $f=\chi$, a character in $\Gamma_1$, then $$\widehat{\Phi(\nu)}(\chi)=\int_{G_1} \overline{\chi} d\Phi(\nu)=\int_{G_2} \overline{\chi}\circ \phi(t)\,d\nu(t) =\int_{G_2} \overline{\psi(\chi)}(t)\, d\nu(t)=\widehat{\nu}(\psi(\chi)), \label{continuous-image3}$$ where $\psi$ is the adjoint homomorphism of $\phi$. So, $$\widehat{\Phi(\nu)}=\widehat{\nu}\circ \psi. \label{continuous-image4}$$ Our first result is a very useful fact from spectral synthesis of bounded functions. The proof uses in a crucial way the fact that the representation is sup path attaining, or, more precisely, satisfies the property in Proposition \[prop hypa\].
Suppose that $T$ is a sup path attaining representation of $G_1$ acting on $M(\Sigma)$, $\phi$ is a continuous homomorphism of $G_2$ into $G_1$ such that $T_\phi$ is a sup path attaining representation of $G_2$. Suppose that $B$ is a nonempty closed $S$-subset of $\Gamma_1$ and that $\mu$ is in $M(\Sigma)$ with $\spec_T\mu\subset B$. Suppose that $C$ is an $S$-subset of $\Gamma_2$ and $\psi(B)\subset C$. Then $\spec_{T_\phi}\mu\subset C$. \[lem3.1\] [**Proof.**]{}Since $C$ is an $S$-subset of $\Gamma_2$, it is enough to show that for every ‘$A\in \Sigma$, $\spec_{T_\phi}(x\mapsto T_{\phi(x)}\mu(A))\subset C$, by Proposition \[prop5.5\]. For this purpose, it is enough by [@hr2 Theorem (40.8)], to show that $$g*T_{\phi(\cdot)}\mu(A)=0$$ for every $g$ in $L^1(G_2)$ with $\widehat{g}=0$ on $C$. For $r\in G_2$ and $x\in G_1$, consider the measure $$T_x(g*_{T_\phi}T_{\phi(r)}\mu)= g*_{T_\phi}T_{x+\phi(r)}\mu.$$ For $A\in \Sigma$, we have $$\begin{aligned} g*_{T_\phi} T_{x+\phi(r)}\mu(A)&=& \int_GT_{-t+x}(T_{\phi(r)}\mu)(A)\,d\Phi(g)(t)\\ &=&\Phi(g)*[t\mapsto T_t(T_{\phi(r)}\mu)(A)](x)\\ &=&0\end{aligned}$$ for almost all $x\in G_1$. To justify the last equality, we appeal to Proposition \[annihilate\] and note that $\widehat{\Phi(g)}=\widehat{g}\circ \psi$ and so $\widehat{\Phi(g)}=0$ on $B\subset \psi^{-1}(C)$. Moreover, $\sigma[t\mapsto T_t(T_{\phi(r)}\mu)(A)] \subset \spec_T(\mu)\subset B$. Now, using Proposition \[prop hypa\] and the fact that, for every $A\in \Sigma$, $$T_x[g*_{T_\phi} T_{\phi(r)}]\mu(A) =g*_{T_\phi}T_{x+\phi(r)}\mu(A)=0,$$ for almost all $x\in G_1$, we conclude that the measure $g*_{T_\phi} T_{\phi(r)}\mu$ is the zero measure, which completes the proof. Given ${\cal C}$, a collection of elements in $L^1(G_1)$ or $M(G_1)$, let $$Z({\cal C})=\bigcap_{\delta\in {\cal C}} \left\{ \chi:\ \widehat{\delta}(\chi)=0\right\}.$$ This is the same notation for the zero set of an ideal in $L^1(G)$ that we introduced in Section 1. Given a set of measures ${\cal S}$ in $M(G_2)$, let $$\Phi({\cal S})=\left\{ \Phi(\nu):\ \nu \in {\cal S}\right\}\subset M(G_1).$$ In the above notation, if $\mu\in M(\Sigma)$ is weakly measurable, then $$Z\left( \Phi({\cal I}_{T_\phi}\mu)\right)= \psi^{-1}\left( Z ({\cal I}_{T_\phi}\mu)\right)= \psi^{-1}\left( \spec_{T_\phi}\mu\right).$$ \[lem3.2\] [**Proof.**]{} It is enough to establish the first equality; the second one follows from definitions. We have $$\begin{aligned} Z\left( \Phi({\cal I}_{T_\phi}\mu)\right) &=& \bigcap_{\delta\in \Phi({\cal I}_{T_\phi}(\mu)) } \left\{ \chi\in \Gamma:\ \widehat{\delta}(\chi)=0\right\}\\ &=& \bigcap_{g \in {\cal I}_{T_\phi}(\mu) } \left\{ \chi\in \Gamma:\ \widehat{\Phi(g)}(\chi)=0\right\}\\ &=& \bigcap_{g \in {\cal I}_{T_\phi}(\mu) } \left\{ \chi\in \Gamma:\ \widehat{g }(\psi(\chi))=0\right\}\\ &=& \bigcap_{g \in {\cal I}_{T_\phi}(\mu) } \psi^{-1}\left( Z(g) \right)\\ &=& \psi^{-1}\left( \bigcap_{g \in {\cal I}_{T_\phi}(\mu) } \left( Z(g) \right)\right)\\ &=& \psi^{-1}\left( Z \left( {\cal I}_{T_\phi}(\mu) \right)\right) =\psi^{-1}\left(\spec_{T_\phi}\mu\right).\end{aligned}$$ Suppose that $C$ is a nonempty closed $S$-subset of $\Gamma_2$ and that $\psi^{-1}(C)$ is an $S$-subset of $\Gamma_1$. Suppose that $\mu$ is in $M(\Sigma)$ and $\spec_{T_\phi}(\mu)\subset C$. Then $\spec_T\mu\subset \psi^{-1}(C)$. \[lem3.3\] [**Proof.**]{} We will use the notation of Lemma \[lem3.2\]. If $f\in \cI_{T_\phi}(\mu)$ and $t\in G_1$, then $f\in \cI_{T_\phi}(T_t \mu)$. So, for $A\in\Sigma$, we have $f*_{T_\phi}(T_t\mu)(A)=0$. 
But $$\begin{aligned} f*_{T_\phi}(T_t\mu)(A)&=& \int_\R T_{t-\phi(x)}\mu(A)f(x)\, dx\\ &=& \int_G T_{t- x }\mu(A)\, d\Phi(f),\end{aligned}$$ where $\Phi(f)$ is the homomorphic image of the measure $f(x)\,dx$. Hence, $\Phi(f) \in \cI_T ([t\mapsto T_t\mu(A)])$, and so $\Phi(\cI_{T_\phi}(\mu))\subset \cI_T([t\mapsto T_t\mu(A)])$, which implies that $$Z\left(\Phi(\cI_{T_\phi}(\mu))\right)\supset Z\left(\cI_T([t\mapsto T_t\mu(A)])\right) =\spec_T(t\mapsto T_t\mu(A)).$$ By Lemma \[lem3.2\], $$Z\left(\Phi(\cI_{T_\phi}(\mu))\right)= \psi^{-1}\left( \spec_{T_\phi}\mu \right)\subset \psi^{-1}(C).$$ Hence, $\spec_T(t\mapsto T_t\mu(A))\subset \psi^{-1}(C)$ for all $A\in\Sigma$, which by Proposition \[prop5.5\] implies that $\spec_T(\mu)\subset \psi^{-1}(C)$. Taking $G_1=G,\ G_2=\R$ and $\psi=\psi_\alpha$ to be one of the homomorphisms in Theorem \[structureorder\], and using the fact that $[0,\infty[$, $S_\alpha$, $C_\alpha\setminus D_\alpha$ are all $S$-sets, we obtain useful relationships between different types of analyticity. \[equiv-def\] Let $G$ be a locally compact abelian group with ordered dual group $\Gamma$, and let $P$ denote a measurable order on $\Gamma$. Suppose that $T$ is a sup path attaining representation of $G$ by isomorphisms of $M(\Sigma)$, such that $T_{\phi_\alpha}$ is sup path attaining, where $\phi_\alpha$ is as in Theorem \[structureorder\].\ (i) If $\mu\in M(\Sigma)$ and $\spec_T(\mu)\subset C_\alpha\setminus D_\alpha$. Then $$\spec_T(\mu)\subset S_\alpha\Leftrightarrow \spec_{T_{\phi_\alpha}}(\mu)\subset [0,\infty[.$$ (ii) If $\mu\in M(\Sigma)$ and $\spec_T(\mu)\subset C_{\alpha_0}$. Then $$\spec_T(\mu)\subset S_{\alpha_0}\Leftrightarrow \spec_{T_{\phi_{\alpha_0}}}(\mu)\subset [0,\infty[.$$ We can use the representation $T_\phi$ to convolve a measure $\nu\in M(G_2)$ with $\mu\in M(G_1)$: $$\nu*_{T_\phi}\mu(A)=\int_{G_2}T_{-\phi(x)}\mu(A)d\nu(x)=\int_{G_2}\mu(A-\phi(x))\,d\nu(x) ,$$ for all Borel $A\subset G_1$. Alternatively, we can convolve $\Phi(\nu)$ in the usual sense of [@hr1 Definition 19.8] with $\mu$ to yield another measure in $M(G_1)$, defined on the Borel subsets of $G_1$ by $$\Phi(\nu)*\mu(A)=\int_{G_1}\int_{G_1}1_A(x+y)d\Phi(\nu)(x)d\mu(y).$$ Using (\[continuous-image2\]), we find that $$\begin{aligned} \Phi(\nu)*\mu(A) &=& \int_{G_1}\int_{G_2}1_A(\phi(t)+y)d\nu(t)d\mu(y)\\ &=& \int_{G_2}\mu(A-\phi(t))d\nu(t) =\nu *_{T_\phi}\mu(A).\end{aligned}$$ Thus, $$\Phi(\nu)*\mu=\nu*_{T_\phi}\mu. \label{7.feb.95.2}$$ We end the section with homomorphism theorems, which complement the well-known homomorphism theorems for $L^p$-multipliers (see Edwards and Gaudry [@eg Appendix B]). In these theorems, we let $G_1$ act on $M(G_1)$ by translation. That is, if $\mu\in M(G_1)$, $x\in G_1$, and $A$ is a Borel subset of $G_1$, then $$T_x\mu(A)=\mu(A+x).$$ Let $\phi:\ G_2\rightarrow G_1$ be a continuous homomorphism. By Example \[exhypa\], $T$ and $T_\phi$ are sup path attaining. (Recall that if $t\in G_2$, $\mu\in M(G_1)$, then $T_{\phi(t)}\mu(A)=\mu(A+\phi(t))$.) A simple exercise with definitions shows that for $\mu\in M(G_1)$ $$\spec_T\mu=\supp \widehat{\mu}.$$ Suppose that $\Gamma_1$ and $\Gamma_2$ contain measurable orders $P_1$ and $P_2$, respectively, and $\psi:\ \Gamma_1\rightarrow \Gamma_2$ is a continuous, order-preserving homomorphism (that is, $\psi(\overline{P_1})\subset \overline{P_2}$). Suppose that there is a positive constant $N(\nu)$ such that for all $f\in H^1(G_2)$ $$\|\nu*f\|_1\leq N(\nu)\|f\|_1. 
\label{hom1eq}$$ Then $$\|\Phi(\nu)*\mu\|\leq N(\nu)\|\mu\| \label{hom2eq1}$$ for all Borel measures in $M(G_1)$ such that $\widehat{\mu}$ is supported in $\overline{P_1}$. \[homth2\] [**Proof.**]{}We have $\Phi(\nu)*\mu=\nu*_{T_\phi}\mu$. Also $\overline{P_2}$ is a ${\cal T}$-set. So (\[hom2eq1\]) will follow from Theorem \[trans-thm\] once we show that $\spec_{T_\phi}\mu\subset \overline{P_2}$. For that purpose, we use Lemma \[lem3.1\]. We have $$\spec_T\mu =\supp \widehat{\mu}\subset \overline{P_1},$$ and $\psi(\overline{P_1})\subset \overline{P_2}$ is an $S$-set. Hence $\spec_{T_\phi}\mu\subset \overline{P_2}$ by Lemma \[lem3.1\]. The following special case of Theorem \[homth2\] deserves a separate statement. With the above notation, suppose that there is a positive constant $N(\nu)$ such that for all $f\in H^1(G_2)$ $$\|\nu*f\|_1\leq N(\nu)\|f\|_1. \label{hom1eq1}$$ Then for all $f\in H^1(G_1)$ we have $$\|\Phi(\nu)*f\|_1\leq N(\nu)\|f\|_1. \label{hom1eq2}$$ \[homth1\] Suppose that there is a positive constant $N(\nu)$ such that for all $f\in H^1(\R)$ $$\|\nu*f\|_1\leq N(\nu)\|f\|_1. \label{hom2eq11}$$ Then for all $\mu\in M(G)$ with support of $\widehat{\mu}$ contained in $C_\alpha\setminus D_\alpha$, where $\alpha<\alpha_0$, we have $$\|\Phi_\alpha(\nu)*\mu\|_1\leq N(\nu)\|\mu\|. \label{hom2eq2}$$ \[varianthomth2\] [**Proof.**]{}The proof is very much like the proof of Theorem \[homth2\]. We have $\Phi_\alpha (\nu)*\mu=\nu*_{T_\phi}\mu$. Apply Theorem \[trans-thm\], taking into consideration that $$\spec_T\mu=\supp \widehat{\mu}\subset C_\alpha\setminus D_\alpha$$ is an $S$-set and so $$\spec_{T_{\phi_\alpha}}\mu\subset \psi_\alpha(C_\alpha\setminus D_\alpha)\subset [0,\infty[.$$ Decomposition of Analytic Measures ================================== Define measures $\mu_{\alpha_0}$ and $d_\alpha$ by their Fourier transforms: $\widehat{\mu_{\alpha_0}}=1_{C_{\alpha_0}}$, and $\widehat{d_\alpha}=1_{C_\alpha\setminus D_\alpha}$. Then we have the following decomposition theorem. \[decomp of measures\] Let $G$ be a locally compact abelian group with an ordered dual group $\Gamma$. Suppose that $T$ is a sup path attaining representation of $G$ in $M(\Sigma)$. Then for any weakly analytic measure $\mu \in M(\Sigma)$ we have that the set of $\alpha$ for which $d_\alpha *_T\mu \ne 0$ is countable, and that $$\mu = \mu_{\alpha_0}*_T\mu+ \sum_\alpha d_\alpha *_T\mu ,$$ where the right side converges unconditionally in norm in $M(\Sigma)$. Furthermore, there is a positive constant $c$, depending only upon $T$, such that for any signs $\epsilon_\alpha = \pm 1$ we have $$\left\| \sum_\alpha \epsilon_\alpha d_\alpha*_T\mu \right\| \le c \|\mu\| .$$ One should compare this theorem to the well-known results from Littlewood-Paley theory on $L^p(G)$, where $1<p<\infty$ (see Edwards and Gaudry [@eg]). For $L^p(G)$ with $1<p<\infty$, it is well-known that the subgroups $(C_\alpha)$ form a Littlewood-Paley decomposition of the group $\Gamma$, which means that the martingale difference series $$f= \mu_{\alpha_0}*f+ \sum_\alpha d_\alpha *f$$ converges unconditionally in $L^p(G)$ to $f$. Thus, Theorem \[decomp of measures\] above may be considered as an extension of Littlewood-Paley Theory to spaces of analytic measures. The next result, crucial to our proof of Theorem \[decomp of measures\], is already known in the case that $G = \T^n$ with the lexicographic order on the dual. This is due to Garling [@gar], and is a modification of the celebrated inequalities of Burkholder. 
Our result can be obtained directly from the result in [@gar] by combining the techniques of [@ams3] with the homomorphism theorem \[homth2\]. However, we shall take a different approach, in effect reproducing Garling’s proof in this more general setting. \[ucc of h1 fts\] Suppose that $G$ is a locally compact group with ordered dual $\Gamma$. Then for $f\in H^1(G)$, for any set $\{\alpha_j\}_{j=1}^n$ of indices less than $\alpha_0$, and for any numbers $\epsilon_j \in \{0,\pm 1\}$ ($1 \le j \le n$), there is an absolute constant $a>0$ such that $$\label{34} \left\| \sum_{j=1}^n \epsilon_j d_{\alpha_j} *f \right\|_1 \leq a \|f\|_1.$$ Furthermore, $$\label{ucc of h1 equation} f = \mu_{\alpha_0}*f + \sum_\alpha d_\alpha*f,$$ where the right hand side converges unconditionally in the norm topology on $H^1(G)$. [**Proof.**]{} The second part of Theorem \[ucc of h1 fts\] follows easily from the first part and Fourier inversion. Now let us show that if we have the result for compact $G$, then we have it for locally compact $G$. Let $\pi_{\alpha_0} :\ \Gamma\rightarrow \Gamma/C_{\alpha_0}$ denote the quotient homomorphism of $\Gamma$ onto the discrete group $\Gamma/C_{\alpha_0}$ (recall that $C_{\alpha_0}$ is open), and define a measurable order on $\Gamma/C_{\alpha_0}$ to be $\pi_{\alpha_0}(P)$. By Remarks \[remarkstructureorder\] (c), the decomposition of the group $\Gamma/C_{\alpha_0}$ that we get by applying Theorem \[structureorder\] to that group, is precisely the image by $\pi_{\alpha_0}$ of the decomposition of the group $\Gamma$. Let $G_0$ denote the compact dual group of $\Gamma/C_{\alpha_0}$. Thus if Theorem \[ucc of h1 fts\] holds for $ H^1(G_0)$, then applying Theorem \[homth2\], we see that Theorem \[ucc of h1 fts\] holds for $G$. Henceforth, let us suppose that $G$ is compact. We will suppose that the Haar measure on $G$ is normalized, so that $G$ with Haar measure is a probability space. Since each one of the subgroups $C_\alpha$, and $D_\alpha$ ($\alpha<\alpha_0$) is open, it follows that their annihilators in $G$, $G_\alpha=A(G,C_\alpha)$, and $A(G,D_\alpha)$, are compact. Let $\mu_\alpha$ and $\nu_\alpha$ denote the normalized Haar measures on $A(G,C_\alpha)$ and $A(G,D_\alpha)$, respectively. We have $\widehat{\mu}_\alpha=1_{C_\alpha}$ (for all $\alpha$), and $\widehat{\nu}_\alpha=1_{D_\alpha}$ (for all $\alpha\neq \alpha_0$), so that $d_\alpha=\mu_\alpha -\nu_\alpha$. For each $\alpha$, let ${\cal B}_\alpha$ denote the $\sigma$-algebra of subsets of $G$ of the form $A+G_\alpha$, where $A$ is a Borel subset of $G$. We have ${\cal B}_{\alpha_1}\subset {\cal B}_{\alpha_2}$, whenever $\alpha_1>\alpha_2$. It is a simple matter to see that for $f\in L^1(G)$, the conditional expectation of $f$ with respect to ${\cal B}_\alpha$ is equal to $\mu_\alpha*f$ (see [@eg Chapter 5, Section 2]). We may suppose without loss of generality that $\alpha_1>\alpha_2>\ldots>\alpha_n$. Thus the $\sigma$-algebras ${\cal B}_{\alpha_k}$ form a filtration, and the sequence $(d_{\alpha_1}*f, d_{\alpha_2}*f,\ldots,d_{\alpha_n}*f)$ is a martingale difference sequence with respect to this filtration. In that case, we have the following result of Burkholder [@bur Inequality (1.7)], and [@bur1]. 
If $0<p<\infty$, then there is a positive constant $c$, depending only upon $p$, such that $$\label{burkholder's inequality} \left\| \sup_{1 \le k \le n} \left( \sum_{j=1}^k \epsilon_j d_{\alpha_j}*f\right) \right\|_p \le c \left\| \sup_{1 \le k \le n} \left( \sum_{j=1}^k d_{\alpha_j}*f\right) \right\|_p.$$ For any index $\alpha$, $0<p<\infty$, and $f\in H^1(G)\cap L^p(G)$, we have almost everywhere on $G$ $$\left| \mu_\alpha *f \right|^p \leq \mu_\alpha*\left| f\right|^p, \label{eq1.95}$$ where $\mu_\alpha$ is the normalized Haar measure on the compact subgroup $G_\alpha=A(G,C_\alpha)$. \[improved jensen\] [**Proof.**]{} The dual group of $G_\alpha$ is $\Gamma/C_\alpha$ and can be ordered by the set $\pi_\alpha (P)$, where $\pi_\alpha$ is the natural homomorphism of $\Gamma$ onto $\Gamma/C_\alpha$. Next, by convolving with an approximate identity for $L^1(G)$ consisting of trigonometric polynomials, we may assume that $f$ is a trigonometric polynomial. Then we see that for each $x \in G$ that the function $y\mapsto f(x+y)$, $y\in G_\alpha$, is in $H^1(G_\alpha)$. To verify this, it is sufficient to consider the case when $f$ is a character in $P$. Then $$f(x+y)= f(x) \pi_\alpha (f)(y),$$ and by definition $\pi_\alpha(f)$ is in $H^1(G_\alpha)$. Now we have the following generalization of Jensen’s Inequality, due to Helson and Lowdenslager [@hl1 Theorem 2]. An independent proof based on the ideas of this section is given in [@ams3]. For all $g\in H^1(G)$ $$\left|\int_G g(x) d x\right| \leq \exp \int_G\log |g(x)|d x. \label{jensen's inequality}$$ Apply (\[jensen’s inequality\]) to $y\mapsto f(x+y)$, $y\in G_\alpha$ to obtain $$\left|\int_{G_\alpha} f(x+y) d \mu_\alpha(y)\right| \leq \exp \int_{G_\alpha}\log |f(x+y)|d \mu_\alpha(y).$$ Extending the integrals to the whole of $G$ (since $\mu_\alpha$ is supported on $G_\alpha$), raising both sides to the $p$th power, and then applying the usual Jensen’s inequality for the logarithmic function on finite measure spaces, we obtain $$\begin{aligned} \left|\int_G f(x+y) d \mu_\alpha(y)\right|^p &\leq& \exp \int_G \log |f(x+y)|^p d \mu_\alpha(y)\\ &\leq& \int_G |f(x+y)|^pd \mu_\alpha(y).\end{aligned}$$ Changing $y$ to $-y$, we obtain the desired inequality.\ Let us continue with the proof of Theorem \[ucc of h1 fts\]. We may suppose that $f$ is a mean zero trigonometric polynomial, and that the spectrum of $f$ is contained in $\bigcup_{j=1}^n C_{\alpha_j} \setminus D_{\alpha_j} $, that is to say $$f = \sum_{j=1}^n d_{\alpha_j} * f .$$ By Lemma \[improved jensen\], we have that $$\begin{aligned} \sup_{1\leq k\leq n} \left| \mu_{\alpha_k}*f \right| &=& \left( \sup_{1\leq k\leq n} \left| \mu_{\alpha_k}*f \right|^{1/2}\right)^2 \nonumber\\ &\leq& \left( \sup_{1\leq k\leq n} \mu_{\alpha_k}*| f|^{1/2} \right)^2. \label{ucc 2}\end{aligned}$$ Also, we have that $(\mu_{\alpha_j}*|f|^{1/2})_{j=1}^n$ is a martingale with respect to the filtration $({\cal B}_j)_{j=1}^n$. Hence, by Doob’s Maximal Inequality [@doob Theorem (3.1), p. 317] we have that $$\begin{aligned} \left\| \sup_{1\leq k\leq n'} \mu_{\beta_k}*| f|^{1/2} \right\|_2^2 &\leq& 4 \left\| \mu_{\beta_{n'}}*| f|^{1/2} \right\|_2^2 \nonumber\\ &\leq& 4 \left\| |f|^{1/2}\right\|_2^2 = 4\|f\|_1. 
\label{ucc 3}\end{aligned}$$ The desired inequality follows now upon combining Burkholder’s Inequality (\[burkholder’s inequality\]) with (\[ucc 2\]), and (\[ucc 3\]).\ [**Proof of Theorem \[decomp of measures\].**]{} Transferring inequality (\[34\]) by using Theorem \[trans-thm\], we obtain that for any set $\{\alpha_j\}_{j=1}^n$ of indices less than $\alpha_0$, and for any numbers $\epsilon_j \in \{0,\pm 1\}$ ($1 \le j \le n$), there is a positive constant $c$, depending only upon the representation $T$, such that $$\left\| \sum_{j=1}^n \epsilon_j d_{\alpha_j} *_T \mu \right\| \leq c \|\mu\|.$$ Now suppose that $\{\alpha_j\}_{j=1}^\infty$ is a countable collection of indices less than $\alpha_0$. Then by Bessaga and Pełczyński [@bp], the series $\sum_{j=1}^\infty d_{\alpha_j} *_T \mu$ is unconditionally convergent. In particular, for any $\delta>0$, for only finitely many $k$ do we have that $\| d_{\alpha_k} *_T \mu \| > \delta$. Since this is true for all such countable sets, we deduce that the set of $\alpha$ for which $ d_\alpha *_T \mu \ne 0$ is countable. Hence we have that $\sum_\alpha d_{\alpha} *_T \mu$ is unconditionally convergent to some measure, say $\nu$. Clearly $\nu$ is weakly measurable. To prove that $\mu=\nu$, it is enough by Proposition \[prop hypa\] to show that for every $A\in\Sigma$, we have $T_t\mu(A)=T_t\nu(A)$ for almost all $t\in G$. We first note that since for every $f\in L^1(G)$ the series $\mu_{\alpha_0}*f+ \sum_\alpha d_\alpha *f$ converges to $f$ in $L^1(G)$, it follows that, for every $g\in L^\infty(G)$, the series $\mu_{\alpha_0}*g+ \sum_\alpha d_\alpha *g$ converges to $g$ in the weak-\* topology of $L^\infty(G)$. Now on the one hand, for $t\in G$ and $A\in \Sigma$, we have $\mu_{\alpha_0}*_TT_t\mu(A)+ \sum_\alpha d_\alpha *_T T_t\mu(A)=T_t\nu(A)$, because of the (unconditional) convergence of the series $\mu_{\alpha_0}*_T\mu+ \sum_\alpha d_\alpha *_T\mu$ to $\nu$. On the other hand, by considering the $L^\infty(G)$ function $t\mapsto T_t(A)$, we have that $\mu_{\alpha_0}*_TT_t\mu(A)+ \sum_\alpha d_\alpha *_T T_t\mu(A)= \mu_{\alpha_0}*T_t\mu(A)+ \sum_\alpha d_\alpha * T_t\mu(A)=T_t\mu(A)$, weak \*. Thus $T_t\mu(A)=T_t\nu(A)$ for almost all $t\in G$, and the proof is complete. Generalized F. and M. Riesz Theorems ==================================== Throughout this section, we adopt the notation of Section 5, specifically, the notation and assumptions of Theorem \[decomp of measures\]. Suppose that $T$ is a sup path attaining representation of $\R$ by isomorphisms of $M(\Sigma)$. In [@amss], we proved the following result concerning bounded operators $\cP$ from $M(\Sigma)$ into $M(\Sigma)$ that commute with the representation $T$ in the following sense: $$\cP\circ T_t=T_t\circ \cP$$ for all $t\in \R$. Suppose that $T$ is a representation of $\R$ that is sup path attaining, and that $\cP$ commutes with $T$. Let $\mu\in M(\Sigma)$ be weakly analytic. Then $\cP \mu$ is also weakly analytic. \[caseofR\] To describe an interesting application of this theorem from [@amss], let us recall the following. Let $T$ be a sup path attaining representation of $G$ in $M(\Sigma)$. A weakly measurable $\sigma$ in $M(\Sigma)$ is called quasi-invariant if $T_t\sigma$ and $\sigma$ are mutually absolutely continuous for all $t\in G$. Hence if $\sigma$ is quasi-invariant and $A\in \Sigma$, then $|\sigma|(A)=0$ if and only if $|T_t(\sigma)|(A)=0$ for all $t\in G$. 
\[qi\] Using Theorem \[caseofR\] we obtained in [@amss] the following extension of results of de Leeuw-Glicksberg [@deleeuwglicksberg] and Forelli [@forelli], concerning quasi-invariant measures. Suppose that $T$ is a sup path attaining representation of $\R$ by isometries of $M(\Sigma)$. Suppose that $\mu\in M(\Sigma)$ is weakly analytic, and $\sigma$ is quasi-invariant. Write $\mu=\mu_a+\mu_s$ for the Lebesgue decomposition of $\mu$ with respect to $\sigma$. Then both $\mu_a$ and $\mu_s$ are weakly analytic. In particular, the spectra of $\mu_a$ and $\mu_s$ are contained in $[0,\infty)$. \[lebesgue-decomp-forR\] Our goal in this section is to extend Theorems \[caseofR\] above to representations of a locally compact abelian group $G$ with ordered dual group $\Gamma$. More specifically, we want to prove the following theorems. \[application1\] Suppose that $T$ is a sup path attaining representation of $G$ by isomorphisms of $M(\Sigma)$ such that $T_{\phi_\alpha}$ is sup path attaining for each $\alpha$. Suppose that $\cP$ commutes with $T$ in the sense that $$\cP\circ T_t=T_t\circ \cP$$ for all $t\in G$. Let $\mu\in M(\Sigma)$ be weakly analytic. Then $\cP \mu$ is also weakly analytic. As shown in [@amss Theorem (4.10)] for the case $G=\R$, an immediate corollary of Theorem \[application1\] is the following result. \[application2\] Suppose that $T$ is a sup path attaining representation of $G$ by isometries of $M(\Sigma)$, such that $T_{\phi_\alpha}$ is sup path attaining for each $\alpha$. Suppose that $\mu\in M(\Sigma)$ is weakly analytic with respect to $T$, and $\sigma$ is quasi-invariant with respect to $T$. Write $\mu=\mu_a+\mu_s$ for the Lebesgue decomposition of $\mu$ with respect to $\sigma$. Then both $\mu_a$ and $\mu_s$ are weakly analytic with respect to $T$. In particular, the $T$-spectra of $\mu_a$ and $\mu_s$ are contained in $\overline{P}$. [**Proof of Theorem \[application1\].**]{}Write $$\mu=\mu_{\alpha_0}*_T\mu +\sum_\alpha d_\alpha *_T\mu,$$ as in (\[decomp of measures\]), where the series converges unconditionally in $M(\Sigma)$. Then $$\label{-3} \cP\mu=\cP(\mu_{\alpha_0}*_T\mu) +\sum_\alpha \cP(d_\alpha *_T\mu).$$ It is enough to show that the $T$-spectrum of each term is contained in $\overline{P}$. Consider the measure $\mu_{\alpha_0}*_T\mu$. We have $\spec_T(\mu_{\alpha_0}*_T\mu)\subset S_{\alpha_0}$. Hence by Theorem \[equiv-def\], $\mu_{\alpha_0}*_T\mu$ is $T_{\phi_{\alpha_0}}$-analytic. Applying Theorem \[caseofR\], we see that $$\label{-2} \spec_{T_{\phi_{\alpha_0}}} (\cP (\mu_{\alpha_0}*_T\mu))\subset [0,\infty[.$$ Since $\cP$ commutes with $T$, it is easy to see from Proposition \[proposition5.9\] and Corollary \[corollary5.10\] that $$\spec_T (\cP (\mu_{\alpha_0}*_T\mu))\subset C_{\alpha_0}.$$ Hence by (\[-2\]) and Theorem \[equiv-def\], $$\spec_T (\cP (\mu_{\alpha_0}*_T\mu))\subset S_{\alpha_0},$$ which shows the desired result for the first term of the series in (\[-3\]). The other terms of the series (\[-3\]) are handled similarly.\ [**Acknowledgments**]{} The second author is grateful for financial support from the National Science Foundation (U.S.A.) and the Research Board of the University of Missouri. [Dillo 83]{} N. Asmar and S. Montgomery-Smith, [*Hahn’s embedding theorem for orders and harmonic analysis on groups with ordered duals*]{}, Colloq. Math. [**70**]{} (1996), 235–252. N. Asmar and S. Montgomery-Smith, [*Analytic measures and Bochner measurability*]{}, Bull. Sci. Math. [**122**]{} (1998), 39–66. N. Asmar and S. 
Montgomery-Smith, [*Hardy martingales and Jensen’s Inequality*]{}, Bull. Austral. Math. Soc. [**55**]{} (1997), 185–195. N. Asmar, S. Montgomery-Smith, and S. Saeki, [*Transference in spaces of measures*]{}, J. Functional Analysis, [**165**]{} (1999), 1–23. C. Bessaga and A. Pełczyński, [*On bases and unconditional convergence of series in Banach spaces*]{}, Studia Math. [**17**]{} (1958), 151–164. S. Bochner, [*Boundary values of analytic functions in several variables and almost periodic functions*]{}, Ann. of Math., [**45**]{} (1944), 708–722. D. L. Burkholder, [*A geometrical characterization of Banach spaces in which martingale difference sequences are unconditional*]{}, Ann. Math. Statist., [**37**]{} (1966), 1494–1504. D. L. Burkholder, [*Martingale transforms*]{}, The Annals of Probability, [**9**]{} (1981), 997–1011. K. De Leeuw and I. Glicksberg, [*Quasi-invariance and measures on compact groups*]{}, Acta Math., [**109**]{} (1963), 179–205. J. L. Doob, “Stochastic Processes”, Wiley Publications in Statistics, New York 1953. R. E. Edwards, and G. I. Gaudry, “Littlewood-Paley and Multiplier Theory”, Ergebnisse der Mathematik und ihrer Grenzgebiete, Springer-Verlag, No. 90, Berlin, 1977. F. Forelli, [*Analytic and quasi-invariant measures*]{}, Acta Math., [**118**]{} (1967), 33–59. L. Fuchs, “ Partially ordered algebraic systems”, Pergamon Press, Oxford, New York, 1960. D. J. H. Garling, [*On martingales with values in a complex Banach space*]{}, Math. Proc. Camb. Phil. Soc. [**104**]{} (1988), 399–406. D. J. H. Garling, [*Hardy martingales and the unconditional convergence of martingales*]{}, Bull. London Math. Soc. [**23**]{} (1991), 190–192. H. Helson and D. Lowdenslager, [*Prediction theory and Fourier series in several variables*]{}, Acta Math. [**99**]{} (1958), 165–202. H. Helson and D. Lowdenslager, [*Prediction theory and Fourier series in several variables II*]{}, Acta Math. [**106**]{} (1961), 175–212. E. Hewitt and S. Koshi, [*Orderings in locally compact Abelian groups and the theorem of F. and M. Riesz*]{}, Math. Proc. Camb. Phil. Soc. [**93**]{} (1983), 441–457. E. Hewitt, S. Koshi, and Y. Takahashi, [*The F. and M. Riesz Theorem revisited*]{}, Math. Scandinav., [**60**]{} (1987), 63–76. E. Hewitt and K. A. Ross, “Abstract Harmonic Analysis I,” $2^{nd}$ Edition, Grundlehren der Math. Wissenschaften, Band 115, Springer–Verlag, Berlin 1979. E. Hewitt and K. A. Ross, “Abstract Harmonic Analysis II,” Grundlehren der Math. Wissenschaften in Einzeldarstellungen, Band 152, Springer–Verlag, New York, 1970. W. Rudin, “Fourier Analysis on Groups,” Interscience Tracts in Pure and Applied Mathematics, No. 12, John Wiely, New York, 1962. H. Yamaguchi, [*A property of some Fourier-Stieltjes transforms*]{}, Pac. J. Math. [**108**]{} (1983), 243–256.
{ "pile_set_name": "ArXiv" }
TECHNION-PH-2016-05\ EFI 16-09\ March 2016\ **From $\Xi_b \to \Lambda_b \pi$ to $\Xi_c \to \Lambda_c \pi$** Michael Gronau *Physics Department, Technion, Haifa 32000, Israel* Jonathan L. Rosner *Enrico Fermi Institute and Department of Physics, University of Chicago* *Chicago, IL 60637, U.S.A.* > Using a successful framework for describing S-wave hadronic decays of light hyperons induced by a subprocess $s \to u (\bar u d)$, we presented recently a model-independent calculation of the amplitude and branching ratio for $\Xi^-_b \to \Lambda_b \pi^-$ in agreement with a LHCb measurement. The same quark process contributes to $\Xi^0_c \to \Lambda_c \pi^-$, while a second term from the subprocess $cs \to cd$ has been related by Voloshin to differences among total decay rates of charmed baryons. We calculate this term and find it to have a magnitude approximately equal to the $s \to u (\bar u d)$ term. We argue for a negligible relative phase between these two contributions, potentially due to final state interactions. However, we do not know whether they interfere destructively or constructively. For constructive interference one predicts ${\cal B}(\Xi_c^0 \to \Lambda_c \pi^-) = (1.94 \pm 0.70)\times 10^{-3}$ and ${\cal B}(\Xi_c^+ \to \Lambda_c \pi^0) = (3.86 \pm 1.35)\times 10^{-3}$. For destructive interference, the respective branching fractions are expected to be less than about $10^{-4}$ and $2 \times 10^{-4}$. INTRODUCTION {#sec:intro} ============ Most decays of charmed and beauty baryons observed up to now occur by $c$ and $b$ quark decays. In strange heavy flavor baryons an $s$ quark may decay instead via the heavy flavor conserving subprocess $s \to u (\bar u d)$ or $su \to ud$, with the $c$ or $b$ quark acting as a spectator. In strange charmed baryons an additional Cabibbo-suppressed subprocess $c s \to c d$ can contribute. Early investigations of heavy flavor conserving two body hadronic decays of charmed and beauty baryons involving a low energy pion have been performed in Ref. [@Cheng:1992ff; @Sinha:1999tc; @Voloshin:2000et; @Li:2014ada; @Faller:2015oma; @Cheng:2015ckx]. In these studies a soft pion limit, partial conservation of the axial-vector current (PCAC) and current algebra have implied expressions for decay amplitudes in terms of matrix elements of four-fermion operators between initial and heavy baryon states. These matrix elements are difficult to estimate and depend strongly on models for heavy baryon wave functions. Recently we proposed a model-independent approach for studying the decay $\Xi_b^- \to \Lambda_b \pi^-$ [@Gronau:2015jgh] which had just been observed by the LHCb collaboration at CERN [@Aaij:2015yoy]. In the heavy $b$ quark limit this decay by $s \to u (\bar u d)$ proceeds purely via an S-wave. Assuming that properties of the light diquark in $\Xi_b^-$ are not greatly affected by the heavy nature of the spectator $b$ quark, the decay amplitude for $\Xi_b^- \to \Lambda_b \pi^-$ may be related to amplitudes for S-wave nonleptonic decays of $\Lambda$, $\Sigma$, and $\Xi$ which have been measured with high precision [@Roos:1982sd]. We calculated a branching fraction for $\Xi^-_b \to \Lambda_b \pi^-$ consistent with the range allowed in the LHCb analysis. Our purpose now is to extend this calculation to charmed baryon decays $\Xi_c^0 \to \Lambda_c \pi^-$ and $\Xi_c^+ \to \Lambda_c \pi^0$. Sec. \[sec:supi\] summarizes the result of Ref. 
[@Gronau:2015jgh] for the amplitude of $\Xi^-_b \to \Lambda_b \pi^-$, in which the underlying quark transition is $s \to u (\bar u d)$. This result is then applied to a contribution of the same quark subprocess to $\Xi^0_c \to \Lambda_c \pi^-$. A second term in this amplitude due to the subprocess $c s \to c d$ is studied in Sec. \[sec:cscd\]. The total amplitude and the branching ratios for $\Xi^0_c \to \Lambda_c \pi^-$ and $\Xi_c^+ \to \Lambda_c \pi^0$ are calculated in Sec. \[sec:total\] while Section \[sec:con\] concludes.

$s \to u (\bar u d)$ TERM IN $\Xi^-_b \to \Lambda_b \pi^-$ AND $\Xi^0_c \to \Lambda_c \pi^-$ {#sec:supi}
============================================================================================

We will use notations which are common for describing hadronic hyperon decays [@Roos:1982sd]. The effective Lagrangian for $B_1 \to B_2 \pi$, given by $$\label{eqn:SP}
{\cal L}_{\rm eff} = G_F\, m_\pi^2\, [\bar\psi_2 (A + B \gamma_5)\psi_1]\,\phi_\pi~,$$ involves two dimensionless parameters $A$ and $B$ describing S-wave and P-wave amplitudes, respectively. Here $G_F = 1.16638 \times 10^{-5}$ GeV$^{-2}$ is the Fermi coupling constant. The partial width is $$\label{eqn:rate}
\Gamma(B_1 \to B_2 \pi) = \frac{G_F^2 m_\pi^4}{8\pi}\,\frac{q}{m_1^2}\left\{ [(m_1+m_2)^2-m_\pi^2]\,|A|^2 + [(m_1-m_2)^2-m_\pi^2]\,|B|^2 \right\}~,$$ where $q$ is the magnitude of the final three-momentum of either particle in the $B_1$ rest frame. Consider first $\Xi_b^- \to \Lambda_b \pi^-$ studied in Ref. [@Gronau:2015jgh]. In the heavy $b$ quark limit the light quarks $s$ and $d$ in $\Xi^-_b=bsd$ are in an S-wave state antisymmetric in flavor with total spin $S = 0$. The light quarks $u$ and $d$ in the $\Lambda_b = bud$ are also in an S-wave state with $I = S = 0$. In the decay $\Xi^-_b \to \Lambda_b \pi^-$, which proceeds via $s \to u (\bar u d)$, the $b$ quark acts as a spectator. The transition among light quarks is thus one with $J^P = 0^+ \to 0^+ \pi$, and hence is purely a parity-violating S wave. Thus it may be related to parity-violating S-wave amplitudes in nonleptonic decays of the hyperons $\Lambda$, $\Sigma$, and $\Xi$.

S-wave hadronic decays of hyperons, $B_1 \to B_2 \pi$, where the baryons $B_1$ and $B_2$ belong to the lowest SU(3) octet baryons, have been known for fifty years to be described well by using PCAC and current algebra and assuming octet dominance [@Sugawara:1965zza; @Suzuki:1965zz]. An equivalent and somewhat more compact parametrization of these amplitudes based on duality was suggested a few years later [@Nussinov:1969hp]. All hyperon S-wave amplitudes may be expressed in terms of an overall normalization parameter $x_0$ and a parameter $F$ describing the ratio of antisymmetric and symmetric three-octet coupling. (In the soft pion limit the commutator of the axial charge with the weak Hamiltonian represents a third octet in addition to the two baryons.) Thus one finds [@Gronau:2015jgh; @Nussinov:1969hp] $$\label{eqn:amps}
\begin{aligned}
A(\Lambda \to p \pi^-) & = & -(2F+1)\, x_0/\sqrt{6}~,\\
A(\Sigma^+ \to n \pi^+) & = & 0~,\\
A(\Sigma^- \to n \pi^-) & = & -(2 F - 1)\, x_0~,\\
A(\Xi^- \to \Lambda \pi^-) & = & (4F -1)\, x_0/\sqrt{6}~,
\end{aligned}$$ while amplitudes involving a neutral pion are related to these amplitudes by isospin. Using best fit values $F = 1.652,~x_0= 0.861$, one finds good agreement between predicted and measured amplitudes as shown in Table I (see [@Gronau:2015jgh]). The relative signs of S-wave amplitudes are convention-dependent and differ from those in Ref. [@Roos:1982sd]. An overall sign change is also permitted, associated with two possible signs of $x_0$.
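As a purely arithmetical cross-check of the fit quoted above, the predicted amplitudes in Eq. (\[eqn:amps\]) and their isospin-related partners can be evaluated directly from $F$ and $x_0$; the short script below (an illustration added here, not part of the original analysis) reproduces the “Predicted value” column of Table I.

\begin{verbatim}
# Evaluate the predicted S-wave hyperon amplitudes for the best-fit
# parameters F = 1.652, x0 = 0.861 (cf. Table I).
from math import sqrt

F, x0 = 1.652, 0.861
predictions = {
    "Lambda -> p pi-"      : -(2*F + 1) * x0 / sqrt(6.0),
    "Lambda -> n pi0"      :  (2*F + 1) * x0 / (2*sqrt(3.0)),
    "Sigma+ -> n pi+"      :  0.0,
    "Sigma+ -> p pi0"      : -(2*F - 1) * x0 / sqrt(2.0),
    "Sigma- -> n pi-"      : -(2*F - 1) * x0,
    "Xi0    -> Lambda pi0" :  (4*F - 1) * x0 / (2*sqrt(3.0)),
    "Xi-    -> Lambda pi-" :  (4*F - 1) * x0 / sqrt(6.0),
}
for decay, A in predictions.items():
    print(f"{decay:24s}  A = {A:+.2f}")
# prints -1.51, +1.07, +0.00, -1.40, -1.98, +1.39, +1.97
\end{verbatim}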
  --------------------------- --------------------------- ---------------------- -----------
  Decay                       Predicted amplitude $A$      Observed               Predicted
                                                           value [@Roos:1982sd]   value
  $\Lambda \to p \pi^-$       $-(2F+1)x_0/\sqrt{6}$        $-1.47 \pm 0.01$       $-1.51$
  $\Lambda \to n \pi^0$       $(2F+1)x_0/(2\sqrt{3})$      $1.07 \pm 0.01$        $1.07$
  $\Sigma^+ \to n \pi^+$      0                            $0.06 \pm 0.01$        0
  $\Sigma^+ \to p \pi^0$      $-(2F-1)x_0/\sqrt{2}$        $-1.48 \pm 0.05$       $-1.40$
  $\Sigma^- \to n \pi^-$      $-(2F-1)x_0$                 $-1.93 \pm 0.01$       $-1.98$
  $\Xi^0 \to \Lambda \pi^0$   $(4F-1)x_0/(2\sqrt{3})$      $1.55 \pm 0.03$        $1.39$
  $\Xi^- \to \Lambda \pi^-$   $(4F-1)x_0/\sqrt{6}$         $2.04 \pm 0.01$        $1.97$
  --------------------------- --------------------------- ---------------------- -----------

  : Predicted and observed S-wave amplitudes $A$ for nonleptonic hyperon decays. Predicted values are for best-fit parameters $F = 1.652$, $x_0 = 0.861$. \[tab:amps\]

In the decay $\Xi^-_b \to \Lambda_b \pi^-$, which also proceeds by $s \to u (\bar u d)$, the light diquarks $sd$ and $ud$ in the initial and final baryons each form a spinless antisymmetric $3^*$ of flavor SU(3). The weak transition occurs between this pair of diquarks while the $b$ quark acts as a spectator. Neglecting the effect of the heavy $b$ quark on relevant properties of the light diquarks, this amplitude is expected to be equal to an amplitude for a transition between light hyperons, $\Lambda \to \Lambda (\bar u u)$, in which the diquarks in initial and final hyperons are also in an antisymmetric $3^*$ while the $s$ quark acts as a spectator. Thus one finds [@Gronau:2015jgh] $$\label{eqn:xibamp}
A(\Xi^-_b \to \Lambda_b \pi^-) = (5F - 2)\,x_0/3~.$$ Using the best fit values of $x_0$ and $F$ one obtains $A(\Xi^-_b \to \pi^- \Lambda_b) = \pm 1.796$. One may improve this calculation somewhat by including SU(3) breaking. We note that the measured S-wave amplitudes for $\Lambda \to p \pi^-$ and $\Sigma^- \to n \pi^-$ alone determine a slightly different value for $x_0$, namely $x_0=0.835$, with practically no effect on $F$. The relation $$\label{eqn:rel}
A(\Xi^-_b \to \Lambda_b \pi^-) = -\frac{1}{2\sqrt{6}}\,A(\Lambda \to p \pi^-) - \frac{3}{4}\,A(\Sigma^- \to n \pi^-)~,$$ and experimental values of the amplitudes on the right-hand side imply $$\label{eqn:Xib}
A(\Xi^-_b \to \Lambda_b \pi^-)= 1.75 \pm 0.26~.$$ In the three amplitudes occurring in (\[eqn:rel\]) an $s$ quark occurs in the decaying baryons taking part in the transition but not as a spectator. This leads to a common redefinition of $x_0$ which now includes SU(3) breaking. While the value (\[eqn:Xib\]) includes this effect of SU(3) breaking, we have attributed to it an uncertainty of $15\%$ caused by assuming octet dominance and by neglecting the effect of the heavy $b$ quark on properties of the light diquarks.

The considerations and calculation leading to (\[eqn:Xib\]) apply also to the contribution of the transition $s \to u (\bar u d)$ to the S-wave amplitude for $\Xi^0_c \to \Lambda_c \pi^-$. Here one replaces a spectator $b$ quark in $\Xi^-_b$ and $\Lambda_b$ by a $c$ quark in $\Xi^0_c = csd$ and $\Lambda_c = cud$, assuming that the $c$ quark mass is much heavier than the light $u, d$ and $s$ quarks. In this approximation we have $$\label{eqn:XicXib}
A_{s \to u\bar u d}(\Xi^0_c \to \Lambda_c \pi^-) = A(\Xi^-_b \to \Lambda_b \pi^-)~.$$

$c s \to c d$ CONTRIBUTION TO $\Xi^0_c \to \Lambda_c \pi^-$ {#sec:cscd}
===========================================================

The S-wave amplitude for $\Xi^0_c \to \Lambda_c \pi^-$ obtains a second contribution from an “annihilation” subprocess $c s \to c d$ involving an interaction between the $c$ and $s$ quarks in the $\Xi^0_c$.
We will now present in some detail a method proposed by Voloshin [@Voloshin:2000et; @Voloshin:1999pz; @Voloshin:1999ax] for calculating this amplitude in the heavy $c$-quark limit in terms of differences among measured total widths of charmed baryons. The effective weak Hamiltonian responsible for this Cabibbo-suppressed strangeness-changing transition is given by \[eqn:CS\] H\_W = -G\_F \_C\_C . In the following we will use values $C_+ = 0.80$ and $C_- = 1.55$ for Wilson coefficients calculated in a leading-log approximation at a scale $\mu = m_c= 1.4$ GeV corresponding to $\alpha_s(m_c)/\alpha(m_W) = 2.5$. Applying a soft pion limit and using PCAC, the amplitude due to $cs \to cd$ is given in our normalization (\[eqn:SP\]) \[which is related to that of Ref.[@Voloshin:2000et] by a factor $\xi/(G_F m_\pi^2)$\] by \[eqn:Acscd1\] & & A\_[cscd]{}(\^0\_c \_c \^-) =\ & & \_C\_C\_c| (C\_+-1mm+-1mmC\_-) (|c\_L\_s\_L)(|u\_L \_c\_L)+(C\_+-1mm--1mmC\_-) (|u\_L\_s\_L)(|c\_L\_c\_L)|\^0\_c\ & & = \_C\_C . Here $f_\pi = 0.130$ GeV, $\xi \equiv 2m_{\Xi^0_c}/\sqrt{(m_{\Xi^0_c} + m_{\Lambda_c})^2 - m^2_{\pi^-}} = 1.04$ [@Agashe:2014kda]. In the above one defines two matrix element $x$ and $y$ (of dimension GeV$^3$) in which the contribution of the axial-current vanishes for a heavy $c$ quark, x & & -\_c|(|c\_c)(|u \_s)| \^0\_c ,\ y & & -\_c|(|c\_i\_c\_k)(|u\_k \_s\_i)| \^0\_c , where $i, k$ are color indices. Using flavor SU(3) one may write these two terms as differences of diagonal matrix elements of four fermion operators, $\langle {\cal O}\rangle_{\psi - \phi} \equiv \langle \psi|{\cal O}|\psi\rangle - \langle \phi|{\cal O}|\phi\rangle$, for charmed baryon states belonging to V-spin and U-spin doublets: x & = & (|c\_c)\_[\^0\_c-\_c]{} = (|c\_c)\_[\_c-\^+\_c]{} ,\ y & = & (|c\_i\_c\_k)\_[\^0\_c-\_c]{} = (|c\_i\_c\_k)\_[\_c-\^+\_c]{} . Within a heavy quark expansion the quantities $x$ and $y$ can be used to describe differences of inclusive decay rates among the above three charmed baryons. Adding contributions of hadronic and semileptonic Cabibbo-favored and singly Cabibbo-suppressed decays one finds in the flavor SU(3) limit [@Voloshin:2000et; @Voloshin:1999pz; @Voloshin:1999ax]: (\^0\_c) - (\_c) & = & (-x\[\^4\_CC\_+C\_- + \^2\^2(6C\_+C\_- + 5C\^2\_++5C\^2\_-)\] .\ & & + .y\[3\^4\_CC\_+C\_- +\^2\_C\^2\_C(6C\_+C\_- - 3C\^2\_+ + C\^2\_-) + 2\]) ,\ (\_c) - (\^+\_c) & = & (-x\^4\_C(5C\^2\_+ + 5C\^2\_- - 2C\_+C\_-) .\ & & + . y\[\^4\_C(C\^2\_- - 3C\^2\_+ - 2C\_+C\_-) - 2(\^2\_C - \^2\_C)\] ) . Substituting the above values of $C_+, C_-$ and $\cos\theta_C=0.97424, \sin\theta_C=0.2253$ [@Agashe:2014kda] one has (\^0\_c) - (\_c) & = & \[ -1.39x + 5.64y\] ,\ (\_c) - (\^+\_c) & = & \[ -2.87x - 3.15y \] . Eliminating $x$ and $y$ in these equations Eq. (\[eqn:Acscd1\]) now implies \[eqn:Acscd\] A\_[cscd]{}(\^0\_c \_c \^-) = -(0.44\[(\^0\_c) - (\_c)\] + 0.05\[ (\_c) - (\^+\_c)\]) . Using the measured charmed baryon lifetimes [@Agashe:2014kda] (\^0\_c) = 0.112\^[+0.013]{}\_[-0.010]{} [ps]{} ,  (\_c\^+) = 0.442 0.026 [ps]{} ,   (\_c) = 0.200 0.006 [ps]{} , we calculate \[eqn:Xicstou\] A\_[cscd]{}(\^0\_c \_c \^-) = -(1.85 0.40 0.40) ()\^2 . The first (symmetrized) error corresponds to errors in lifetime measurements, while the second one is associated with uncertainties due to SU(3) breaking and due to a finite $c$-quark mass. 
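For orientation, the width differences entering Eq. (\[eqn:Acscd\]) follow from the quoted lifetimes via $\Gamma = \hbar/\tau$; the lines below (added here only to make these numerical inputs explicit, with $\hbar$ in GeV$\,$s) carry out the conversion. The error propagation and the overall normalization are as described in the text above.

\begin{verbatim}
# Total widths from the measured charmed-baryon lifetimes, Gamma = hbar/tau,
# and the two width differences used in the text (central values only).
hbar_GeV_s = 6.582e-25
tau_ps = {"Xi_c^0": 0.112, "Lambda_c": 0.200, "Xi_c^+": 0.442}
Gamma  = {k: hbar_GeV_s / (t * 1.0e-12) for k, t in tau_ps.items()}   # GeV

dGamma_1 = Gamma["Xi_c^0"]   - Gamma["Lambda_c"]    # ~ 2.6e-12 GeV
dGamma_2 = Gamma["Lambda_c"] - Gamma["Xi_c^+"]      # ~ 1.8e-12 GeV
print(dGamma_1, dGamma_2)
\end{verbatim}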
We checked that replacing the Wilson coefficients $C_\pm$ by values calculated beyond the leading-log approximation, $C_+ = 0.80, C_- = 1.63$ [@Buchalla:1995vs], has a negligible effect on the central value. DECAY RATES OF $\Xi^0_c\to \Lambda_c \pi^-$ AND $\Xi^+_c\to \Lambda_c \pi^0$ {#sec:total} ============================================================================ Combining Eqs. (\[eqn:Xib\]), (\[eqn:XicXib\]) and (\[eqn:Xicstou\]) and adding errors in quadrature we find for $m_c=1.4$ GeV and destructive interference \[eqn:Ampd\] A(\_c\^0 \_c\^-) = |A\_[s u|ud]{}(\_c\^0\_c \^-)| + A\_[cscd]{}(\_c\^0\_c \^-) = - 0.10 0.62 , while for constructive interference we find \[eqn:Ampc\] A(\_c\^0\_c \^-) = - |A\_[s u|ud]{}(\_c\^0\_c \^-)| + A\_[cscd]{}(\_c\^0\_c \^-) = - 3.60 0.62 . In the former case the small central value of the amplitude is the result of cancellation between two real contributions of approximately equal magnitudes but opposite signs. In principle each of the two terms in the above two equations could involve a phase due to final state strong interactions. A final state interaction one might anticipate in S-wave $\Xi_c \to \Lambda_c \pi$ or $\Xi_b \to \Lambda_b \pi$ would be the effect of $\Sigma_c^*$ or $\Sigma_b^*$. However, their parity is wrong for such contributions. Final state interactions are negligible in these heavy baryon decays for the same reason they are small in S-wave nonleptonic hyperon decays. This is demonstrated by the well-fitted real amplitudes in Table I and by a triangle relation which follows from more general considerations [@Lee:1964zzc; @Sugawara:1964zz], 2 A(\^- \^- ) + A(\^- p) = - (3/2)\^[1/2]{} A(\^- \^- n) , which holds best for real values. The second term in $\Xi^0_c \to \Lambda_c \pi^-$ due to $cs \to cd$ is real and negative, given in (\[eqn:Acscd\]) in terms of width differences among charmed baryons. For constructive interference, the branching fraction is predicted by Eq.(\[eqn:rate\]) to be ${\cal B}(\Xi_c^0 \to \Lambda_c \pi^-) = (1.94 \pm 0.70)\times 10^{-3}$. This branching ratio is somewhat smaller than that of the corresponding $\Xi^-_b$ decay, ${\cal B}(\Xi^-_b \to \Lambda_b \pi^-) = (6.00 \pm 1.81)\times 10^{-3}$, calculated using (\[eqn:Xib\]) and the $\Xi^-_b$ lifetime which is roughly an order of magnitude larger than $\tau(\Xi^0_c)$ [@Agashe:2014kda]. For destructive interference, at 90% c.l. it is less than $\sim 10^{-4}$. The amplitude for $\Xi_c^+ \to \Lambda_c \pi^0$ is related to that for $\Xi_c^0 \to \Lambda_c \pi^-$ by the $\Delta I = 1/2$ rule, which holds for both contributions. Consequently, the partial decay rate is half that for $\Xi_c^0 \to \Lambda_c \pi^-$. Because of the larger lifetime of the $\Xi_c^+$, which is about four times that of $\Xi_c^0$, the corresponding branching fraction is predicted to be about two times larger, $(3.86 \pm 1.35)\times 10^{-3}$ for constructive interference or less than about $2 \times 10^{-4}$ for destructive interference. CONCLUSIONS {#sec:con} =========== We have discussed the heavy-flavor-conserving decays $\Xi_c^0 \to \Lambda_c \pi^-$ and $\Xi_c^+ \to \Lambda_c \pi^0$ within the context of current algebra, taking separate account of amplitudes governed by the subprocesses $s \to u \bar u d$ and $cs \to cd$. We have used a previous result for $\Xi_b^- \to \Lambda_b \pi^-$ to obtain the former amplitude, while updating an estimate by Voloshin for the latter. The relative signs of the amplitudes are not determined. 
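For the reader's convenience, the central values and the common uncertainty quoted in Eqs. (\[eqn:Ampd\]) and (\[eqn:Ampc\]) follow from the two individual contributions of Secs. \[sec:supi\] and \[sec:cscd\] by simple addition, with the errors combined in quadrature; the lines below reproduce them (an added numerical footnote, not part of the original derivation).

\begin{verbatim}
# Combine the two S-wave contributions and their quoted uncertainties.
from math import sqrt

A_su   = 1.75    # |A_{s -> u ubar d}|, uncertainty 0.26
A_cscd = -1.85   # A_{cs -> cd}, uncertainties 0.40 (lifetimes), 0.40 (SU(3), 1/m_c)
err    = sqrt(0.26**2 + 0.40**2 + 0.40**2)

print(f"destructive  : {+A_su + A_cscd:+.2f} +/- {err:.2f}")   # -0.10 +/- 0.62
print(f"constructive : {-A_su + A_cscd:+.2f} +/- {err:.2f}")   # -3.60 +/- 0.62
\end{verbatim}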
For constructive interference, we predict ${\cal B}(\Xi_c^0 \to \Lambda_c \pi^-) = (1.94 \pm 0.70)\times 10^{-3}$ with half the rate and twice the branching fraction for $\Xi_c^+ \to \Lambda_c \pi^0$. For destructive interference, the former branching fraction is expected to be less than about $10^{-4}$ or twice that for the latter. ACKNOWLEDGMENTS {#acknowledgments .unnumbered} =============== The work of J.L.R. was supported in part by the United States Department of Energy through Grant No. DE-FG02-13ER41598. [99]{} H. Y. Cheng, C. Y. Cheung, G. L. Lin, Y. C. Lin, T. M. Yan and H. L. Yu, “Heavy-flavor-conserving nonleptonic weak decays of heavy baryons,” Phys. Rev. D [**46**]{} (1992) 5060. S. Sinha and M. P. Khanna, “Beauty-conserving strangeness-changing two-body hadronic decays of beauty baryons,” Mod. Phys. Lett. A [**14**]{} (1999) 651. M. B. Voloshin, “Weak decays $\Xi_Q \to \Lambda_Q \pi$,” Phys. Lett. B [**476**]{} (2000) 297. X. Li and M. B. Voloshin, “Decays $\Xi_b \to \Lambda_b \pi$ and diquark correlations in hyperons,” Phys. Rev. D [**90**]{} (2014) 033016 \[arXiv:1407.2556 \[hep-ph\]\]. S. Faller and T. Mannel, “Light-Quark Decays in Heavy Hadrons,” Phys. Lett. B [**750**]{} (2015) 653 \[arXiv:1503.06088 \[hep-ph\]\]. H. Y. Cheng, C. Y. Cheung, G. L. Lin, Y. C. Lin, T. M. Yan and H. L. Yu, “Heavy-Flavor-Conserving Hadronic Weak Decays of Heavy Baryons,” JHEP [**1603**]{} (2016) 028 \[arXiv:1512.01276 \[hep-ph\]\]. M. Gronau and J. L. Rosner, “$S$-wave nonleptonic hyperon decays and $\Xi^-_b \to \pi^- \Lambda_b$,” Phys. Rev. D [**93**]{} (2016) 034020 \[arXiv:1512.06700 \[hep-ph\]\]. R. Aaij [*et al.*]{} \[LHCb Collaboration\], “Evidence for the strangeness-changing weak decay $\Xi_b^-\to\Lambda_b^0\pi^-$,” Phys. Rev. Lett.  [**115**]{} (2015) 241801 \[arXiv:1510.03829 \[hep-ex\]\]. M. Roos [*et al.*]{} (Particle Data Group Collaboration), “Review of Particle Properties,” Phys. Lett. B [**111**]{} (1982) 1. See in particular the mini-review by O. Overseth, p. 286. H. Sugawara, “Application of Current Commutation Rules to Nonleptonic Decay of Hyperons,” Phys. Rev. Lett. [**15**]{} (1965) 870. M. Suzuki, “Consequences of Current Commutation Relations in the Nonleptonic Hyperon Decays,” Phys. Rev. Lett. [**15**]{} (1965) 986. S. Nussinov and J. L. Rosner, “Duality and nonleptonic hyperon decay,” Phys. Rev. Lett.  [**23**]{} (1969) 1264. M. B. Voloshin, “Relations between inclusive decay rates of heavy baryons,” Phys. Rept.  [**320**]{} (1999) 275 \[hep-ph/9901445\]. M. B. Voloshin, “Reducing model dependence of spectator effects in inclusive decays of heavy baryons,” Phys. Rev. D [**61**]{} (2000) 074026 \[hep-ph/9908455\]. K. A. Olive [*et al.*]{} (Particle Data Group Collaboration), “Review of Particle Physics,” Chin. Phys. C [**38**]{} (2014) 090001. G. Buchalla, A. J. Buras and M. E. Lautenbacher, “Weak decays beyond leading logarithms,” Rev. Mod. Phys.  [**68**]{} (1996) 1125 \[hep-ph/9512380\]. B. W. Lee, “Transformation Properties of Nonleptonic Weak Interactions,” Phys. Rev. Lett.  [**12**]{} (1964) 83. H. Sugawara, “A New Triangle Relation for Nonleptonic Hyperon Decay Amplitudes as a Consequence of the Octet Spurion and the R Symmetry,” Prog. Theor. Phys. [**31**]{} (1964) 213.
{ "pile_set_name": "ArXiv" }
--- abstract: 'The large scale binary black hole effort in numerical relativity has led to an increasing distinction between numerical and mathematical relativity. This note discusses this situation and gives some examples of successful interactions between numerical and mathematical methods in general relativity.' address: 'Albert Einstein Institute, Am Mühlenberg 1, D-14476 Potsdam, Germany Department of Mathematics, University of Miami, Coral Gables, FL 33124, USA' author: - Lars Andersson date: 'July 2, 2006' title: On the relation between mathematical and numerical relativity ---

Introduction
============

After a lengthy period of fighting various “monsters” [@Lakatos], such as spurious radiation, constraint instabilities, boundary effects, collapse of the lapse, etc., the effort in numerical general relativity directed at modelling mergers of binary black holes is now rapidly entering a phase of “normal science”. Although not all of the monsters have been tamed, a number of groups are reporting multiple orbit evolutions, and the goal of providing reliable wave forms in sufficient numbers and of sufficient accuracy for use in gravitational wave data analysis is in sight. The conceptual framework for the numerical work on the binary black hole (BBH) problem, which arguably has played an absolutely necessary role as foundation and stimulus for this work, has been provided by the global picture of spacetimes, including singularity theorems, ideas of cosmic censorship, and post-Newtonian and other analytical approximations of the 2-body problem in general relativity, which have been arrived at purely by analytical and geometric techniques. Further, the theoretical analysis of numerical approximations to solutions of systems of PDE’s, the analysis of the Cauchy problem for the Einstein equations, and developments in computer science concerning parallel processing all provided essential stepping stones on the path towards successful BBH simulations.

Due to the large scale of the effort that goes into the BBH work, the division of the general relativity community into “numerical” and “theoretical/mathematical” groups has become pronounced. With this in mind, it seems natural to ask oneself what grounds there are for future interactions between these two communities. On the one hand, one may take the point of view that the “strong field” regime of general relativity is going to be the essentially exclusive domain of numerical general relativity, the phenomena one is likely to encounter being too complex to be amenable to mathematical analysis; a consequence of this point of view is the recommendation to mathematical relativists interested in these aspects of general relativity to devote themselves to becoming numerical relativists. On the other hand, one may take the point of view that the strong field regime of general relativity likely contains new phenomena of interest both for our understanding of the analytical nature of the Einstein equations, as well as for our understanding of physical reality. In the latter point of view, which I am proposing in this note, the relation between numerical and mathematical general relativity is similar to that of experimental mathematics to mathematics, i.e. as a tool for discovering new phenomena, testing conjectures, and developing a heuristic framework which can be used in a precise mathematical analysis. In either case, there is a clear need for an effort to bridge the emerging gap between the two communities.
Numerical experiments and mathematics
-------------------------------------

Mathematics has a long history of interaction between computer simulations and analytical work. Areas where this interaction has been prominent are number theory, dynamical systems, and fluid mechanics. The interaction has provided both the discovery of new phenomena, as well as proofs of theorems conjectured on the basis of numerical experiments. A few examples where the interaction between mathematics and computer simulations has played an important role are provided by the accidental discovery in 1963 by Lorenz [@lorentz] of chaotic behavior in a system of equations derived from atmospheric models[^1], the discovery by Feigenbaum [@feigenbaum] of universality in period doubling bifurcations, the discovery and study of strange attractors in dynamical systems, and the analysis of fractals including the Mandelbrot set [@mandelbrot]. The proof of the existence of solutions to the Feigenbaum functional equation was computer based, using rigorous numerical computer techniques [@lanford]. The existence of strange attractors for the H[é]{}non map [@henon] was proved by Benedicks and Carleson [@BC], using analytic techniques. The proof was preceded by a lengthy period of theoretical work as well as very detailed computer simulations which gave strong support to the conjectured picture of the attractor and the dynamics of the H[é]{}non map. The H[é]{}non map was derived as a model for the Poincar[é]{} map of the Lorenz system. It was recently proved that the Lorenz system contains a strange attractor [@tucker], thus providing a solution to Smale’s 14th problem. The proof of this fact was again computer based.

Overview of this paper
----------------------

Below, in section \[sec:success\], I shall discuss three examples from general relativity. The first is the Bianchi IX, or Mixmaster, system, an anisotropic homogenous cosmological model described by a system of ODE’s, see section \[sec:mixmaster\]. The second is the Gowdy $T^2$-symmetric cosmological model, which is modelled by a 1+1 dimensional system of wave equations, see section \[sec:gowdy\]. Third, I will discuss critical collapse, see section \[sec:critical\], which was first discovered during numerical simulations of the collapse of a self-gravitating scalar field. In section \[sec:open\], I mention some open problems where it seems likely that the interaction of numerical and analytical techniques will play an important role. In section \[sec:T2\], I discuss general $T^2$ symmetric cosmologies, which provide a simple model for the full BKL type behavior, followed by a few remarks on generic singularities in section \[sec:generic\]. Section \[sec:wavemap\] introduces the problem of self-gravitating wave maps and the $U(1)$ model. Finally, the stability of the Kerr black hole is discussed in section \[sec:Kerr\].

Success stories {#sec:success}
===============

The Mixmaster spatially homogenous cosmology {#sec:mixmaster}
--------------------------------------------

The Bianchi IX or Mixmaster model is given by restricting the vacuum Einstein equations to the spatially homogenous case with $S^3$ spatial topology. The dynamics of this system was first discussed in some detail by Misner, see [@misner:1993] and references therein, see also [@WE]. Misner gave a Hamiltonian analysis which indicated that the system exhibits “bounces” interspersed with periods of “coasting”.
The thesis of Chitre [@chitre] gave an approximation of the dynamics as a hyperbolic billiard. It was quickly realized that the billiard system is chaotic in a certain sense, namely it projects to the Gauss map, $x \mapsto \{ \frac{1}{x} \}$, see [@barrow], which has been well studied. The heuristic picture of the oscillatory, and chaotic, asymptotic behavior of the Mixmaster model played a central role in the proposal of Belinskiǐ, Khalatnikov, and Lifshitz (BKL) concerning the structure of generic singularities for the gravitational field [@bkl70; @bkl82]. An essential aspect of the BKL proposal is that the dynamics near typical spatial points is asymptotically “Mixmaster”-like. In the case of spacetimes containing stiff matter on the other hand, the asymptotics is “Kasner”-like, and quiescent. The quiescent behavior also occurs under certain symmetry conditions, an important example being the Gowdy spacetimes to be discussed below. Apart from the intrinsic beauty of the Mixmaster system, the BKL proposal provides one of the main motivations for studying the Mixmaster system in detail. It should be remarked that the chaotic nature of the Mixmaster dynamics was used by Misner as a basis for the so-called “chaotic cosmology” proposal, in which it was argued that the dynamics of Mixmaster gave a way around the horizon problem which plauged cosmology during this period (i.e. pre-inflation). The full Mixmaster model resisted analysis for a long time, in spite of a large number of papers devoted to this subject. Numerical experiments indicated that the model has sensitive dependence on initial data, but also revealed that evolving the system of ODE’s describing the Mixmaster dynamics for sufficiently long times to give useful insights, and with sufficient accuracy to give reliable results, presented a difficult challenge. It was only with the work of Berger, Garfinkle and Strasser [@BGS] which made use of symplectic integration techniques and an analytic approximation that it was possible to overcome the extremely stiff nature of the system of ODE’s for the Mixmaster model. This numerical work gave strong support for the basic conjectures concerning the Mixmaster system, and led to a renewed interest within the mathematical general relativity community in the analysis of the Mixmaster dynamics. The volumes [@HBC] as well as the paper [@rendall:mix] played an important role in spreading the word about this problem. The main conjectures concerning the mixmaster model, including proof of cosmic censorship in the Bianchi class A models, the oscillatory nature of the Bianchi IX singularity, as well as the existence of an attractor for the Bianchi IX system were proved by Ringström in a series of papers [@ring2000; @ring2001]. However, in spite of these very important results, many basic and important questions concerning both the full Mixmaster system, as well as the billiard approximation, remain open. Recent work of Damour, Henneaux, Nicolai and others [@DHN] have shown that the BKL conjecture extends in a very interesting way to higher dimensional theories of gravitation inspired by supergravity theories in D=11 spacetime dimensions. A formal argument indicates that these models have asymptotic Mixmaster like behavior, governed by a hyperbolic billiard, determined by the Weyl chamber of a certain Kac-Moody Lie algebra. Applying this analysis to 3+1 vacuum gravity reproduces the Chitre model. 
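To make the statement about chaotic behavior above a little more tangible, one can iterate the Gauss map $x \mapsto \{1/x\}$ in a few lines of code: the integer parts $\lfloor 1/x \rfloor$ generated along an orbit are the continued fraction digits of the initial value (in the BKL picture these are commonly interpreted as the lengths of successive Kasner eras), and orbits started at nearby values separate after only a handful of iterations. The script below is only an illustration of the projected billiard dynamics, not of the full Bianchi IX flow.

\begin{verbatim}
# Iterate the Gauss map x -> frac(1/x) for two nearby initial values and
# record the integer parts floor(1/x) (the continued fraction digits).
def gauss_digits(x, n):
    digits = []
    for _ in range(n):
        if x == 0.0:          # rational starting value: the orbit terminates
            break
        digits.append(int(1.0 / x))
        x = 1.0 / x - int(1.0 / x)
    return digits

print(gauss_digits(0.57231, 12))
print(gauss_digits(0.57232, 12))   # agrees with the first orbit only initially
\end{verbatim}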
The domains occurring in these hyperbolic billiards are arithmetic, which has interesting consequences for the length spectrum of the billiard. An important open problem is to understand the relation between the “Hamiltonian” approach developed by Misner-Chitre, which is also used in the work of Damour-Henneaux, and the scale invariant variables approach developed by Ellis-Wainwright-Hsu, which was used in the work of Ringström on Mixmaster. The scale invariant variables formalism has been generalized to inhomogenous models by Uggla et al. [@uggetal2003], and applied to formal and numerical analysis of inhomogenous cosmological models [@AELU; @GarfPRL].

The Gowdy $T^2$-symmetric cosmologies {#sec:gowdy}
-------------------------------------

The cosmological model on $T^3 \times {\mathbb R}$, with $T^2$ symmetry and with hypersurface orthogonal Killing fields, the so-called Gowdy model, is one of the simplest inhomogenous cosmological models. The Einstein equations reduce to a system of PDE’s on $S^1 \times {\mathbb R}$, consisting of a pair of nonlinear wave equations of wave maps type, and a pair of constraint equations. Eardley, Liang and Sachs [@ELS] introduced the notion of asymptotically velocity dominated singularities, to describe the asymptotically locally Kasner like, non-oscillating behavior of certain cosmological models. We will refer to this behavior as quiescent. In particular, analysis showed that one could expect the Gowdy model to exhibit quiescent behavior at the singularity. A programme to study the Gowdy model analytically, with a view towards proving strong cosmic censorship for this class of models, was initiated by Moncrief. The methods used included a Hamiltonian analysis, and formal power series expansions around the singularity. The formal power series expansions of Grubisic and Moncrief [@GM] supported the idea that a family of Gowdy spacetimes with “full degrees of freedom”, i.e. roughly speaking parametrized by four functions, exhibited quiescent behavior at the singularity. In the course of this work, an obstruction to the convergence of the formal power series was discovered. The condition for the consistency of the formal power series expansions was that the “asymptotic velocity” $k$ of the Gowdy spacetime satisfies $0 < k < 1$. The term asymptotic velocity has its origin in the fact that the evolution of a Gowdy spacetime corresponds to the motion of a loop in the hyperbolic plane. The asymptotic velocity $k(x)$ for $x \in S^1$ is defined as the asymptotic hyperbolic velocity of the point with parameter $x$ on the evolving loop in the hyperbolic plane. It is a highly nontrivial fact that this limiting value exists, see [@ring:vel].

Numerical studies carried out by Berger and Moncrief [@BM93] and later by Berger and Garfinkle [@BG98] gave rise to a good heuristic picture of the asymptotic dynamics of Gowdy models. In particular, the numerical work showed that Gowdy spacetimes exhibit sharp features (spikes), which formed and appeared to persist until the singularity. The spatial scale of the spikes turned out to be shrinking exponentially fast, and it was therefore impossible to resolve these features numerically for more than a limited time. (A minimal illustration of this type of numerical evolution is sketched below.) Kichenassamy and Rendall [@KR98] showed, using Fuchsian techniques, that the picture developed in the work on formal expansions could be made rigorous, and in particular that full parameter families of “low velocity” Gowdy spacetimes could be constructed with quiescent singularities.
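To give a concrete impression of the kind of evolution referred to above, the following sketch integrates the Gowdy wave equations in a form commonly used in numerical studies (sign and normalization conventions differ between references, and neither the scheme nor the initial data below are taken from the papers cited here). In one common convention the singularity lies at $\tau \to \infty$, the quantity $\sqrt{P_\tau^2 + e^{2P} Q_\tau^2}$ plays the role of the hyperbolic velocity, and quiescent behavior corresponds to this quantity settling into $(0,1)$ away from isolated points.

\begin{verbatim}
# Schematic finite-difference evolution of the Gowdy wave equations,
#   P_tt - e^{-2t} P_xx =  e^{2P} ( Q_t^2 - e^{-2t} Q_x^2 ) ,
#   Q_tt - e^{-2t} Q_xx = -2 ( P_t Q_t - e^{-2t} P_x Q_x ) ,
# with t the time towards the singularity (t -> infinity) and x periodic.
# The remaining metric function follows from the constraints by quadrature
# and is not needed for this illustration.
import numpy as np

N  = 512
x  = np.linspace(0.0, 2.0*np.pi, N, endpoint=False)
dx = x[1] - x[0]

def d1(f):   # centered first derivative, periodic in x
    return (np.roll(f, -1) - np.roll(f, 1)) / (2.0*dx)

def d2(f):   # centered second derivative, periodic in x
    return (np.roll(f, -1) - 2.0*f + np.roll(f, 1)) / dx**2

def rhs(t, y):
    P, Q, U, V = y                       # U = P_t, V = Q_t
    w, e2P = np.exp(-2.0*t), np.exp(2.0*P)
    dU = w*d2(P) + e2P*(V**2 - w*d1(Q)**2)
    dV = w*d2(Q) - 2.0*(U*V - w*d1(P)*d1(Q))
    return np.array([U, V, dU, dV])

# simple smooth initial data, chosen for illustration only
y = np.array([np.zeros(N), np.cos(x), 1.5*np.cos(x), np.zeros(N)])

t, dt, t_end = 0.0, 5.0e-3, 6.0
while t < t_end:                         # classical fourth-order Runge-Kutta
    k1 = rhs(t, y)
    k2 = rhs(t + 0.5*dt, y + 0.5*dt*k1)
    k3 = rhs(t + 0.5*dt, y + 0.5*dt*k2)
    k4 = rhs(t + dt,     y + dt*k3)
    y  = y + (dt/6.0)*(k1 + 2.0*k2 + 2.0*k3 + k4)
    t += dt

P, Q, U, V = y
v = np.sqrt(U**2 + np.exp(2.0*P)*V**2)   # velocity diagnostic along the loop
print("velocity range at t=%.1f: [%.3f, %.3f]" % (t, v.min(), v.max()))
\end{verbatim}

Resolving the spiky features mentioned above would of course require far higher (adaptive) resolution than this uniform grid provides.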
Further, Rendall and Weaver [@RW] used a combination of Fuchsian and solution generating techniques to construct Gowdy spacetimes containing spikes with arbitrary prescribed velocity. This allowed one to gain detailed understanding of the nature of the spikes, and in particular of the nature of the discontinuity of the asymptotic velocity at spikes. These developments gave through the numerical work, a vivid graphical picture of the dynamics of Gowdy spacetimes, but also established with rigor some of the fundamental conjectured aspects of Gowdy spacetimes. Based on these developments, Ringstrom [@ring2004; @ring2004b; @ring2004c] was able to analyze the nature of the singularity of generic Gowdy spacetimes, and in particular give a proof of strong cosmic censorship for this class of spacetimes. Critical collapse {#sec:critical} ----------------- Critical behavior in singularity formation was discovered by Matt Choptuik [@chop92], during numerical studies of the collapse of self-gravitating scalar fields. He found that for one-parameter families of initial data, interpolating between data leading to dispersion and data leading to collapse, data on the borderline between dispersion and collapse exibited, for a period depending on the parameter, a discrete self-similar behavior before dispersing or collapsing. Further, Choptuik found that the rate of divergence from the self-similar behavior exhibited a “universal” behavior, analogous to the universality discovered by Feigenbaum in connection with period doubling bifurcations. This work opened up a very rich field of investigation which is still active. The basic principle is now well established through a large number of numerical experiments and investigations, see the review paper [@gundlach:crit]. A formal analysis indicates that the “universal” behavior mentioned above may be explained in terms of a linearized analysis around the self-similar critical solution [@evans:coleman; @koike:etal]. It turns out that the detailed behavior, in particular the rate, depends on the details of the nonlinearity, or in the case of general relativity, the matter model under consideration, but the basic idea of universality within a matter model, and the above mentioned mechanism for the critical behavior is well established over a wide range of models. Depending on the matter model, the self-similar behavior may be discrete, continuous, or even in some cases a mixture of the two types. Virtually all numerical work on critical behavior has so far been in the spherically symmetric case. The reason is the extreme demands on numerical precision presented by the problem. By generalizing the notion of critical behavior from general relativity to semilinear wave equations, Yang-Mills equations, and wave maps equations, Bizon and others [@biz2000], have been able in some cases to find the explicit form of the first unstable (self-similar) mode, and thus give an analytic description of the blowup solutions. They find good agreement with numerical data. However, beyond the linearized stablility analysis mentioned above, not many rigorous results are known for critical behavior for the hyperbolic equations mentioned above, including the case of general relativity. This state of affairs should be contrasted with the asymptotic analysis of singular solutions of semilinear parabolic equations. In the 2+1 dimensional case, a new phenomenon arises. For wave maps on (2+1)-dimensional Minkowski space, with spherical target, there is no self-similar solution. 
Instead there is a one-parameter family of static solutions, and numerical work in the equivariant case shows that this family mediates the blowup [@biz2001]. This is borne out by the proof due to Struwe [@struwe:blow] that in the equivariant case, a rescaling limit of a blowup solution converges to a harmonic map from ${\mathbb R}^2$ to $S^2$. Due to this fact, the nature of the critical blowup in 2+1 dimensions is fundamentally different from the 3+1 dimensional case, and the analysis of the asymptotic rate of concentration of blowup solutions is much more delicate. Recent work of Rodnianski and Sterbenz [@rodster] sheds light on this question in the general case without symmetries. The further study of the asymptotic behavior of blowup solutions of semilinear wave equations, as well as the gravitational field, in the non-spherically symmetric case is one of the important challenges for the near future. Here it appears likely that a lot of the technology developed during the course of the BBH work, such as adaptive mesh refinement, will play a decisive role. Indeed, the original work by Choptuik on critical collapse used a version of adaptive mesh refinement in the spherically symmetric situation.

Open problems {#sec:open}
=============

In this section, I shall briefly indicate some problems which I consider to be of interest from the point of view of the interaction between numerical and mathematical work in general relativity and related fields. The survey papers [@And:sur; @Rend:sur] provide general references for many of the problems mentioned below. The problems I will mention are not exactly coincident with the “forefront” of numerical relativity, and may by some workers in that field be considered as simple problems, not worthy of their attention. There are several reasons for this. One is that the development of the mathematical theory of relativity is to a large extent lagging behind the exploratory and goal oriented work being performed within numerical relativity. Further, in order to provide reliable insights into the nonlinear problems under consideration, the numerical experiments must necessarily be carried out to a high degree of accuracy. This level of precision is so far not available in general in the 3+1 or even 2+1 dimensional numerical evolutions, in particular not in the strong field regime where many of the phenomena of interest take place. In particular, a serious numerical study of the asymptotic behavior at cosmological and other types of singularities, with a view to better understanding the BKL proposal in general relativity, as well as the asymptotic behavior of blowup solutions of geometric wave equations, without symmetry assumptions, is likely to be at least as challenging as the BBH problem.

General $T^2$-symmetric cosmologies {#sec:T2}
-----------------------------------

The full $T^2$-symmetric model on $T^3 \times {\mathbb R}$, without the condition that the Killing fields be hypersurface orthogonal, exhibits oscillatory behavior at the singularity. While this has not been rigorously established, it is indicated by formal and numerical studies. The formal work includes the analysis of the silent boundary due to Uggla et al. [@uggetal2003]. There are formal and numerical studies using both the metric formulation by Berger et al. [@ber2001] and the scale invariant formulation by Andersson et al. [@AELU]. The last mentioned work gives numerical support to the silent boundary picture for the case of $T^2$-symmetric cosmologies.
The review by Berger [@ber2002] provides a general reference on the numerical investigation of spacetime singularities. The numerical studies lend support to the BKL proposal on the nature of generic cosmological singularities, and also indicate some new dynamical features of the $T^2$ singularity. These new features can be interpreted as spikes. However in contrast to the Gowdy case, where the asymptotic velocity at the spike is a constant, the spikes in $T^2$ are dynamical features, which according to the numerical experiments [@AELU] exhibit a simple dynamics, closely related to the asymptotic billiard for the silent boundary system for $T^2$. Singularities in generic cosmologies {#sec:generic} ------------------------------------ The $U(1)$ model has been studied in the spatially compact case by Choquet-Bruhat and Moncrief [@ChBM; @ChB]. They proved global existence in the expanding direction for small data on spacetimes with topology $\Sigma \times {\mathbb R}$ with ${\text{\rm genus}}(\Sigma) > 1$. For the polarized case, one has a self-gravitating scalar field. In the polarized case, one expects to have quiescent behavior at the singularity, a full parameter family of such solutions was constructed by Choquet-Bruhat, Isenberg and Moncrief [@IM; @ChBIM]. For the full $U(1)$ model, one expects an oscillatory singularity, as in the full $T^2$ case. Numerical and analytical work of Berger and Moncrief [@BM2000; @BM2000b] give support to this picture. For this model, as for the $T^2$ model, the BKL proposal, and in particular, the silent boundary proposal of Uggla et al [@uggetal2003] provides a heuristic picture of the dynamical behavior that one expects to see. In fact, the scale invariant variables introduced by Uggla et al provides, with for example CMC time gauge, a well posed elliptic-hyperbolic system, which can be used to model the dynamical behavior at the singularity. Some preliminary numerical experiments using a CMC code have been carried out. Even for the polarized case in the 2+1 dimensions the need for adaptive codes is apparent. The oscillatory nature of the singularity means that spatial structure is created at small scales. This will make it impossible to produce even somewhat realistic evolutions of the full $U(1)$ model without using an adaptive code. See however the recent work by Garfinkle [@GarfPRL] for some numerical experiments in the 3+1 case. For these, even though they reproduce the heuristic picture derived from the silent boundary conjecture, the accuracy is too low to provide reliable information. Earlier work by Garfinkle [@Gar2002] on cosmological singularities in self-gravitating scalar field model in 3+1 dimensions made use of spacetime harmonic (or wave) coordinates. The fact that this experiment, which for essentially the first time made use of spacetime harmonic coordinates for a numerical relativity code was successful, has later had a significant influence on current work on the BBH problem. The self-gravitating scalar field model is known to have large families of data which give rise to quiescent singularities [@AR2001]. Self-gravitating wave maps and $U(1)$ {#sec:wavemap} ------------------------------------- Vacuum 3+1 dimensional gravity with a spatial $U(1)$ action gives, after a Kaluza-Klein reduction, a self-gravitating wave-maps model in 2+1 dimensions, with hyperbolic target space. Further imposing on this model a rotational symmetry, i.e. 
another spatial $U(1)$ action, which acts equivariantly, results in an equivariant self-gravitating 2+1 dimensional wave map with hyperbolic target. The equivariant $U(1)$ action does not correspond to a Killing field in the 3+1 dimensional picture. In this case it is natural to impose asymptotic flatness for the 2+1 dimensional spacetime [@ashvar]. If this is done, turning off the gravitational interaction gives a flat space wave maps model with hyperbolic target. According to standard conjectures, the flat space 2+1 dimensional wave map with hyperbolic target is expected to be well posed in energy norm, see [@tao2004], and it is therefore reasonable to expect that also the self-gravitating version of this model is globally well-posed. For the 2+1 dimensional wave maps model with spherical target, on the other hand, numerical experiments indicate that one has blowups for large data, see section \[sec:critical\] above. Based on the idea that the blowup in the wave maps model with spherical target is mediated by static solutions, one may argue that in the self-gravitating case, one cannot have blowups for sufficiently large values of the coupling constant. The reason for this is that the energy balance between the gravatational field and the wave maps field does not leave enough energy for the wave maps field to produce the static solutions which mediate blowup. Some numerical experiments have been carried out which support this picture. It would be of great interest to have detailed numerical experiments in this situation. The full 2+1 dimensional version of the self-gravitating wave maps problem is very challenging both numerically and theoretically. Stability of Kerr {#sec:Kerr} ----------------- As mentioned above, the understanding of the structure of full 3+1 dimensional cosmological singularities and the strong cosmic censorship represents a major challenge to the numerical and mathematical relativity community. However, the stability of the Kerr black hole is perhaps closer to the type of problems which occupy most of the attention of current work in numerical relativity. As is well known, according to the cosmic censorship picture, the end state of the evolution of an asymptotically flat data set is a single Kerr black hole. A proof of the nonlinear stability of Kerr would provide an important step towards a proof of this far-reaching conjecture. The nonlinear stability of Minkowski space was proved by Christodoulou and Klainerman, see [@ChK], see also [@friedrich:complete] for an earlier partial result, using a conformally regular form of the Einstein equations. For quasilinear wave equations which satisfy the so-called null condition of Christodoulou [@Chrnull], global existence for sufficiently small data is known to hold in dimension $n+1$, $n \geq 3$. It is a very important fact that the Einstein equations do not satisfy the classical null condition [@ChBnull]. The proof of Christodoulou and Klainerman relied upon detailed and rather delicate estimates of higher order Bel-Robinson energies, using a combination of techniques. The geometry of certain null foliations was studied, exploiting the transport equations for geometric data along null rays. Further, a variant of the vector fields method of Klainerman [@Kl:weight] was used. The vector fields method was developed to prove decay estimates for solutions of wave equations, and requires at least approximate symmetries of the background solution. 
The method has been used by Klainerman and Rodnianski in a micro-local setting in order to prove well-posedness for the Einstein equations with rough data. The method of Christodoulou and Klainerman has later, in a series of papers by Nicolo and Klainerman [@KN:book; @KN2003] been shown to yield the correct peeling behavior at null infinity, which is expected from the Penrose picture. Recently, a substantially simpler proof of the nonlinear stability of Minkowski space was given by Lindblad and Rodnianski [@LR]. Their proof relies upon the so-called weak null condition. Neither of the above mentioned techniques generalize easily to the case of a non-flat backgrund solution. One serious problem is that the light cones in a black hole spacetime differ by a logarithmic term from those in Minkowski space. Further, due to the presence of the horizon, and in particular the ergo region, one has different types of decay behavior in the region close to the black hole and in the asymptotic region. Several natural problems arise in this context. The decay of scalar field on Schwarzschild and Kerr backgrounds, in particular the behavior at the horizon (Price law) is a natural starting point. Several recent papers have studied this problem [@DR2005; @FKSY] and for the case of a Schwarzschild background the estimates agree with the conjectured Price law behavior. While some mathematical results on for example the decay of scalar fields on Schwarzschild and Kerr backgrounds are available, the techniques used to prove these are intimately tied to the symmetries of the background, and make heavy use of spherical harmonics expansions. Therefore these proofs do not directly generalize to spacetimes which are close in a suitable sense to Kerr. Further, one could even say that we don’t have a good notion of what “close to Kerr” actually means. Thus, the problem of stability of Kerr opens up a natural arena for the interaction of numerical and mathematical relativity. The aspects of this problem where numerical experiments may be able to provide crucial insights include the asymptotic decay behavior of the gravitational field and matter fields near the horizon. A question closely related to this, and of direct relevance for numerical work, is the asymptotic behavior of dynamical horizons [@AK; @AG; @AMS; @SKB]. It is not unlikely that a good understanding of the asympotic geometry of dynamical horizons near timelike infinity will play a crucial role in the global analysis of black hole spacetimes. In the far region and intermediate region, one expects linear effects to dominate and here there is a lot of information available from systematic post-Newtonian calculations. It is of interest to compare this to the results of numerical simulations, and a great deal of work in this direction is already being carried out in the context of the BBH programme. Concluding remark {#sec:concluding} ================= This note represents a personal view and the rather incomplete discussion here leaves out very large areas of numerical relativity and numerical geometric analysis, including Ricci flow, heat flow, higher dimensional general relativity models, including black strings and other areas which are being worked on intensely. Further, the asymptotic behavior of cosmologies in the expanding direction, which has not been discussed here, provides interesting open questions, which can be fruitfully studied using numerical techniques. 
Acknowledgements {#acknowledgements .unnumbered} ---------------- This note is loosely based on a talk given during the Newton Institute programme on Global Problems in Mathematical Relativity, fall of 2005. I am grateful to the organizers of the GMR programme for inviting me to the Newton Institute and to the Newton institute for the excellent working environment. I would also like to thank the editors of the special issue on numerical general relativity, for the invitation to contribute an article. \[2\][ [\#2](http://www.ams.org/mathscinet-getitem?mr=#1) ]{} \[2\][\#2]{} [10]{} Lars Andersson, *The global existence problem in general relativity*, The Einstein equations and the large scale behavior of gravitational fields, Birkhäuser, Basel, 2004, pp. 71–120. Lars Andersson, Marc Mars, and Walter Simon, *Local existence of dynamical and trapping horizons*, Physical Review Letters **95** (2005), 111102. Lars Andersson and Alan D. Rendall, *Quiescent cosmological singularities*, Comm. Math. Phys. **218** (2001), no. 3, 479–511. Lars Andersson, Henk [van Elst]{}, Woei Chet Lim, and Claes Uggla, *Asymptotic silence of generic cosmological singularities*, Physical Review Letters **94** (2005), 051101. Abhay Ashtekar and Gregory J. Galloway, *Some uniqueness results for dynamical horizons*, Adv. Theor. Math. Phys. **9** (2005), no. 1, 1–30. Abhay Ashtekar and Badri Krishnan, *Dynamical horizons: energy, angular momentum, fluxes, and balance laws*, Phys. Rev. Lett. **89** (2002), no. 26, 261101, 4. Abhay Ashtekar and Madhavan Varadarajan, *A striking property of the gravitational hamiltonian*, Physical Review D **50** (1994), 4944. John D. Barrow, *Chaotic behaviour in general relativity*, Phys. Rep. **85** (1982), no. 1, 1–49. Vladimir A. Belinskiǐ, Isaac M. Khalatnikov, and Evgeny M. Lifshitz, *Oscillatory approach to a singular point in the relativistic cosmology*, Adv. Phys. **19** (1970), 525–573. , *A general solution of the [E]{}instein equations with a time singularity*, Adv. Phys. **31** (1982), 639–667. Michael Benedicks and Lennart Carleson, *The dynamics of the [H]{}énon map*, Ann. of Math. (2) **133** (1991), no. 1, 73–169. Beverly K. Berger, *Numerical approaches to spacetime singularities*, Living Rev. Relativ. **5** (2002), 2002–1, 58 pp. (electronic). Beverly K. Berger and David Garfinkle, *Phenomenology of the gowdy universe on ${T}^3 \times {R}$*, Physical Review D **57** (1998), 4767. Beverly K. Berger, David Garfinkle, and Eugene Strasser, *New algorithm for [M]{}ixmaster dynamics*, Classical Quantum Gravity **14** (1997), no. 2, L29–L36. Beverly K. Berger, James Isenberg, and Marsha Weaver, *Oscillatory approach to the singularity in vacuum spacetimes with [$T\sp 2$]{} isometry*, Phys. Rev. D (3) **64** (2001), no. 8, 084006, 20. Beverly K. Berger and Vincent Moncrief, *Numerical investigation of cosmological singularities*, Phys. Rev. D (3) **48** (1993), no. 10, 4676–4687. Beverly K. Berger and Vincent Moncrief, *Exact [U(1)]{} symmetric cosmologies with local [M]{}ixmaster dynamics*, Physical Review D **62** (2000), 023509. , *Signature for local [M]{}ixmaster dynamics in [U(1)]{} symmetric cosmologies*, Physical Review D **62** (2000), 123501. Piotr Bizo[ń]{}, Tadeusz Chmaj, and Zbis[ł]{}aw Tabor, *Dispersion and collapse of wave maps*, Nonlinearity **13** (2000), no. 4, 1411–1423. , *Formation of singularities for equivariant [$(2+1)$]{}-dimensional wave maps into the 2-sphere*, Nonlinearity **14** (2001), no. 5, 1041–1053. D. M. 
[Chitre]{}, *[Investigations of Vanishing of a Horizon for Bianchy Type X (the [M]{}ixmaster) Universe.]{}*, Ph.D. Thesis (1972). M. W. Choptuik, *“[C]{}ritical” behaviour in massless scalar field collapse*, Approaches to numerical relativity (Southampton, 1991), Cambridge Univ. Press, Cambridge, 1992, pp. 202–222. Y. Choquet-Bruhat, J. Isenberg, and V. Moncrief, *Topologically general [U]{}(1) symmetric vacuum space-times with [AVTD]{} behavior*, Nuovo Cimento Soc. Ital. Fis. B **119** (2004), no. 7-9, 625–638. Yvonne Choquet-Bruhat, *Asymptotic solutions of non linear wave equations and polarized null conditions*, Actes des Journées Mathématiques à la Mémoire de Jean Leray, Sémin. Congr., vol. 9, Soc. Math. France, Paris, 2004, pp. 125–141. , *Future complete [${\rm U}(1)$]{} symmetric [E]{}insteinian spacetimes, the unpolarized case*, The Einstein equations and the large scale behavior of gravitational fields, Birkhäuser, Basel, 2004, pp. 251–298. Yvonne Choquet-Bruhat and Vincent Moncrief, *Nonlinear stability of an expanding universe with the [$S\sp 1$]{} isometry group*, Partial differential equations and mathematical physics (Tokyo, 2001), Progr. Nonlinear Differential Equations Appl., vol. 52, Birkhäuser Boston, Boston, MA, 2003, pp. 57–71. Demetrios Christodoulou, *Global solutions of nonlinear hyperbolic equations for small initial data*, Comm. Pure Appl. Math. **39** (1986), no. 2, 267–282. Demetrios Christodoulou and Sergiu Klainerman, *The global nonlinear stability of the [M]{}inkowski space*, Princeton Mathematical Series, vol. 41, Princeton University Press, Princeton, NJ, 1993. Mihalis Dafermos and Igor Rodnianski, *The red-shift effect and radiation decay on black hole spacetimes*, 2005. T. Damour, M. Henneaux, and H. Nicolai, *Cosmological billiards*, Classical Quantum Gravity **20** (2003), no. 9, R145–R200. D. Eardley, E. Liang, and R.K. Sachs, *Velocity-dominated singularities in irrotational dust cosmologies*, J. Math. Phys. **13** (1972), no. 1, 99–107. Charles R. Evans and Jason S. Coleman, *Critical phenomena and self-similarity in the gravitational collapes of radiation fluid*, Phys. Rev. Lett. **72** (1994), 1782–1785. Mitchell J. Feigenbaum, *Quantitative universality for a class of nonlinear transformations*, J. Statist. Phys. **19** (1978), no. 1, 25–52. F. Finster, N. Kamran, J. Smoller, and S.-T. Yau, *Decay of solutions of the wave equation in the [K]{}err geometry*, Comm. Math. Phys. **264** (2006), no. 2, 465–503. Helmut Friedrich, *On the existence of $n$-geodesically complete or future complete solutions of [E]{}instein’s field equations with smooth asymptotic structure*, Comm. Math. Phys. **107** (1986), no. 4, 587–609. David Garfinkle, *Harmonic coordinate method for simulating generic singularities*, Phys. Rev. D (3) **65** (2002), no. 4, 044029, 6. David Garfinkle, *Numerical simulations of generic singuarities*, Physical Review Letters **93** (2004), 161101. Boro Grubi[š]{}i[ć]{} and Vincent Moncrief, *Asymptotic behavior of the [$T\sp 3\times{\bf R}$]{} [G]{}owdy space-times*, Phys. Rev. D (3) **47** (1993), no. 6, 2371–2382. Carsten Gundlach, *Critical phenomena in gravitational collapse*, Living Rev. Relativ. **2** (1999), 1999–4, 58 pp. (electronic). M. H[é]{}non, *A two-dimensional mapping with a strange attractor*, Comm. Math. Phys. **50** (1976), no. 1, 69–77. David Hobill, Adrian Burd, and Alan Coley (eds.), *Deterministic chaos in general relativity*, NATO Advanced Science Institutes Series B: Physics, vol. 
332, New York, Plenum Press, 1994. James Isenberg and Vincent Moncrief, *Asymptotic behaviour in polarized and half-polarized [U[$(1)$]{}]{} symmetric vacuum spacetimes*, Classical Quantum Gravity **19** (2002), no. 21, 5361–5386. Satyanad Kichenassamy and Alan D. Rendall, *Analytic description of singularities in [G]{}owdy spacetimes*, Classical Quantum Gravity **15** (1998), no. 5, 1339–1355. Sergiu Klainerman, *Weighted [$L\sp{\infty }$]{} and [$L\sp{1}$]{} estimates for solutions to the classical wave equation in three space dimensions*, Comm. Pure Appl. Math. **37** (1984), no. 2, 269–288. Sergiu Klainerman and Francesco Nicol[ò]{}, *The evolution problem in general relativity*, Progress in Mathematical Physics, vol. 25, Birkhäuser Boston Inc., Boston, MA, 2003. , *Peeling properties of asymptotically flat solutions to the [E]{}instein vacuum equations*, Classical Quantum Gravity **20** (2003), no. 14, 3215–3257. Tatsuhiko Koike, Takashi Hara, and Satoshi Adachi, *Critical behavior in gravitational collapse of radiation fluid: A renormalization group (linear perturbation) analysis*, Phys. Rev. Lett. **74** (1995), 5170–5173. Imre Lakatos, *Proofs and refutations*, Cambridge University Press, Cambridge, 1976, The logic of mathematical discovery, Edited by John Worrall and Elie Zahar. Oscar E. Lanford, III, *A computer-assisted proof of the [F]{}eigenbaum conjectures*, Bull. Amer. Math. Soc. (N.S.) **6** (1982), no. 3, 427–434. Hans Lindblad and Igor Rodnianski, *Global existence for the [E]{}instein vacuum equations in wave coordinates*, Comm. Math. Phys. **256** (2005), no. 1, 43–110. E. N. Lorentz, *Deterministic nonperiodic flow*, J. Atmos. Sci. **20** (1963), 130–141. Benoit Mandelbrot, *Fractal aspects of the iteration of $z\mapsto\lambda z(1-z)$, for complex $\lambda,z$*, Nonlinear dynamics (R. G. H. Helleman, ed.), Annals of the New York Academy of Sciences, vol. 357, 1980, pp. 249–259. Charles W. Misner, *The mixmaster cosmological metrics*, Deterministic chaos in general relativity (Kananaskis, AB, 1993), NATO Adv. Sci. Inst. Ser. B Phys., vol. 332, Plenum, New York, 1994, pp. 317–328. Alan D. Rendall, *Global dynamics of the mixmaster model*, Classical Quantum Gravity **14** (1997), no. 8, 2341–2356. , *Theorems on existence and global dynamics for the [E]{}instein equations*, Living Rev. Relativ. **5** (2002), 2002–6, 62 pp. (electronic). Alan D. Rendall and Marsha Weaver, *Manufacture of [G]{}owdy spacetimes with spikes*, Classical Quantum Gravity **18** (2001), no. 15, 2959–2975. H. Ringstr[ö]{}m, *Curvature blow up in [B]{}ianchi [VIII]{} and [IX]{} vacuum spacetimes*, Classical Quantum Gravity **17** (2000), no. 4, 713–731. Hans Ringstr[ö]{}m, *The [B]{}ianchi [IX]{} attractor*, Ann. Henri Poincaré **2** (2001), no. 3, 405–500. , *Asymptotic expansions close to the singularity in [G]{}owdy spacetimes*, Classical Quantum Gravity **21** (2004), no. 3, S305–S322, A spacetime safari: essays in honour of Vincent Moncrief. , *On a wave map equation arising in general relativity*, Comm. Pure Appl. Math. **57** (2004), no. 5, 657–703. , *On [G]{}owdy vacuum spacetimes*, Math. Proc. Cambridge Philos. Soc. **136** (2004), no. 2, 485–512. Hans Ringström, *Existence of an asymptotic velocity and implications for the asymptotic behavior in the direction of the singularity in ${T}^3$-gowdy*, Comm. Pure Appl. Math. **59** (2006), 977–1041. Igor Rodnianski and Jacob Sterbenz, *On the formation of singularities in the critical o(3) sigma-model*, 2006. 
Erik Schnetter, Badri Krishnan, and Florian Beyer, *Introduction to dynamical horizons in numerical relativity*, 2006. Michael Struwe, *Equivariant wave maps in two space dimensions*, Comm. Pure Appl. Math. **56** (2003), no. 7, 815–823, Dedicated to the memory of Jürgen K. Moser. Terence Tao, *Geometric renormalization of large energy wave maps*, 2004, submitted, Forges les Eaux conference proceedings. Warwick Tucker, *A rigorous [ODE]{} solver and [S]{}male’s 14th problem*, Found. Comput. Math. **2** (2002), no. 1, 53–117. Claes Uggla, Henk van Elst, John Wainwright, and George F. R. Ellis, *The past attractor in inhomogeneous cosmology*, Phys. Rev. D **68** (2003), no. 10, 103502–22. J. Wainwright and G. F. R. Ellis (eds.), *Dynamical systems in cosmology*, Cambridge University Press, Cambridge, 1997, Papers from the workshop held in Cape Town, June 27–July 2, 1994. [^1]: The famous Fermi-Pasta-Ulam experiment of 1955 is perhaps the non-chaotic counterpart of the Lorentz experiment.
---
author:
- '[ ]{}\'
title: '[**Some results on a question of M. Newman on isomorphic subgroups of solvable groups**]{} '
---

**Abstract.**

**Keywords:**

**Mathematics Subject Classification (2020):** 20D10.

**Introduction**
================

Recently, the works of G. Glauberman, I.M. Isaacs and G.R. Robinson [@IR; @GR] have focused on a question posed by Moshe Newman, who asked the following.

**Question 1.1** ([@IR; @GR]). Can it ever happen that a finite solvable group $G$ has isomorphic subgroups $H$ and $K$, where $H$ is maximal and $K$ is not?

In 2015, I.M. Isaacs and G.R. Robinson obtained the following partial results.

([@IR Theorem A, Theorem B]) Let $H$ be a maximal subgroup of a solvable group $G$, and suppose that $K\leq G$ and $K\cong H$. If $H$ has a Sylow tower, or a Sylow $2$-subgroup of $H$ is abelian, then $K$ is maximal in $G$.

More recently, G. Glauberman and G.R. Robinson obtained partial results on the structure of $G$ in the case of a negative answer to Question 1.1.

([@GR Theorem A]) Let $H$ be a maximal subgroup of the finite solvable group $G$ and suppose that $|G : H|=p^a$ where $p$ is a prime and $a$ is a positive integer. Let $K$ be a subgroup of $G$ which is isomorphic to $H$. Suppose that $K$ is not maximal in $G$. Then $p \leq 3$, and, for $q =5-p$, we have $$O_{q'}(H)=O_{q'}(G)=O_{q'}(K)$$ and, for $G^{\ast}= G/O_{q'}(G)$, etc., $H^{\ast}$ and $K^{\ast}$ are isomorphic subgroups of $G^{\ast}$ with $H^{\ast}$ maximal and $K^{\ast}$ not maximal.

The above result uses the remarkable theorem of G. Glauberman (see [@G; @S]). In what follows, we say that Question 1.1 *holds* for a group $G$ if no such pair of subgroups exists in $G$, that is, if every subgroup of $G$ isomorphic to a maximal subgroup is itself maximal. The above result tells us that Question 1.1 holds when $p\geq 5$, where $|G: H|=p^a$ for some positive integer $a$. So we only need to discuss this question in the cases $p\leq 3$.

Building on the results of the above authors, we find that a certain class of finite groups is important for Question 1.1, namely the groups of $characteristic~ l$, where $l$ is a prime number. Recall that a finite group $G$ is said to be of $characteristic~ l$ if $C_G(O_l(G)) \leq O_l(G).$ We have the following reduction theorem for Question 1.1.

**Theorem A.** For each prime $q$, assume that Question 1.1 holds for every group $G$ of characteristic $q$. Then Question 1.1 holds for every finite solvable group.

If $G$ is of $characteristic~ q$, then $C_G(O_q(G)) \leq O_q(G)$, so $$\mathrm{Aut}_G(O_q(G))=N_G(O_q(G))/C_G(O_q(G))=G/Z(O_q(G)).$$ Thus one can obtain information about $G$ from $\mathrm{Aut}(O_q(G))$; this is especially useful when $O_q(G)$ is small or abelian.

**Theorem 1.4.** Let $G$ be a finite solvable group with isomorphic subgroups $H$ and $K$, where $H$ is a maximal subgroup of $G$, and set $|G: H|=p^n$. Let $p\leq 3$ and $q=5-p$, and let $Q\in \mathrm{Syl}_q(H)$. If $|G|_q\leq q^4$, then $K$ is also maximal.

Recall that if $G$ is a $p$-soluble group, the $p$-length $l_p(G)$ is the number of factors of the lower $p$-series of $G$ that are $p$-groups (see [@Go p.227]).

**Theorem 1.5.** Let $G$ be a finite solvable group with isomorphic subgroups $H$ and $K$, where $H$ is a maximal subgroup of $G$ and $|G:H|$ is a power of the prime $p$. If $l_p(G)\leq 1$, then $K$ is also maximal.

From another point of view, a model of a constrained fusion system is also of $characteristic~ q$ for some prime number $q$. By Theorem A, we can obtain the following theorem.

**Theorem B.** Let $G$ be a finite solvable group with isomorphic subgroups $H$ and $K$, where $H$ is a maximal subgroup of $G$, and set $|G: H|=p^n$. Let $p\leq 3$ and $q=5-p$, and let $Q\in \mathrm{Syl}_q(H)$. If $\mathcal{F}_Q(H)\unlhd \mathcal{F}_Q(G)$, then $K$ is also maximal.
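As a small illustration of the characteristic-$l$ condition recalled above (this example is ours, added for illustration, and is not taken from the papers cited): take $G=S_4$ and $l=2$. Then $O_2(S_4)=V=\{1,(12)(34),(13)(24),(14)(23)\}$ is the Klein four-group and $C_{S_4}(V)=V$, so $C_{S_4}(O_2(S_4))\leq O_2(S_4)$ and the solvable group $S_4$ has characteristic $2$. By contrast, $O_2(S_3)=1$, so $C_{S_3}(O_2(S_3))=S_3\nleq O_2(S_3)$ and $S_3$ is not of characteristic $2$; it is of characteristic $3$, since $O_3(S_3)=A_3$ and $C_{S_3}(A_3)=A_3$.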
$Structure~ of ~ the~ paper:$ After recalling preliminary results, we give the proofs of Theorem A and Theorems 1.4 and 1.5 in Section 2. In Section 3, we give a proof of Theorem B.

**Preliminary results, and proofs of Theorem A, Theorem 1.4 and 1.5**
=====================================================================

The following results are very useful for the proof of Theorem A.

([@IR Lemma 2]) Let $G$ be a solvable group and $H\leq G$, where $|G: H|$ is a power of a prime $p$. Then $O_{p}(G)\cap H= O_{p}(H)$.

([@IR Theorem 3]) Let $H$ be a maximal subgroup of a solvable group $G$ with index a power of the prime $p$, and suppose that $K\leq G$ and $K\cong H$. If $O_p(G)\nleq H$, then $K$ is maximal in $G$.

([@GR Theorem A]) Let $H$ be a maximal subgroup of the finite solvable group $G$ and suppose that $|G : H|=p^a$ where $p$ is a prime and $a$ is a positive integer. Let $K$ be a subgroup of $G$ which is isomorphic to $H$. Suppose that $K$ is not maximal in $G$. Then $p \leq 3$, and, for $q =5-p$, we have $$O_{q'}(H)=O_{q'}(G)=O_{q'}(K)$$ and, for $G^{\ast}= G/O_{q'}(G)$, etc., $H^{\ast}$ and $K^{\ast}$ are isomorphic subgroups of $G^{\ast}$ with $H^{\ast}$ maximal and $K^{\ast}$ not maximal.

([@GR Theorem B]) Let $H$ be a maximal subgroup of the finite solvable group $G$ and suppose that $|G : H|=p^a$ where $p\leq 3$ is a prime and $a$ is a positive integer. Let $K$ be a subgroup of $G$ which is isomorphic to $H$. Suppose that $K$ is not maximal in $G$ and that $F(H)$, $F(K)$ and $F(G)$ are all $q$-groups, where $q =5-p$. Let $Q$ be a Sylow $q$-subgroup of $H$. Then $G$ has a homomorphic image $G^{\ast}$ such that $H^{\ast}$ and $K^{\ast}$ (the respective images of $H$ and $K$) are isomorphic subgroups of $G^{\ast}$ with $H^{\ast}$ maximal and $K^{\ast}$ not maximal, and with $F(G^{\ast}), F(H^{\ast})$ and $F(K^{\ast})$ all $q$-groups. Furthermore, $O_{\{2,3\}}(K^{\ast})$ involves $Qd(q)$ and no non-identity characteristic subgroup of $Q^{\ast}$ is normal in $H^{\ast}$.

By the above two theorems, Question 1.1 holds when $p\geq 5$, where $p$ is the prime such that $|G: H|=p^a$ for some positive integer $a$. So we only need to consider the cases $p\leq 3$.

We now prove Theorem A, which can be seen as a corollary of [@GR Theorem A] and [@GR Theorem B].

**Theorem A.** For each prime $q$, assume that Question 1.1 holds for every group $G$ of characteristic $q$. Then Question 1.1 holds for every finite solvable group.

Suppose that $(G, H, K)$ is a counterexample with $|G|$ minimal. Since $H$ is maximal in the solvable group $G$, we can set $|G:H|=p^n$ for some prime $p$ and positive integer $n$.

[**Case 1.**]{} $O_p(G)\neq 1$.

By [@IR Theorem 3], we have $O_{p}(G)\leq H$. By [@IR Lemma 2], we have $$O_{p}(G)=O_{p}(G)\cap H= O_{p}(H),~~~~O_{p}(G)\cap K= O_{p}(K).$$ Since $H\cong K$, we have $O_{p}(H)\cong O_{p}(K)$, so $|O_p(K)|=|O_p(G)|$ and hence $O_{p}(G)=O_p(K)\leq K$. Now consider $(G/O_{p}(G), H/O_{p}(G), K/O_{p}(G))$. Since $|G/O_{p}(G)|<|G|$, this triple is not a counterexample by minimality, so $K/O_{p}(G)$ is maximal in $G/O_{p}(G)$, and therefore $K$ is maximal in $G$. That is a contradiction.

[**Case 2.**]{} $O_p(G)=1$.

First, since $G$ is solvable, we have $O_{p'}(G)\neq 1$. Since $(G, H, K)$ is a counterexample, [@GR Theorem A] gives $p\leq 3$; set $q=5-p$. Moreover, by [@GR Theorem A], passing to $G/O_{q'}(G)$ would again yield a counterexample, so by minimality $O_{q'}(G)=1$. Hence the Fitting subgroup satisfies $F(G)=O_{q}(G)$, and $O_{q}(G)\neq 1$ because $O_{p'}(G)\neq 1$. Since $C_{G}(F(G))\leq F(G)$, we have $C_{G}(O_{q}(G))\leq O_{q}(G)$. This implies that $G$ is a group of characteristic $q$.
But by the assumption, we know that Question 1.1 holds for every characteristic $q$-groups $G$. Hence, that is a contradiction. So, we complete the proof. Now, we will prove Theorem 1.4 as follows. Let $G$ be a finite solvable group and $G$ has isomorphic subgroup $H$ and $K$. Let $H$ is maximal subgroup of $G$, we can set $|G: H|=p^n$. Let $p\leq 3$ and $q=5-p$. Let $Q\in Syl_q(H)$. If $|G|_q\leq q^4$, then $K$ is also maximal. Suppose that $(G, H, K)$ is a counterexample. Since $H$ is maximal in a solvable group $G$, we can set $|G:H|=p^n$ for some prime $p$ and positive integer $n$. [**Case 1.**]{} $O_p(G)\neq 1$. By [@IR Theorem 3], we have $O_{p}(G)\leq H$. By [@IR Lemma 2], we have $$O_{p}(G)=O_{p}(G)\cap H= O_{p}(H),~~~~O_{p}(G)\cap K= O_{p}(K).$$ Since $H\cong K$, we have $O_{p}(H)\cong O_{p}(K)$. Hence, $O_{p}(G)\leq K$. Now, we focus on $(G/O_{p}(G), H/O_{p}(G), K/O_{p}(G))$. Since $H/O_{p}(G)\cong K/O_{p}(G)$ and $|G/O_{p}(G)|_q=|G|_q\leq q^4$, we can see that $K/O_{p}(G)$ is maximal in $G/O_{p}(G)$ because $(G, H, K)$ is a counterexample. So $K$ is maximal in $G$. That is a contradiction. [**Case 2.**]{} $O_p(G)=1$. First, since $G$ is solvable, we have $O_{p'}(G)\neq 1$. By [@GR Theorem A], we can see that $O_{q'}(G)=1$ because $(G, H, K)$ is a counterexample. So $F(G)=O_{q}(G)$ and $O_{q}(G)\neq 1$ because $O_{p'}(G)\neq 1$. Here, we have $C_{G}(O_{q}(G))\leq O_{q}(G)$. Since $|G:H|=p^n$, we have $O_{q}(G)\leq H$. Similarly, $O_{q}(G)\leq K$. By the assumption $|G|_q\leq q^4$, we can discuss as follows. [**Case 2.1.**]{} $|O_{q}(G)|=q^4$. Since $H\cong K$, we can set an isomorphic map $\alpha: K\to H$. So $\alpha$ set $O_{q}(G)$ to $\alpha(O_{q}(G))$. Here, $\alpha(O_{q}(G))\leq H, O_{q}(G)\leq H$. Hence $\alpha(O_{q}(G))O_{q}(G)=O_{q}(G)$ because $|G|_q\leq q^4$. So $\alpha(O_{q}(G))=O_{q}(G)$. Then we can consider $(G/O_{q}(G), H/O_{q}(G), K/O_{q}(G))$. Since $H/O_{q}(G)= H/\alpha(O_{q}(G)\cong K/O_{q}(G)$, we have $K/O_{q}(G)$ is maximal in $G/O_{q}(G)$ because $(G, H, K)$ is a counterexample. So $K$ is maximal in $G$. That is a contradiction. [**Case 2.2.**]{} $|O_{q}(G)|=q^3$. Since $H\cong K$, we can set an isomorphic map $\alpha: K\to H$. So $O_{q}(G)$ is sent to $\alpha(O_{q}(G))$ by map $\alpha$. If $\alpha(O_{q}(G))=O_{q}(G)$, then we can consider $(G/O_{q}(G), H/O_{q}(G), K/O_{q}(G))$. Since $H/O_{q}(G)= H/\alpha(O_{q}(G))\cong K/O_{q}(G)$, we have $K/O_{q}(G)$ is maximal in $G/O_{q}(G)$ because $(G, H, K)$ is a counterexample. So $K$ is maximal in $G$. That is a contradiction. Hence, $\alpha(O_{q}(G))\neq O_{q}(G)$, we have $$\alpha(O_{q}(G))O_{q}(G)\gneq O_q(G).$$ Since $|O_{q}(G)|=q^3$ and $|G|_q\leq q^4$, we have $\alpha(O_{q}(G))O_{q}(G)\in \mathrm{Syl}_q(G)$. Set $Q:=\alpha(O_{q}(G))O_{q}(G)$, we have $\alpha^{-1}(Q)\in\mathrm{Syl}_q(G)$. There exists $g\in G$ such that $Q=\alpha^{-1}(Q)^g$. Now we can consider $(G, H, K^g)$. We have $Q=\alpha^{-1}(Q)^g\leq K^g$. So $Q$ is sent to $Q$ by morphism $$\xymatrix@C=0.5cm{ K^g \ar[rr]^{c_{g^{-1}}} && K \ar[rr]^{\alpha} && H}.$$ Since $Q\unlhd H$, we have $Q\unlhd K^g$. If $K^g\leq H$, we can see that $K^g$ is maximal in $G$. That is a contradiction. Hence, $K^g\nleq H$. So $Q\unlhd G$. Now, we can consider $(G/Q, H/Q, K^g/Q)$. Since $$K^g/Q\cong K/\alpha^{-1}(Q)\cong H/Q,$$ we have $K^g$ is maximal in $G$. That is contradiction. [**Case 2.3.**]{} $|O_{q}(G)|\leq q^2$. By similar reason of the above case, we can set $\alpha(O_{q}(G))\neq O_{q}(G)$ and $1\neq\alpha(O_{q}(G))\cap O_{q}(G)\lneq O_{q}(G)$. 
Set $N_1=\alpha(O_{q}(G))\cap O_{q}(G)$ and $N_2=\alpha(N_1)\cap O_{q}(G)$. It is easy to see that $N_2\leq N_1$. Since $|O_{q}(G)|\leq q^2$, we have either $N_2=1$ or $N_2=N_1$. If $N_2=N_1$, we have $N_1=N_2=\alpha(N_1)\cap O_{q}(G)$. So $N_1=\alpha(N_1)$. Since $N_1\unlhd H$, we have $N_1\unlhd K$. So, $N_1\unlhd G$. Now, we consider $(G/N_1, H/N_1, K/N_1)$, we have $K$ is maximal in $G$. That is contradiction. If $N_2=1$, we have $\alpha(N_1)\cap O_{q}(G)=1$. But $G/O_{q}(G)$ is isomorpical to a subgroup of $\mathrm{Aut}(O_{q}(G))$, we have $|G|_q\leq q^3$. Hence $\alpha(O_{q}(G))O_{q}(G)\in \mathrm{Syl}_q(G)$. So, by the similar reason of above case, we can get a contradiction. So, we complete the proof. Now, we will prove Theorem 1.5 as follows. First, recall that if $G$ is a $p$-soluble group, the $p$-length $l_p(G)$ is the number of factors of the lower $p$-series of $G$ that are $p$-groups(see [@Go p.227]). Let $G$ be a finite solvable group and $G$ has isomorphic subgroup $H$ and $K$. Let $H$ is maximal subgroup of $G$. If $l_p(G)\leq 1$, then $K$ is also maximal. Suppose that $(G, H, K)$ is a counterexample. Since $H$ is maximal in a solvable group $G$, we can set $|G:H|=p^n$ for some prime $p$ and positive integer $n$. [**Case 1.**]{} $O_p(G)\neq 1$. By [@IR Theorem 3], we have $O_{p}(G)\leq H$. By [@IR Lemma 2], we have $$O_{p}(G)=O_{p}(G)\cap H= O_{p}(H),~~~~O_{p}(G)\cap K= O_{p}(K).$$ Since $H\cong K$, we have $O_{p}(H)\cong O_{p}(K)$. Hence, $O_{p}(G)\leq K$. Now, we focus on $(G/O_{p}(G), H/O_{p}(G), K/O_{p}(G))$. Since $l_p(G)\leq 1$, we have $SO_{p'}(G)\unlhd G$ for some Sylow $p$-subgroup of $G$. We can see that $O_{p'}(G)\leq O_{p, p'}(G)$, so $$SO_{p, p'}(G)=SO_{p'}(G)O_{p, p'}(G)\unlhd G.$$ Hence $l_{p}(G/O_{p}(G))\leq 1$. So, we can see that $K/O_{p}(G)$ is maximal in $G/O_{p}(G)$ because $(G, H, K)$ is a counterexample. Hence, $K$ is maximal in $G$. That is a contradiction. [**Case 2.**]{} $O_p(G)=1$. Since $G$ is solvable, we have $F(G)\leq O_{p'}(G)\neq 1$. And $C_{G}(O_{p'}(G))\leq O_{p'}(G)$. Now, we assert that $O_{p'}(G)\leq H$. If $O_{p'}(G)\nleq H$, thus $O_{p'}(G)\cap H\lneq O_{p'}(G)$. By $$\frac{|HO_{p'}(G)|}{|H|}=\frac{|O_{p'}(G)|}{|O_{p'}(G)\cap H|},$$ we have $r||HO_{p'}(G): H|$ for some prime $r$ which is not $p$. That is a contradiction to $|G: H|=p^n$. Hence, $O_{p'}(G)\leq H$. Similarly, we have $O_{p'}(G)\leq K$ because $|G:K|=|G:H|=p^n$. First, we assert that $O_{p'}(G)$ is not a Hall $p'$-subgroup of $G$. Else, $H/O_{p'}(G)\cong K/O_{p'}(G)$. Then we can get a contradiction by induction. Since $l_p(G)\leq 1$, for each $S\in \mathrm{Syl}_p(G)$, we have $T:=SO_{p'}(G)=O_{p',p}(G)\unlhd G$. And $|G: H|=p^n$, we have $T\nleq H$. Similarly, $T\nleq K.$ Now, we can see that $$H\cap T=H\cap SO_{p'}(G)=(H\cap S)O_{p'}(G)\unlhd H.$$ Since $S\nleq H$, thus $N_{S}(H\cap S)\gneq H\cap S$. So let $x\in N_{S}(H\cap S)- H\cap S$, then $$((H\cap S)O_{p'}(G))^x=(H\cap S)^x O_{p'}(G)=(H\cap S)O_{p'}(G).$$ But $x\notin H$ and $H$ is maximal in $G$. Hence, we have $$(H\cap S)O_{p'}(G)\unlhd G$$ because $(H\cap S)O_{p'}(G)\unlhd H.$ Let $R\in \mathrm{Syl}_p(H)$, there exists $t\in G$ such that $R\leq S^t$. For $S^t$, we have $S^tO_{p'}(G)=SO_{p'}(G)\unlhd G$. Then $$(H\cap S^t)O_{p'}(G)\unlhd G$$ and $H\cap S^t\geq R$. So $H\cap S^t=R\in \mathrm{Syl}_p(H)$. Now, we replace $S^t$ by $S$. 
That means $$(H\cap S)O_{p'}(G)\unlhd G~ \mathrm{and} ~H\cap S \in \mathrm{Syl}_p(H).$$ [**Case 2.1.**]{} $(K\cap S)O_{p'}(G)\leq H.$ Then $(K\cap S)O_{p'}(G)\leq (H\cap S)O_{p'}(G).$ We know that $G=KSO_{p'}(G)=HSO_{p'}(G)$ and $SO_{p'}(G)\unlhd G$. So $$K/((K\cap S)O_{p'}(G))\cong G/SO_{p'}(G)\cong H/((H\cap S)O_{p'}(G)).$$ Since $K\cong H$, we have $|(K\cap S)O_{p'}(G)|=|(H\cap S)O_{p'}(G)|$. Then $$(K\cap S)O_{p'}(G)=(H\cap S)O_{p'}(G).$$ Now, for $(G/((H\cap S)O_{p'}(G)), K/((H\cap S)O_{p'}(G)), H/((H\cap S)O_{p'}(G)))$, we assert that $l_p(G/((H\cap S)O_{p'}(G)))\leq 1$. Since $$O_{p'}(G/((H\cap S)O_{p'}(G)))\cdot SO_{p'}(G)/((H\cap S)O_{p'}(G))\unlhd G/((H\cap S)O_{p'}(G)),$$ we have $l_p(G/((H\cap S)O_{p'}(G)))\leq 1$. So $K/((H\cap S)O_{p'}(G))$ is maximal in $G/((H\cap S)O_{p'}(G))$. Hence, $K$ is maximal in $G$. That is a contradiction. [**Case 2.2.**]{} $(K^u\cap S)O_{p'}(G)\nleq H$ for each $u\in G$. Since $H$ is maximal in $G$, we have $((K^u\cap S)O_{p'}(G))H=G$. We assert that $K^u(H\cap S)=G$. Since $$|G|=|H(K^u\cap S)|=\frac{|H||K^u\cap S|}{|K^u\cap H\cap S|}$$ for each $u\in G$, we can choose $u_0$ such that $K^{u_0}\cap S\in \mathrm{Syl}_p(K^{u_0}).$ So $$\frac{|H||K^{u_0}\cap S|}{|K^{u_0}\cap H\cap S|}=\frac{|K^{u_0}||H\cap S|}{|K^{u_0}\cap H\cap S|}=|K^{u_0}(H\cap S)|$$ because $K^{u_0}\cong H$. Hence, $K^{u_0}(H\cap S)=G$. Now, we replace $K^{u_0}$ by $K$. That means $K(H\cap S)=G.$ Set $V=(H\cap S)O_{p'}(G)$ which is a normal subgroup of $G$. Set $Y:=H\cap K$ and $\alpha(Y)=X$ where $\alpha: K\to H$ is an isomorphic map. First, we assert that $Y$ is maximal in $K$. Since $KV=G$, there exists an isomorphism $\phi: G/V\to K/K\cap V$. And we can see that $\phi(H/V)=(H\cap K)/(K\cap V)= Y/(K\cap V).$ Since $H/V$ is maximal in $G/V$, we have $Y/(K\cap V)$ is maximal in $K/(K\cap V).$ Hence, $Y$ is maximal in $K$, as wanted. Then $X$ is maximal in $H$. Since $H\cap S\in \mathrm{Syl}_p(H)$ and $(H\cap S)O_{p'}(G)\unlhd H$, we have $l_p(H)\leq 1$. By induction we have $Y$ is also maximal in $H$. Let $K\leq L\lneq G$. Then $H\geq L\cap H\geq H\cap K=Y$. If $L\cap H=H$, then $H\leq L$. So $L=G$. Hence, $L\cap H= H\cap K$. And $$L=L\cap G= L\cap KV=K(L\cap V)=K(L\cap ((H\cap S)O_{p'}(G))).$$ But $L\cap ((H\cap S)O_{p'}(G))=(L\cap (H\cap S))O_{p'}(G)=(K\cap H\cap S)O_{p'}(G)\leq K$. Hence, $L\leq K$. That means $K$ is maximal in $G$. That is a contradiction. So, we complete the proof. **Notation of fusion systems, and proof of Theorem B** ====================================================== In this section we collect some known results that will be needed later. For the background theory of fusion systems, we refer to [@AsKO; @BLO1; @BLO2]. A $fusion ~ system$ $\mathcal{F}$ over a finite $p$-group $S$ is a category whose objects are the subgroups of $S$, and whose morphism sets $\mathrm{Hom}_{\mathcal{F}}(P,Q)$ satisfy the following two conditions: 0.3cm \(a) $\mathrm{Hom}_{S}(P,Q)\subseteq \mathrm{Hom}_{\mathcal{F}}(P,Q)\subseteq\mathrm{Inj}(P,Q)$ for all $P,Q\leq S$. 0.3cm \(b) Every morphism in $\mathcal{F}$ factors as an isomorphism in $\mathcal{F}$ followed by an inclusion. Let $\mathcal{F}$ be a fusion system over a $p$-group $S$. $\bullet$ Two subgroups $P,Q$ are $\mathcal{F}$-$conjugate$ if they are isomorphic as objects of the category $\mathcal{F}$. Let $P^{\mathcal{F}}$ denote the set of all subgroups of $S$ which are $\mathcal{F}$-conjugate to $P$. 
Since $\mathrm{Hom}_{\mathcal{F}}(P,P)\subseteq\mathrm{Inj}(P,P)$, we usually write $\mathrm{Hom}_{\mathcal{F}}(P,P)=\mathrm{Aut}_{\mathcal{F}}(P)$ and $\mathrm{Hom}_{S}(P,P)=\mathrm{Aut}_{S}(P)$. $\bullet$ A subgroup $P\leq S$ is $fully~automised$ in $\mathcal{F}$ if $\mathrm{Aut}_{S}(P)\in \mathrm{Syl}_{p}(\mathrm{Aut}_{\mathcal{F}}(P))$. $\bullet$ A subgroup $P\leq S$ is $receptive$ in $\mathcal{F}$ if it has the following property: for each $Q\leq S$ and each $\varphi\in \mathrm{Iso}_{\mathcal{F}}(Q, P)$, if we set $$N_{\varphi}=\{g\in N_{S}(Q)|\varphi \circ c_{g}\circ \varphi^{-1}\in \mathrm{Aut}_{S}(P)\},$$ then there is $\overline{\varphi}\in \mathrm{Hom}_{\mathcal{F}}(N_{\varphi},S)$ such that $\overline{\varphi}|_{Q}=\varphi$. (where $c_{g}:x\longmapsto g^{-1}xg$ for $g\in S$) $\bullet$ A fusion system $\mathcal{F}$ over a $p$-group $S$ is $saturated$ if each subgroup of $S$ is $\mathcal{F}$-conjugate to a subgroup which is fully automised and receptive. Let $\mathcal{F}$ be a fusion system over a $p$-group $S$. $\bullet$ A subgroup $P\leq S$ is $fully~normalized$ in $\mathcal{F}$ if $|N_{S}(P)|\geq |N_{S}(Q)|$ for all $Q\in P^{\mathcal{F}}$. $\bullet$ A subgroup $P\leq S$ is $\mathcal{F}$-$centric$ if $C_{S}(Q)=Z(Q)$ for $Q\in P^{\mathcal{F}}$. $\bullet$ Let $\mathcal{F}^{c}$ denote the full subcategory of $\mathcal{F}$ whose objects are $\mathcal{F}$-centric, $\bullet$ Let $\mathcal{F}^{f}$ denote the full subcategory of $\mathcal{F}$ whose objects are fully normalized in $\mathcal{F}$. $\bullet$ A subgroup $P\leq S$ is $normal$ in $\mathcal{F}$ (denoted $P\trianglelefteq \mathcal{F}$) if for all $Q,R\in S$ and all $\varphi\in \mathrm{Hom }_{\mathcal{F}}(Q,R)$, $\varphi$ extends to a morphism $\overline{\varphi}\in \mathrm{Hom }_{\mathcal{F}}(QP,RP)$ such that $\overline{\varphi}(P)=P$. Moreover, $O_{p}(\mathcal{F})$ denotes the largest subgroup of $S$ which is normal in $\mathcal{F}$. [@AsKO I, Definition 6.1] Let $\mathcal{F}$ a saturated fusion system over a finite $p$-group $S$. Let $\mathcal{E}$ be a subsystem of $\mathcal{F}$ over a subgroup $T$ of $S$. $\bullet$ Define $\mathcal{E}$ to be $\mathcal{F}$-invariant if: (I1) $T$ is strongly closed in $S$ with respect to $\mathcal{F}$; (I2) For each $P\leq Q\leq T$, $\phi\in \mathrm{Hom}_{\mathcal{E}}(P, Q)$, and $\alpha\in \mathrm{Hom}_{\mathcal{F}}(Q, S)$, $\phi^{\alpha}\in \mathrm{Hom}_{\mathcal{E}}(\alpha(P), T)$. If $\mathcal{E}$ is saturated, we call that $\bullet$ A subsystem $\mathcal{E}\subseteq \mathcal{F}$ is weakly normal in $\mathcal{F}$ ($\mathcal{E}\dot{\unlhd} \mathcal{F}$) if $\mathcal{E}$ is saturated and $\mathcal{E}$ is $\mathcal{F}$-invariant. $\bullet$ A weakly normal subsystem $\mathcal{E}\dot{\unlhd} \mathcal{F}$ is normal in $\mathcal{F}$ if: (N1) Each $\phi\in \mathrm{Aut}_{\mathcal{E}}(T)$ extends to $\hat{\phi}\in \mathrm{Aut}_{\mathcal{F}}(TC_S(T))$ such that $[\hat{\phi}, C_S(T)]\leq Z(T)$. We write $\mathcal{E}\unlhd \mathcal{F}$ to indicate that $\mathcal{E}$ is normal in $\mathcal{F}$. $\bullet$ $\mathcal{F}$ is simple if it contains no proper nontrivial normal fusion subsystem. $\bullet$ Define $O^p(\mathcal{F})$ to be the minimal normal subsystem of $\mathcal{F}$ which has $p$-power index in $\mathcal{F}$ (See [@AsKO I, Theorem 7.4]). $\bullet$ Define $O^{p'}(\mathcal{F})$ to be the minimal normal subsystem of $\mathcal{F}$ which has index prime to $p$ in $\mathcal{F}$. Now, we introduce constrained fusion systems. 
For the theory of constrained fusion systems, we refer to [@AsKO; @BCGLO; @BLO2]. And the definition of component of fusion system is due to [@As5; @As6]. [@AsKO; @BCGLO] A saturated fusion system $\mathcal{F}$ is $constrained$ if $\mathcal{F}$ contains a normal centric $p$-subgroup, i.e., $O_{p}(\mathcal{F})$ is centric. (Model theorem for constrained fusion systems [@AsKO III, 5.10],[@BCGLO]. Let $\mathcal{F}$ be a constrained, saturated fusion system over a $p$-group $S$. Fix $Q\in \mathcal{F}^c$ such that $Q\unlhd \mathcal{F}$. Then the following hold. \(a) There is a model for $\mathcal{F}$: a finite group $G$ with $S\in \mathrm{Syl}_p(G)$ such that $Q\unlhd G$, $C_G(Q) \leq Q,$ and $\mathcal{F}_S(G) = \mathcal{F}$. \(b) For any finite group $G$ such that $S\in \mathrm{Syl}_p(G)$ such that $Q\unlhd G$, $C_G(Q) \leq Q,$ and $\mathrm{Aut}_{G}(Q) = \mathrm{Aut}_{\mathcal{F}}(Q)$, there is $\beta \in \mathrm{Aut}(S)$ such that $\beta|_Q = \mathrm{Id}_Q$ and $\mathcal{F}_S(G) =~ ^{\beta}\mathcal{F}$. \(c) The model $G$ is unique in the following strong sense: if $G_1, G_2$ are two finite groups such that $S\in \mathrm{Syl}_p(G_i)$, $Q\unlhd G_i$, $\mathcal{F}_S(G_i) = \mathcal{F}$, and $C_{G_i}(Q) \leq Q,$ for $i = 1, 2$, then there is an isomorphism $\psi: G_1\longrightarrow G_2$ such that $\psi|_S = \mathrm{Id}_S.$ If $\psi$ and $\psi'$ are two such isomorphisms, then $\psi' = \psi \circ c_z$ for some $z\in Z(S)$. [@As5 Theorem 1] Let $\mathcal{F}$ be a constrained, saturated fusion system over a finite $p$-group $S$, $G$ a model of $\mathcal{F}$ and $\mathcal{E}\unlhd \mathcal{F}$. Then there is a unique normal subgroup of $G$ which is a model of $\mathcal{E}$. Let $G$ be a finite solvable group and $G$ has isomorphic subgroup $H$ and $K$. Let $H$ is maximal subgroup of $G$, we can set $|G: H|=p^n$. Let $p\leq 3$ and $q=5-p$. Let $Q\in Syl_q(H)$. If $\mathcal{F}_Q(H)\unlhd \mathcal{F}_Q(G)$, then $K$ is also maximal. Suppose that $(G, H, K)$ is a counterexample. Since $H$ is maximal in a solvable group $G$, we can set $|G:H|=p^n$ for some prime $p$ and positive integer $n$. [**Case 1.**]{} $O_p(G)\neq 1$. By [@IR Theorem 3], we have $O_{p}(G)\leq H$. By [@IR Lemma 2], we have $$O_{p}(G)=O_{p}(G)\cap H= O_{p}(H),~~~~O_{p}(G)\cap K= O_{p}(K).$$ Since $H\cong K$, we have $O_{p}(H)\cong O_{p}(K)$. Hence, $O_{p}(G)\leq K$. Now, we focus on $(G/O_{p}(G), H/O_{p}(G), K/O_{p}(G))$, we can see that $K/O_{p}(G)$ is maximal in $G/O_{p}(G)$ because $(G, H, K)$ is a counterexample. So $K$ is maximal in $G$. That is a contradiction. [**Case 2.**]{} $O_p(G)=1$. First, since $G$ is solvable, we have $O_{p'}(G)\neq 1$. And $O_{p'}(G)\leq H$ because $|G: H|=p^n$. By [@GR Theorem A], we can see that $O_{q'}(G)=1$ because $(G, H, K)$ is a counterexample. So $F(G)=O_{q}(G)$ and $O_{q}(G)\neq 1$ because $O_{p'}(G)\neq 1$. Since $C_{G}(O_{q}(G))\leq O_{q}(G)$, it implies $G$ is a model of fusion system $\mathcal{F}_Q(G)$. Since $\mathcal{F}_Q(H)\unlhd \mathcal{F}_Q(G)$, thus there exists a normal subgroup $U$ of $G$ such that $$\mathcal{F}_Q(H)=\mathcal{F}_Q(U)$$ by [@As5 Theorem 1]. Since $\mathcal{F}_Q(H)=\mathcal{F}_Q(U),$ we have $$\mathrm{Aut}_H(O_q(G))=\mathrm{Aut}_U(O_q(G)).$$ So for each $h\in H$, we have $c_h|_{O_q(G)}=c_u|_{O_q(G)}$ for some $u\in U$. That means $$hu^{-1}\in C_G(O_{q}(G))\leq O_q(G)\leq H\cap U.$$ Hence, $H=U\unlhd G$. Since $G/H$ is a $p$-group, we have that $|G/H|=p$ because $H$ is maximal in $G$. Hence, $K$ is maximal in $G$. That is a contradiction. So, we complete the proof. 
Let $G$ be a finite solvable group and $G$ has isomorphic subgroup $H$ and $K$. Let $H$ is maximal subgroup of $G$, we can set $|G: H|=p^n$. Let $p\leq 3$ and $q=5-p$. Let $Q\in Syl_q(H)$. Set $\mathcal{F}:=\mathcal{F}_Q(G)$. If $ O^{q'}(\mathcal{F})\geq\mathcal{F}_Q(H)$ and $ O^{q}(\mathcal{F})=\mathcal{F}$, then $K$ is also maximal. Suppose that $(G, H, K)$ is a counterexample. Since $H$ is maximal in a solvable group $G$, we can set $|G:H|=p^n$ for some prime $p$ and positive integer $n$. [**Case 1.**]{} $O_p(G)\neq 1$. By [@IR Theorem 3], we have $O_{p}(G)\leq H$. By [@IR Lemma 2], we have $$O_{p}(G)=O_{p}(G)\cap H= O_{p}(H),~~~~O_{p}(G)\cap K= O_{p}(K).$$ Since $H\cong K$, we have $O_{p}(H)\cong O_{p}(K)$. Hence, $O_{p}(G)\leq K$. Now, we focus on $(G/O_{p}(G), H/O_{p}(G), K/O_{p}(G))$, we can see that $K/O_{p}(G)$ is maximal in $G/O_{p}(G)$ because $(G, H, K)$ is a counterexample. So $K$ is maximal in $G$. That is a contradiction. [**Case 2.**]{} $O_p(G)=1$. Since $G$ is solvable, we have $F(G)=O_{q}(G)=O_{p'}(G)\neq 1$. And $C_{G}(O_q(G))\leq O_q(G)$. So $G$ is a model of fusion system $\mathcal{F}_Q(G)$. Since $O^{q'}(\mathcal{F})\unlhd \mathcal{F}_Q(G)$ and $ O^{q}(\mathcal{F})\unlhd \mathcal{F}_Q(G)$, thus there exist normal subgroup $U$ of $G$ such that $$O^{q'}(\mathcal{F})=\mathcal{F}_Q(U)$$ by [@As5 Theorem 1]. We have $O_{p'}(G)\leq H$ because $|G: H|=p^n$. Similarly, we have $O_{p'}(G)\leq K$ because $|G:K|=|G:H|=p^n$. Since $\mathcal{F}_Q(U)= O^{q'}(\mathcal{F})\geq\mathcal{F}_Q(H),$ we have $$\mathrm{Aut}_U(O_q(G))\geq \mathrm{Aut}_H(O_q(G)).$$ So for each $h\in H$, we have $c_u|_{O_q(G)}=c_h|_{O_q(G)}$ for some $u\in U$. That means $$hu^{-1}\in C_G(O_{q}(G))\leq O_q(G)\leq Q\leq U.$$ Hence, $H\leq U.$ Since $H$ is maximal in $G$, we have $U=H$ or $U=G$. If $H=U\unlhd G$, we have $K$ is also maximal in $G$ by above theorem. That is a contradiction. So, we have $U=G.$ That means $\mathcal{F}=O^{q'}(\mathcal{F})$. Since $O^{q}(\mathcal{F})=\mathcal{F}$, we have $\mathcal{F}$ is not Puig-solvable. But $G$ is a model of $\mathcal{F}$ and $G$ is solvable, we can see that $\mathcal{F}$ is Puig-solvable by [@AsKO Part II, Theorem 12.4]. That is a contradiction. So, we complete the proof. **ACKNOWLEDGMENTS**The authors would like to thank Prof. C. Broto for his discussion on the definition of characteristic p type group. The authors would like to thank Southern University of Science and Technology for their kind hospitality hosting joint meetings of them. [13]{} L. Alperin, R. Bell, Groups and Representations, Spinger, (1995). M. Aschbacher, Finite Group Theory, Cambridge University Press, (1986). M. Aschbacher, Normal subsystems of fusion systems, Proc. London Math. Soc. 97(2008), 239-271. M. Aschbacher, The generalized Fitting subsystem of a fusion system, Memoirs Amer. Math. Soc. 209 (2011), no. 986. M. Aschbacher, Generation of fusion systems of characteristic 2-type, Invent. Math. 180(2)(2010), 225-299. M. Aschbacher, $N$-groups and fusion systems, J. Algebra 449 (2016), 264-320. M. Aschbacher, R. Kessar, B. Oliver, Fusion systems in algebra and topology, London Mathematical Society Lecture Note Series 391, Cambridge University Press, Cambridge, (2011). C. Broto, N. Castellana, J. Grodal, R. Levi, B. Oliver, Subgroup families controlling -local finite groups, Proc. London Math. Soc. 91 (2005), 325-354. C. Broto, R. Levi, B. Oliver, Homotopy equivalences of $p$-completed classifying spaces of finite groups, Invent. Math. 151 (2003), 611-664. C. Broto, R. Levi, B. 
Oliver, The homotopy theory of fusion systems, J. Amer. Math. Soc. 16 (2003), 779-856. C. Broto, R. Levi, B. Oliver, The theory of $p$-local groups: A survey, Homotopy theory (Northwestern Univ. 2002), Contemp. math., 346, Amer. Math. Soc. (2004), 51-84. G. Glauberman, A characteristic subgroup of a $p$-stable group, Canad. J. Math. 20(1968) 1101-1135. G. Glauberman, G.R. Robinson, More on a question of M. Newman on isomorphic subgroups of solvable groups J. Alg. 532(2019) 1-7. D. Gorenstein, Finite groups, Chelsea publishing company, New York, (1980). I.M. Isaacs, G.R. Robinson, Isomorphic subgroups of solvable groups, Proc. Amer. Math. Soc. 143(2015) 3371-3376. D.J.S. Robinson, A course in the theory of groups, Springer, (1995). B. Stellmacher, A characteristic subgroup of $S_4$-free groups, Israel J. Math 94(1996) 367-379.
--- abstract: 'We present an overview of the HALOGAS (Hydrogen Accretion in LOcal GAlaxieS) Survey, which is the deepest systematic investigation of cold gas accretion in nearby spiral galaxies to date. Using the deep [[Hi]{}]{} data that form the core of the survey, we are able to detect neutral hydrogen down to a typical column density limit of about $10^{19}\,\mathrm{cm^{-2}}$ and thereby characterize the low surface brightness extra-planar and anomalous-velocity neutral gas in nearby galaxies with excellent spatial and velocity resolution. Through comparison with sophisticated kinematic modeling, our 3D HALOGAS data also allow us to investigate the disk structure and dynamics in unprecedented detail for a sample of this size. Key scientific results from HALOGAS include new insight into the connection between the star formation properties of galaxies and their extended gaseous media, while the developing HALOGAS catalogue of cold gas clouds and streams provides important insight into the accretion history of nearby spirals. We conclude by motivating some of the unresolved questions to be addressed using forthcoming 3D surveys with the modern generation of radio telescopes.' author: - 'George Heald$^{1,2}$' - the HALOGAS Team title: The WSRT HALOGAS Survey --- Background ========== Spiral galaxies require a continuous supply of gas to maintain star formation over long timescales and to explain their metallicity distribution and evolution [see @sancisi_etal_2008 and references therein]. What form this gas supply takes has not been observationally established, but it may be that the dominant form of supply is cold gas accretion [e.g., @keres_etal_2009]. In the Milky Way (MW) and the nearest galaxies high velocity clouds (HVCs) are observed [@wakker_vanwoerden_1997; @thilker_etal_2004], some with low metallicity [@wakker_etal_1999; @collins_etal_2007]. The HVCs may contribute a small fraction of the fuel needed for star formation, but a more complete census is required to be able to draw general conclusions. While distinct [[Hi]{}]{} clouds of this nature are not commonly detected in the outskirts of galaxies [e.g., @giovanelli_etal_2007; @putman_etal_2012], another morphological feature – thick gas disks – is becoming increasingly recognized as a typical structural feature of spiral galaxies [e.g., @oosterloo_etal_2007; @putman_etal_2012]. These gas disks can contribute up to $\approx10-20\%$ of the total [[Hi]{}]{} mass of their host galaxies, and have distinct kinematics [e.g., @fraternali_etal_2002]. These slowly rotating thick disks may be connected to the accretion history [@marinacci_etal_2010] and provide an observational tracer of coronal material feeding the disk. Searching for clouds and for thick [[Hi]{}]{} disks requires specialized deep observations. The HALOGAS [Hydrogen Accretion in Local Galaxies; @heald_etal_2011] Survey was undertaken to provide the required capability for a large enough sample to begin to draw statistical conclusions. The core of the survey are Westerbork Synthesis Radio Telescope (WSRT) observations of 22 edge-on and moderately-inclined nearby galaxies, each observed for 120 hours in the 21-cm [[Hi]{}]{} line. The typical $5\sigma$ column density sensitivity is $N_\mathrm{HI}\lesssim10^{19}\,\mathrm{cm^{-2}}$ for typical linewidth of $\Delta\,v=12\,\mathrm{km\,s^{-1}}$, making HALOGAS the deepest interferometric galaxy [[Hi]{}]{} survey available to date. 
HALOGAS has an optimal angular resolution of $\approx30^{\prime\prime}$ (1.5 kpc at a typical distance of 10 Mpc), unsuitable for detailed high-resolution studies of the gas-star formation connection like THINGS [The [[Hi]{}]{} Nearby Galaxy Survey; @walter_etal_2008], but ideal for faint diffuse emission. The typical HALOGAS mass sensitivity for unresolved clouds with the same linewidth allows the detection of HVC analogues in the outskirts of the survey galaxies, $$M_\mathrm{HI}=2.7\times10^5\,\left(\frac{D}{10\,\mathrm{Mpc}}\right)^2\,M_\odot.$$ The primary HALOGAS observations were completed as of early 2013. A number of ancillary data products have been collected to supplement our deep [[Hi]{}]{} observations. Deep multi-band optical imagery has been obtained at the [*Isaac Newton Telescope*]{} (INT) and Kitt Peak National Observatory (KPNO); see e.g. @deblok_etal_2014. Sensitive [*GALEX*]{} UV data have also been collected to supplement deep survey data already available for some HALOGAS targets. Together with the [[Hi]{}]{} line observations, broadband full-polarization continuum data are also available for many of the survey galaxies, allowing the detailed investigation of magnetic fields and cosmic rays in the same galaxies. The interferometric HALOGAS data are being supplemented with single dish data from Effelsberg and the Green Bank Telescope (GBT) — useful not only to provide a missing short-spacing correction, but to enhance the search for clouds and streams in the outer parts of HALOGAS galaxies. Overview of HALOGAS results {#section:overview} =========================== Sophisticated data analysis techniques are required in order to achieve reliable structural parameters from the sensitive [[Hi]{}]{} data cubes provided by HALOGAS. Isolating thick [[Hi]{}]{} disks in particular requires detailed modeling of the 3D gas distribution and kinematics. This modeling work is mostly performed using the TiRiFiC [Tilted Ring Fitting Code; @jozsa_etal_2007] software. Two examples that illustrate the need for detailed tilted ring modeling are shown in Figure \[fig:fig1\]. In the case of UGC 7774, the strong warping already apparent in the plane of the sky [see also @garciaruiz_etal_2002] also causes the vertical thickening. In the other galaxy, NGC 1003, the disk structure recovered from a detailed 3D analysis shows a substantial warp ($\approx15-20{^\circ}$) along the line of sight, causing the disk to appear thicker in projection than the actual vertical distribution of the [[Hi]{}]{} layer. Another interesting example, NGC 5023 (not shown), illustrates an extreme case: recovery of spiral structure in an edge-on disk [see @kamphuis_etal_2013]. The modeling works best when such spiral structures are included, and they are required to fully understand the kinematics, emphasizing that this level of detail is essential to properly describe the 3D structure of the [[Hi]{}]{} in galaxies. HALOGAS galaxies are found both to host thick [[Hi]{}]{} disks [@gentile_etal_2013; @deblok_etal_2014] and to have little extraplanar [[Hi]{}]{} [@zschaechner_etal_2011; @zschaechner_etal_2012]. This dichotomy provides powerful leverage on the origin of [[Hi]{}]{} thick disks in galaxies.
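The unresolved-cloud mass sensitivity given above, $M_\mathrm{HI}=2.7\times10^5\,(D/10\,\mathrm{Mpc})^2\,M_\odot$, can be cross-checked against the standard optically thin [[Hi]{}]{} mass relation $M_\mathrm{HI}=2.36\times10^5\,D^2\,S_\mathrm{int}\,M_\odot$ (with $D$ in Mpc and $S_\mathrm{int}$ in Jy km s$^{-1}$). The short sketch below assumes an illustrative channel noise (not a HALOGAS specification) and a $5\sigma$ detection spread over the full 12 km s$^{-1}$ linewidth.

```python
# Cross-check of the unresolved-cloud mass sensitivity quoted above,
# M_HI ~ 2.7e5 (D / 10 Mpc)^2 Msun, using the standard optically thin relation
# M_HI = 2.36e5 * D^2 * S_int [Msun], with D in Mpc and S_int in Jy km/s.
# The channel noise below is an assumed, illustrative value.

def mass_limit(D_Mpc=10.0, sigma_chan_Jy=0.2e-3, dv_kms=12.0, nsigma=5.0):
    s_int = nsigma * sigma_chan_Jy * dv_kms   # Jy km/s for a 5-sigma, one-linewidth cloud
    return 2.36e5 * D_Mpc**2 * s_int          # Msun

if __name__ == "__main__":
    m = mass_limit()
    print(f"M_HI(5-sigma, D = 10 Mpc) ~ {m:.1e} Msun")
    # ~3e5 Msun for the assumed noise, comparable to the 2.7e5 Msun quoted above.
```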
![Two HALOGAS sample galaxies in optical and [[Hi]{}]{}. [*Left*]{}: DSS2 $R$-band image of UGC 7774 displayed on a logarithmic stretch and overlaid with HALOGAS [[Hi]{}]{} total intensity contours, starting at $N_\mathrm{HI}=9\times10^{18}\,\mathrm{cm^{-2}}$ and increasing by multiples of 2. The angular resolution of the [[Hi]{}]{} data is $36{^{\prime\prime}}\times33{^{\prime\prime}}$. [*Right*]{}: DSS2 $R$-band image of NGC 1003 displayed on a logarithmic stretch and overlaid with HALOGAS [[Hi]{}]{} total intensity contours. The [[Hi]{}]{} from the entire galaxy is shown in white contours, starting at $N_\mathrm{HI}=5\times10^{18}\,\mathrm{cm^{-2}}$ and increasing by multiples of 4. The black contours show the HVC analogues discussed in the text (§\[subsection:accretion\]), starting at about the same column density and increasing by multiples of 2. The resolution of the [[Hi]{}]{} data is $39{^{\prime\prime}}\times34{^{\prime\prime}}$.[]{data-label="fig:fig1"}](heald_fig1a.pdf "fig:"){width="0.49\hsize"} ![](heald_fig1b.pdf "fig:"){width="0.49\hsize"} Star formation and the origin of thick [[Hi]{}]{} disks ------------------------------------------------------- The entire HALOGAS sample has been inspected in order to characterize the presence and properties of extraplanar [[Hi]{}]{}. We use both the 3D modeling mentioned in §\[section:overview\] and an approximate disk-halo separation of the type presented by @fraternali_etal_2002. On this basis we have searched for evidence of a common origin of the extraplanar [[Hi]{}]{} by comparing with the general properties of the sample galaxies. From the HALOGAS sample we find a tentative connection between the star formation rate density and the presence of thick [[Hi]{}]{} disks, such that thick [[Hi]{}]{} disks appear to be present in galaxies above a threshold in SF energy injection. We note that a similar relationship has been identified for radio continuum halos [@dahlem_etal_2006]. In the case of the [[Hi]{}]{}, there appears to be a strong dependence on the star formation rate density, but a weaker dependence on the stellar mass density that traces the strength of the gravitational potential. Cold gas accretion in galaxies {#subsection:accretion} ------------------------------ Another key result of the HALOGAS Survey is the identification of [[Hi]{}]{} clouds and streams in the vicinity of the sample galaxies that could originate through the cold gas accretion process, and the determination of their impact on the star formation history of the sample. As previously noted, the occurrence of gas clouds in the outskirts of galaxies is low. The HALOGAS Survey, despite its high sensitivity, does not substantially change this picture.
The HALOGAS catalog of isolated gas features is still in preparation (as we work toward a statistically sound census across the full sample), but we can already state qualitatively that there is a low incidence of clouds and streams. The contribution of visible [[Hi]{}]{} accretion to the star formation fueling process in nearby galaxies appears to be minimal. As a representative example, we highlight NGC 1003 as shown in Figure \[fig:fig1\]. A small number of morphologically and kinematically distinct gas features have been identified from an inspection of the 3D [[Hi]{}]{} dataset; these are shown with black contours in Fig. \[fig:fig1\]. These features do not appear to have stellar counterparts, even in our deep optical images. The [[Hi]{}]{} masses and distances from NGC 1003 of these clouds are similar to the properties of the HVCs around the MW. The mass of these HVC analogues adds up to only $M_\mathrm{HI}=4\times10^6\,M_\odot$, which over a dynamical time ($\tau_\mathrm{dyn}\approx500\,\mathrm{Myr}$) contributes only 2% of the star formation rate of NGC 1003 [$\mathrm{SFR}=0.40\,M_\odot\,\mathrm{yr^{-1}}$; @heald_etal_2012]. Alternatively, if the gas contained in these clouds were used to completely fuel the current SFR, they would need to be replenished after only $\tau_\mathrm{repl}\approx10\,\mathrm{Myr}$. These rates neglect a possible contribution from an ionized gas component to which we do not have observational constraints. We note however that the contribution of ionized gas to the total mass of MW HVCs can be substantial [e.g., @lehner_howk_2011; @fox_etal_2014]; QSO absorption line studies of low column density [[Hi]{}]{} probed by HALOGAS where possible would be very useful to constrain this aspect of the mass budget. Future prospects ================ The scientific themes addressed by the HALOGAS project will help to focus the groundbreaking science questions to be addressed with the Square Kilometre Array (SKA) and its pathfinder and precursor projects. For descriptions of surveys targeting deeper observations of larger nearby galaxy samples, as well as investigations of low column density IGM material, see the discussions by @deblok_etal_2015 and @popping_etal_2015, respectively. Acknowledgements {#acknowledgements .unnumbered} ================ The Westerbork Synthesis Radio Telescope is operated by ASTRON (Netherlands Institute for Radio Astronomy) with support from the Netherlands Foundation for Scientific Research (NWO). [99]{} de Blok, W. J. G., J[ó]{}zsa, G. I. G., Patterson, M., et al. 2014, [A&A]{}, 566, A80 de Blok, W. J. G., et al. 2015, in “Advancing Astrophysics with the Square Kilometre Array” Collins, J. A., Shull, J. M., & Giroux, M. L. 2007, [ApJ]{}, 657, 271 Dahlem, M., Lisenfeld, U., & Rossa, J. 2006, [A&A]{}, 457, 121 Fox, A. J., Wakker, B. P., Barger, K. A., et al. 2014, [ApJ]{}, 787, 147 Fraternali, F., van Moorsel, G., Sancisi, R., & Oosterloo, T. 2002, [AJ]{}, 123, 3124 Garc[í]{}a-Ruiz, I., Sancisi, R., & Kuijken, K. 2002, [A&A]{}, 394, 769 Gentile, G., J[ó]{}zsa, G. I. G., Serra, P., et al. 2013, [A&A]{}, 554, A125 Giovanelli, R., Haynes, M. P., Kent, B. R., et al. 2007, [AJ]{}, 133, 2569 Heald, G., J[ó]{}zsa, G., Serra, P., et al. 2011, [A&A]{}, 526, A118 Heald, G., J[ó]{}zsa, G., Serra, P., et al. 2012, [A&A]{}, 544, 1 Heald, G., et al. 2015, in prep. J[ó]{}zsa, G. I. G., Kenn, F., Klein, U., & Oosterloo, T. A. 2007, [A&A]{}, 468, 731 Jütte, E., et al. 2015, in prep. Kamphuis, P., Rand, R. J., J[ó]{}zsa, G. I. G., et al. 
2013, [MNRAS]{}, 434, 2069 Kere[š]{}, D., Katz, N., Fardal, M., Dav[é]{}, R., & Weinberg, D. H. 2009, [MNRAS]{}, 395, 160 Lehner, N., & Howk, J. C. 2011, Science, 334, 955 Marinacci, F., Binney, J., Fraternali, F., et al. 2010, [MNRAS]{}, 404, 1464 Oosterloo, T., Fraternali, F., & Sancisi, R. 2007, [AJ]{}, 134, 1019 Popping, A., et al. 2015, in “Advancing Astrophysics with the Square Kilometre Array” Putman, M. E., Peek, J. E. G., & Joung, M. R. 2012, [ARA&A]{}, 50, 491 Sancisi, R., Fraternali, F., Oosterloo, T., & van der Hulst, T. 2008, [A&A Rev.]{}, 15, 189 Thilker, D. A., Braun, R., Walterbos, R. A. M., et al. 2004, [ApJ]{}, 601, L39 Wakker, B. P., Howk, J. C., Savage, B. D., et al. 1999, [Nature]{}, 402, 388 Wakker, B. P., & van Woerden, H. 1997, [ARA&A]{}, 35, 217 Walter, F., Brinks, E., de Blok, W. J. G., et al. 2008, [AJ]{}, 136, 2563 Zschaechner, L. K., Rand, R. J., Heald, G. H., Gentile, G., & Kamphuis, P. 2011, [ApJ]{}, 740, 35 Zschaechner, L. K., Rand, R. J., Heald, G. H., Gentile, G., & J[ó]{}zsa, G. 2012, [ApJ]{}, 760, 37
--- abstract: 'We show that in supersymmetric left-right models (SUSYLR), the upper bound on the lightest neutral Higgs mass can be appreciably higher than that in the minimal supersymmetric standard model (MSSM). The exact magnitude of the bound depends on the scale of parity restoration and can be 10-20 GeV above the MSSM bound if the mass of the right-handed $W_R$ is in the TeV range. An important implication of our result is that since SUSYLR models provide a simple realization of the seesaw mechanism for neutrino masses, measurement of the Higgs boson mass could provide an independent probe of a low seesaw scale.' author: - Yue Zhang - Haipeng An - Xiangdong Ji - 'Rabindra N. Mohapatra' date: 'April, 2008' title: 'Light Higgs Mass Bound in SUSY Left-Right Models' --- [**1. Introduction**]{}   One of the main missing links of the otherwise immensely successful Standard Model (SM) is the Higgs boson, which plays the crucial role in giving masses to all elementary particles in nature. It is therefore rightly the focus of a great deal of theoretical [@dawson] and experimental enquiries. Even though the Higgs boson mass in the SM is arbitrary, some idea of how heavy the Higgs boson can be may be gained in the context of different plausible extensions of the SM as well as from other considerations [@quigg; @sher]. Typical upper limits from, say, unitarity considerations [@quigg] are in the TeV range. This bound is however considerably strengthened in one of the most widely discussed possibilities for TeV scale physics, supersymmetry. Specifically, in the minimal supersymmetric SM (MSSM), the upper bound on the Higgs boson mass is $M^{max}_h\leq 135$ GeV [@haber] when one and two loop radiative corrections are included. Present collider searches provide a lower bound on the SM Higgs mass [@lep] of 114 GeV, leaving a narrow region which needs to be probed to test MSSM. If the Higgs mass is found to be above this upper limit, does it mean that supersymmetry is not relevant for physics at the TeV scale? The answer is of course “No” since there exist simple and well motivated extensions of MSSM, e.g. the next-to-MSSM, which extends the MSSM only by the addition of a singlet field [@wyler], where there is a relaxation of this bound to about 142 GeV or so [@hnmssm]. There are also other examples in the literature [@babu] where simple modifications of the post-MSSM physics can provide additional room for the Higgs mass. In this paper we discuss an alternative scenario motivated by neutrino mass as well as by an understanding of the origin of parity violation [@goran], in which the upper limit on the light Higgs mass is relaxed compared to MSSM. The model is the supersymmetric left-right model (SUSYLR) [@susylr; @kuchi] with TeV scale parity restoration (or TeV right-handed gauge boson mass $W_R$). The change in the Higgs mass upper limit comes from the contribution of the D-terms and satisfies the decoupling theorem, i.e., as the $W_R$ mass goes to infinity, the Higgs mass upper bound coincides with that for MSSM. This effect is to be expected on general grounds [@batra] in gauge extensions of MSSM.    The gauge group of this model is $SU(2)_L\times SU(2)_R\times U(1)_{B-L}\times SU(3)_c$. The chiral left-handed and right-handed quark superfields are denoted by $Q\equiv (u,d)$ and $Q^c\equiv (u^c, d^c)$ respectively and similarly the lepton superfields are given by $L\equiv (\nu, e)$ and $L^c\equiv (\nu^c, e^c)$.
The $Q$ and $L$ transform as left-handed doublets with the obvious values for the $B-L$ and the $Q^c$ and $L^c$ transform as the right-handed doublets with opposite $B-L$ values. The symmetry breaking is achieved by the following set of Higgs superfields: $\Phi_a(2,2,0,1)$ ($a=1,2$); $\Delta (3, 1, +2, 1)$; $\bar{\Delta}(3,1,-2,1)$; $\Delta^c(1,3,-2,1)$ and $\bar{\Delta^c} (1,3,+2,1)$. We include a gauge singlet superfield $S$ to facilitate the right handed symmetry breaking. The symmetry breaking can also be carried out by $B-L=1$ doublet fields, for which our results also apply. A virtue of using triplet Higgs fields is that they lead to the see-saw mechanism for small neutrino masses using only renormalizable couplings. In addition, as was noted many years ago [@kuchi], low scale $W_R$ requires that R-parity must break spontaneously. This leads to many interesting phenomenological implications that we do not address here. The superpotential for the model is given by: $$\begin{aligned} W &=& h_Q^a Q^T \tau_2 \Phi_a \tau_2 Q^c + h_L^a L^T \tau_2 \Phi_a \tau_2 L^c \nonumber \\ &+& i f \left( L^T \tau_2 \Delta L + L^{cT} \tau_2 \Delta^c L^c \right) + \mu_{ab} {\rm Tr} \left( \Phi_a^T \tau_2 \Phi_b \tau_2 \right) \nonumber \\ &+& S \left[ {\rm Tr} \left( \Delta \bar \Delta + \Delta^c \bar \Delta^c \right) - v_R^2 \right]\end{aligned}$$ In order to analyze the Higgs mass spectrum, we write down the Higgs potential for the model including the soft SUSY-breaking terms: $$\begin{aligned} V~=~V_F~+~V_S~+~V_D\end{aligned}$$ where $V_F$ and $V_D$ are the standard F-term and D-term potential and $V_S$ is the soft-SUSY-breaking terms which can be found in the literature [@kuchi; @huitu1]. Minimization of the Higgs potential leads to the following vacuum configuration for the $\Delta^c$ and $\nu^c$ Higgs fields[@kuchi]: $$\begin{aligned} \langle \widetilde L_i^c \rangle = \left( \begin{array}{c} \langle \widetilde{\nu^c}\rangle \delta_{i1} \\ 0 \end{array} \right), \ \langle\Delta^c \rangle = \left( \begin{array}{cc} 0 & 0 \\ \frac{v_R}{\sqrt{2}} & 0 \end{array} \right), \ \langle\bar \Delta^c\rangle = \left( \begin{array}{cc} 0 & \frac{\bar v_R}{\sqrt{2}} \\ 0 & 0 \end{array} \right) \nonumber\end{aligned}$$ Note that in the SUSY limit $v_R=\bar v_R$ and $\langle \widetilde{\nu^c}\rangle=0$. In the presence of supersymmetry breaking terms however, $\langle \widetilde{\nu^c}\rangle$ is nonzero. On the other hand, if this model is extended to include a B-L=0 right handed triplet with nonzero vev, there appears a global minimum of the potential which has $\langle \widetilde{\nu^c}\rangle=0$ [@kuchi1] even in the presence of susy breaking terms. Since the vev of $\nu^c$ is not relevant to our discussion, we will work with B-L=0 triplet model and set $\langle \widetilde{\nu^c}\rangle=0$ henceforth. The SM symmetry remains unbroken at this stage and is broken by the vevs of the $\Phi$ fields. We can write these fields in terms of their MSSM Higgs content: $$\begin{aligned} \Phi_i &=& \left( \begin{array}{cc} \phi_{id}^0 & \phi_{iu}^+ \\ \phi_{id}^- & \phi_{iu}^0 \end{array}\right) \equiv \left( H_{di}, H_{ui} \right),\end{aligned}$$ with vevs $ \langle H_{ui}^0 \rangle = \kappa_i, \ \langle H_{di}^0 \rangle = \kappa'_i.$ Before proceeding to discuss upper bound on the light neutral Higgs mass in this model, we wish to make a few comments on the implications of the TeV scale $W_R$ models for neutrino masses. 
First, in the non-SUSY left-right model where neutrino mass has both type I and type II seesaw contributions, having a TeV scale $W_R$ is unnatural since the type II seesaw contribution then becomes extremely large. This is due to the presence of non-zero couplings of type ${\rm Tr} \phi\Delta_L\phi^\dagger\Delta^\dagger_R$, which are allowed by the symmetries of the theory. On the other hand, in the SUSYLR model, this coupling is absent due to supersymmetry and therefore there is no type II seesaw contribution to neutrino mass. As far as the type I contribution is concerned, if we choose $h_\nu\sim h_e$ where $h_\nu$ is the Dirac neutrino Yukawa coupling, then we can have a few TeV $W_R$ and neutrino masses of the order of an eV. Thus, as far as neutrino masses go, low scale $W_R$ is a realistic model. [**3. Light Higgs mass bound: single bi-doublet case**]{}   We proceed to consider the bound on the light neutral Higgs mass in the SUSYLR model. We work in the limit where $v_R$ and $\bar v_R$ are much bigger than the SM scale. In this limit, we search for additional contributions to the MSSM Higgs potential, which will be at the heart of the change in the upper limit of the Higgs boson mass. We first illustrate this in a one bi-doublet model. This simple model leads to vanishing CKM angles at the tree level, which can be fixed in one of two ways: (i) by including radiative correction effects from squark mixings [@dutta] or (ii) by including a second bi-doublet which decouples from the low energy sector but has a tadpole-induced vev that generates the correct CKM angles. We discuss case (ii) toward the end of the paper. The interesting point is that neither of these affects the Higgs mass upper bound that we derive. For the model under consideration, we first show that in the SUSY limit the low energy Higgs potential reduces to that of the MSSM. When soft SUSY breaking terms are taken into account, there appear new contributions to the MSSM Higgs potential, serving to raise the upper limit on the light Higgs mass, which can be significant for TeV scale $W_R$. We start with a review of the well known symmetry breaking of the model by the triplet Higgs fields $\Delta^c$ and $\bar \Delta^c$ in the SUSY case. The gauge bosons get mass from the kinetic terms of the triplets, and after symmetry breaking, the massless gauge boson and gaugino corresponding to $U(1)_Y$ is the combination $B = \frac{g_{BL}}{\sqrt{g_R^2 + g_{BL}^2}} W_{3R} + \frac{g_R}{\sqrt{g_R^2 + g_{BL}^2}} V_{BL}$, with the hypercharge gauge coupling given by $ \displaystyle\frac{1}{g^2_Y}~=~\frac{1}{g^2_R}+\frac{1}{g^2_{BL}}$. The heavy $Z'$-boson has mass squared $M_{Z'}^2 = 2 (g_R^2 + g_{BL}^2) v_R^2$. There is a factor 2 compared with the charged $W_R$ boson mass $M_{W_R^\pm}^2 = g_R^2 v_R^2$ because the triplet vev breaks custodial symmetry for the right-handed sector.
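As a rough numerical orientation for the mass relations just quoted, the sketch below evaluates $M_{W_R}$ and $M_{Z'}$ for an illustrative $v_R$, together with a naive type I seesaw estimate $m_\nu \sim m_D^2/M_R$ with a Dirac mass of order $m_e$ (as implied by $h_\nu\sim h_e$). The numerical inputs, a left-right symmetric $g_R=g_L$, $v_R=2$ TeV and $M_R\sim v_R$, are assumptions made here for illustration and are not fixed by the text.

```python
import math

# Illustrative numbers for the symmetry-breaking relations above.
# Assumptions (not fixed by the text): g_R = g_L, v_R = 2 TeV, M_R ~ v_R.
gL2, gY2 = 0.42, 0.13                 # SM values quoted later in the text
gR2 = gL2                             # assumption: manifest left-right symmetry
gBL2 = 1.0 / (1.0 / gY2 - 1.0 / gR2)  # from 1/g_Y^2 = 1/g_R^2 + 1/g_BL^2
vR = 2000.0                           # GeV (assumed)

M_WR = math.sqrt(gR2) * vR                  # M_{W_R}^2 = g_R^2 v_R^2
M_Zp = math.sqrt(2.0 * (gR2 + gBL2)) * vR   # M_{Z'}^2  = 2 (g_R^2 + g_BL^2) v_R^2
print(f"g_BL^2 ~ {gBL2:.2f},  M_WR ~ {M_WR:.0f} GeV,  M_Z' ~ {M_Zp:.0f} GeV")

# Naive type I seesaw with h_nu ~ h_e, i.e. a Dirac mass of order m_e:
m_D = 0.511e6            # eV
M_R = vR * 1e9           # eV (assumed M_R ~ v_R)
print(f"m_nu ~ m_D^2 / M_R ~ {m_D**2 / M_R:.2f} eV")
# Gives ~0.1 eV, in the (sub-)eV range indicated above for a few-TeV W_R.
```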
(1), any change in the effective MSSM doublet Higgs potential below the $v_R$ scale must originate from the D-terms, $$\begin{aligned} V_D &=& \frac{g_R^2}{8} \left| {\rm Tr} [ 2 \Delta^{c\dag} \tau_m \Delta^c + 2 \bar \Delta^{c\dag} \tau_m \bar \Delta^c + \Phi \tau_m^T \Phi^\dag ] \right|^2 \nonumber \\ &+& \frac{g_{BL}^2}{8} \left( {\rm Tr} [ 2 \Delta^{c\dag} \Delta^c - 2 \bar \Delta^{c\dag} \bar \Delta^c ] \right)^2 \ .\end{aligned}$$ The contribution to the neutral Higgs fields couplings is, $$\begin{aligned} \label{V} V_D^{\rm neut.} &=& \frac{g_R^2}{8} \left| {\rm Tr} [ \Phi \tau_3^T \Phi^\dag ] \right|^2 + 4 (g_R^2 + g_{BL}^2) v_R^2 \left| \frac{\Delta^{c0} - \bar \Delta^{c0}}{\sqrt{2}}\right|^2 \nonumber \\ &+& g_R^2 v_R {\rm Re}[\left(\Delta^{c0} - \bar \Delta^{c0}\right)] {\rm Tr} [ \Phi \tau_3^T \Phi^\dag ] \ .\end{aligned}$$ The coupling is linear in the field Re$[\left( \Delta^{c0} - \bar\Delta^{c0}\right)]$, which we will call $\sigma_-$. As $\sigma_-$ field becomes heavy, its coupling to $[\Phi \tau_3^T \Phi^\dag ]$ will generate new quartic term in the MSSM doublet field potential, which in turn will lead to new contributions to Higgs mass upper bound. Collecting this new effect, we get for the Higgs quartic term: $$\begin{aligned} \label{quartic} \delta V(\Phi)~= ~\frac{1}{8} \left( g_R^2 - \frac{g^4_Rv^2_R}{M^2_{\sigma_-}}\right) \left| {\rm Tr} [ \Phi \tau_3^T \Phi^\dag ] \right|^2\end{aligned}$$ To evaluate this new contribution, we need to know $M_{\sigma_-}$. This has two potential contributions: (i) from the D-term and (ii) from the F-term contribution to the Higgs potential. It turns out that in the SUSY limit, the only contribution to $M_{\sigma_-}$ is from the D-terms and we have $M^2_{\sigma_-}=2(g^2_R+g^2_{BL})v^2_R$. This follows not only from actual calculations but also from the fact that $\sigma_-$ is a member of the Goldstone supermultiplet, all members of which must have the same mass as $Z'$ in the SUSY limit. This result would hold even if the superpotential had a term of the form $\mu \Delta^c\bar{\Delta^c}$. Using this in Eq. (\[quartic\]), it is easy to see that the net contribution to the quartic term in the Higgs superpotential becomes $\displaystyle\frac{g^2_Y}{8}\left(|H_u|^2-|H_d|^2\right)^2$. This is nothing but the $D_Y$ contribution to quartic Higgs doublet term in MSSM. Since in the decoupling limit, we get MSSM, as expected from the decoupling theorem. Let us now switch on the supersymmetry breaking terms. In their presence, the $\sigma_-$ field has aditional contributions which lead to a shift in the Higgs masses. To see this we introduce soft mass term $m_S^2 S^\dag S$ as well as SUSY breaking mass terms for the $\Delta^c$ and $\bar{\Delta^c}$. Taking the same SUSY breaking terms for all the fields gives different value for the $\sigma_-$ field mass and we get for the contribution to the quartic Higgs term $\displaystyle\frac{g^2_{Y,eff}}{8}\left(|H_u|^2-|H_d|^2\right)^2$ where $$\begin{aligned} \label{xxx} g^{2}_{\rm Y, eff} = g_R^2 - \frac{g_R^4}{g_R^2 + g_{BL}^2 + \frac{m_0^2}{2 v_R^2}} = \frac{g_R^2 g_{BL}^2 + g_R^2 \frac{m_0^2}{2v_R^2}}{g_R^2 + g_{BL}^2 + \frac{m_0^2}{2v_R^2}}\end{aligned}$$ where $m_0$ in the above equation is a generic soft mass term for sparticles that breaks supersymmetry. This leads to an enhancement of the Higgs mass upper bound since $g^{2}_{\rm Y, eff} > g^{2}_{\rm Y, SM}$ . 
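A minimal numerical sketch of the effective coupling derived above is given below; it reproduces the benchmark estimate quoted in the next paragraph ($m_0 = 1$ TeV, $v_R = 2$ TeV), under the additional assumption $g_R = g_L$ used here to extract $g_{BL}$ from the SM couplings.

```python
# Evaluate the effective hypercharge coupling g^2_{Y,eff} derived above and the
# resulting ratio r = (g_L^2 + g^2_{Y,eff}) / (g_L^2 + g_Y^2) that rescales the
# tree-level bound.  Assumption: g_R = g_L (manifest left-right symmetry).
gL2, gY2 = 0.42, 0.13
gR2 = gL2
gBL2 = 1.0 / (1.0 / gY2 - 1.0 / gR2)

def gY2_eff(m0_GeV, vR_GeV):
    x = m0_GeV**2 / (2.0 * vR_GeV**2)         # soft-breaking term m_0^2 / (2 v_R^2)
    return (gR2 * gBL2 + gR2 * x) / (gR2 + gBL2 + x)

g2eff = gY2_eff(m0_GeV=1000.0, vR_GeV=2000.0)
r = (gL2 + g2eff) / (gL2 + gY2)
print(f"g_Y,eff^2 ~ {g2eff:.3f}  (vs g_Y^2 = {gY2}),  r ~ {r:.2f}")
# r comes out close to 1.1 for m_0 = 1 TeV and v_R = 2 TeV, matching the
# estimate given in the following paragraph.
```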
To get an idea about how large the change in the upper bound is likely to be, we take $m_0 = 1$ TeV, $v_R = 2$ TeV, then from $g_L^2 \approx 0.42$ and $g_Y^{2} \approx 0.13$, we get the ratio $ r = \frac{g_L^2 + g^{2}_{\rm Y, eff}}{g_L^2 + g^{2}_Y} \approx 1.1$, which will give 10% increase of the tree level upper bound on the light Higgs mass i.e. it increases from $M_Z = 90$ GeV to 100 GeV. It is also worth pointing out that as the scale of parity violation goes to infinity, this new contribution goes to zero and one recovers the MSSM result. This is an important consistency check on our result [@pandita].   In the following, we discuss the radiative corrections to the Higgs boson mass for the model above. It is well known that $$\begin{aligned} \label{standard} \Delta V_1 = \frac{1}{64 \pi^2} {\rm Str}\ \mathcal{M}^4 \left( \log \frac{\mathcal{M}^2}{Q^2} - \frac{3}{2} \right)\ ,\end{aligned}$$ where the supertrace ${\rm Str}$ means $ {\rm Str}\ f(\mathcal{M}^2) = \sum_i (-1)^{2 J_i} (2 J_i + 1) f(m_i^2)$, where the mass $m_i$ is calculated in background fields, and the sum counts all the fermionic and bosonic degrees of freedom. $J_i$ is the spin for particle $i$ and $Q$ is the renormalization scale on the order of electroweak symmetry breaking. In MSSM, top and stop contributes dominantly to $\Delta V_1$ because of large Yukawa coupling $y_t$, while the bottom and sbottom contribution is only important for $\tan \beta \gg 20$. In SUSYLR model with one bidoublet, we have $\tan\beta = m_t/m_b \approx 40$, so the sbottom quark can couples to $H_u$ with a large coupling $y_t$. This is from the F-term of bidoublet field $\Phi_1$. $$\begin{aligned} (F_{\Phi_1})_{ij} &=& h_1 (Q_L^T \tau_2)_i (\tau_2 Q_R^c)_j + \mu_{11} (\tau_2 \Phi_1^T \tau_2)_{ji} \\ |F_{\Phi_1}|^2 &=& \mu_{11} m_t \cot \beta \cdot \widetilde t_L^{\dag} \widetilde t_R + \mu_{11} m_t \cdot \widetilde b_L^{\dag} \widetilde b_R + \cdots \nonumber\end{aligned}$$ Note the second term in the second line differs from MSSM by a factor $m_t / (m_b \tan\beta)$, since $H_u$ and $H_d$ are unified into the same bidoublet. This means the sbottom mass receives a large LR mixing proportional to the top Yukawa couplng, even for low $\tan\beta$ (in the presence of a second bidoublet in the realistic model below). Therefore now there are three fields that have to be taken into account: top, stop and sbottom. If we neglect the small couplings except for $y_t$, and also neglect the $A$-terms their masses can be approximated by $$\begin{aligned} m_t^2 &=& y_t^2 |H_u|^2 \nonumber \\ m_{\widetilde t_1}^2 &\simeq& m_{\widetilde t_2}^2 \simeq m_{\widetilde Q}^2 + y_t^2 |H_u|^2 \nonumber \\ m_{\widetilde b_1}^2 &\simeq& m_{\widetilde Q}^2 + y_t \mu_{11} |H_u| + \cdots \nonumber \\ m_{\widetilde b_2}^2 &\simeq& m_{\widetilde Q}^2 - y_t \mu_{11} |H_u| + \cdots\end{aligned}$$ The $\cdots$ represents dependence on $|H_d|$, but since the lightest Higgs boson is mainly made up of $H_u$ for large $\tan\beta$ with only a small $H_d$ component, this dependence can be neglected. We can choose a proper scale $Q$ so that the first derivative vanishes, which will be important in eliminating the explicit $Q$ dependence of Higgs mass, i.e. it can only depend on $Q$ through depending on other parameters. 
Second derivative gives radiative corrections to the lightest Higgs mass $$\begin{aligned} \label{111} \delta M_{h}^2 &\simeq& \frac{1}{2}\frac{\partial^2 \Delta V_1}{\partial |H_u|^2} = \frac{3g_L^2}{8 \pi^2} \frac{m_t^4}{M_{W_L}^2} \log \frac{m_{\widetilde t}^2}{m_t^2} \nonumber \\ &-& \frac{3 g_L^2}{64 \pi^2} \frac{m_0^2 \mu_{11} m_t}{M_{W_L}^2} \log \frac{m_{\widetilde b_1}^2}{m_{\widetilde b_2}^2} + \frac{3g_L^2}{32\pi^2} \frac{\mu_{11}^2 m_t^2}{M_{W_L}^2}\end{aligned}$$ The first term is as the usual MSSM one. Now we have two new terms proportional to $\mu_{11}$. Their sum is an even function of $\mu_{11}$ since changing the sign also interchanges $\widetilde b_1 \leftrightarrow \widetilde b_2$. We find the sum of second and third term are negative definite for arbitrary $\mu_{11}$. Actually, for $m_0 \sim 1$ TeV, one can expand with $\frac{\mu_{11}m_t}{m_0^2}$. The net contribution is non-vanishing only up to the third order, which is $-\frac{g_L^2}{32 \pi^2} \frac{\mu_{11}^4 m_t^4}{M_{W_L}^2 m_0^4}$, depending on $\mu_{11}$ very mildly. For $\mu_{11}$ ranging from 100 GeV to 300 GeV around EW scale, this negative contribution is smaller than 1 GeV. Note that in this case, we did not have to discuss the details of EWSB since it is very similar to MSSM. Let us present the numerical results for the Higgs mass upper bound for this scenario. In Fig. 1, we plot the difference in the prediction of upper bound on the lightest Higgs boson mass between SUSYLR model and MSSM: $\Delta M_h = m_h^{\rm SUSYLR}-m_h^{\rm MSSM}$. For the right-handed scale near 2-3 TeV, the upward shift of the Higgs mass bound can be of a few GeV, increasing as $v_R$ decreases. (For symmetry breaking using Higgs doublets, $M_{\sigma^-}^2 = (g_R^2+g_{BL}^2)v_R^2$ and $g_{\rm Y, eff}$ in Eq. (\[xxx\]) gets increased to $\frac{g_R^2 g_{BL}^2 + g_R^2 m_0^2/v_R^2}{g_R^2 + g_{BL}^2 + m_0^2/v_R^2}$. Then $\Delta M_h$ can further increase by a factor of 2.) The bound also increases with the soft mass scale $m_0$ as expected. From the discussions below Eq. (\[111\]), non-zero $\mu_{11}$ always gives small and negative contribution. In order not to violate the lower bound on chargino mass at LEP2, we choose $\mu_{11} \geq 100$ GeV. ![Higgs mass bound from SUSYLR model shown as the difference from MSSM. Tree level results are shown as dashed curves and radiative corrected ones are solid curves. Upper, middle and lower two curves correspond to $m_0=1$ TeV (blue), $m_0=600$ GeV (red), $m_0=400$ GeV (green), respectively. For radiative correction, we choose $\mu_{11} = 100$ GeV. For all the curves, we choose $\tan \beta \approx 40$ as noted.](1a.eps){width="7cm"}   We can extend our discussion to the more realistic two bi-doublet case as needed to generate the correct CKM angles at the tree level. There are two possible ways to do that; both these we discuss below. [**Model A**]{}: In this case, we identify $\Phi_1$ as the bi-doublet of the previous section. We can diagonalize the corresponding Yukawa coupling matrix $h^1_Q$. We are then forced to have all elements of the second bi-doublet $\Phi_2$ Yukawa coupling, $h^2_Q\neq 0$. Once the second $\Phi_2$ has vev, by appropriate choice of this matrix, we can generate the desired quark masses and mixings. We will see that even though there are four real neutral Higgs fields in this case, the upper limit on the light Higgs field remains the same as in the single bi-doublet case. 
To see this, we first choose a basis in the $\Phi$ space such that the superpotential for $\Phi$’s has the form $$\begin{aligned} W_{extra}~=~\mu_{11} {\rm Tr}(\Phi_1\Phi_1)~+~\mu_{22} {\rm Tr}(\Phi_2\Phi_2)\end{aligned}$$ We assume that $\mu_{22} \gg \mu_{11}$. We have then no freedom to diagonalize the soft SUSY breaking terms. The sum $V_F+V_S$ can then in general be written as $$\begin{aligned} V_F~+~V_S(\Phi_i)&=&m^2_{ij}{\rm Tr}\phi^\dagger_i\phi_j~+~b_{11} {\rm Tr}(\phi_1\phi_1) \nonumber \\ &+&b_{22} {\rm Tr}(\phi_2\phi_2)+~{\rm h.c.}\end{aligned}$$ Note that if $m^2_{22} \gg m^2_{12}$, then the mixed term in the $\phi$’s will induce a vev for the $\phi_2$ field which is small compared to that for $\phi_1$, i.e., $\kappa_2, \kappa'_2 \ll \kappa_1,\kappa'_1$. To generate the correct mass and mixing pattern for the quarks, it is sufficient to have the $\phi_2$ vevs of order 100 MeV. For instance, if $m_{12}\leq 10$ GeV and $m_{22} = 1$ TeV, then we can estimate $\kappa'_2 = \kappa'_1 m_{12} / m_{22} = \kappa_1'/100 \sim 100$ MeV, which is enough to generate the strange quark mass as well as other CKM angles. Note also that one should include the one loop effects coming from squark masses and mixings [@dutta]. While we do not give a detailed fit here, it seems clear that this is a realistic model where the new Higgs mixing parameter $m_{12}$ is in the 1-10 GeV range. When it is close to one GeV, the effect on the Higgs mass upper bound is also about a GeV lower due to off diagonal contributions. We can also keep the $\Phi_2$ Yukawa couplings sufficiently small so that their radiative corrections do not affect the one loop result. This vacuum then is a perturbation around the vacuum of the single bi-doublet case and furthermore, due to the large $\mu_{22}$, the $H_{u,d}$ coming from the second bi-doublet will acquire a heavy mass and decouple without affecting the light Higgs mass upper bound, except perhaps for a small shift of one GeV or so. This case corresponds to large tan$\beta \approx 40$. [**Model B:**]{} In this case, we choose two bi-doublets with the vev pattern given by: $$\begin{aligned} \langle\Phi_1 \rangle &=& \left( \begin{array}{cc} \kappa & 0\\ 0 & 0 \end{array}\right), \ \ \ \langle\Phi_2 \rangle = \left( \begin{array}{cc} 0 & 0 \\0 & \kappa' \end{array}\right)\end{aligned}$$ Since the down quark masses in this case come from a second Yukawa coupling, unlike in Model A, we can have the value of tan$\beta ~\equiv\frac{\kappa'}{\kappa}$ much lower than 40 by an appropriate choice of the second Yukawa coupling matrix. There are generally four electroweak scale Higgs doublets. Using the standard formula in Eq. (\[standard\]), we have calculated the 1-loop radiative corrections to the $4 \times 4$ neutral Higgs mass matrix in the presence of SUSY breaking thresholds. As before, we neglected all other couplings but $y_t$ when calculating $\Delta V_1$. We also keep the effect of a non-zero vev for the right-handed sneutrino. However, due to the small neutrino Dirac Yukawa couplings, the mixing effect between the Higgs field and the left-handed sneutrino caused by the right-handed sneutrino vev is very small and does not affect our result. In order to estimate the upper bound, we have done a numerical study to obtain the Higgs mass for random choices of parameters. The results are the scatter points in Fig. 2 below for a choice of the generic soft mass scale $m_0=1$ TeV and the right-handed scale $v_R=1.5$ TeV.
Each point in the scatter plot represents the lightest Higgs mass for a specific choice of parameters. The upper limit therefore corresponds to the topmost set of points in Fig. 2. In contrast, the MSSM Higgs mass upper bound is plotted as the yellow (lower) curve, which is at most 130 GeV after 1-loop radiative corrections with the same choice of $m_0$. The red (upper) curve is for Model A. We find from Fig. 2 that in the general SUSYLR model the upper bound can be as high as 140 GeV or even more, especially in the regime $5<\tan\beta<10$. This is higher than the prediction of MSSM. Clearly, as the right-handed scale goes down, the upper bound increases. ![The scatter plot represents the Higgs mass values as a function of tan$\beta$ in the two bi-doublet SUSYLR model for random choices of parameters. The yellow (lower) curve is the prediction for the light Higgs mass upper bound in the MSSM, while the red (upper) curve is for Model A in SUSYLR. The blue points are for Model B in the SUSYLR case. When plotting the figure, we choose $m_0=1$ TeV and $v_R=1.5$ TeV. We have used a Monte Carlo simulation for the parameter space to generate the plot for Model B. ](4HDM.eps){width="7cm"} [**6. Comments and Conclusion:**]{} Before concluding, we wish to make a few comments on the model: \(i) Low scale non-SUSY left-right models have strong constraints coming from the tree level Higgs contribution to flavor changing processes. In the SUSY version however, there are additional contributions to the same from the squark and slepton sectors, which can be used to cancel this effect [@ji1]. While strictly this is not natural, from a phenomenological point of view, this makes the model consistent when both the $W_R$ and Higgs masses are in the few TeV range. \(ii) The second point is that unlike other models such as NMSSM where the light Higgs mass bound is changed by making additional assumptions about the Higgs couplings (e.g. not hitting the Landau pole at the GUT scale), in our model the increase in the bound is purely gauge coupling induced and is independent of the Higgs couplings. \(iii) It is also worth stressing the obvious point that observation of a Higgs with mass above the MSSM bound of 135 GeV is not necessarily evidence for the SUSYLR model, since there exist other models which also relax this bound. One needs other direct evidence, such as the mass of a $W_R$ or $Z'$ produced at the LHC, which when combined with an observed higher Higgs mass could provide evidence for a low left-right seesaw scale. To conclude, we have pointed out that the upper bound on the light Higgs mass is higher if MSSM is assumed to be an effective low energy theory of a TeV scale SUSYLR model. The increase can be as much as 10 GeV or more depending on the scale of parity breaking. If the Higgs boson mass in the collider searches is found to exceed the MSSM upper limit of 135 GeV, one interpretation of that could be in terms of a TeV scale seesaw in the context of a SUSYLR model. We thank K. Agashe, P. Batra, and Z. Chacko for comments. This work was partially supported by the U. S. Department of Energy via grant DE-FG02-93ER-40762. R. N. M. is supported by NSF grant No. PHY-0652363. Y. Z. acknowledges the hospitality and support from the TQHN group at University of Maryland and partial support from NSFC grants 10421503 and 10625521. [90]{} For a review, see S. Dawson, J. Gunion, H. Haber and G. Kane, [*Higgs Hunter’s guide*]{}, SCIPP-89/13, UCD-89-4, BNL-41644, Jun 1989. 404pp. For an upper limit from unitarity considerations, see B. W.
Lee, C. Quigg and H. B. Thacker, Phys. Rev.  D [**16**]{}, 1519 (1977). M. Sher, Phys. Rept.  [**179**]{}, 273 (1989). H. Haber and R. Hemfling, Phys. Rev. Lett. [**66**]{}, 1815 (1991); J. R. Ellis, G. Ridolfi and F. Zwirner, Phys. Lett.  B [**257**]{}, 83 (1991). R. Barate et al. Phys. Lett. [**565**]{}, 61 (2003). H. P. Nilles, M. Srednicki and D. Wyler, Phys. Lett.  B [**120**]{}, 346 (1983). U. Ellwanger and C. Hugonie, Mod. Phys. Lett.  A [**22**]{}, 1581 (2007). Some examples of such models with relaxed Higgs mass limit are: M. Drees, Int. J. Mod. Phys.  A [**4**]{}, 3635 (1989); K. S. Babu, I. Gogoladze and C. Kolda, arXiv:hep-ph/0410085; G. Bhattacharyya, S. K. Majee and A. Raychaudhuri, Nucl. Phys.  B [**793**]{}, 114 (2008); S. Hesselbach, D. J. Miller, G. Moortgat-Pick, R. Nevzorov and M. Trusov, Phys. Lett.  B [**662**]{}, 199 (2008); M. Dine, N. Seiberg and S. Thomas, Phys. Rev.  D [**76**]{}, 095004 (2007); V. Barger, P. Langacker, H. S. Lee and G. Shaughnessy, Phys. Rev.  D [**73**]{}, 115010 (2006). R. N. Mohapatra and J. C. Pati, Phys. Rev. [**D 11**]{}, 566, 2558 (1975); G. Senjanović and R. N. Mohapatra, Phys. Rev. [**D 12**]{}, 1502 (1975). M. Cvetic and J. C. Pati, Phys. Lett.  B [**135**]{}, 57 (1984); R. M. Francis, M. Frank and C. S. Kalman, Phys. Rev.  D [**43**]{}, 2369 (1991). R. Kuchimanchi and R. N. Mohapatra, Phys. Rev.  D [**48**]{}, 4352 (1993). K. Huitu, P. N. Pandita and K. Puolamaki, arXiv:hep-ph/9904388. P. Batra, A. Delgado, D. E. Kaplan and T. M. P. Tait, JHEP [**0402**]{}, 043 (2004). R. Kuchimanchi and R. N. Mohapatra, Phys. Rev. Lett.  [**75**]{}, 3989 (1995) K. S. Babu, B. Dutta and R. N. Mohapatra, Phys. Rev.  D [**60**]{}, 095004 (1999). The upper bound on light Higgs mass in SUSYLR models derived in K. Huitu, P. N. Pandita and K. Puolamaki, Phys. Lett.  B [**423**]{}, 97 (1998) does not satisfy this decoupling consistency. Y. Zhang, H. An and X. d. Ji, arXiv:0710.1454 \[hep-ph\].
--- abstract: 'The energy-loss formula for the production of gravitons by the binary is derived in the source theory formulation of gravity. Then, the quantum energy loss formula involving radiative corrections is derived. We postulate the idea that gravitational pulsars are present in our universe and that radiative corrections play a role in physics on the cosmological scale. In the last part of the article, we consider a so-called electromagnetic pulsar, which is formed by two particles with opposite electrical charges moving in a constant magnetic field and generating electromagnetic pulses.' author: - | **Miroslav Pardy\ INSTITUTE OF PLASMA PHYSICS ASCR\ PRAGUE ASTERIX LASER SYSTEM, PALS\ Za Slovankou 3, 182 21 Prague 8, Czech Republic\ and\ MASARYK UNIVERSITY\ DEPARTMENT OF PHYSICAL ELECTRONICS\ Kotlářská 2, 611 37 Brno, Czech Republic\ e-mail: [email protected]** title: | **RADIATION OF THE GRAVITATIONAL\ AND ELECTROMAGNETIC BINARY\ PULSARS** --- PULSARS IN GENERAL ================== Pulsars are specific cosmological objects which radiate electromagnetic energy in the form of short pulses. They were discovered by Hewish et al. in 1967, published by Hewish et al. (1968) and specified later as neutron stars. Now, it is supposed that they are fast rotating neutron stars of approximately one Solar mass with strong magnetic fields ($10^9 - 10^{14}$ Gauss). The neutron stars are formed during the evolution of stars and they are the product of the reaction $p + e^{-} \rightarrow n + \nu_{e}$, where the symbols in the last equation are as follows: proton, electron, neutron and the electron neutrino. Neutron stars were postulated by Landau (1932) in the year of the discovery of the neutron, 1932. Pulsars composed of pions, hyperons, or quarks probably also exist in the universe; however, there is no measurement technique which rigorously identifies these kinds of pulsars. Pulsars emit highly accurate periodic signals, mostly in radio waves, beamed in a cone of radiation centered around their magnetic axis. These signals define the period of rotation of the neutron star, which radiates like a lighthouse once per revolution. We know so-called slowly rotating pulsars, or normal pulsars, with period P $>$ 20 ms, and so-called millisecond pulsars with period P $<$ 20 ms. In 1974 a pulsar in a binary system was discovered by Hulse and Taylor and the discovery was published in 1975 (Hulse and Taylor, 1975). The period of rotation of normal pulsars increases with time, which led to the rejection of the suggestion that the periodic signal could be due to the orbital period of binary stars. The orbital period of an isolated binary system decreases as it loses energy, whereas the period of a rotating body increases as it loses energy. The present literature concerns only electromagnetic pulsars. Their number is 1300 and they have been catalogued with very precise measurements of their positions and rotation rates. The published pulse profiles are so-called integrated profiles obtained by adding some hundreds of thousands of individual pulses. The integration hides a large variation of size and shape from pulse to pulse of the individual pulsar.
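The surface field strengths of $10^9 - 10^{14}$ Gauss quoted above are conventionally inferred from the measured rotation period $P$ and its derivative $\dot{P}$ through the standard magnetic-dipole spin-down estimate (the rotational energy loss is discussed further below). A minimal sketch, assuming the canonical moment of inertia $I = 10^{45}\,\mathrm{g\,cm^2}$ and an orthogonal rotator, with purely illustrative $(P,\dot{P})$ values, is the following:

```python
import math

# Standard magnetic-dipole spin-down estimates for a rotating neutron star,
# assuming the canonical moment of inertia I = 1e45 g cm^2 (orthogonal rotator).
# The (P, Pdot) pairs below are illustrative values, not measurements taken
# from the text.
I_CGS = 1.0e45            # g cm^2

def spin_down(P, Pdot):
    Edot = 4.0 * math.pi**2 * I_CGS * Pdot / P**3   # erg/s, rotational energy loss
    B = 3.2e19 * math.sqrt(P * Pdot)                # Gauss, characteristic surface field
    return Edot, B

for P, Pdot in [(0.033, 4.2e-13),    # Crab-like young pulsar
                (1.0,   1.0e-15)]:   # generic slow pulsar
    Edot, B = spin_down(P, Pdot)
    print(f"P = {P:6.3f} s:  Edot ~ {Edot:.1e} erg/s,  B ~ {B:.1e} G")
```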
The radiation is emitted along the direction of the field lines, so that the observed duration of the integrated profile depends on the inclination of the dipole axis to the rotation axis, because it is supposed that the pulsar radiation is the radiation of a dipole in the magnetic field. It means that the radiation of these pulsars is the synchrotron radiation of charged particles. We know that an analogous ultrarelativistic charge moving in a constant magnetic field emits synchrotron radiation in a very narrow cone, and such a system can be considered as a free electron laser if the opening angle of the cone is very small. Of course, the angle of emission of pulsars is not smaller than radians. In other words, the observed pulsars are not free electron lasers. The idea that a pulsar can be a cosmic maser was also rejected. Pulsar radio emission is highly polarized, with linear and circular components. Individual pulses are often observed to be 100% polarized. The study of the polarization of pulsars is the starting point for the determination of their real structure. The only energy source of the pulsar is the rotational energy of the neutron star. The rate of the dissipation of the rotational energy can be determined. The moment of inertia is fairly accurately known from the theory of the internal structure and the rotational slowdown is very accurately measured for almost every pulsar. So most of the energy is radiated as magnetic dipole radiation at the rotation frequency, leading to a measure of the magnetic dipole moment and the surface field strength. The published articles on pulsars deal with the observation of the pulses and with the theoretical models. The observational results, giving an insight into the behavior of matter in the presence of extreme gravitational and electromagnetic fields, are summarized for instance by Manchester (Manchester, 1992). The emission mechanisms of photons are reviewed from a plasma viewpoint by Melrose (1992). The morphology of the radio pulsars is presented in the recent treatise by Seiradakis et al. (2004). A review of the properties of pulsars, involving the radio propagation in the magnetosphere and the emission mechanism, is given in the article by Graham-Smith (2003). At the same time there is, to our knowledge, no information on so-called gravitational pulsars, or on models where the pulses are produced by the retrograde motion of bodies moving around a central body. So the question arises whether it is possible to define a gravitational binary pulsar, where the gravitational energy is generated by a binary system, or by a system where two components are in retrograde motion. We suppose that in the case of a massive binary system the energy is generated in a cone starting from a component of the binary, and it can be seen only if the observer is located on the axis of this cone. Then, the observer detects gravitational pulses when the detector is sufficiently sensitive. There are many methods for the detection of gravitational waves. One method, based on the quantum states of the superfluid ring, was suggested by the author (Pardy, 1989). We know that gravitational waves were indirectly confirmed by the observation of the period of the pulsar PSR 1913 + 16. The energy loss of this pulsar was calculated in the framework of the classical theory of gravitation. The quantum energy loss was given for instance by Manoukian (1990).
His calculation was based on the so called Schwinger source theory where gravity is considered as a field theory of gravitons where graviton is a boson with spin 2, helicity $\pm$2 and zero mass. It is an analogue of photon in the electromagnetic theory. In the following text we start with the source derivation of the power spectral formula of the gravitational radiation of a binary. Then, we calculate the quantum energy loss of a binary and the gravitational power spectrum involving radiative corrections. In the last part of an article, we consider so called electromagnetic pulsar which is formed by two particles with the opposite electrical charges which move in the constant magnetic field and generate the electromagnetic pulses. THE QUANTUM GRAVITY ENERGY LOOS OF A BINARY SYSTEM ================================================== Introduction ------------ At the present time, the existence of gravitational waves is confirmed, thanks to the experimental proof of Taylor and Hulse who performed the systematic measurement of the motion of the binary with the pulsar PSR 1913+16. They found that the generalized energy-loss formula, which follows from the Einstein general theory of relativity, is in accordance with their measurement. This success was conditioned by the fact that the binary with the pulsar PSR 1913+16 as a gigantic system of two neutron stars, emits sufficient gravitational radiation to influence the orbital motion of the binary at the observable scale. Taylor and Hulse, working at the Arecibo radiotelescope, discovered the radiopulsar PSR 1913+16 in a binary, in 1974, and this is now considered as the best general relativistic laboratory (Taylor, 1993). Pulsar PSR 1913+16 is the massive body of the binary system where each of the rotating pairs is 1.4 times the mass of the Sun. These neutron stars rotate around each other with a period 7.8 hours, in an orbit not much larger than the Sun’s diameter. Every 59 ms, the pulsar emits a short signal that is so clear that the arrival time of a 5-min string of a set of such signals can be resolved within 15 $\mu$s. A pulsar model based on strongly magnetized, rapidly spinning neutron stars was soon established as consistent with most of the known facts (Huguenin et al, 1968); its electrodynamical properties were studied theoretically (Gold, 1968) and shown to be plausibly capable of generating broadband radio noise detectable over interstellar distances. The binary pulsar PSR 1913+16 is now recognized as the harbinger of a new class of unusually short-period pulsars, with numerous important applications. Because the velocities and gravitational energies in a high-mass binary pulsar system can be significantly relativistic, strong-field and radiative effects come into play. The binary pulsar PSR 1913+16 provides significant tests of gravitation beyond the weak-field, slow-motion limit (Goldreichet al., 1969; Damour et al., 1992). The goal of this section is not to repeat the derivation of the Einstein quadrupole formula, because this has been performed many times in general relativity and also in the Schwinger source theory in the weak-field limit (Manoukian, 1990). We show that just in the framework of the source theory it is easy to determine the quantum energy-loss formula of the binary system. The energy-loss formula can be generalized in such a way it involves also the radiative corrections. 
Since the measurement of the motion of binaries continues, we hope that future experiments will verify the quantum version of the energy-loss formula, involving also the radiative corrections. The source theory formulation of the problem -------------------------------------------- We show how the total quantum loss of energy caused by the production of gravitons, emitted by the binary system of two point masses moving around each other under their gravitational interaction, can be calculated in the framework of the source theory of gravity. Source theory (Schwinger, 1970, 1973, 1976) was initially constructed to describe the particle physics situations occurring in high-energy physics experiments. However, it was found that the original formulation simplifies the calculations in electrodynamics and gravity, where the interactions are mediated by the photon and the graviton, respectively. The source theory of gravity forms the analogue of quantum electrodynamics because, while in QED the interaction is mediated by the photon, the gravitational interaction is mediated by the graviton (Schwinger, 1976). The source theory of gravity invented by Schwinger is a linear theory. So the question arises whether it coincides with the Einstein gravity equations, which are substantially nonlinear. The answer is affirmative, because the coincidence holds only with the linear approximation of the Einstein theory. The predictions of the Schwinger theory are also in harmony with experiment. The quadrupole formula of Einstein also follows from the Schwinger version. The unification of gravity and electromagnetism is possible only in the Schwinger source theory. It is possible if, and only if, the unification of forces is possible. And this is achieved in the Schwinger source theory of all interactions, where the force is of the Yukawa form. The problem of unification is not new. We know from the history of physics that the Ptolemy system could not be unified with the Galileo-Newton system because in the Ptolemy system the force is not defined, while it is the fundamental quantity in the GN system and the primary cause of all phenomena in that system. Einstein gravity uses Riemann space-time, where the gravity force does not have the Yukawa dynamical form. The curvature of space-time is defined as the origin of all phenomena. The gravity force in the Einstein theory is thus in sharp contradistinction to the Yukawa force in quantum field theory, and therefore it seems that QFT, QED, QCD and the electroweak theory cannot be unified with Einstein gravity. Manoukian's (1990) derivation of the Einstein quadrupole formula in the framework of the Schwinger source theory is possible because of the coincidence of the source theory with the linear limit of the Einstein theory. Our approach is different from Manoukian's method because we derive the power spectral formula $ P(\omega)$ of emitted gravitons with frequency $\omega$ and then, using the relation $(- dE/dt) = \int d\omega P(\omega)$, we determine the energy loss $E$. In the case of the radiative corrections, we derive only the power spectral formula in its general form. The mathematical structure of $P(\omega )$ follows directly from the action $W$: while in the case of gravitational radiation the formula is composed of the energy-momentum tensor, in the case of electromagnetic radiation it involves charged vector currents.
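Before turning to the source-theory machinery, it is useful to have a number in mind for the system that motivates the calculation. The sketch below evaluates the standard classical quadrupole luminosity of a circular binary (the same circular-orbit idealization used later in this section) with Hulse-Taylor-like masses and orbital period taken as illustrative inputs; the real PSR 1913+16 orbit is strongly eccentric, which enhances the loss by roughly an order of magnitude, so this is an orientation only, not the quantum spectral result derived here.

```python
import math

# Classical quadrupole luminosity of a circular binary, evaluated for
# Hulse-Taylor-like parameters quoted in the text: two 1.4 solar-mass stars
# with a 7.8 h orbital period.  The circular-orbit formula is used purely for
# orientation; the real orbit is eccentric, which raises the loss further.
G, c, Msun = 6.674e-11, 2.998e8, 1.989e30

m1 = m2 = 1.4 * Msun
M = m1 + m2
P_orb = 7.8 * 3600.0                                        # s

a = (G * M * P_orb**2 / (4.0 * math.pi**2)) ** (1.0 / 3.0)  # Kepler's third law
P_gw = (32.0 / 5.0) * G**4 * m1**2 * m2**2 * M / (c**5 * a**5)

print(f"semi-major axis a ~ {a:.2e} m (~{a / 6.96e8:.1f} solar radii)")
print(f"circular-orbit GW luminosity ~ {P_gw:.1e} W")
```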
The basic formula in the source theory is the vacuum-to-vacuum amplitude (Schwinger et al., 1976): $$\langle 0_{+}|0_{-}\rangle = e^{\frac {i}{\hbar}\*W(S)},\eqno(1)$$ where the minus and plus tags on the vacuum symbol are causal labels, referring to any time before and after region of space-time, where sources are manipulated. The exponential form is introduced with regard to the existence of the physically independent experimental arrangements, which has the simple consequence that the associated probability amplitudes multiply and the corresponding $W$ expressions add (Schwinger et al., 1976; Dittrich, 1978). In the flat space-time, the field of gravitons is described by the amplitude (1) with the action (Schwinger, 1970) ($c = 1$ in the following text) $$W(T) = 4\pi\*G\*\int (dx)(dx')$$ $$\times \quad \left[T^{\mu\nu}(x) \*D_{+}(x-x')T_{\mu\nu}(x') - \frac{1}{2}\*T(x)D_{+}(x-x')T(x')\right],\eqno(2)$$ where the dimensionality of $W(T)$ is the same as the dimensionality of the Planck constant $\hbar$; $T_{\mu\nu}$ is the tensor of momentum and energy. For a particle moving along the trajectory ${\mbox {\bf x}} = {\mbox {\bf x}}(t)$, it is defined by the equation (Weinberg, 1972): $$T^{\mu\nu}(x) = \frac{p^{\mu}p^{\nu}}{E}\*\delta({\mbox {\bf x}} - {\mbox {\bf x}}(t)), \eqno(3)$$ where $p^{\mu}$ is the relativistic four-momentum of a particle with a rest mass $m$ and $$p^{\mu} = (E,{\mbox {\bf p}}) \eqno(4)$$ $$p^{\mu}\*p_{\mu} = - m^2,\eqno(5)$$ and the relativistic energy is defined by the known relation $$E = \frac {m}{\sqrt{1 - {\mbox {\bf v}}^{2}}},\eqno(6)$$ where is the three-velocity of the moving particle. Symbol $T(x)$ in formula (2) is defined as $T = g_{\mu\nu}T^{\mu\nu}$, and $D_{+}(x-x')$ is the graviton propagator whose explicit form will be determined later. The action $W$ is not arbitrary because it must involve the attractive force between the gravity masses while in case of the electromagnetic situation the action must involve the repulsive force between charges of the same sign. It is very surprising that such form of Lagrangians follows from the quantum definition of the vacuum to vacuum amplitude. It was shown by Schwinger that Einstein gravity also follows from the source theory, however the method of derivation is not the integral part of the source theory because the source theory is linear and it is not clear how to establish the equivalence between linear and nonlinear theory. String theory tries to solve the problem of the unification of all forces, however, this theory is, at the present time, not predictable and works with so called extra-dimensions which was not observed. It is not clear from the viewpoint of physics, what the dimension is. It seems that many problems can be solved in the framework of the source theory. The power spectral formula in general ------------------------------------- It may be easy to show that the probability of the persistence of vacuum is given by the following formula (Schwinger et al., 1976): $$|\langle 0_{+}|0_{-}\rangle|^2 = \exp\left\{-\frac {2}{\hbar}{\mbox {\rm Im}}\* W\right\} \, \stackrel{d}{=}\, \exp\left\{-\int\,dtd\omega \frac {1}{\hbar\omega}P(\omega,t)\right\},\eqno(7)$$ where the so-called power spectral function $P(\omega,t)$ has been introduced (Schwinger et al., 1976). In order to extract this spectral function from Im $W$, it is necessary to know the explicit form of the graviton propagator $D_{+}(x-x')$. The physical content of this propagator is analogous to the content of the photon propagator. 
It involves the gravitons property of spreading with velocity $c$. It means that its explicit form is just the same as that of the photon propagator. With regard to the source theory (Schwinger et al., 1976) the $x$-representation of $D_{+}(x)$ in eq. (2) is as follows: $$D_{+}(x-x') = \int \frac {(dk)}{(2\pi)^4}\*e^{ik(x-x')}\*D(k),\eqno(8)$$ where $$D(k) = \frac {1}{|{\mbox {\bf k}}^2| - (k^0)^2 - i\epsilon},\eqno(9)$$ which gives $$D_{+}(x-x') =\frac {i}{4\pi^2}\*\int_{0}^{\infty}d\omega \frac {\sin\omega|{\mbox {\bf x}}-{\mbox {\bf x}}'|} {|{\mbox {\bf x}} - {\mbox {\bf x}}'|}\* e^{-i\omega|t-t'|}. \eqno(10)$$ Now, using formulas (2), (7) and (10), we get the power spectral formula in the following form: $$P(\omega,t) = 4\pi\*G\*\omega\int\, (d{\mbox {\bf x}}) (d{\mbox {\bf x}}')dt'\* \frac {\sin\omega|{\mbox {\bf x}}-{\mbox {\bf x}}'|} {|{\mbox {\bf x}}-{\mbox {\bf x}}'|}\*\cos\omega(t- t')$$ $$\times \quad \left[T^{\mu\nu}({\mbox {\bf x}},t)T_{\mu\nu} ({\mbox {\bf x}}',t') - \frac{1}{2}g_{\mu\nu}T^{\mu\nu}({\mbox {\bf x}},t) g_{\alpha\beta}T^{\alpha\beta}({\mbox {\bf x}}',t')\right]. \eqno(11)$$ The power spectral formula for the binary system ------------------------------------------------ In the case of the binary system with masses $m_{1}$ and $m_{2}$, we suppose that they move in a uniform circular motion around their center of gravity in the $xy$ plane, with corresponding kinematical coordinates: $${\mbox {\bf x}}_{1}(t) = r_{1}({\mbox {\bf i}}\cos(\omega_{0}t) + {\mbox {\bf j}}\sin(\omega_{0}t)) \eqno(12)$$ $${\mbox {\bf x}}_{2}(t) = r_{2}({\mbox {\bf i}}\cos(\omega_{0}t + \pi) + {\mbox {\bf j}}\sin(\omega_{0}t + \pi))\eqno(13)$$ with $${\mbox {\bf v}}_{i}(t) = d{\mbox {\bf x}}_{i}/dt, \hspace{5mm} \omega_{0} = v_{i}/r_{i}, \hspace{5mm} v_{i} = |{\mbox {\bf v}}_{i}| \quad (i = 1,\, 2). \eqno(14)$$ For the tensor of energy and momentum of the binary we have: $$T^{\mu\nu}(x) = \frac{p_{1}^{\mu}p_{1}^{\nu}}{E_{1}}\*\delta({\mbox {\bf x}} - {\mbox {\bf x}}_{1}(t)) + \frac{p_{2}^{\mu}p_{2}^{\nu}}{E_{2}}\*\delta({\mbox {\bf x}} - {\mbox {\bf x}}_{2}(t)), \eqno(15)$$ where we have omitted the tensor $t^G_{\mu\nu}$, which is associated with the massless, gravitational field distributed all over space and proportional to the gravitational constant $G$ (Cho et al., 1976): After insertion of eq. (15) into eq. 
(11), we get: $$P_{total}(\omega,t) = P_{1}(\omega,t) +P_{12}(\omega,t) + P_{2}(\omega,t), \eqno(16)$$ where ($t' - t = \tau$): $$P_{1}(\omega,t) = \frac {G\omega}{r_{1}\pi}\* \int_{-\infty}^{\infty}\, d\tau\*\frac {\sin[2\omega\*r_{1}\*\sin(\omega_{0}\tau/2)]} {\sin(\omega_{0}\tau/2)}\*\cos\omega\tau$$ $$\times\quad \left(E_{1}^2(\omega_{0}^2\*r_{1}^2\*\cos\omega_{0}\tau - 1)^2 - \frac {m_{1}^4}{2E_{1}^2}\right),\eqno(17)$$ $$P_{2}(\omega,t) = \frac {G\omega}{r_{2}\pi}\* \int_{-\infty}^{\infty}\, d\tau\*\frac {\sin[2\omega\*r_{2}\*\sin(\omega_{0}\tau/2)]} {\sin(\omega_{0}\tau/2)}\*\cos\omega\tau$$ $$\times\quad \left(E_{2}^2(\omega_{0}^2\*r_{2}^2\*\cos\omega_{0}\tau - 1)^2 - \frac {m_{2}^4}{2E_{2}^2}\right),\eqno(18)$$ $$P_{12}(\omega,t) = \frac {4G\omega}{\pi}\* \int_{-\infty}^{\infty}\, d\tau\*\frac {\sin\omega\*[r_{1}^2 + r_{2}^2 + 2r_{1}\*r_{2}\cos(\omega_{0}\tau)]^{1/2}} {[r_{1}^2 + r_{2}^2 + 2r_{1}\* r_{2}\* \cos(\omega_{0}\tau)]^{1/2}} \*\cos\omega\tau$$ $$\times\quad \left(E_{1}\*E_{2}(\omega_{0}^2\*r_{1}\*r_{2} \*\cos\omega_{0}\tau + 1)^2 - \frac {m_{1}^2\*m_{2}^2}{2E_{1}\*E_{2}}\right) .\eqno(19)$$ The quantum energy loss of the binary ------------------------------------- Using the following relations $$\omega_{0}\tau = \varphi + 2\pi\*l, \hspace{7mm} \varphi\in(-\pi,\pi), \quad l = 0,\, \pm1,\, \pm2,\, ...\eqno(20)$$ $$\sum_{l=-\infty}^{l=\infty}\; \cos2\pi\*l \frac {\omega}{\omega_{0}} = \sum_{l=-\infty}^\infty \; \omega_{0}\delta(\omega - \omega_{0}l), \eqno(21)$$ we get for $P_{i}(\omega,t)$, with $\omega$ being restricted to positive: $$P_{i}(\omega,t) = \sum_{l=1}^{\infty}\;\delta(\omega-\omega_{0}\*l)\* P_{il}(\omega,t).\eqno(22)$$ Using the definition of the Bessel function $J_{2l}(z)$ $$J_{2l}(z) = \frac {1}{2\pi}\int_{-\pi}^{\pi}\,d\varphi\* \cos \left(z\sin\frac{\varphi}{2}\right)\*\cos{l\varphi}, \eqno(23)$$ from which the derivatives and their integrals follow, we get for $P_{1l}$ and $P_{2l}$ the following formulas: $$P_{il} = \frac {2G\omega}{r_{i}}\* \Bigl((E_{i}^2(1 - v_{i}^2)^{2} - \frac {m_{i}^4}{2E_{i}^2}\Bigr)\*\int_{0}^{2v_{i}\*l}\,dx\,J_{2l}(x)$$ $$+ \quad 4E_{i}^2(1 - v_{i}^2)\*v_{i}^2\*J'_{2l}(2v_{i}l) + 4E_{i}^2v_{i}^4\*J'''_{2l}(2v_{i}l)\Bigr),\quad i = 1,\, 2. \eqno(24)$$ Using $r_{2} = r_{1} + \epsilon $, where $\epsilon$ is supposed to be small in comparison with radii $r_{1}$ and $r_{2}$, we obtain $$[r_{1}^2 + r_{2}^2 + 2r_{1}\*r_{2}\cos\varphi]^{1/2} \approx 2a\cos\left(\frac {\varphi}{2}\right), \eqno(25)$$ with $$a = r_{1}\left(1 + \frac {\epsilon}{2r_{1}}\right). \eqno(26)$$ So, instead of eq. (19) we get: $$P_{12}(\omega,t) = \frac {2G\omega}{a\pi}\* \int_{-\infty}^{\infty}\, d\tau\*\frac {\sin[2\omega\*a\*\cos(\omega_{0}\tau/2)]} {\cos(\omega_{0}\tau)/2]} \*\cos\omega\tau$$ $$\times\quad \left(E_{1}\*E_{2}(\omega_{0}^2\*r_{1}\*r_{2}\*\cos\omega_{0}\tau + 1)^2 - \frac {m_{1}^2\*m_{2}^2}{2E_{1}\*E_{2}}\right).\eqno(27)$$ Now, we can approach the evaluation of the energy-loss formula for the binary from the power spectral formulas (24) and (27). The energy loss is defined by the relation $$-\frac{dU}{dt} = \int\,P(\omega)d\omega =$$ $$\int\,d\omega\*\sum_{i,l}\delta(\omega - \omega_{0}l)P_{il} + \int\, P_{12}(\omega)d\omega = -\frac {d}{dt}(U_{1} + U_{2} + U_{12}). 
\eqno(28)$$ Or, $$-\frac {d}{dt}U_{i} = \int\,d\omega\*\sum_{l}\delta(\omega - \omega_{0}l)P_{il},\quad -\frac {d}{dt}U_{12} = \int\,d\omega\*\sum_{i,l}\delta(\omega - \omega_{0}l)P_{12l}.\eqno(29)$$ From Sokolov and Ternov (1983) we learn Kapteyn’s formulas: $$\sum_{l=1}^{\infty} 2l\*J'_{2l}(2lv) = \frac {v}{(1 - v^2)^2}, \eqno(30)$$ and $$\sum_{l=1}^{\infty} l\*\int_{0}^{2lv}\*J_{2l}(x)dx = \frac {v^3}{3(1-v^2)^3}. \eqno(31)$$ The formula $\sum_{l=1}^{\infty} l\*J'''_{2l}(2lv) = 0$ can be obtained from formula $$\sum_{l=1}^{\infty} \frac {1}{l}\*J'_{2l}(2lv) = \frac{1}{2}v^{2} \eqno(32)$$ by its differentiation with the respect to $v$ (Schott, 1912). Then, after application of eqs. (30), (31) and (32) to eqs. (24) and (28), we get: $$-\frac {dU_{i}}{dt} = \frac {2G\omega_{0}}{r_{i}}\* \left[\left(E_{i}^{2}( v_{i}^2 -1)^{2} - \frac{m_{i}^{4}}{2E_{i}^{2}}\right)\frac{v_{i}^{3}}{3(1 - v_{i}^{2})^{3}} - 2E_{i}^{2}v_{i}^{3} + 4E_{i}^{2}v_{i}^{4}\right]. \eqno(33)$$ Instead of using Kapteyn’s formulas for the interference term, we will perform a direct evaluation of the energy loss of the interference term by the $\omega$-integration in (27). So, after some elementary modification in the $\omega$-integral, we get: $$- \frac {dU_{12}}{dt} = \int_{0}^{\infty}\,P(\omega)d\omega =$$ $$A\*\int_{-\infty}^{\infty}d\tau\*\int_{-\infty}^{\infty}\,d\omega\*\omega \*e^{-i\omega\tau}\*\sin[2\omega\*a\*\cos \omega_{0}\tau]\* \left[\frac {B(C\cos\omega_{0}\tau + 1)^2 - D}{\cos(\omega_{0}\tau/2)} \right], \eqno(34)$$ with $$A = \frac {G}{a\pi},\quad B = E_{1}E_{2},\quad C = v_{1}v_{2}, \quad D = \frac {m_{1}^2\*m_{2}^2}{2E_{1}E_{2}}.\eqno(35)$$ Using the definition of the $\delta$-function and its derivative, we have, instead of eq. (34), with $v = a\omega_{0}$: $$- \frac {dU_{12}}{dt} = A\*\omega_{0}\pi\*\int_{-\infty}^{\infty}\,dx\, \*\frac {[B(C\*\cos x + 1)^{2} - D]}{\cos(x/2)}\quad \times$$ $$\left[\delta'(x - 2v\cos(x/2)) - \delta'(x + 2v\cos(x/2))\right]. \eqno(36)$$ Putting $$x - 2v\cos(x/2) = t, \eqno(37)$$ in the first $\delta'$-term and $$y + 2v\cos(y/2) = t, \eqno(38)$$ in the second $\delta'$-term, we get eq. (36) in the following form: $$-\frac{dU_{12}}{dt} = 2A\omega_{0}v\pi \* \int_{-\infty}^{\infty}\,dt\delta'(t) \times$$ $$\left\{\frac {[B(C\cos x + 1)^2 - D]}{(x-t)(1+v\sin(x/2))} - \frac {[B(C\cos y + 1)^2 - D]}{(y+t)(1-v\sin(y/2))}. \right\} \eqno(39)$$ Using the known relation for a $\delta$-function: $$\int\,dtf(t)\delta'(t) = - f'(0), \eqno(40)$$ we get the energy loss formula for the synergic term in the form: $$-\frac{dU_{12}}{dt} = -2A\omega_{0}v\pi \* \left. \frac{d}{dt}\left\{\frac {[B(C\cos x + 1)^2 - D]}{(x-t)(1+v\sin(x/2))} - \frac {[B(C\cos y + 1)^2 - D]}{(y-t)(1-v\sin(y/2))} \right\}\right |_{t = 0}, \eqno(41)$$ and we recommend the final calculation of the last formula to the mathematical students. Let us remark finally that the formulas derived for the energy loss of the binary (33) and (41) describes only the binary system and therefore their sum has not the form of the Einstein quadrupole formula. The sum forms the total produced gravitational energy, and involves not only the radiation of the individual bodies of the binary, but also the interference term. The problem of the coincidence with the Einstein quadrupole formula is open. 
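The Kapteyn-type sums in eqs. (30) and (31) are easily checked numerically. The following is a minimal sketch of such a check, assuming only NumPy and SciPy are available; the infinite series are truncated at a finite number of terms, which is adequate for velocities not too close to the velocity of light.

```python
# Minimal numerical check of the Kapteyn-type sums quoted in eqs. (30) and (31).
# The infinite series are truncated at l_max terms (sufficient for v not too
# close to 1, in units with c = 1).
import numpy as np
from scipy.special import jv, jvp
from scipy.integrate import quad

def kapteyn_lhs(v, l_max=200):
    """Truncated left-hand sides of eq. (30) and eq. (31) for 0 < v < 1."""
    s30 = sum(2.0 * l * jvp(2 * l, 2.0 * l * v) for l in range(1, l_max + 1))
    s31 = sum(l * quad(lambda x, l=l: jv(2 * l, x), 0.0, 2.0 * l * v)[0]
              for l in range(1, l_max + 1))
    return s30, s31

if __name__ == "__main__":
    v = 0.5
    s30, s31 = kapteyn_lhs(v)
    print("eq. (30):", s30, "  closed form:", v / (1.0 - v**2) ** 2)
    print("eq. (31):", s31, "  closed form:", v**3 / (3.0 * (1.0 - v**2) ** 3))
```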
THE POWER SPECTRAL FORMULA INVOLVING RADIATIVE CORRECTIONS
==========================================================

Introduction
------------

We here calculate the total quantum loss of energy caused by the production of gravitons emitted by the binary system, in the framework of the source theory of gravity, for the situation in which the graviton propagator involves radiative corrections. We know from QED that the photon can exist in a virtual state as a two-body system in the form of an electron-positron pair. This means that the photon propagator involves the additional process: $$\gamma \rightarrow e^{+} + e^{-} \rightarrow \gamma .\eqno(42)$$ In the case of graviton radiation, the situation is analogous to that in QED. Instead of eq. (42) we write $$g \rightarrow 2e^{+} + 2e^{-} \rightarrow g ,\eqno(43)$$ where $g$ is the graviton and the factor of 2 is present so that spin is also conserved during the virtual process. Equation (43) can of course be expressed in more detail: $$g \rightarrow \gamma + \gamma \rightarrow (e^{+} + e^{-}) + (e^{+} + e^{-}) \rightarrow \gamma + \gamma \rightarrow g. \eqno(44)$$

We will show that in the framework of the source theory it is easy to determine the quantum energy-loss formula of the binary system in the case of a graviton propagator with radiative corrections. We will investigate how the spectrum of the gravitational radiation is modified if we include the radiative corrections corresponding to virtual pair production and annihilation in the graviton propagator. Our calculation is an analogue of the treatment of the photon propagator with radiative corrections for the production of photons by the Čerenkov mechanism (Pardy, 1994c,d). Because measurements of the motion of binaries continue, we hope that future experiments will verify the quantum version of the energy-loss formula following from the source theory, and that sooner or later this formula will be confirmed.

The binary power spectrum with radiative corrections
----------------------------------------------------

According to source theory (Schwinger, 1973; Dittrich, 1978; Pardy, 1994c,d), the photon propagator in Minkowski space-time with radiative corrections is, in the momentum representation, of the form: $$\tilde{D}(k) = D(k) + \delta D(k), \eqno(45)$$ or, $$\tilde{D}(k) = \frac {1}{|{\mbox {\bf k}}|^2-(k^0)^2-i\epsilon}$$ $$+ \quad \int_{4m^2}^\infty dM^2 \frac {a(M^2)} {|{\mbox {\bf k}}|^2-(k^0)^2+\frac {M^2\*c^2}{\hbar^{2}}-i\epsilon}, \eqno(46)$$ where $m$ is the electron mass and the last term in equation (46) is derived from the virtual-photon condition $$|{\mbox {\bf k}}|^2 - (k^0)^2 = - \frac {M^2\*c^2}{\hbar^{2}}.\eqno(47)$$ The weight function $a(M^2)$ has been derived in the following form (Schwinger, 1973; Dittrich, 1978): $$a(M^2) = \frac {\alpha}{3\pi} \frac {1}{M^2} \left(1+\frac {2m^2}{M^2}\right) \left(1 - \frac {4m^2}{M^2}\right)^{1/2}. \eqno(48)$$ We suppose that the graviton propagator with radiative corrections forms the analogue of the photon propagator.
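The weight function of eq. (48) is elementary to evaluate. The sketch below, which assumes only NumPy and works in units with $c = \hbar = 1$ and masses measured in units of the electron mass, illustrates how rapidly $a(M^{2})$ falls off above the pair-production threshold $M^{2} = 4m^{2}$.

```python
# Sketch of the spectral weight a(M^2) of eq. (48), in units with c = hbar = 1
# and with the electron mass m = 1 setting the mass scale.
import numpy as np

ALPHA = 1.0 / 137.035999   # fine-structure constant
M_E = 1.0                  # electron mass (mass unit)

def a_weight(M2):
    """Weight function a(M^2) of eq. (48), defined for M^2 >= 4 m^2."""
    M2 = np.asarray(M2, dtype=float)
    r = M_E**2 / M2
    return (ALPHA / (3.0 * np.pi)) / M2 * (1.0 + 2.0 * r) * np.sqrt(1.0 - 4.0 * r)

if __name__ == "__main__":
    for M2 in (4.001, 5.0, 10.0, 100.0):
        print(f"M^2 = {M2:8.3f} m^2   a(M^2) = {a_weight(M2):.3e} / m^2")
```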
Now, with regard to the definition of the Fourier transform $$D_{+}(x-x') = \int \frac {(dk)}{(2\pi)^4}\* e^{ik(x-x')}\*D(k),\eqno(49)$$ we get for $\delta\*D_{+}$ the following relation ($c = \hbar = 1$): $$\delta\*D_{+}(x-x') = \frac {i}{4\pi^2} \*\int_{4m^2}^{\infty}\,dM^2\*a(M^2)$$ $$\times\; \int\,d\omega\,\frac {\sin\left\{[\omega^{2}- M^2]^{1/2}\* |{\mbox {\bf x}}-{\mbox {\bf x}'}|\right\}}{|{\mbox {\bf x}}- {\mbox {\bf x}}'|}\*e^{-i\omega\*|t-t'|}.\eqno(50)$$ The function (50) differs from the gravitational function “$D_{+}$” in (9) especially by the factor $$\left(\omega^2 - M^2 \right)^{1/2}\eqno(51)$$ in the function ’$\sin$’ and by the additional mass-integral which involves the radiative corrections to the original power spectrum formula. In order to determine the additional spectral function of produced gravitons, corresponding to the radiative corrections, we insert $D_{+}(x-x') + \delta D_{+}(x-x')$ into eq. (2), and using eq. (11) we obtain (factor 2 from the two photons is involved): $$\delta\*P(\omega,t) = \frac {4\*G\omega}{\pi}\int\, (d{\mbox {\bf x}})(d{\mbox {\bf x}}')dt'\* \int_{4m^2}^{\infty}\,dM^2\*a(M^2)$$ $$\times\quad \frac {\sin\left\{[\omega^2 - M^2]^{1/2}|{\mbox {\bf x}}-{\mbox {\bf x}}'|\right\}} {|{\mbox {\bf x}}-{\mbox {\bf x}}'|}\*\cos\omega(t- t')$$ $$\times \quad \Bigl [T^{\mu\nu}({\mbox {\bf x}},t)g_{\mu\alpha}g_{\nu\beta}T^{\alpha\beta} ({\mbox {\bf x}}',t') - \frac{1}{2}g_{\mu\nu}T^{\mu\nu}({\mbox {\bf x}},t) g_{\alpha\beta}T^{\alpha\beta}({\mbox {\bf x}}',t')\Bigr].\eqno(52)$$ Then using eqs. (16), (17), (18) and (19), we get $$\delta P_{total}(\omega,t) =\delta P_{1}(\omega,t) + \delta P_{2}(\omega,t) + \delta P_{12}(\omega,t), \eqno(53)$$ where ($t' - t = \tau$): $$\delta P_{1}(\omega,t) = \frac {2G\omega}{r_{1}\pi}\* \int_{-\infty}^{\infty}\, d\tau\*\int_{4m^2}^{\infty}\,dM^2\*a(M^2) \frac {\sin\{2\left(\omega^2 - M^2 \right)^{1/2}\*r_{1}\*\sin(\omega_{0}\tau/2)\}} {\sin(\omega_{0}\tau/2)}\*\cos\omega\tau$$ $$\times \quad \left(E_{1}^2(\omega_{0}^2\*r_{1}^2\*\cos\omega_{0}\tau - 1)^2 - \frac {m_{1}^4}{2E_{1}^2}\right),\eqno(54)$$ $$\delta P_{2}(\omega,t) = \frac {2G\omega}{r_{2}\pi}\* \int_{-\infty}^{\infty}\, d\tau\*\int_{4m^2}^{\infty}\,dM^2\*a(M^2) \frac {\sin\{2\left(\omega^2 - M^2 \right)^{1/2}\*r_{2}\*\sin(\omega_{0} \tau/2)\}} {\sin(\omega_{0}\tau/2)}\*\cos\omega\tau$$ $$\times \quad \left(E_{2}^2(\omega_{0}^2\*r_{2}^2\*\cos\omega_{0}\tau - 1)^2 - \frac {m_{2}^4}{2E_{2}^2}\right),\eqno(55)$$ $$\delta P_{12}(\omega,t) = \frac {8G\omega}{\pi}\* \int_{-\infty}^{\infty}\, d\tau\*\int_{4m^2}^{\infty}\,dM^2\*a(M^2)$$ $$\frac {\sin\{\left(\omega^2 - M^2 \right)^{1/2}\*[r_{1}^2 + r_{2}^2 + 2r_{1}\*r_{2}\cos(\omega_{0}\tau)]^{1/2}\}} {[r_{1}^2 + r_{2}^2 + 2r_{1}\* r_{2}\* \cos(\omega_{0}\tau)]^{1/2}} \*\cos\omega\tau$$ $$\times \quad \left(E_{1}\*E_{2}(\omega_{0}^2\*r_{1}\*r_{2}\*\cos\omega_{0}\tau + 1)^2 - \frac {m_{1}^2\*m_{2}^2}{2E_{1}\*E_{2}}\right).\eqno(56)$$ The explicit determination of the power spectrum is the problem which was solved by author in 1994. The solution was performed only approximately. Here also can be expected only the approximative solution. From this solution can be then derived the energy loss as in the previous article. Let us show the possible way of the determination of the spectral formula. If we introduce the new variable $s$ by the relation $$\omega^{2} - M^{2} = s^{2}; \quad -dM^{2} = 2sds \eqno(57)$$ then, instead of eqs. 
(54), (55) and (56) we have $$\delta P_{1}(\omega,t) = \frac {2G\omega}{r_{1}\pi} \* \int_{-\infty}^{\infty}\, d\tau\*\int_{s_{1}}^{s_{2}}\,(2sds)\*a(\omega^{2}- s^{2}) \frac {\sin\{2 s\*r_{1}\*\sin(\omega_{0}\tau/2)\}} {\sin(\omega_{0}\tau/2)}\*\cos\omega\tau$$ $$\times \quad \left(E_{1}^2(\omega_{0}^2\*r_{1}^2\*\cos\omega_{0}\tau - 1)^2 - \frac {m_{1}^4}{2E_{1}^2}\right),\eqno(58)$$ $$\delta P_{2}(\omega,t) = \frac {2G\omega}{r_{2}\pi} \* \int_{-\infty}^{\infty}\, d\tau\*\int_{s_{1}}^{s_{2}}\,(2sds)\*a(\omega^2 -s^{2}) \frac {\sin \{2s\*r_{2}\*\sin(\omega_{0}\tau/2)\}} {\sin(\omega_{0}\tau/2)}\*\cos\omega\tau$$ $$\times \quad \left(E_{2}^2(\omega_{0}^2\*r_{2}^2\*\cos\omega_{0}\tau - 1)^2 - \frac {m_{2}^4}{2E_{2}^2}\right),\eqno(59)$$ $$\delta P_{12}(\omega,t) = \frac {8G\omega}{\pi}\* \int_{-\infty}^{\infty}\, d\tau\*\int_{s_{1}}^{s_{2}}\,(2sds)\*a(\omega^2 - s^{2}) \frac {\sin\{s\*[r_{1}^2 + r_{2}^2 + 2r_{1}\*r_{2}\cos(\omega_{0}\tau)]^{1/2}\}} {[r_{1}^2 + r_{2}^2 + 2r_{1}\* r_{2}\* \cos(\omega_{0}\tau)]^{1/2}}$$ $$\times \quad \cos\omega\tau\*\left(E_{1}\*E_{2}(\omega_{0}^2\*r_{1}\*r_{2}\*\cos\omega_{0}\tau + 1)^2 - \frac {m_{1}^2\*m_{2}^2}{2E_{1}\*E_{2}}\right),\eqno(60)$$ where $$s_{1} = \omega^{2} - 4m^{2}, \quad s_{2} = \infty .\eqno(61)$$ It seems that the rigorous procedure is to perform the $\tau$-integration as the first step and the $s$-integration as the second step (Pardy, 1994c,d). While in the case of linear motion the mathematical operations are easy (Pardy, 1994c), in the case of circular motion there are some difficulties (Pardy, 1994d). The final evaluation of eq. (60) is left to the mathematical experts. The energy loss is given as follows: $$-\frac{dU_{i}}{dt} = \int_{0}^{\infty}d\omega P_{i}(\omega, t);\quad -\frac{dU_{12}}{dt} = \int_{0}^{\infty}d\omega P_{12}(\omega,t). \eqno(62)$$ Let us now turn to the electromagnetic system of two opposite charges moving in a constant magnetic field and producing pulsed synchrotron radiation.

ELECTROMAGNETIC PULSAR
======================

Introduction
------------

Here, the power spectrum formula of the synchrotron radiation generated by an electron and a positron moving with opposite angular velocities in a homogeneous magnetic field is derived in the Schwinger version of quantum field theory. It is surprising that the spectrum depends periodically on the radiation frequency $\omega$ and on time, which means that the system composed of electron, positron and magnetic field forms a pulsar. We will show that the Large Hadron Collider (LHC), which at the time of writing is under construction at CERN, can in the near future also be considered the largest terrestrial electromagnetic pulsar. While the Fermilab Tevatron handles counter-rotating protons and antiprotons in a single beam channel, the LHC will operate with two proton beams in such a way that the collision center-of-mass energy will be 14 TeV at a luminosity of $10^{34}\;{\rm cm}^{-2}\,{\rm s}^{-1}$. To achieve such a large luminosity it must operate with more than 2800 bunches per beam and a very high density of particles in each bunch. The LHC will also operate for heavy Pb-ion physics at a luminosity of $10^{27}\;{\rm cm}^{-2}\,{\rm s}^{-1}$ (Evans, 1999). The collisions of particles are caused by the oppositely directed motion of the bunches: if one bunch has angular velocity $\omega$, then the bunch of antiparticles has angular velocity $-\omega$.
Here we will determine the spectral density of emitted photons in the simplified case where one electron and one positron move in the opposite direction on a circle. We will show that the synergic spectrum depends periodically on time. This means that the behavior of the system is similar to the behavior of electromagnetic pulsar. The derived spectral formula describes the spectrum of photons generated by the Fermilab Tevatron. In case that the particles in bunches are of the same charge as in LHC, then, it is necessary to replace the function sine by cosine in the final spectral formula. Now let us approach the theory and explicit calculation of the spectrum. This process is the generalization of the one-charge synergic synchrotron-Čerenkov radiation which has been calculated in source theory two decades ago by Schwinger et al. (1976). We will follow the Schwinger article and also the author articles (Pardy, 1994d, 2000, 2002) as the starting point. Although our final problem is the radiation of the two-charge system in vacuum, we consider, first in general, the presence of dielectric medium, which is represented by the phenomenological index of refraction $n$ and it is well known that this phenomenological constant depends on the external magnetic field. Introducing the phenomenological constant enables to consider also the Čerenkovian processes. Later we put $n = 1$. We will investigate here how the original Schwinger (et al.) spectral formula of the synergic c synchrotron Čerenkov radiation of the charged particle is modified if we consider the electron and positron moving at the opposite angular velocities. This problem is an analogue of the linear (Pardy, 1997) and circular problem solved by author (Pardy, 2000). We will show that the original spectral formula of the synergic synchrotron-Čerenkov radiation is modulated by function $4\sin^{2}(\omega t)$ where $\omega$ is the frequency of the synergistic radiation produced by the system and it does not depend on the orbital angular frequency of electron or positron. We will use here the fundamental ingredients of Schwinger source theory (Schwinger, 1970, 1973; Dittrich, 1978; Pardy, 1994c, d, e) to determine the power spectral formula. Formulation of the electromagnetic problem ------------------------------------------ The basic formula of the Schwinger source theory is the so called vacuum to vacuum amplitude:$\langle 0_{+}|0_{-} \rangle = \exp\{\frac{i}{\hbar}\*W\},$ where in case of the electromagnetic field in the medium, the action $W$ is given by the following formula: $$W = \frac{1}{2c^2}\*\int\,(dx)(dx')J^{\mu}(x){D}_{+\mu\nu}(x-x')J^{\nu}(x'),\eqno(63)$$ where $${D}_{+}^{\mu\nu} = \frac{\mu}{c}[g^{\mu\nu} + (1-n^{-2})\beta^{\mu}\beta^{\nu}]\*{D}_{+}(x-x'),\eqno(64)$$ where $\beta^{\mu}\, \equiv \, (1,{\bf 0})$, $J^{\mu}\, \equiv \,(c\varrho,{\bf J})$ is the conserved current, $\mu$ is the magnetic permeability of the medium, $\epsilon$ is the dielectric constant od the medium and $n=\sqrt{\epsilon\mu}$ is the index of refraction of the medium. Function ${D}_{+}$ is defined as in eq. 
(10) (Schwinger et al., 1976): $$D_{+}(x-x') =\frac {i}{4\pi^2\*c}\*\int_{0}^{\infty}d\omega \frac {\sin\frac{n\omega}{c}|{\bf x}-{\bf x}'|}{|{\bf x} - {\bf x}'|}\* e^{-i\omega|t-t'|}.\eqno(65)$$ The probability of the persistence of the vacuum follows from the vacuum amplitude (1), where ${\rm Im}\;W$ is the basis for the following definition of the spectral function $P(\omega,t)$: $$-\frac{2}{\hbar}\*{\rm Im}\;W \;\stackrel{d}{=} \; -\, \int\,dtd\omega\frac{P(\omega,t)}{\hbar\omega}.\eqno(66)$$ Now, if we insert eq. (64) into eq. (66), we get after extracting $P(\omega,t)$ the following general expression for this spectral function: $$P(\omega,t) = -\frac{\omega}{4\pi^2}\*\frac{\mu}{n^2}\*\int\,d{\bf x} d{\bf x}'dt'\left[\frac{\sin\frac{n\omega}{c}|{\bf x} - {\bf x}'|}{|{\bf x} - {\bf x}'|}\right]$$ $$\times \quad \cos[\omega\*(t-t')]\*[\varrho({\bf x},t)\varrho({\bf x}',t') - \frac{n^2}{c^2}\*{\bf J}({\bf x},t)\cdot{\bf J}({\bf x}',t')],\eqno(67)$$ which is an analogue of formula (11). Let us recall that the last formula can also be derived in the classical electrodynamics context, as shown for instance in Schwinger's article (Schwinger, 1949); the derivation of the power spectral formula from the vacuum amplitude is, however, simpler.

The radiation of two opposite charges
-------------------------------------

Now we will apply formula (67) to the two-body system of opposite charges moving with opposite angular velocities, in order to obtain the general synergic synchrotron-Čerenkov radiation of an electron and a positron moving in a uniform magnetic field. While the synchrotron radiation is generated in a vacuum, the synergic synchrotron-Čerenkov radiation can be produced only in a medium with index of refraction $n$. We suppose circular motion with velocity ${\bf v}$ in the plane perpendicular to the direction of the constant magnetic field ${\bf H}$ (chosen to be in the $+z$ direction). We can write the following formulas for the charge density $\varrho$ and the current density ${\bf J}$ of the two-body system with opposite charges and opposite angular velocities: $$\varrho({\bf x},t) = e\*\delta\*({\bf x}-{\bf x_{1}}(t)) -e\*\delta\*({\bf x}-{\bf x_{2}}(t))\eqno(68)$$ and $${\bf J}({\bf x},t) = e\*{\bf v}_{1}(t)\*\delta\*({\bf x}-{\bf x_{1}}(t)) -e\*{\bf v}_{2}(t)\*\delta\*({\bf x}-{\bf x_{2}}(t))\eqno(69)$$ with $${\bf x}_{1}(t) = {\bf x}(t) = R({\bf i}\cos(\omega_{0}t) + {\bf j}\sin(\omega_{0}t)),\eqno(70)$$ $${\bf x}_{2}(t) = R({\bf i}\cos(-\omega_{0}t) + {\bf j}\sin(-\omega_{0}t)) = {\bf x}(-\omega_{0},t) = {\bf x}(-t).\eqno(71)$$ The absolute values of the velocities of both particles are the same, $|{\bf v}_{1}(t)| = |{\bf v}_{2}(t)| = v$, where ($H = |{\bf H}|$, $E =$ energy of a particle) $${\bf v}(t) = d{\bf x}/dt, \hspace{5mm} \omega_{0} = v/R, \hspace{5mm} R = \frac {\beta\*E}{eH}, \hspace{5mm} \beta = v/c, \hspace{5mm} v = |{\bf v}|.\eqno(72)$$ After insertion of eqs. (68)–(71) into eq. (67), and after some mathematical operations, we get $$P(\omega,t) = -\frac{\omega}{4\pi^2}\*\frac{\mu}{n^2}e^{2}\*\int_{-\infty}^{\infty}\, dt'\cos\omega(t-t')\sum_{i,j = 1}^{2}(-1)^{i+j}$$ $$\times \quad \left[1 - \frac {{\bf v}_{i}(t)\cdot {\bf v}_{j}(t')}{c^{2}}n^{2}\right] \left\{\frac{\sin\frac {n\omega}{c}|{\bf x}_{i}(t) -{\bf x}_{j}(t')|} {|{\bf x}_{i}(t) -{\bf x}_{j}(t')|}\right\}.\eqno(73)$$ Let us remark that, for the situation of identical charges, the factor $(-1)^{i + j}$ must be replaced by 1.
Using $t' = t + \tau$, we get for $${\bf x}_{i}(t) -{\bf x}_{j}(t') \stackrel{d}{=} {\bf A}_{ij},\eqno(74)$$ $$|{\bf A}_{ij}| = [R^{2} + R^{2} - 2RR\cos(\omega_{0}\tau + \alpha_{ij})]^{1/2} = 2R\left|\sin\left(\frac {\omega_{0}\tau + \alpha_{ij}}{2}\right)\right|, \eqno(75)$$ where $\alpha_{ij}$ were evaluated as follows: $$\alpha_{11} = 0,\quad \alpha_{12 } = 2\omega_{0}t, \quad \alpha_{21} = 2\omega_{0}t, \quad \alpha_{22} = 0.\eqno(76)$$ Using $${\bf v}_{i}(t)\cdot{}{\bf v}_{j}(t+\tau) = \omega_{0}^{2}R^{2} \cos(\omega_{0}\tau + \alpha_{ij}),\eqno(77)$$ and relation (75) we get with $v= \omega_{0}R$ $$P(\omega,t) = -\frac{\omega}{4\pi^2}\*\frac{\mu}{n^2}e^{2}\*\int_{-\infty}^{\infty}\, d\tau \cos\omega\tau \sum_{i,j = 1}^{2}(-1)^{i+j}$$ $$\times \quad \left[1 - \frac {n^{2}}{c^{2}}v^{2}\cos(\omega_{0}\tau + \alpha_{ij})\right] \left\{\frac{\sin\left[\frac {2Rn\omega}{c} \sin\left(\frac {(\omega_{0}\tau + \alpha_{ij})} {2}\right)\right]} {2R\sin\left(\frac {(\omega_{0}\tau + \alpha_{ij})}{2}\right)}\right\}. \eqno(78)$$ Introducing new variable $T$ by relation $$\omega_{0}\tau + \alpha_{ij} = \omega_{0}T\eqno(79)$$ for every integral in eq. (78), we get $P(\omega,t)$ in the following form $$P(\omega,t) = -\frac{\omega}{4\pi^2}\frac {e^{2}}{2R} \*\frac{\mu}{n^2}\*\int_{-\infty}^{\infty} dT \sum_{i,j=1}^{2}(-1)^{i+j}$$ $$\times\quad \cos(\omega T - \frac {\omega}{\omega_{0}}\alpha_{ij}) \left[1 - \frac {c^{2}}{n^{2}}v^{2}\cos(\omega_{0} T \right] \left\{\frac{\sin\left[\frac {2Rn\omega}{c}\sin \left(\frac {\omega_{0}T}{2}\right)\right]} {\sin\left(\frac {\omega_{0}T}{2}\right)}\right\}.\eqno(80)$$ The last formula can be written in the more compact form, $$P(\omega,t) = -\frac {\omega}{4\pi^{2}}\frac {\mu}{n^{2}}\frac {e^{2}}{2R} \sum_{i,j=1}^{2}(-1)^{i+j}\left\{P_{1}^{(ij)} -\frac {n^{2}}{c^{2}}v^{2} P_{2}^{(ij)}\right\},\eqno(81)$$ where $${P}^{(ij)} = J_{1a}^{(ij)}\cos\frac {\omega}{\omega_{0}}\alpha_{ij} + J_{1b}^{(ij)}\sin\frac {\omega}{\omega_{0}}\alpha_{ij}\eqno(82)$$ and $$P_{2}^{(ij)} = J_{2A}^{(ij)} \cos\frac {\omega}{\omega_{0}}\alpha_{ij} + J_{2B}^{(ij)}\sin\frac {\omega}{\omega_{0}}\alpha_{ij},\eqno(83)$$ where $$J_{1a}^{(ij)} = \int_{-\infty}^{\infty}dT\cos\omega T \left\{\frac{\sin\left[\frac {2Rn\omega}{c}\sin \left(\frac {\omega_{0}T}{2}\right)\right]} {\sin\left(\frac {\omega_{0}T}{2}\right)}\right\},\eqno(84)$$ $$J_{1b}^{(ij)} = \int_{-\infty}^{\infty}dT\sin\omega T \left\{\frac{\sin\left[\frac {2Rn\omega}{c}\sin \left(\frac {\omega_{0}T}{2}\right)\right]} {\sin\left(\frac {\omega_{0}T}{2}\right)}\right\},\eqno(85)$$ $$J_{2A}^{(ij)} = \int_{-\infty}^{\infty}dT\cos\omega_{0}T\cos\omega T \left\{\frac{\sin\left[\frac {2Rn\omega}{c}\sin \left(\frac {\omega_{0}T}{2}\right)\right]} {\sin\left(\frac {\omega_{0}T}{2}\right)}\right\},\eqno(86)$$ $$J_{2B}^{(ij)} = \int_{-\infty}^{\infty}dT\cos\omega_{0}T\sin\omega T \left\{\frac{\sin\left[\frac {2Rn\omega}{c}\sin \left(\frac {\omega_{0}T}{2}\right)\right]} {\sin\left(\frac {\omega_{0}T}{2}\right)}\right\},\eqno(87)$$ Using $$\omega_{0}T = \varphi + 2\pi\*l, \hspace{7mm} \varphi\in(-\pi,\pi),\; \quad l = 0,\, \pm1,\, \pm2,\, ... 
,\eqno(88)$$ we can transform the $T$-integral into the sum of the telescopic integrals according to the scheme: $$\int_{-\infty}^{\infty}dT\quad\longrightarrow \quad\frac {1}{\omega_{0}} \sum_{l = -\infty}^{l = \infty}\int_{-\pi}^{\pi}d\varphi.\eqno(89)$$ Using the fact that for the odd functions $f(\varphi)$ and $g(l)$, the relations are valid $$\int_{-\pi}^{\pi}f(\varphi)d\varphi = 0; \quad \sum_{l=-\infty}^{l = \infty}g(l) = 0,\eqno(90)$$ we can write $$J_{1a}^{(ij)} = \frac {1}{\omega_{0}}\sum_{l}\int_{-\pi}^{\pi} d\varphi\left\{\cos{\frac {\omega}{\omega_{0}}\varphi\cos{2\pi l} \frac{\omega}{\omega_{0}}}\right\} \left\{\frac{\sin\left[\frac {2Rn\omega}{c}\sin \left(\frac {\varphi}{2}\right)\right]} {\sin\left(\frac {\varphi}{2}\right)}\right\},\eqno(91)$$ $$J_{1b}^{(ij)} = 0.\eqno(92)$$ For integrals with indices A, B we get: $$J_{2A}^{(ij)} = \frac {1}{\omega_{0}}\sum_{l}\int_{-\pi}^{\pi} d\varphi\cos\varphi \left\{\cos{\frac {\omega}{\omega_{0}}\varphi\cos{2\pi l} \frac{\omega}{\omega_{0}}}\right\} \left\{\frac{\sin\left[\frac {2Rn\omega}{c}\sin \left(\frac {\varphi}{2}\right)\right]} {\sin\left(\frac {\varphi}{2}\right)}\right\},\eqno(93)$$ $$J_{2B}^{(ij)} = 0,\eqno(94)$$ So, the power spectral formula (80) is of the form: $$P(\omega,t) = -\frac {\omega}{4\pi^{2}}\frac {\mu}{n^{2}}\frac {e^{2}}{2R} \sum_{i,j=1}^{2}(-1)^{i+j}\left\{P_{1}^{(ij)} - n^{2}\beta^{2} P_{2}^{(ij)}\right\};\quad \beta = \frac {v}{c},\eqno(95)$$ where $$P_{1}^{(ij)} = J_{1a}^{(ij)}\cos\frac {\omega}{\omega_{0}}\alpha_{ij} \eqno(96)$$ and $$P_{2}^{(ij)} = J_{2A}^{(ij)}\cos\frac {\omega}{\omega_{0}}\alpha_{ij}. \eqno(97)$$ Using the Poisson theorem $$\sum_{l = -\infty}^{\infty}\cos 2\pi\frac {\omega}{\omega_{0}}l = \sum_{k=-\infty}^{\infty}\omega_{0}\delta(\omega - \omega_{0}l),\eqno(98)$$ the definition of the Bessel functions $J_{2l}$ and their corresponding derivations and integrals $$\frac {1}{2\pi}\int_{-\pi}^{\pi}d\varphi\cos\left(z\sin\frac {\varphi}{2} \right)\cos l\varphi = J_{2l}(z),\eqno(99)$$ $$\frac {1}{2\pi}\int_{-\pi}^{\pi}d\varphi\sin\left(z\sin\frac {\varphi}{2} \right)\sin(\varphi/2)\cos l\varphi = - J'_{2l}(z),\eqno(100)$$ $$\frac {1}{2\pi}\int_{-\pi}^{\pi}d\varphi \frac{\sin\left(z\sin\frac{\varphi}{2}\right)} {\sin(\varphi/2)}\cos l\varphi = \int_{0}^{z}J_{2l}(x)dx,\eqno(101)$$ and using equations $$\sum_{i,j = 1}^{2}(-1)^{i+j}\cos\frac {\omega}{\omega_{0}}\alpha_{ij} = 2(1-\cos 2\omega t) = 4\sin^{2}\omega t,\eqno(102)$$ we get with the definition of the partial power spectrum $P_{l}$ $$P(\omega) = \sum_{l=1}^{\infty} \delta(\omega - l\omega_{0})P_{l},\eqno(103)$$ the following final form of the partial power spectrum generated by motion of two-charge system moving in the cyclotron: $$P_{l}(\omega,t) = [4(\sin\omega t)^{2}] \frac {e^2}{\pi\*n^2}\*\frac {\omega\mu\omega_{0}}{v}\* \left(2n^2\beta^2J'_{2l}(2ln\beta) - (1 - n^2\*\beta^2)\*\int_{0}^{2ln\beta}dxJ_{2l}(x)\right).\eqno(104)$$ So we see that the spectrum generated by the system of electron and positron is formed in such a way that the original synchrotron spectrum generated by electron is modulated by function $4\sin^{2}(\omega t)$. The derived formula involves also the synergic process composed from the synchrotron radiation and the Čerenkov radiation for electron velocity $v > c/n$ in a medium. Our goal is to apply the last formula in situation where there is a vacuum. 
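As an illustration of eq. (104), the partial power spectrum at the harmonics $\omega = l\omega_{0}$ can be evaluated directly with standard Bessel-function routines. The sketch below assumes SciPy, uses units with $e = c = 1$ so that the overall charge factor is absorbed into the normalisation, and keeps $n$ and $\mu$ as free parameters; the vacuum case considered next corresponds to $n = \mu = 1$.

```python
# Sketch of the partial power spectrum P_l(omega, t) of eq. (104) at the
# harmonic omega = l * omega_0, in units with e = c = 1 (the overall charge
# factor is absorbed into the normalisation).
import numpy as np
from scipy.special import jv, jvp
from scipy.integrate import quad

def partial_power(l, t, beta, omega0, n=1.0, mu=1.0):
    """P_l of eq. (104); beta = v/c, omega0 = orbital angular frequency."""
    omega = l * omega0
    z = 2.0 * l * n * beta
    bessel_term = 2.0 * n**2 * beta**2 * jvp(2 * l, z)
    integral_term = (1.0 - n**2 * beta**2) * quad(lambda x: jv(2 * l, x), 0.0, z)[0]
    envelope = (mu / (np.pi * n**2)) * omega * omega0 / beta
    return 4.0 * np.sin(omega * t)**2 * envelope * (bessel_term - integral_term)

if __name__ == "__main__":
    for l in (1, 2, 5, 10):
        print(l, partial_power(l, t=0.3, beta=0.9, omega0=1.0))
```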
In this case we can put $\mu = 1, n = 1$ in the last formula and so we have $$P_{l}(\omega,t) = 4 \sin^{2}\left(\omega t\right) \frac {e^2}{\pi}\*\frac {\omega\omega_{0}}{v}\* \left(2\beta^2J'_{2l}(2l\beta) - (1 - \beta^2)\*\int_{0}^{2l\beta}dxJ_{2l}(x)\right).\eqno(105)$$ So, we see, that final formula describing the opposite motion of electron and positron in accelerator is of the form $$P_{l,pair}(\omega,t) = 4 \sin^{2}\left(\omega t\right)P_{l(electron)}\left(\omega\right),\eqno(106)$$ where $P_{electron}$ is the spectrum of radiation only of electron. For the same charges it is necessary to replace sine by cosine in the final formula. The result (106)is surprising because we naively expected that the total radiation of the opposite charges should be $$P_{l}(\omega,t) = P_{l(electron)}\left(\omega, t \right) + P_{l(positron)}\left(\omega, t \right).\eqno(107)$$ So, we see that the resulting radiation can not be considered as generated by the isolated particles but by a synergic production of a system of particles and magnetic field. At the same time we cannot interpret the result as a result of interference of two sources because the distance between sources radically changes and so, the condition of an interference is not fulfilled. The classical electrodynamics formula (106) changes our naive opinion on the electrodynamic processes in the magnetic field. From the last formula it follows that at time $t = \pi k/\omega $ there is no radiation of the frequency $\omega$. The spectrum oscillates with frequency $\omega$. If the radiation were generated not in the synergic way, then the spectral formula would be composed from two parts corresponding to two isolated sources. The two center circular motions =============================== The situation which we have analyzed was the ideal situation where the angle of collision of positron and electron was equal to $\pi$. Now, the question arises what is the modification of a spectral formula when the collision angle between particles differs from $\pi$. It can be easily seen that if the second particle follows the shifted circle trajectory, then the collision angle differs from $\pi$. Let us suppose that the center of the circular trajectory of the second particle has coordinates $(a,0)$. It can be easy to see from the geometry of the situation and from the plane geometry that the collision angle is $ \pi - \alpha, \alpha \approx \tan \alpha \approx a/R$ where $R$ is a radius of the first or second circle. The same result follows from the analytical geometry of the situation. While the equation of the first particle is the equation of the original trajectory, or this is eq. (70) $${\bf x}_{1}(t) = {\bf x}(t) = R({\bf i}\cos(\omega_0 t) + {\bf j}\sin(\omega_0 t)),\eqno(108)$$ the equation of a circle with a shifted center is as follows: $${\bf x}_{2}(t) = {\bf x}(t) = R({\bf i}(\frac {a}{R}+\cos(-\omega_0 t)) + {\bf j}\sin(-\omega_0 t)) = {\bf x(-t)} + {\bf i}a.\eqno(109)$$ The absolute values of velocities of both particles are equal and the relation (72) is valid. Instead of equation (74) we have for radius vectors of particle trajectories: $${\bf x}_{i}(t) -{\bf x}_{j}(t') \stackrel{d}{=} {\bf B}_{ij},\eqno(110)$$ where ${\bf B}_{11} = {\bf A}_{11}, {\bf B}_{12} = {\bf A}_{12} - {\bf i}a, {\bf B}_{21} = {\bf A}_{21} + {\bf i}a, {\bf B}_{22} = {\bf A}_{22}$. 
In general, we can write the last information on coefficients ${\bf B}_{ij}$ as follows: $${\bf B}_{ij} = {\bf A}_{ij} + \varepsilon_{ij}{\bf i}a ,\eqno(111)$$ where $\varepsilon_{11} = 0, \varepsilon_{12} = -1, \varepsilon_{21} = 1, \varepsilon_{22} = 0.$ For motion of particles along trajectories the absolute value of vector ${\bf A}_{ij} \gg a$ during the most part of the trajectory. It means, we can determine $B_{ij}$ approximatively. After elementary operations, we get: $$|{\bf B_{ij}}| = (A_{ij}^{2} + 2|{\bf A}_{ij}|\varepsilon_{ij}a\cos\varphi_{ij} + a^{2}\varepsilon_{ij}^{2})^{1/2},\eqno(112)$$ where $\cos\varphi_{ij}$ can be expressed by the $x$-component of vector ${\bf A}_{ij}$ and $|{\bf A}_{ij}|$ as follows: $$\cos\varphi_{ij} = \frac{(A_{ij})_{x}}{|{\bf A}_{ij}|}. \eqno(113)$$ After elementary trigonometric operations, we derive the following formula for $(A_{ij})_{x}$: $$(A_{ij})_{x} = 2R\sin\frac {2\omega_{0}t + \omega_{0}\tau}{2} \sin\frac {\omega_{0}\tau}{2}. \eqno(114)$$ Then, using equation (114), we get with $\varepsilon = a/R$ $$|{\bf B_{ij}}| = 2R (\sin^{2}\frac {\omega_{0}\tau + \alpha_{ij}}{2} + \varepsilon\varepsilon_{ij}\sin\frac {2\omega_{0}t + \omega_{0}\tau}{2} \sin\frac {\omega_{0}\tau}{2} + \varepsilon^{2}\frac{\varepsilon_{ij}^{2}}{4})^{1/2}.\eqno(115)$$ In order to perform the $\tau$-integration the substitution must be introduced. However, the substitution $\omega_{0}\tau + \alpha_{ij} = \omega_{0}T$ does not work. So we define the substitution $\tau = \tau(T)$ by the following transcendental equation (we neglect the term with $\varepsilon^{2}$): $$\left[\sin^{2}\frac {\omega_{0}\tau + \alpha_{ij}}{2} + \varepsilon\varepsilon_{ij}\sin\frac {\omega_{0}t + \omega_{0}\tau}{2} \sin\frac {\omega_{0}\tau}{2}\right]^{1/2} = \sin\frac {\omega_{0}T}{2}. \eqno(116)$$ Or, after some trigonometrical modifications and using the approximative formula $(1 + x)^{1/2} \approx 1 + x/2$ for $x\ll 1$ $$\left[ ./. \right]^{1/2} \approx \sin \left(\frac {\omega_{0}\tau + \alpha_{ij}}{2}\right) + \frac{\varepsilon}{2}\varepsilon_{ij} \sin\left(\frac {2\omega_{0}\tau + 2\omega_{0}t -\alpha_{ij}}{2}\right) = \sin\frac {\omega_{0}T}{2}. \eqno(117)$$ We se that for $\varepsilon = 0$ the substitution is $\omega_{0}\tau + \alpha_{ij} = \omega_{0}T$. The equation (117) is the transcendental equation and the exact solution is the function $\tau = \tau(T)$. We are looking for the solution of equation (117) in the approximative form using the approximation $\sin x \approx x$. Then, instead of (117) we have: $$\left(\frac {\omega_{0}\tau + \alpha_{ij}}{2}\right) + \frac{\varepsilon}{2}\varepsilon_{ij} \left(\frac {2\omega_{0}\tau + 2\omega_{0}t -\alpha_{ij}}{2}\right) = \frac {\omega_{0}T}{2}\eqno(118)$$ Using substitution $$\omega_{0}\tau + \alpha_{ij} = \omega_{0}T + \omega_{0}\varepsilon A \eqno(119)$$ in eq. (118) we get, to the first order in $\varepsilon$-term: $$A = -\frac{\varepsilon_{ij}}{2\omega_{0}}(\omega_{0}T - 2\alpha_{ij} + 2\omega_{0}t). 
\eqno(120)$$ Then, after some algebraic manipulation we get: $$\omega_{0}\tau + \alpha_{ij} = \omega_{0}T(1 -\frac{\varepsilon}{2} \varepsilon_{ij}) - \varepsilon\varepsilon_{ij}\omega_{0}t(-1)^{i+j}\eqno(121)$$ and $$\omega\tau = \omega T (1 - \frac{\varepsilon}{2}\varepsilon_{ij}) - \frac{\omega}{\omega_{0}} \left(\varepsilon\varepsilon_{ij}(-1)^{i+j}\omega_{0}t + \alpha_{ij}\right).\eqno(122)$$ For small time $t$, we can write approximately: $$\cos(\omega_{0}\tau + \alpha_{ij}) \approx \cos\omega_{0} T(1 - \frac{\varepsilon}{2}\varepsilon_{ij})\eqno(123)$$ and from eq. (122) $$d\tau = dT(1 - \frac{\varepsilon}{2}\varepsilon_{ij}). \eqno(124)$$ So, in case of the eccentric circles the formula (118) can be obtained from non-perturbative formula (80) only by transformation $$T \quad \longrightarrow \quad T(1 - \frac{\varepsilon}{2}\varepsilon_{ij}) ; \quad \alpha_{ij} \quad\longrightarrow \quad \left(\varepsilon\varepsilon_{ij}(-1)^{i+j}\omega_{0}t + \alpha_{ij}\right) = \tilde {\alpha}_{ij}, \eqno(125)$$ excepting specific term involving sine functions. Then, instead of formula (80) we get: $$P(\omega,t) = -\frac{\omega}{4\pi^2}\frac {e^{2}}{2R} \*\frac{\mu}{n^2}\*\int_{-\infty}^{\infty} dT \sum_{i,j=1}^{2}(-1)^{i+j}$$ $$\times \quad \cos(\omega T - \frac {\omega}{\omega_{0}}\tilde{\alpha}_{ij}) \left[1 - \frac {c^{2}}{n^{2}}v^{2} \cos(\omega_{0} T) \right] \left\{\frac{\sin\left[\frac {2Rn\omega}{c}\sin \left(\frac {\omega_{0}{\tilde T}}{2}\right)\right]} {\sin\left(\frac {\omega_{0}{\tilde T}}{2}\right)}\right\}.\eqno(126)$$ where $\tilde T = T(1 - \frac {\varepsilon}{2}\varepsilon_{ij})$. We see that only $\tilde T$ and the $\alpha$ term are the new modification of the original formula (80). However, because $\varepsilon$ term in the sine functions is of very small influence on the behavior of the total function for finite time $t$, we can neglect it and write approximatively: $$P(\omega,t) = -\frac{\omega}{4\pi^2}\frac {e^{2}}{2R} \*\frac{\mu}{n^2}\*\int_{-\infty}^{\infty} dT \sum_{i,j=1}^{2}(-1)^{i+j}$$ $$\times\quad \cos(\omega T - \frac {\omega}{\omega_{0}}\tilde{\alpha}_{ij}) \left[1 - \frac {c^{2}}{n^{2}}v^{2}\cos(\omega_{0} T) \right] \left\{\frac{\sin\left[\frac {2Rn\omega}{c}\sin \left(\frac {\omega_{0}T}{2}\right)\right]} {\sin\left(\frac {\omega_{0}T}{2}\right)}\right\}.\eqno(127)$$ So, we se that only difference with the original radiation formula is in variable ${\tilde \alpha}_{ij}$. It means that instead of sum (102) we have the following sum: $$\sum_{i,j = 1}^{2}(-1)^{i+j}\cos\frac {\omega}{\omega_{0}} {\tilde\alpha_{ij}} = 2(1-\cos 2\omega t \cos\varepsilon\omega t). \eqno(128)$$ It means that the one electron radiation formula is not modulated by $[\sin\omega t]^{2}$ but by the formula (128) and the final formula of for the power spectrum is as follows: $$P_{l}(\omega,t) = 2(1-\cos 2\omega t \cos\varepsilon\omega t) P_{l(electron)}\left(\omega\right). \eqno(129)$$ For $\varepsilon \to 0$, we get the original formula (106). SUMMARY AND DISCUSSION ====================== We have derived, in the first part of the article, the total quantum loss of energy of the binary. The energy loss is caused by the emission of gravitons during the motion of the two binary bodies around each other under their gravitational interaction. The energy-loss formulas of the production of gravitons are derived here in the source theory. It is evident that the production of gravitons by the binary system is not homogenous and isotropical in space. 
So, the binary forms a "gravitational lighthouse", in which gravitons play the role of the light photons of an electromagnetic pulsar; a gravitational-wave detector will evidently register gravitational pulses. This section is an extended and revised version of an older article by the author (Pardy, 1983a) and of the preprints (Pardy, 1994a,b), in which only the spectral formulas were derived. Here, in the first part of the article, we have derived the quantum energy-loss formulas for the linear gravitational field; the linear field corresponds to the weak-field limit of Einstein gravity. The power spectrum formulas involving radiative corrections were derived in the following part of this article, also in the framework of the source theory. General relativity does not itself contain a method for expressing quantum effects, together with the radiative corrections, in geometrical language, so it cannot answer questions about the production of gravitons or about the graviton propagator with radiative corrections. This section therefore deals with the quantum energy loss caused by the production of gravitons, and by the radiative corrections in the graviton propagator, for the motion of a binary. We believe the situation for gravity problems with radiative corrections is similar to the QED situation many years ago, when the QED radiative corrections were theoretically predicted and then experimentally confirmed, for instance in the Lamb shift and in the anomalous magnetic moment of the electron. Astrophysics is in a crucial position for proving the influence of radiative corrections on dynamics in cosmic space. We hope that further astrophysical observations will confirm the quantum version of the energy loss of the binary with a graviton propagator including radiative corrections.

In the last part of this article, on pulsars, we have derived the power spectrum formula of the synchrotron radiation generated by an electron and a positron moving with opposite angular velocities in a homogeneous magnetic field. It forms an analogue of the author's article (Pardy, 1997), where only comoving electrons or positrons were considered, and it is a modified version of the author's preprints (Pardy, 2000a; 2001) and articles (Pardy, 2000b; 2002), where the power spectrum is calculated for two charges performing retrograde motion in a magnetic field. The frequency of the motion was the same because the diameter of the circle was taken to be the same for both charges; retrograde motion with different diameters was not considered. It is surprising that the spectrum depends periodically on the radiation frequency $\omega$ and on time, which means that the system composed of electron, positron and magnetic field behaves as a pulsating system. While such a pulsar can be realized in a terrestrial experimental arrangement, it is also possible to consider its cosmological existence under suitably modified conditions. To our knowledge, this result is not contained in the classical monographs on electromagnetic theory, nor has it yet been studied by the accelerator experts investigating the synchrotron radiation of bunches. The effect is not described in textbooks on the classical electromagnetic field or on synchrotron radiation, and we hope that sooner or later it will be verified by accelerator physicists. The radiative corrections obviously also influence the synergic spectrum of photons (Pardy, 1994c,d).
Particle laboratories use, instead of a single electron and positron, bunches with 10$^{10}$ electrons or positrons per bunch, of volume 300 $\mu$m $\times$ 40 $\mu$m $\times$ 0.01 m. So, to some approximation, we can replace the charges of the electron and positron by the charges $Q$ and $-Q$ of the two bunches in order to obtain a realistic intensity of photons. Nevertheless, the synergic character of the radiation of two bunches moving in opposite directions in a magnetic field is preserved.

[**REFERENCES**]{}

Damour, T.; Taylor, J. H. Phys. Rev. [**D 45**]{} No. 6 1868 (1992).\
Dittrich, W. Fortschritte der Physik [**26**]{} 289 (1978).\
Evans, L. R. [*The Large Hadron Collider - Present Status and Prospects*]{}, CERN-OPEN-99-332, CERN, Geneva (1999).\
Gold, T. Nature [**218**]{} 731 (1968).\
Goldreich, P.; Julian, W. H. Astrophys. J. [**157**]{} 869 (1969).\
Graham-Smith, F. Rep. Prog. Phys. [**66**]{} 173 (2003).\
Hewish, A.; Bell, S. J.; Pilkington, J. D. H.; et al. Nature [**217**]{} 709 (1968).\
Huguenin, G. R.; Taylor, J. H.; Goad, L. E.; Hartai, A.; Orsten, G. S. F.; Rodman, A. K. Nature [**219**]{} 576 (1968).\
Hulse, R. A.; Taylor, J. H. Astrophys. J. Lett. [**195**]{} L51-L53 (1975).\
Cho, C. F.; Harri Dass, N. D. Ann. Phys. (NY) [**90**]{} 406 (1976).\
Landau, L. D. Phys. Zs. Sowjet [**1**]{} 285 (1932).\
Manchester, R. N. Phil. Trans. R. Soc. Lond. [**A 341**]{} 3 (1992).\
Manoukian, E. B. GRG [**22**]{} 501 (1990).\
Melrose, D. B. Phil. Trans. R. Soc. Lond. [**A 341**]{} 105 (1992).\
Pardy, M. GRG [**15**]{} No. 11 1027 (1983a).\
Pardy, M. Phys. Lett. [**94A**]{} No. 1 30 (1983b).\
Pardy, M. Phys. Lett. [**140A**]{} Nos. 1,2 51 (1989).\
Pardy, M. CERN-TH.7239/94 (1994a).\
Pardy, M. CERN-TH.7299/94 (1994b).\
Pardy, M. Phys. Lett. [**B 325**]{} 517 (1994c).\
Pardy, M. Phys. Lett. [**A 189**]{} 227 (1994d).\
Pardy, M. Phys. Lett. [**B 336**]{} 362 (1994e).\
Pardy, M. Phys. Rev. [**A 55**]{} No. 3 1647 (1997).\
Pardy, M. hep-ph/0001277 (2000a).\
Pardy, M. Int. Journal of Theor. Phys. [**39**]{} No. 4 1109 (2000b).\
Pardy, M. hep-ph/011036 (2001).\
Pardy, M. Int. Journal of Theor. Phys. [**41**]{} No. 6 1155 (2002).\
Seiradakis, J. H.; Wielebinski, R. Astronomy & Astrophysics Review manuscript (2004), hep-ph/0410022 (2004).\
Schott, G. A. [*Electromagnetic Radiation*]{} (Cambridge University Press, 1912).\
Schwinger, J. Phys. Rev. [**75**]{} 1912 (1949).\
Schwinger, J. [*Particles, Sources and Fields*]{}, Vol. I (Addison-Wesley, Reading, Mass., 1970).\
Schwinger, J. [*Particles, Sources and Fields*]{}, Vol. II (Addison-Wesley, Reading, Mass., 1973).\
Schwinger, J. GRG [**7**]{} No. 3 251 (1976).\
Schwinger, J.; Tsai, W. Y.; Erber, T. Ann. Phys. (NY) [**96**]{} 303 (1976).\
Sokolov, A. A.; Ternov, I. M. [*The Relativistic Electron*]{} (Moscow, Nauka, 1983) (in Russian).\
Taylor, J. H.; Wolszczan, A.; Damour, T.; Weisberg, J. M. Nature [**355**]{} 132 (1992).\
Taylor, J. H. Jr. [*Binary Pulsars and Relativistic Gravity*]{} (Nobel Lecture, 1993).\
Weinberg, S. [*Gravitation and Cosmology*]{} (John Wiley and Sons, Inc., New York, 1972).
--- abstract: 'We present a renewed look at M31’s Giant Stellar Stream along with the nearby structures Stream C and Stream D, exploiting a new algorithm capable of fitting to the red giant branch (RGB) of a structure in both colour and magnitude space. Using this algorithm, we are able to generate probability distributions in distance, metallicity and RGB width for a series of subfields spanning these structures. Specifically, we confirm a distance gradient of approximately 20 kpc per degree along a 6 degree extension of the Giant Stellar Stream, with the farthest subfields from M31 lying $\sim$ 120 kpc more distant than the inner-most subfields. Further, we find a metallicity that steadily increases from $-0.7^{+0.1}_{-0.1}$ dex to $-0.2^{+0.2}_{-0.1}$ dex along the inner half of the stream before steadily dropping to a value of $-1.0^{+0.2}_{-0.2}$ dex at the farthest reaches of our coverage. The RGB width is found to increase rapidly from $0.4^{+0.1}_{-0.1}$ dex to $1.1^{+0.2}_{-0.1}$ dex in the inner portion of the stream before plateauing and decreasing marginally in the outer subfields of the stream. In addition, we estimate Stream C to lie at a distance between $794$ and $862$ kpc and Stream D between $758$ kpc and $868$ kpc. We estimate the median metallicity of Stream C to lie in the range $-0.7$ to $-1.6$ dex and a metallicity of $-1.1^{+0.3}_{-0.2}$ dex for Stream D. RGB widths for the two structures are estimated to lie in the range $0.4$ to $1.2$ dex and $0.3$ to $0.7$ dex respectively. In total, measurements are obtained for 19 subfields along the Giant Stellar Stream, 4 along Stream C, 5 along Stream D and 3 general M31 spheroid fields for comparison. We thus provide a higher resolution coverage of the structures in these parameters than has previously been available in the literature.' author: - | A. R. Conn$^{1}$[^1], B. McMonigal$^{1}$, N. F. Bate$^{2}$, G. F. Lewis$^{1}$, R. A. Ibata$^{3}$, N. F. Martin$^{3, 4}$, A. W. McConnachie$^{5}$, A. M. N. Ferguson$^{6}$, M. J. Irwin$^{2}$, P. J. Elahi$^{1}$, K. A. Venn$^{7}$, A. D. Mackey$^{8}$\ $^{1}$Sydney Institute for Astronomy, School of Physics, A28, The University of Sydney, Sydney, NSW 2006, Australia\ $^{2}$Institute of Astronomy, University of Cambridge, Madingley Road, Cambridge CB3 0HA, UK\ $^{3}$Observatoire astronomique de Strasbourg, Université de Strasbourg, CNRS, UMR 7550, 11 rue de l’Université, F-67000 Strasbourg\ $^{4}$Max-Planck-Institut für Astronomie, Königstuhl 17, D-69117 Heidelberg, Germany\ $^{5}$ NRC Herzberg Institute of Astrophysics, 5071 West Saanich Road, Victoria, British Columbia, Canada V9E 2E7\ $^{6}$ Institute for Astronomy, University of Edinburgh, Royal Observatory, Blackford Hill, Edinburgh EH9 3HJ, UK\ $^{7}$ Department of Physics and Astronomy, University of Victoria, 3800 Finnerty Road, Victoria, British Columbia, Canada V8P 5C2\ $^{8}$ RSAA, Australian National University, Mt. Stromlo Observatory, Cotter Road, Weston Creek, ACT 2611, Australia\ \ $^{\ddag}$Based on observations obtained with MegaPrime/MegaCam, a joint project of CFHT and CEA/DAPNIA, at the Canada-France-Hawaii Telescope (CFHT)\ which is operated by the National Research Council (NRC) of Canada, the Institute National des Sciences de l’Univers of the Centre National de la Recherche\ Scientifique of France, and the University of Hawaii. 
date: 'Accepted year month day; Received year month day; in original form year month day' title: | Major Substructure in the M31 Outer Halo:\ Distances and Metallicities along the Giant Stellar Stream$^{\ddag}$ --- \[firstpage\] *(galaxies:)* Local Group – galaxies: structure. Introduction ============ The Giant Stellar Stream (GSS - also known as the Giant Southern Stream) constitutes a major substructure in the halo of our neighbor galaxy M31. It was discovered in 2001 from a survey of the southeastern inner halo of M31 undertaken with the Wide Field Camera on the 2.5m Isaac Newton Telescope (@Ibata2001; @Ferguson2002). Followup observations with the 3.6m Canada-France-Hawaii Telescope (CFHT) further revealed the enormous extent of the stream, spanning at least $4^\circ$ of sky (@McConn2003; @Ibata2007). This corresponds to a projected size in excess of $50$ kpc at M31 halo distances. A high-density stellar stream of these proportions is a structure seldom seen in the Local Group and its importance for understanding the evolution of the M31 system cannot be overestimated. The GSS has proven to exhibit a complex morphology, with a wide spread in metallicities and evidence for more than one stellar population. Based on stellar isochrone fitting, @Ibata2007 found evidence for a more metal rich core, surrounded by a sheath of bluer metal poor stars, which combine to produce a luminosity of $1.5 \times 10^8$ $L_{\odot}$ (a total absolute magnitude of $M_V \approx -15.6$). Similarly, studies such as @Kalirai2006 and later @Gilbert2009 find two kinematically separated populations in several inner fields of the stream, using data obtained with the DEIMOS spectrograph on the 10m Keck II telescope. @Gilbert2009 again report a more metal poor envelope enclosing the core. @Guhat2006 use data from the same source to deduce a mean metallicity of $[Fe/H] = -0.51$ toward the far end of the stream, suggesting the GSS is slightly more metal rich than the surrounding halo stars in this region. Using deep photometry obtained of an inner stream field via the Hubble Space Telescope’s Advanced Camera for Surveys, @Brown2006 compare their data with isochrone grids to ascertain a mean age of $\sim 8.8$ Gyr and a mean metallicity of $[Fe/H] = -0.7$ (slightly more metal poor than the spheroid population studied) but note a large spread in both parameters. Further to this, @Bernard2015 have shown that star formation in the stream started early and quenched about 5 Gyr ago, by which time the metallicity of the stream progenitor had already reached Solar levels. On the basis of this, they propose an early type system as the stream progenitor, perhaps a dE or spiral bulge. Detailed age and metallicity distributions are also included in this contribution. By combining distance estimates for the stream, particularly those presented in @McConn2003, with kinematic data, it is possible to constrain the orbit of the stream progenitor, and also to measure the dark matter halo potential within the orbit. Numerous studies have been dedicated to these aims, such as that of @Font2006 which uses the results of @Guhat2006 to infer a highly elliptical orbit for the progenitor, viewed close to edge-on. Both @Ibata2004 and, more recently, @Fardal2013 have obtained mass estimates for M31 using the GSS, with the latter incorporating a mass estimate for the progenitor comparable to the mass of the Large Magellanic Cloud. 
Whilst the distance information presented in @McConn2003 has been of great benefit to past studies, a more extensive data set, namely the Pan-Andromeda Archaeological Survey (PAndAS - @McConn2009) is now available. This data set provides comprehensive coverage along the full extent of the GSS, as well as other structures in the vicinity, notably Stream C and Stream D [@Ibata2007]. Stream C is determined in that study to be a little brighter and substantially more metal rich than Stream D. Both streams exhibit distinct properties to the GSS and hence must be considered separate structures, despite their apparent intersection with the GSS on the sky. Given the reliance of the aforementioned orbital studies on high quality distance and metallicity information, and given the prominent role played by stellar streams as diagnostic tools within the paradigm of hierarchical galaxy formation, it is highly advantageous to further constrain the distance and metallicity as a function of position along the stream using these data. The following sections hence outline the results of a new tip of the red giant branch (TRGB) algorithm as applied to subfields lining the GSS and streams C and D. In section §2, we provide a description of this method, in §3 we present the results of this study and in §4 and §5 we conclude with a discussion and summary respectively. Note that this publication forms part of a series focusing on key substructure identified in the M31 outer halo. This series includes @Bate2014, @Mackey2014 and @McMonigal2016.

[Figure: a) best-fit colour-magnitude diagram and b) best-fit luminosity function for field GSS3.]

A New Two-Dimensional TRGB algorithm {#method}
====================================

Obtaining distances at closely spaced intervals along the Giant Stellar Stream has proven quite challenging, owing largely to the contrast of the stream with respect to the surrounding M31 halo stars, and also due to the wide spread in metallicities. Whilst the TRGB method presented in @Conn2011 and @Conn2012 provided the basis for the method we employ here, that method has its niche in application to metal poor populations with a low spread in metallicities. Hence for the GSS, a significant adaptation was necessary, as now discussed. In the earlier method, the luminosity function of the object in question was modeled using a truncated power law to represent the contribution from the object’s red giant branch (RGB), as per Eq. \[eq1\]: $$\begin{split} \; L (m \ge m_{TRGB}) &= 10^{a(m - m_{TRGB})} \\ \; L (m < m_{TRGB}) &= 0 \end{split} \label{eq1}$$ where $L$ represents the probability of finding a star at a given magnitude, $m$ is the (CFHT) $i$-band magnitude of the star in question, $m_{TRGB}$ is the TRGB magnitude and $a$ is the slope of the power law. To this power law was then added a polynomial fit to the luminosity function of a nearby field chosen to represent the contamination from non-object stars in the object field. This contamination component was then scaled relative to the object RGB component based on a comparison of the stellar density between the object and contamination fields.
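For concreteness, the truncated power law of Eq. \[eq1\] can be cast as a normalised probability density over an observed magnitude window. The sketch below is purely illustrative (it is not the published pipeline) and assumes NumPy; the field limits and parameter values in the example are hypothetical.

```python
# Illustrative sketch of the truncated power-law RGB luminosity function
# described in the text, normalised over the magnitude window
# [m_trgb, m_faint] so that it integrates to unity. Values are hypothetical.
import numpy as np

def rgb_luminosity_function(m, m_trgb, a, m_faint):
    """Density proportional to 10^(a (m - m_trgb)) for m_trgb <= m <= m_faint, else 0."""
    m = np.asarray(m, dtype=float)
    norm = (10.0 ** (a * (m_faint - m_trgb)) - 1.0) / (a * np.log(10.0))
    inside = (m >= m_trgb) & (m <= m_faint)
    return np.where(inside, 10.0 ** (a * (m - m_trgb)) / norm, 0.0)

if __name__ == "__main__":
    m_grid = np.linspace(20.0, 23.5, 8)
    print(rgb_luminosity_function(m_grid, m_trgb=20.5, a=0.3, m_faint=23.5))
```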
As this method is solely concerned with the $i$-band magnitude of a star, and does not take into account its color information, it is effectively a one-dimensional method in two-dimensional color-magnitude space. This means that the only metallicity information incorporated into the fit is that from the color-cut imposed on the stars beforehand. The dependence of the CFHT $i$-band TRGB magnitude on metallicity becomes an important consideration however for metallicities greater than $-1$ (see for example Fig. 6 of @Bellazzini2008 for the SDSS $i$-band which is comparable). For this reason, we have developed a two-dimensional approach to identifying the TRGB, one that incorporates a star’s position in both color and magnitude space into the fitted model. For our two-dimensional model of the object RGB, we draw our basis from the isochrones provided in the Dartmouth Stellar Evolution Database [@Dotter2008]. Therein are provided the necessary theoretical isochrones for the CFHT $i$-band and $g$-band photometry provided by the PAndAS survey. Within this database, isochrones are provided for a range of ages ($1 \le age \le 15$ Gyr), metallicities ($-2.5 \le [Fe/H] \le 0.5$), helium abundances $y$ and alpha-enhancement $[\alpha/Fe]$ values. For use with our algorithm, we have generated a large set of $2257$ isochrones in CFHT $i$ vs $g-i$ space with $[Fe/H] = -2.50, -2.45, ..., 0.50$ for each of $age = 1.00, 1.25, ..., 5.00$ Gyr where $age \le 5$ Gyr and $age = 5.5, 6.0, ..., 15.0$ Gyr where $age > 5$ Gyr. All isochrones are generated with $y = 0.245 + 1.5 z$ and $[\alpha/Fe] = 0.00$. The model RGB can then be constructed via an interpolation of the isochrone grid corresponding to a given age. Using the set of Dartmouth isochrones as generated for any given age, we essentially have a field of points in 2D (i.e. those corresponding to the color and magnitude of a particular mass value within a given isochrone) which form the framework of our model. Each of these points can then be scaled relative to each other point, thus adding a third dimension which represents the model height or density at that location in the CMD. This model height can then be manipulated by a Markov Chain Monte Carlo (MCMC) algorithm by altering a number of parameters, as outlined below. The model surface in between the resulting points is then interpolated by taking adjacent sets of 3 points and fitting a triangular plane segment between them. In order to manipulate the model height at each point in color-magnitude space, 3 parameters are implemented. The first is the slope of a power law *a* applied as a function of *i*-band magnitude, as per Eq \[eq1\]. The second and third denote the centre and width of a Gaussian weighting distribution applied as a function of metallicity (i.e. a function of both colour and magnitude). The slope parameter *a* is a convenient, if crude measure for accounting for the increase in the stellar population as you move faint-ward from the TRGB. Significant time was invested in an effort to devise a more sophisticated approach taking into account the specific tracks of the isochrones, but the simplest approach of applying the slope directly as a function of *i*-band magnitude remained the most effective and hence was used for all fits presented in this contribution. The Gaussian distribution applied as a function of metallicity is used to weight each isochrone based on the number of object stars lying along that isochrone. 
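The triangular-plane interpolation described above can be reproduced, at least in outline, with a Delaunay-based piecewise-linear interpolator; the sketch below uses scipy's `LinearNDInterpolator` for this purpose, with randomly generated stand-ins for the isochrone grid points and their model heights (the actual Dartmouth grid and the MCMC-assigned densities are assumed, not shown).

```python
import numpy as np
from scipy.interpolate import LinearNDInterpolator

rng = np.random.default_rng(0)

# Hypothetical isochrone grid: one entry per mass point per isochrone.
gi_colour = rng.uniform(0.5, 3.0, size=5000)      # g - i colour of each grid point
i_mag = rng.uniform(20.0, 22.0, size=5000)        # CFHT i-band magnitude
model_height = rng.uniform(0.0, 1.0, size=5000)   # density assigned to each point

# Triangular-plane (piecewise linear) interpolation of the model surface.
surface = LinearNDInterpolator(np.column_stack([gi_colour, i_mag]),
                               model_height, fill_value=0.0)

# Evaluate the model CMD on a regular colour-magnitude grid.
gg, ii = np.meshgrid(np.linspace(0.5, 3.0, 200), np.linspace(20.0, 22.0, 200))
model_cmd = surface(gg, ii)
```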
Each isochrone is hence given some constant height along all its constituent masses, with the slope parameter being used to discriminate between model heights within a single isochrone. The isochrones are weighted as follows: $$W_{iso} = \exp\left(- \frac{([Fe/H]_{iso} - [Fe/H]_0)^2}{2 w_{RGB}^2} \right) \label{eq2}$$ where $W_{iso}$ is the weight applied to isochrone $iso$, $[Fe/H]_0$ is the central metallicity of the population, $[Fe/H]_{iso}$ is the metallicity of the isochrone being weighted, and $w_{RGB}$ is the one sigma spread in the metallicity of the isochrones, which we shall refer to as the RGB width. We note that the metallicity distribution function can be far from Gaussian, but nevertheless hold that this simplified model is both efficient and adequate. In particular, the distribution for the general M31 spheroid is far from Gaussian and hence this component is essentially folded into the normalization of the field contamination. Our fitted streams are in contrast represented by far more Gaussian distribution functions, and hence are fitted as the signal component by our algorithm. With the model CMD for the object constructed in the aforementioned fashion, we now require the addition of a contamination model component. Here we use the PAndAS contamination models as provided in @Martin2013. Essentially they provide a measure of the intensity of the integrated Milky Way contamination in any given pixel in the PAndAS survey. Likewise, they allow the user to generate a model contamination CMD for any pixel in the survey. Whilst it is possible to derive a measure of the object-to-contamination ratio directly from these models, we find that, given the low contrast in many of the GSS subfields, it is preferable to fit this ratio as a free parameter determined by the MCMC process. To generate our MCMC chains, we employ the Metropolis-Hastings algorithm. In summary, we determine the likelihood ${\cal L}_{proposed}$ of the model for a given set of parameters and compare with the likelihood of the most recent set of parameters in the chain ${\cal L}_{current}$. We then calculate the Metropolis Ratio $r$: $$r = \frac{{\cal L}_{proposed}}{{\cal L}_{current}} \label{e_MetroRat}$$ and accept the proposed parameter set as the next in the chain if a new, uniform random deviate drawn from the interval $[0,1]$ is less than or equal to $r$. In order to step through the parameter space, we choose a fixed step size for each parameter that is large enough to traverse the whole probability space yet small enough to sample small features at a suitably high resolution. The new parameters are drawn from Gaussian distributions centered on the most recent accepted values in the chain, and with their width set equal to the step size. Upon the completion of the MCMC run, the chains are then inspected to ensure that they are well mixed. Thus, we now have everything we need for our model CMD. At each iteration of the MCMC, we generate a model of the GSS red giant branch by using a grid of isochrones and manipulating their relative strengths using free parameters representing the central metallicity and RGB width of the stellar population, combined with a parameter representing the slope in density as a function of *i*-band magnitude. We then slide this model component over the top of the contamination model component, with their respective ratio set via a fourth free parameter.
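A compact sketch of these two ingredients, the Gaussian isochrone weighting of Eq. \[eq2\] and the Metropolis-Hastings acceptance rule, is given below. The log-likelihood is left as a placeholder, since its exact form depends on the model-CMD construction described above, and the parameter ordering and step sizes in the commented usage are purely illustrative.

```python
import numpy as np

def isochrone_weight(feh_iso, feh_0, w_rgb):
    """Gaussian weight of Eq. 2 for an isochrone of metallicity feh_iso."""
    return np.exp(-(feh_iso - feh_0) ** 2 / (2.0 * w_rgb ** 2))

def metropolis_step(theta, log_likelihood, step_sizes, rng):
    """One Metropolis-Hastings update with fixed Gaussian proposal widths.
    Works with the log of the Metropolis ratio to avoid overflow."""
    proposal = theta + rng.normal(0.0, step_sizes)
    log_r = log_likelihood(proposal) - log_likelihood(theta)
    if np.log(rng.uniform()) <= log_r:
        return proposal      # accept the proposed parameter set
    return theta             # otherwise keep the current one

# Illustrative chain over (m_TRGB, [Fe/H]_0, w_RGB, slope a, contamination fraction):
# rng = np.random.default_rng()
# theta = np.array([21.0, -0.7, 0.5, 0.3, 0.6])
# chain = []
# for _ in range(200_000):
#     theta = metropolis_step(theta, log_likelihood, step_sizes, rng)
#     chain.append(theta)
```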
We restrict the fitted magnitude range to $20 \le i \le 22$ to provide adequate coverage of the range of distances we expect to encounter whilst retaining a relatively narrow, more easily simulated band across the CMD. The final fitted parameter then is the TRGB magnitude itself, which determines how far along the *i*-band axis to slide the isochrone grid from its default position at 10 pc (i.e. the isochrones are initially set to their absolute *i*-band magnitudes). Thus it is actually the distance modulus of the population that we measure directly, since there is no fixed TRGB magnitude; rather, it varies with color, as exemplified in Fig. \[Fit2GSS3\]. For the sake of presenting a specific TRGB magnitude (as all TRGB investigations traditionally have done), we define a reference TRGB apparent magnitude ($m_{TRGB}$), derived from the distance modulus assuming a fixed absolute magnitude of the TRGB ($M_{TRGB}$) of $i = -3.44$. This is a good approximation to the roughly constant value of $M_{TRGB}$ for intermediate to old, metal poor populations for which the TRGB standard candle has traditionally been used ($[Fe/H] \le -1$, see Fig. 6 of @Bellazzini2008) and allows for direct comparison with other publications in this series. Clearly for the present study we are fitting populations that are often more metal rich than this, but it must be stressed that this adopted value is purely cosmetic, with no bearing on the derived distance or any other determined parameter. The age of the isochrone grid is fixed at an appropriate value determined from the literature (9 Gyr in the case of the GSS, 9.5 Gyr for streams C and D and general spheroid fields, and 7.5 Gyr for the M31 disk - all rounded from the values given in @Brown2006). Initial tests of the algorithm with the population age added as a sixth free MCMC parameter revealed that the choice of age had no effect on the location of the parameter probability peaks returned by the MCMC, but only on their relative strengths. It was hence deemed more efficient to fix the age at a suitable value for the target population, as determined from the literature. As an additional consideration, the model RGB is further convolved with a 2D Gaussian kernel to simulate the blurring effects of the photometric uncertainties. We assume a photometric uncertainty of 0.015 magnitudes for both the $i$ and $g$ bands and set the dimensions of the Gaussian kernel accordingly. We note that whilst in the fitted range the photometric uncertainty lies in the range 0.005 to 0.025, the tip will generally be located in the range $20.5 \le i \le 21.5$ for the structures studied in this contribution, making the assumed uncertainty value the most suitable. Any issues of photometric blending must be resolved by excising any regions above some suitable density threshold, although such issues have only been observed at the centers of the densest structures in the PAndAS survey and were not an issue for this study. Similarly, care must be taken to ensure that data incompleteness does not affect the fitted sample of stars, which was achieved in the present study by restricting the magnitude range of selected stars. Finally, at the conclusion of the MCMC run, a probability distribution function (PDF) in each free parameter is obtained by marginalizing over the other parameters. As an example, the distance PDF for the GSS3 subfield, which was obtained via sampling from the PDF in the reference TRGB magnitude, is presented in Fig. \[GSS3\_distance\_pdf\].
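The construction of such a distance PDF, by repeatedly sampling the reference tip magnitude and the extinction and applying the conversion made explicit in the equation below, can be sketched as follows; the adopted $M_{TRGB} = -3.44$ and the 10 per cent extinction width are taken from the text, while the input sample array and the example extinction value are hypothetical.

```python
import numpy as np

M_TRGB = -3.44   # adopted absolute i-band magnitude of the tip (reference value only)

def distance_samples(m_trgb_samples, ext_central, n_samples=500_000, rng=None):
    """Monte Carlo samples of the distance (in parsecs) from posterior samples of
    the reference TRGB magnitude, folding in a Gaussian extinction uncertainty
    with a width of 10% of the central extinction value."""
    rng = rng or np.random.default_rng()
    m_trgb = rng.choice(m_trgb_samples, size=n_samples, replace=True)
    m_ext = rng.normal(ext_central, 0.1 * ext_central, size=n_samples)
    mu = m_trgb - m_ext - M_TRGB          # extinction-corrected distance modulus
    return 10.0 ** ((mu + 5.0) / 5.0)

# e.g. d_pc = distance_samples(chain_m_trgb, ext_central=0.1); a histogram of
# d_pc / 1e3 then gives the distance PDF in kpc for the subfield.
```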
The distance probability distribution is derived from that in the reference TRGB magnitude using the following equation: $$D = 10^{\frac{5 + m_{TRGB} - m_{ext} - M_{TRGB}}{5}} \label{eq3}$$ where $D$ is the distance in parsecs; $m_{TRGB}$ is the reference TRGB apparent magnitude, sampled from the PDF in this parameter produced by the MCMC; $m_{ext}$ is the extinction in magnitudes for the center of the field, as sampled from a Gaussian with a central value determined from the Schlegel extinction maps [@Schlegel1998] and a width equal to 10 % of the central value; and $M_{TRGB}$ is the absolute magnitude of the TRGB. The uncertainty in $M_{TRGB}$ is a systematic quantity and we thus omit it from our calculations, since we are primarily concerned with relative distances between subfields as opposed to absolute distances from Earth; we note, however, that all distances will share a systematic offset of not more than $50$ kpc (assuming an uncertainty of approximately $0.1$ magnitudes in $M_{TRGB}$). All MCMC runs used for the results presented in this contribution were of $200,000$ iterations, whilst the distance distributions are generated using $500,000$ samples of the $m_{TRGB}$ and $m_{ext}$ distributions. In conjunction with the results we present in the following section, we also provide an appendix to inform the interested reader as to any degeneracy between the key parameters of tip magnitude, metallicity and the RGB width. In Appendix A, we present contour plots illustrating the covariance between the tip magnitude and the metallicity for the GSS and Streams C and D. In Appendix B we present similar plots for the covariance between metallicity and RGB width for the same structures. In Appendix C we present both types of plot for our halo comparison fields, which shall be referred to in the next section. It can be seen from these plots that any covariance between parameters is only minor. These plots are also extremely useful for visualizing the true probability space of the key parameters for each field, and provide an informative complement to the results plotted in Figs. \[GSSDist\] through \[Stream\_D\_parameters\].

[Figure: GSS_combined_helix.eps]

Results
=======

The results we present in this section pertain to a number of separate structures. A field map illustrating the GSS subfields and the Andromeda I exclusion zone, as well as the fields utilized by @McConn2003, is presented in Fig. \[FieldMaps\]. The subfield placements along Stream C and Stream D are also indicated in this figure. Our principal focus is the Giant Stellar Stream, which is contained within our field labeled ‘GSS’. Fields C and D enclose Streams C and D respectively; and Fields H1 through H3 are separate halo fields adjacent to our target fields which sample the general M31 spheroid for comparison purposes. As discussed in §\[method\], for each subfield we obtain estimates of the heliocentric distance, the metallicity \[$Fe/H$\] and the RGB width ($w_{RGB}$), as well as the contamination fraction from Milky Way stars ($f_{cont}$). These are quantified in Tables \[par\_table1\] and \[par\_table2\], as are the distance modulus, extinction ($E(B-V)$) and M31 distance for each subfield. Distances along the GSS (both heliocentric and M31-centric) are plotted as a function of their M31-centric tangent plane coordinates $\xi$ and $\eta$ in Figure \[GSSDist\]. Metallicities and RGB widths for the GSS are plotted as a function of $\xi$ and $\eta$ in Figure \[GSSMet\].
Figures \[Stream\_C\_parameters\] and \[Stream\_D\_parameters\] present the distances (heliocentric and M31-centric), metallicities and RGB widths for Stream C and Stream D respectively. All data points are plotted together with their one-sigma ($68.2 \%$) uncertainties. Note that for the GSS, an overlapping system of fields was implemented such that a given field GSS$X.5$ contains the stars from the lower half of field GSS$X$ and the upper half of field GSS$X+1$. For this reason, data points are shown in between the numbered fields in Figure \[GSSDist\] and Figure \[GSSMet\]. In each of the Figures \[GSSDist\] through \[Stream\_D\_parameters\], basis splines are overplotted on each structure to aid the eye - they are not intended as a fit to the data. The splines are simply a smoothing function weighted by the errors in each data point - they are not constrained to pass through any specific data point. Each combination of parameters is smoothed separately and smoothing does not take into account the full three dimensions ($\xi$, $\eta$, $<$parameter$>$). Cubic splines are used for our GSS measurements whilst quadratic splines are used for all other measurements. For the derivation of the M31 distance for each subfield, a new distance to M31 of $773^{+6}_{-5}$ kpc was determined via our new method, by fitting to stars within an elliptical annulus centered on M31 and defined by inner and outer ellipses with ellipticities of $0.68$, position angles of $39.8^{\circ}$ and semi-major axes of $2.45^{\circ}$ and $2.55^{\circ}$ respectively (as indicated in Fig. \[FieldMaps\]). This distance is a little smaller than the $779^{+19}_{-18}$ kpc determined by the 1D predecessor of our current method [@Conn2012] and larger than the $752 \pm 27$ kpc determined from Cepheid Variables [@Riess2012] or the $744 \pm 33$ kpc determined from eclipsing binaries [@Vilardell2010], but nevertheless well within the uncertainties of each of these measurements. It is immediately clear, both from the large error bars in Figures \[GSSDist\] through \[Stream\_D\_parameters\] and in particular from the last column ($f_{cont}$) of Tables \[par\_table1\] and \[par\_table2\], that our parameter estimates for most subfields are derived from heavily contaminated structures. Nevertheless, on closer inspection, much can be inferred from the estimates returned by our algorithm. Our results support the same general distance gradient reported by @McConn2003, as can be seen in Fig. \[GSSDist\], although we note a slightly greater increase in distance as a function of angular separation from M31. We also find no evidence of the sudden distance increase between fields $7$ and $8$ of that study, and importantly, we note that the stream appears to emerge from a small distance in front of the M31 disk center. It should be noted that the results reported in @McConn2003 determine distance shifts of each field with respect to field $8$ - taken as the M31 distance - whereas our estimates are independent of any inter-field correlations. Our data are also the product of a different imager to that used in this earlier study and of a different photometric calibration. We also stress that the technique used in the earlier contribution did not take metallicity changes into account on a field-by-field basis. The mid GSS fields are in fact slightly more metal rich than the innermost fields (see Fig. \[GSSMet\]), which would yield inflated distance estimates for those fields. It is evident from Fig.
\[GSSDist\] that our distance estimates appear to depart markedly from the general trend between subfields GSS4 and GSS5.5, as well as between GSS6 and GSS7.5. These subfields coincide with the intersection (on the sky) between the GSS and streams D and C respectively. With the exception of subfield GSS4.5, each of these anomalous subfields contains parameter probability distributions that are double peaked, with the second peak more in keeping with the GSS trend and thus presumably attributable to the GSS. In the case of subfield GSS5, Stream D would appear to be consistent with the additional peak insofar as distance is concerned, but the same cannot be said for either the metallicity or the RGB width. In the case of subfields GSS6.5 and GSS7, the additional peak is roughly consistent with the secondary peak derived for subfield C3 in terms of distance and RGB width, but the metallicity is different. For all fields where a restriction on the TRGB probability distribution proved informative (namely subfields GSS5, GSS6.5, GSS7, GSS8.5 and GSS9), parameter estimates are provided for both the restricted and unrestricted case. The fields are denoted in the restricted case with the symbol $\dagger^*$ in Table \[par\_table1\] and in Appendix A and Appendix B, whilst $\dagger$ is used in the unrestricted case. Fields denoted $\dagger^*$ are represented as black triangle symbols in Figs \[GSSDist\] and \[GSSMet\], whilst those denoted $\dagger$ are represented as red square symbols. We note that even when the GSS subfield distances are determined from the full parameter distributions, they remain in keeping with the general trend when the full uncertainties are considered. Moving on to the outermost portion of the GSS, it is interesting to observe that the distance seems to plateau and even diminish beyond the brightest portion of the stream covered in @McConn2003, although caution must be exercised with inferences made from the outermost subfields, due to the extremely low signal available. For streams C and D, we find average distances of $\sim 828^{+9}_{-30}$ kpc and $\sim 789^{+26}_{-18}$ kpc, respectively. We are unable to determine any reliable distance gradient along either of these structures. In addition to Streams C and D, consideration had been given to the possibility of an arching segment of the GSS, extending outward from subfields GSS8, GSS9 and GSS10 and falling back onto the M31 disk in the vicinity of subfields C4 and D4/D5. Despite the conceivable existence of such a feature based on visual inspection of stellar density plots, no distinct population could be reliably determined in any of the fitted parameters. If such a continuation of the GSS exists, it is heavily contaminated by the much brighter Stream C and Stream D and beyond the reach of our method in its present form. When we examine the metallicity and RGB width estimates returned by our algorithm (see Figure \[GSSMet\]), we observe an unusual trend as we move out along the main part of the GSS. Closest to the M31 disk, the stream is found to be moderately metal poor, with metallicities in the range $-0.7 > [Fe/H] > -0.8$, whilst midway along the stream we find more metal rich stars with $[Fe/H] > -0.5$. Then, as we move out still further, the metallicity diminishes again, falling below the levels in the inner part of the stream, with $[Fe/H] \approx -1$ at the furthest reaches in subfield GSS10. A similar trend is observed for the RGB width.
This would suggest that the range of metallicities present is relatively small in the inner part of the stream, whilst increasing significantly as we move toward the middle part of the stream. Once again, in the outermost parts of the stream, we observe a return to lower values, although not to the same degree as we observed for the metallicity. We must stress again, however, that the contamination fraction is exceedingly high in the outermost subfields and thus the metallicity and RGB width estimates for these subfields should be treated with caution. We find streams C and D to be consistently more metal poor than the GSS, with average metallicities of $-1.0^{+0.1}_{-0.1}$ dex and $-1.1^{+0.1}_{-0.1}$ dex, respectively. They are also generally less diverse in terms of the range of metallicities present. When we compare our halo fields to our GSS and Stream C and D fields, we find a clear indication that we are indeed picking up the signal of the intended structures. When we examine the contour plots in Appendix C, we find distributions that are markedly different from those of our target structures presented in Appendices A and B. These fields were carefully chosen to be of comparable size to our target fields, and to traverse the approximate M31 halo radii spanned by our target structures. The absence of any clear structure to fit in fields H1 and H2 is evident from the breadth of the distributions in all parameters, whereas such poor parameter constraints are not observed for any of our target fields. Likewise, we find little correlation between the locations of the distribution maxima. Halo field H3 is somewhat different to fields H1 and H2 in that it is expected to be heavily contaminated by the M31 disk. More overlap in the distributions is found between the H3 field and our target fields (the Stream D subfields for instance), particularly in tip magnitude and metallicity, but the signal-to-noise ratio is much higher for our inner fields, suggesting that any correlations are real and not merely the result of contamination. We should also note that we expect any parameter gradients across the halo to be diffuse and unsuited to our method, which works most favorably with sharply defined structure boundaries along the line of sight. This is indeed exemplified by the plots in Appendix C.

Discussion
==========

The key findings of our method lie in the spatially resolved metallicities and distances along the main inner-halo structures around M31. Our metallicity measurements are consistent with all prior published measurements. Whilst these measurements utilize data from a variety of instruments, we note that our method was not tuned to be consistent with any of these prior results. The initial discovery of the GSS by @Ibata2001 in the Isaac Newton Telescope (INT) Survey measured a metallicity slightly higher than $[Fe/H]=-0.71$ at a position consistent with our innermost GSS subfields (GSS1 to GSS3). Of the 16 Hubble Space Telescope (HST) WFPC2 fields analyzed by @Bellazzini2003, those overlapping our fields correspond to our innermost GSS subfields (GSS1 to GSS3), and have metallicity measurements in the range $[Fe/H]=-0.7$ to $-0.5$, with a tendency towards increasing metallicity moving South-East, in the same sense as our results. Further out, at a location consistent with our GSS subfield GSS4, Keck DEIMOS spectra analyzed by @Guhat2006 gave a higher mean metallicity measurement of $[Fe/H]=-0.51$, matching our findings.
A detailed analysis by @Ibata2014 is in broad agreement with our results, with the GSS dominating the inner halo down to a metallicity of $[Fe/H]=-1.1$, the lowest metallicity we find for the GSS. @Ibata2014 also found the inner halo streams (including Streams C and D) to be dominant in the metallicity range $[Fe/H]=-1.7$ to $-1.1$, where our results for Stream D and one subfield of Stream C are situated, although there are also signs of a significant population of Stream C members in the range $[Fe/H]=-1.1$ to $-0.6$, where the bulk of our Stream C results lie. This lends support to the suggestion by @Chapman2008 that there are two, potentially completely separate, populations that make up Stream C. These populations are found to be separable by their velocity measurements, and also by their metallicities of $[Fe/H]=-1.3$ and $-0.7$ in the aforesaid publication, which match our findings for subfield C2 and for the rest of Stream C, respectively. Indeed, @Gilbert2009 also find evidence of two populations in Stream C, separable into a more metal rich component ($[Fe/H]_{mean} = -0.79 \pm 0.12$ dex) and a metal poor component ($[Fe/H]_{mean} = -1.31 \pm 0.18$ dex). We caution, however, that our detection of two populations is tentative, and independent velocity measurements for our field locations are warranted if a clear distinction is to be confirmed. @Chapman2008 additionally measured the metallicity of Stream D to be $[Fe/H]=-1.1\pm 0.3$, in good agreement with our results. A key finding of this paper is the extraordinary extent of the GSS to the South-East, reaching a full degree further away from M31 in projection than previously measured, at a $5.5$ degree separation for subfield GSS10. @Fardal2008 was able to find a model for the GSS which sufficiently matched observations of some of the inner structures; however, the low velocity dispersions, physical thickness and narrow metallicity ranges of Streams C and D found by @Chapman2008 suggest that a single accretion event is unlikely to be sufficient to form both of these structures as well as the GSS. One possible scenario that might explain the difference in metallicity between Streams C and D and the main GSS structure is a spinning disk galaxy progenitor with a strong metallicity gradient, following a radial plunging orbit into M31, resulting in the outer portion ending up on a counter orbit with a lower metallicity [@Chapman2008], although each new observation makes explanations such as this increasingly contrived. None of the current simulations of this system predict or include an extension of the GSS as far out as we find it, or the existence of any arching segment to the GSS (@Fardal2008; @Fardal2013; @Sadoun2014). The latest simulations of @Fardal2013 do, however, include distances for the main GSS which, while consistent with the distances presented by @McConn2003, are also highly consistent with the distances presented here, particularly for the innermost and outermost portions of the GSS. This suggests that finding a simulation consistent with our much more restrictive distance constraints for the GSS may only require minor alterations. Whilst our method has been very successful in fitting these structures, particularly considering the high levels of contamination in this region (over 85 per cent for most subfields), it is in some instances difficult to resolve all the populations, especially for the fainter structures.
Some additional information will be gleaned by running a full multi-population fit (Martin et al., in prep), but to fully uncover the history of this system, we will need detailed simulations of the formation and evolution of the GSS and associated structures. These simulations should take into account realistic gas physics, combined with next generation observations including wide field kinematic surveys.

| Subfield | Xi | Eta | Distance Modulus | $E(B-V)$ | Distance (kpc) | M31 Distance (kpc) | $Fe/H$ (dex) | RGB width (dex) | $f_{cont}$ |
|----------|----|-----|------------------|----------|----------------|--------------------|--------------|-----------------|------------|
| GSS1 | $-0.390$ | $-0.988$ | $24.39^{+0.01}_{-0.01}$ | $0.076$ | $756.^{+5.}_{-5.}$ | $21.^{+7.}_{-4.}$ | $-0.7^{+0.1}_{-0.1}$ | $0.4^{+0.1}_{-0.1}$ | $0.170^{+0.003}_{-0.001}$ |
| GSS1.5 | $-0.219$ | $-1.225$ | $24.41^{+0.01}_{-0.01}$ | $0.073$ | $762.^{+5.}_{-5.}$ | $17.^{+6.}_{-1.}$ | $-0.8^{+0.1}_{-0.1}$ | $0.4^{+0.1}_{-0.1}$ | $0.263^{+0.004}_{-0.004}$ |
| GSS2 | $-0.047$ | $-1.462$ | $24.40^{+0.03}_{-0.02}$ | $0.070$ | $760.^{+10.}_{-6.}$ | $20.^{+5.}_{-1.}$ | $-0.7^{+0.1}_{-0.1}$ | $0.6^{+0.1}_{-0.1}$ | $0.416^{+0.008}_{-0.006}$ |
| GSS2.5 | $0.125$ | $-1.699$ | $24.45^{+0.02}_{-0.02}$ | $0.058$ | $778.^{+6.}_{-7.}$ | $23.^{+2.}_{-1.}$ | $-0.7^{+0.1}_{-0.1}$ | $0.7^{+0.1}_{-0.1}$ | $0.523^{+0.008}_{-0.008}$ |
| GSS3 | $0.297$ | $-1.937$ | $24.48^{+0.01}_{-0.02}$ | $0.053$ | $787.^{+5.}_{-7.}$ | $27.^{+4.}_{-1.}$ | $-0.6^{+0.1}_{-0.1}$ | $0.8^{+0.1}_{-0.1}$ | $0.572^{+0.008}_{-0.008}$ |
| GSS3.5 | $0.469$ | $-2.174$ | $24.50^{+0.01}_{-0.01}$ | $0.050$ | $795.^{+5.}_{-5.}$ | $36.^{+5.}_{-3.}$ | $-0.6^{+0.1}_{-0.1}$ | $0.8^{+0.1}_{-0.1}$ | $0.617^{+0.006}_{-0.008}$ |
| GSS4 | $0.641$ | $-2.411$ | $24.52^{+0.02}_{-0.02}$ | $0.050$ | $800.^{+8.}_{-7.}$ | $43.^{+7.}_{-5.}$ | $-0.4^{+0.1}_{-0.1}$ | $0.9^{+0.2}_{-0.1}$ | $0.628^{+0.008}_{-0.009}$ |
| GSS4.5 | $0.812$ | $-2.648$ | $24.45^{+0.02}_{-0.02}$ | $0.054$ | $776.^{+6.}_{-6.}$ | $38.^{+1.}_{-1.}$ | $-0.2^{+0.2}_{-0.1}$ | $1.1^{+0.2}_{-0.1}$ | $0.622^{+0.009}_{-0.009}$ |
| GSS5$\dagger^*$ | $0.984$ | $-2.885$ | $24.57^{+0.02}_{-0.02}$ | $0.058$ | $821.^{+7.}_{-9.}$ | $63.^{+7.}_{-7.}$ | $-0.4^{+0.2}_{-0.1}$ | $1.0^{+0.1}_{-0.1}$ | $0.665^{+0.008}_{-0.009}$ |
| GSS5.5 | $1.156$ | $-3.121$ | $24.61^{+0.02}_{-0.02}$ | $0.057$ | $836.^{+7.}_{-9.}$ | $77.^{+8.}_{-8.}$ | $-0.6^{+0.1}_{-0.1}$ | $0.9^{+0.2}_{-0.1}$ | $0.693^{+0.008}_{-0.008}$ |
| GSS6 | $1.328$ | $-3.358$ | $24.67^{+0.02}_{-0.05}$ | $0.051$ | $859.^{+7.}_{-21.}$ | $99.^{+8.}_{-18.}$ | $-0.4^{+0.2}_{-0.2}$ | $1.0^{+0.2}_{-0.1}$ | $0.728^{+0.008}_{-0.009}$ |
| GSS6.5$\dagger^*$ | $1.500$ | $-3.594$ | $24.58^{+0.09}_{-0.02}$ | $0.053$ | $825.^{+35.}_{-8.}$ | $74.^{+28.}_{-6.}$ | $-0.3^{+0.2}_{-0.2}$ | $1.0^{+0.2}_{-0.2}$ | $0.762^{+0.009}_{-0.010}$ |
| GSS7$\dagger^*$ | $1.671$ | $-3.830$ | $24.58^{+0.05}_{-0.02}$ | $0.052$ | $826.^{+18.}_{-8.}$ | $79.^{+14.}_{-6.}$ | $-0.5^{+0.1}_{-0.1}$ | $0.8^{+0.1}_{-0.1}$ | $0.780^{+0.009}_{-0.009}$ |
| GSS7.5 | $1.843$ | $-4.066$ | $24.71^{+0.01}_{-0.03}$ | $0.053$ | $873.^{+6.}_{-12.}$ | $117.^{+8.}_{-11.}$ | $-0.7^{+0.2}_{-0.1}$ | $0.8^{+0.2}_{-0.2}$ | $0.812^{+0.009}_{-0.008}$ |
| GSS8 | $2.015$ | $-4.302$ | $24.70^{+0.01}_{-0.02}$ | $0.054$ | $871.^{+6.}_{-7.}$ | $118.^{+7.}_{-7.}$ | $-0.8^{+0.1}_{-0.1}$ | $0.8^{+0.2}_{-0.1}$ | $0.827^{+0.008}_{-0.008}$ |
| GSS8.5$\dagger^*$ | $2.186$ | $-4.537$ | $24.65^{+0.03}_{-0.02}$ | $0.055$ | $853.^{+10.}_{-9.}$ | $108.^{+8.}_{-8.}$ | $-0.8^{+0.1}_{-0.1}$ | $0.6^{+0.1}_{-0.1}$ | $0.841^{+0.008}_{-0.009}$ |
| GSS9$\dagger^*$ | $2.358$ | $-4.772$ | $24.63^{+0.02}_{-0.02}$ | $0.050$ | $844.^{+8.}_{-7.}$ | $103.^{+7.}_{-6.}$ | $-0.8^{+0.2}_{-0.1}$ | $0.7^{+0.2}_{-0.1}$ | $0.875^{+0.008}_{-0.009}$ |
| GSS9.5 | $2.530$ | $-5.007$ | $24.64^{+0.05}_{-0.06}$ | $0.047$ | $847.^{+18.}_{-24.}$ | $107.^{+14.}_{-15.}$ | $-0.9^{+0.2}_{-0.2}$ | $1.0^{+0.4}_{-0.2}$ | $0.900^{+0.008}_{-0.009}$ |
| GSS10 | $2.701$ | $-5.242$ | $24.70^{+0.06}_{-0.10}$ | $0.047$ | $870.^{+25.}_{-41.}$ | $128.^{+21.}_{-29.}$ | $-1.0^{+0.2}_{-0.2}$ | $0.8^{+0.3}_{-0.2}$ | $0.924^{+0.008}_{-0.008}$ |
| GSS5$\dagger$ | $0.984$ | $-2.885$ | $24.46^{+0.05}_{-0.02}$ | $0.058$ | $780.^{+19.}_{-7.}$ | $42.^{+4.}_{-1.}$ | $-0.4^{+0.2}_{-0.1}$ | $0.9^{+0.2}_{-0.1}$ | $0.652^{+0.011}_{-0.009}$ |
| GSS6.5$\dagger$ | $1.500$ | $-3.594$ | $24.41^{+0.22}_{-0.03}$ | $0.053$ | $762.^{+83.}_{-10.}$ | $53.^{+31.}_{-1.}$ | $-0.4^{+0.2}_{-0.1}$ | $0.9^{+0.2}_{-0.2}$ | $0.753^{+0.013}_{-0.011}$ |
| GSS7$\dagger$ | $1.671$ | $-3.830$ | $24.36^{+0.23}_{-0.04}$ | $0.052$ | $744.^{+83.}_{-12.}$ | $61.^{+18.}_{-3.}$ | $-0.4^{+0.1}_{-0.1}$ | $0.7^{+0.2}_{-0.1}$ | $0.763^{+0.015}_{-0.010}$ |
| GSS8.5$\dagger$ | $2.186$ | $-4.537$ | $24.37^{+0.04}_{-0.08}$ | $0.055$ | $749.^{+15.}_{-26.}$ | $68.^{+11.}_{-1.}$ | $-0.6^{+0.1}_{-0.1}$ | $0.5^{+0.1}_{-0.1}$ | $0.824^{+0.009}_{-0.010}$ |
| GSS9$\dagger$ | $2.358$ | $-4.772$ | $24.63^{+0.02}_{-0.04}$ | $0.050$ | $845.^{+8.}_{-15.}$ | $103.^{+7.}_{-12.}$ | $-0.8^{+0.2}_{-0.1}$ | $0.7^{+0.2}_{-0.1}$ | $0.872^{+0.009}_{-0.010}$ |

\[par\_table1\] This table quantifies the MCMC-fitted parameter estimates for the Giant Stellar Stream subfields - i.e. labelled ‘GSS$X$’. Parameters are given with their one-sigma (68.2%) uncertainties. Field boundaries are illustrated in Fig. \[FieldMaps\]. Note that subfields labelled GSS$X.5$ include the lower half of Subfield GSS$X$ and the upper half of Subfield GSS$X+1$. Subfields with probability peaks omitted for the determination of their best fit parameter estimates (due to the presence of prominent peaks that are inconsistent with the overwhelming trend) are denoted $\dagger^*$. The alternative estimates derived from the unrestricted distributions are denoted $\dagger$ and appear in the final five rows at the bottom of the table. For fields external to the GSS, see Table \[par\_table2\].
| Subfield | Xi | Eta | Distance Modulus | $E(B-V)$ | Distance (kpc) | M31 Distance (kpc) | $Fe/H$ (dex) | RGB width (dex) | $f_{cont}$ |
|----------|----|-----|------------------|----------|----------------|--------------------|--------------|-----------------|------------|
| C1 | $2.558$ | $-3.676$ | $24.54^{+0.02}_{-0.04}$ | $0.050$ | $809.^{+9.}_{-15.}$ | $68.^{+8.}_{-4.}$ | $-0.9^{+0.1}_{-0.1}$ | $0.5^{+0.1}_{-0.1}$ | $0.846^{+0.011}_{-0.013}$ |
| C2 | $3.182$ | $-3.023$ | $24.66^{+0.02}_{-0.07}$ | $0.050$ | $854.^{+8.}_{-28.}$ | $101.^{+8.}_{-21.}$ | $-1.4^{+0.2}_{-0.2}$ | $1.0^{+0.2}_{-0.2}$ | $0.889^{+0.008}_{-0.011}$ |
| C3 | $3.580$ | $-1.896$ | $24.57^{+0.02}_{-0.15}$ | $0.048$ | $819.^{+9.}_{-56.}$ | $55.^{+13.}_{-1.}$ | $-0.9^{+0.2}_{-0.2}$ | $0.7^{+0.3}_{-0.1}$ | $0.871^{+0.010}_{-0.013}$ |
| C4 | $3.715$ | $-0.499$ | $24.60^{+0.02}_{-0.05}$ | $0.054$ | $831.^{+8.}_{-20.}$ | $69.^{+12.}_{-9.}$ | $-0.9^{+0.1}_{-0.1}$ | $0.6^{+0.1}_{-0.1}$ | $0.889^{+0.009}_{-0.009}$ |
| D1 | $2.174$ | $-2.142$ | $24.46^{+0.02}_{-0.06}$ | $0.049$ | $779.^{+7.}_{-21.}$ | $42.^{+3.}_{-1.}$ | $-1.2^{+0.1}_{-0.1}$ | $0.6^{+0.1}_{-0.1}$ | $0.867^{+0.010}_{-0.013}$ |
| D2 | $2.728$ | $-1.423$ | $24.47^{+0.09}_{-0.07}$ | $0.057$ | $782.^{+32.}_{-26.}$ | $42.^{+14.}_{-1.}$ | $-1.2^{+0.2}_{-0.1}$ | $0.5^{+0.1}_{-0.1}$ | $0.902^{+0.009}_{-0.013}$ |
| D3 | $2.947$ | $-0.579$ | $24.46^{+0.07}_{-0.04}$ | $0.055$ | $781.^{+26.}_{-13.}$ | $41.^{+5.}_{-1.}$ | $-1.1^{+0.1}_{-0.2}$ | $0.6^{+0.1}_{-0.1}$ | $0.884^{+0.010}_{-0.013}$ |
| D4 | $3.097$ | $0.469$ | $24.56^{+0.13}_{-0.04}$ | $0.056$ | $818.^{+50.}_{-15.}$ | $61.^{+42.}_{-10.}$ | $-1.1^{+0.1}_{-0.2}$ | $0.4^{+0.1}_{-0.1}$ | $0.907^{+0.010}_{-0.011}$ |
| D5 | $2.932$ | $1.198$ | $24.47^{+0.04}_{-0.05}$ | $0.081$ | $783.^{+13.}_{-17.}$ | $43.^{+4.}_{-1.}$ | $-1.1^{+0.1}_{-0.1}$ | $0.5^{+0.1}_{-0.1}$ | $0.869^{+0.009}_{-0.010}$ |
| H1 | $4.8$ | $-4.5$ | $24.54^{+0.14}_{-0.27}$ | $0.048$ | $809.^{+53.}_{-96.}$ | $89.^{+29.}_{-1.}$ | $-1.5^{+0.5}_{-0.4}$ | $0.9^{+0.3}_{-0.2}$ | $0.967^{+0.010}_{-0.014}$ |
| H2 | $5.2$ | $0.2$ | $24.43^{+0.21}_{-0.15}$ | $0.062$ | $768.^{+77.}_{-51.}$ | $70.^{+26.}_{-1.}$ | $-0.9^{+0.2}_{-1.0}$ | $1.0^{+0.5}_{-0.3}$ | $0.971^{+0.008}_{-0.010}$ |
| H3 | $1.586$ | $-0.823$ | $24.50^{+0.01}_{-0.04}$ | $0.052$ | $795.^{+5.}_{-13.}$ | $25.^{+9.}_{-1.}$ | $-1.3^{+0.1}_{-0.1}$ | $0.8^{+0.1}_{-0.1}$ | $0.792^{+0.009}_{-0.008}$ |

\[par\_table2\]

Conclusions
===========

We have presented the distances and metallicities for the major inner-halo streams of M31 using the highest quality data currently available. There is a great deal of overlap between many of these features, making clear measurements troublesome; however, the new method we developed to fit populations to the data has allowed some details to be revealed. There is a clear need for a wide field kinematic survey of the stellar substructure within the halo of M31, which, combined with the superb PAndAS photometric data, would allow for a complete decomposition of these structures. This would bring a much greater understanding of the current and past accretion history of our nearest neighbour analogue, and would represent a great leap forward in galactic archaeology. The conclusion of this work, then, is that the GSS, Stream C, and Stream D are in general extremely faint, and cannot be completely separated using the currently available photometric data.
Our method however, allows for even the lowest contrast structures to be partially resolved into separate populations, providing both distance and metallicity probability distributions. These values will be invaluable for future simulations of the M31 system, placing much stronger constraints on the three dimensional present day positions of the major inner-halo structures. A full population fit based on this data, will lead to a deeper understanding, and will be the subject of a future contribution. Acknowledgments {#acknowledgments .unnumbered} =============== ARC thanks the University of Sydney for funding via a 2014 Laffan Fellowship. BM acknowledges the support of an Australian Postgraduate Award. NFB and GFL thank the Australian Research Council (ARC) for support through Discovery Project (DP110100678). GFL also gratefully acknowledges financial support through his ARC Future Fellowship (FT100100268). PJE is supported by the SSimPL programme and the Sydney Institute for Astronomy (SIfA), and [*Australian Research Council*]{} (ARC) grants DP130100117 and DP140100198. [99]{} Bate N. F., et al., 2014, MNRAS, 437, 3362 Bellazzini M., Cacciari C., Federici L., Fusi Pecci F., Rich M., 2003, A&A, 405, 867 Bellazzini M., 2008, MmSAI, 79, 440 Bernard E. J., et al., 2015, MNRAS, 446, 2789 Brown T. M., Smith E., Ferguson H. C., Rich R. M., Guhathakurta P., Renzini A., Sweigart A. V., Kimble R. A., 2006, ApJ, 652, 323 Chapman S. C., et al., 2008, MNRAS, 390, 1437 Conn A. R., et al., 2011, ApJ, 740, 69 Conn A. R., et al., 2012, ApJ, 758, 11 Dotter A., Chaboyer B., Jevremovi[ć]{} D., Kostov V., Baron E., Ferguson J. W., 2008, ApJS, 178, 89 Fardal M. A., et al., 2013, MNRAS, 434, 2779 Fardal M. A., Babul A., Guhathakurta P., Gilbert K. M., Dodge C., 2008, ApJ, 682, L33 Ferguson A. M. N., Irwin M. J., Ibata R. A., Lewis G. F., Tanvir N. R., 2002, AJ, 124, 1452 Font A. S., Johnston K. V., Guhathakurta P., Majewski S. R., Rich R. M., 2006, AJ, 131, 1436 Gilbert K. M., et al., 2009, ApJ, 705, 1275 Guhathakurta P., et al., 2006, AJ, 131, 2497 Ibata R., Irwin M., Lewis G., Ferguson A. M. N., Tanvir N., 2001, Natur, 412, 49 Ibata R., Chapman S., Ferguson A. M. N., Irwin M., Lewis G., McConnachie A., 2004, MNRAS, 351, 117 Ibata R., Martin N. F., Irwin M., Chapman S., Ferguson A. M. N., Lewis G. F., McConnachie A. W., 2007, ApJ, 671, 1591 Ibata R. A., et al., 2014, ApJ, 780, 128 Kalirai J. S., Guhathakurta P., Gilbert K. M., Reitzel D. B., Majewski S. R., Rich R. M., Cooper M. C., 2006, ApJ, 641, 268 Lewis G. F., et al., 2013, ApJ, 763, 4 Mackey A. D., et al., 2014, MNRAS, 445, L89 Martin N. F., Ibata R. A., McConnachie A. W., Mackey A. D., Ferguson A. M. N., Irwin M. J., Lewis G. F., Fardal M. A., 2013, ApJ, 776, 80 McConnachie A. W., Irwin M. J., Ibata R. A., Ferguson A. M. N., Lewis G. F., Tanvir N., 2003, MNRAS, 343, 1335 McConnachie A. W., et al., 2009, Natur, 461, 66 McMonigal B., et al., 2016, MNRAS, 456, 405 Riess A. G., Fliri J., Valls-Gabaud D., 2012, ApJ, 745, 156 Sadoun R., Mohayaee R., Colin J., 2014, MNRAS, 442, 160 Schlegel D. J., Finkbeiner D. P., Davis M., 1998, ApJ, 500, 525 Vilardell F., Ribas I., Jordi C., Fitzpatrick E. L., Guinan E. 
F., 2010, A&A, 509, A70

\[lastpage\]

[Figure grid: panels GSS1, GSS1.5, GSS2, GSS2.5, GSS3, GSS3.5, GSS4, GSS4.5, GSS5$\dagger^*$, GSS5.5, GSS6, GSS6.5$\dagger^*$, GSS7$\dagger^*$, GSS7.5 and GSS8; source files m_vs_FeH_contour_*.ps.]

**Appendix A Part I:** Contour plots illustrating the correlation between tip magnitude and metallicity probability distributions for the fields listed in Table \[par\_table1\]. Contours are drawn at $10\%$ intervals (as is the case for all subsequent Appendix plots). Fields GSS1 through GSS8 are represented here. Plots denoted $\dagger^*$ are generated by sampling only the parameter values consistent with a restricted TRGB range. The full, unrestricted versions denoted $\dagger$ are shown on the next page. The restricted ranges are: GSS5$\dagger^*$, $21.08 \le TRGB \le 21.18$; GSS6.5$\dagger^*$, $21.08 \le TRGB \le 21.30$; GSS7$\dagger^*$, $21.08 \le TRGB \le 21.30$.

\[AppAI\]

[Figure grid: panels GSS8.5$\dagger^*$, GSS9$\dagger^*$, GSS9.5, GSS10, GSS5$\dagger$, GSS6.5$\dagger$, GSS7$\dagger$, GSS8.5$\dagger$ and GSS9$\dagger$; source files m_vs_FeH_contour_*.ps.]

**Appendix A Part II:** Contour plots illustrating the correlation between tip magnitude and metallicity for the fields listed in Table \[par\_table1\]. Fields GSS8.5 through GSS10 are represented here. Plots denoted $\dagger^*$ are generated by sampling only the parameter values consistent with a restricted TRGB range. The full, unrestricted versions (for both Appendix A Parts I and II) are displayed here also and are denoted $\dagger$. The restricted range plots are generated with the following limits: GSS8.5$\dagger^*$, $21.15 \le TRGB \le 21.30$; GSS9$\dagger^*$, $21.10 \le TRGB \le 21.30$.

\[AppAII\]

[Figure grid: panels C1, C2, C3, C4, D1, D2, D3, D4 and D5; source files m_vs_FeH_contour_*.ps.]

**Appendix A Part III:** Contour plots illustrating the correlation between tip magnitude and metallicity for the fields listed in Table \[par\_table2\]. Fields from Streams C & D are represented here.

\[AppAIII\]

[Figure grid: panels GSS1, GSS1.5, GSS2, GSS2.5, GSS3, GSS3.5, GSS4, GSS4.5, GSS5$\dagger^*$, GSS5.5, GSS6, GSS6.5$\dagger^*$, GSS7$\dagger^*$, GSS7.5 and GSS8; source files FeH_vs_dFeH_contour_*.ps.]

**Appendix B Part I:** Contour plots illustrating the correlation between RGB width and metallicity for the fields listed in Table \[par\_table1\]. Fields GSS1 through GSS8 are represented here. Plots denoted $\dagger^*$ are generated by sampling only the parameter values consistent with a restricted TRGB range. The full, unrestricted versions denoted $\dagger$ are shown on the next page. The restricted ranges are: GSS5$\dagger^*$, $21.08 \le TRGB \le 21.18$; GSS6.5$\dagger^*$, $21.08 \le TRGB \le 21.30$; GSS7$\dagger^*$, $21.08 \le TRGB \le 21.30$.

\[AppBI\]

[Figure grid: panels GSS8.5$\dagger^*$, GSS9$\dagger^*$, GSS9.5, GSS10, GSS5$\dagger$, GSS6.5$\dagger$, GSS7$\dagger$, GSS8.5$\dagger$ and GSS9$\dagger$; source files FeH_vs_dFeH_contour_*.ps.]

**Appendix B Part II:** Contour plots illustrating the correlation between RGB width and metallicity for the fields listed in Table \[par\_table1\]. Fields GSS8.5 through GSS10 are represented here. Plots denoted $\dagger^*$ are generated by sampling only the parameter values consistent with a restricted TRGB range. The full, unrestricted versions (for both Appendix B Parts I and II) are displayed here also and are denoted $\dagger$. The restricted range plots are generated with the following limits: GSS8.5$\dagger^*$, $21.15 \le TRGB \le 21.30$; GSS9$\dagger^*$, $21.10 \le TRGB \le 21.30$.

\[AppBII\]

[Figure grid: panels C1, C2, C3, C4, D1, D2, D3, D4 and D5; source files FeH_vs_dFeH_contour_*.ps.]

**Appendix B Part III:** Contour plots illustrating the correlation between RGB width and metallicity for the fields listed in Table \[par\_table2\]. Fields from Streams C & D are represented here.

\[AppBIII\]

[Figure grid: panels H1, H2 and H3, two columns each; source files m_vs_FeH_contour_*.ps and FeH_vs_dFeH_contour_*.ps.]

**Appendix C:** Contour plots illustrating the correlation between tip magnitude and metallicity (left column) and between RGB width and metallicity (right column) for the 3 halo reference fields (see Table \[par\_table2\]). Note that the field H3 is much closer to the M31 disk than are H1 and H2 (see Fig. \[FieldMaps\]), hence the markedly different distributions.

\[AppC\]

[^1]: E-mail: anthony\[email protected]
{ "pile_set_name": "ArXiv" }
--- abstract: 'Magnetization measurements of Mn$_{12}$ molecular nanomagnets with spin ground states of $S = 10$ and $S = 19/2$ show resonance tunneling at avoided energy level crossings. The observed oscillations of the tunnel probability as a function of the magnetic field applied along the hard anisotropy axis are due to topological quantum phase interference of two tunnel paths of opposite windings. Spin-parity dependent tunneling is established by comparing the quantum phase interference of integer and half-integer spin systems.' author: - 'W. Wernsdorfer$^1$, N. E. Chakov$^2$, and G. Christou$^2$' title: 'Quantum phase interference and spin parity in Mn$_{12}$ single-molecule magnets' --- Studying the limits between classical and quantum physics has become a very attractive field of research. Single-molecule magnets (SMMs) are among the most promising candidates to observe these phenomena since they have a well defined structure with well characterized spin ground state and magnetic anisotropy. The first molecule shown to be a SMM was Mn$_{12}$acetate [@Sessoli93b; @Sessoli93]. It exhibits slow magnetization relaxation of its $S$ = 10 ground state which is split by axial zero-field splitting. It was the first system to show thermally assisted tunneling of magnetization  [@Friedman96; @Thomas96] and Fe$_8$ and Mn$_4$ SMMs were the first to exhibit ground state tunneling [@Sangregorio97; @Aubin98]. Tunneling was also found in other SMMs (see, for instance, [@Caneschi99; @Price99; @Yoo_Jae00]). Quantum phase interference [@Garg93] is among the most interesting quantum phenomena that can be studied at the mesoscopic level in SMMs. This effect was recently observed in Fe$_8$ and \[Mn$_{12}$\]$^{2-}$ SMMs [@WW_Science99; @WW_JAP02]. It has led to new theoretical studies on quantum phase interference in spin systems  [@Garg99a; @Garg99b; @Garg99c; @Garg00b; @Garg00d; @Barnes99; @Villain00; @Liang00; @Yoo00; @Yoo00b; @Leuenberger00; @Leuenberger01b; @Lu00a; @Lu00b; @Lu00c; @Zhang99; @Jin00; @Chudnovsky00a]. The spin-parity effect is another fundamental prediction which has rarely been observed at the mesoscopic level [@WW_PRB02]. It predicts that quantum tunneling is suppressed at zero applied field if the total spin of the magnetic system is half-integer but is allowed in integer spin systems. Enz, Schilling, Van Hemmen and S$\ddot{\rm u}$to [@Enz86; @VanHemmen86] were the first to suggest the absence of tunneling as a consequence of Kramers degeneracy [@note1]. It was then shown that tunneling can even be absent without Kramers degeneracy [@Loss92; @Delft92; @Garg93]; quantum phase interference can lead to destructive interference and thus suppression of tunneling [@Garg93]. This effect was recently seen in Fe$_8$ and Mn$_{12}$ SMMs [@WW_Science99; @WW_JAP02]. ![Unit sphere showing degenerate minima [**A**]{} and [**B**]{} which are joined by two tunnel paths (heavy lines). The hard, medium, and easy axes are taken in $x$-, $y$-, and $z$-direction, respectively. The constant transverse field $H_{trans}$ for tunnel splitting measurements is applied in the $xy$-plane at an azimuth angle $\varphi$. At zero applied field $\vec{H} = 0$, the giant spin reversal results from the interference of two quantum spin paths of opposite direction in the easy anisotropy $yz$-plane. For transverse fields in the direction of the hard axis, the two quantum spin paths are in a plane which is parallel to the $yz$-plane, as indicated in the figure. 
By using Stokes’theorem it has been shown [@Garg93] that the path integrals can be converted into an area integral, leading to destructive interference—that is a quench of the tunneling rate—occurring whenever the shaded area is $k \pi / S$, where $k$ is an odd integer. The interference effects disappear quickly when the transverse field has a component in the $y$-direction because the tunneling is then dominated by only one quantum spin path.[]{data-label="sphere"}](sphere.eps){width=".3\textwidth"} There are several reasons why quantum phase interference and spin-parity effects are difficult to observe. The main reason reflects the influence of environmental degrees of freedom that can induce or suppress tunneling: hyperfine and dipolar couplings can induce tunneling via transverse field components; intermolecular superexchange coupling may enhance or suppress tunneling depending on its strength; phonons can induce transitions via excited states; and faster-relaxing species can complicate the interpretation [@WW_EPL99]. We present here the first half-integer spin SMM that clearly shows quantum phase interference and spin-parity effects. The syntheses, crystal structures and magnetic properties of the studied complexes are reported elsewhere [@Chakov05]. The compounds are \[Mn$_{12}$O$_{12}$(O$_2$CC$_6$F$_5$)$_{16}$(H$_2$O)$_4$\], (NMe$_4$)\[Mn$_{12}$O$_{12}$(O$_2$CC$_6$F$_5$)$_{16}$(H$_2$O)$_4$\], and (NMe$_4$)$_2$\[Mn$_{12}$O$_{12}$(O$_2$CC$_6$F$_5$)$_{16}$(H$_2$O)$_4$\] (called Mn$_{12}$, \[Mn$_{12}$\]$^{-}$, and \[Mn$_{12}$\]$^{2-}$, respectively). Reaction of Mn$_{12}$ with one and two equivalents of NMe$_4$I affords the one- and two-electron reduced analogs, \[Mn$_{12}$\]$^{-}$ and \[Mn$_{12}$\]$^{2-}$, respectively. The three complexes crystallize in the triclinic P1bar, monoclinic P2/c and monoclinic C2/c space groups, respectively. The molecular structures are all very similar, each consisting of a central \[Mn$^{\rm IV}$O$_4$\] cubane core that is surrounded by a non-planar ring of eight Mn$^{\rm III}$ ions. Bond valence sum calculations establish that the added electrons in \[Mn$_{12}$\]$^{-}$ and \[Mn$_{12}$\]$^{2-}$ are localized on former Mn$^{\rm III}$ ions giving trapped-valence Mn$_4^{\rm IV}$Mn$_7^{\rm III}$Mn$^{\rm II}$ and Mn$_4^{\rm IV}$Mn$_6^{\rm III}$Mn$_2^{\rm II}$ anions, respectively. Magnetization studies yield $S = 10$, $D = 0.58$ K, $g = 1.87$ for Mn$_{12}$, $S = 19/2$, $D = 0.49$ K, $g = 2.04$, for \[Mn$_{12}$\]$^{-}$, and $S = 10$, $D = 0.42$ K, $g = 2.05$, for \[Mn$_{12}$\]$^{2-}$, where $D$ is the axial zero-field splitting parameter [@Chakov05]. AC susceptibility and relaxation measurements give Arrhenius plots from which were obtained the effective barriers to magnetization reversal: 59 K for Mn$_{12}$, 49 K for \[Mn$_{12}$\]$^{-}$, and 25 K for \[Mn$_{12}$\]$^{2-}$. ![Hysteresis loops of single crystals of (a) Mn$_{12}$, (b) \[Mn$_{12}$\]$^{-}$, and (c) \[Mn$_{12}$\]$^{2-}$ molecular clusters at different temperatures and a constant field sweep rate indicated in the figure. 
Note the large zero field step of \[Mn$_{12}$\]$^{-}$ which is due to fast-relaxing species [@note2].[]{data-label="hyst"}](hyst_Mn12.eps){width=".45\textwidth"} The simplest model describing the spin system of the three Mn$_{12}$ SMMs has the following Hamiltonian $$H = -D S_z^2 + E \left(S_x^2 - S_y^2\right) - g \mu_{\rm B} \mu_0 \vec{S}\cdot\vec{H} \label{eq_H_biax}$$ $S_x$, $S_y$, and $S_z$ are the three components of the spin operator, $D$ and $E$ are the anisotropy constants, and the last term describes the Zeeman energy associated with an applied field $\vec{H}$. This Hamiltonian defines hard, medium, and easy axes of magnetization in $x$, $y$, and $z$ directions, respectively (Fig. 1). It has an energy level spectrum with $(2S+1)$ values which, to a first approximation, can be labeled by the quantum numbers $m = -S, -(S-1), ..., S$ taking the $z$-axis as the quantization axis. The energy spectrum can be obtained by using standard diagonalisation techniques of the $[(2S+1) \times (2S+1)]$ matrix. At $\vec{H} = 0$, the levels $m = \pm S$ have the lowest energy. When a field $H_z$ is applied, the levels with $m < 0$ increase in energy, while those with $m > 0$ decrease. Therefore, energy levels of positive and negative quantum numbers cross at certain values of $H_z$, given by $\mu_0 H_z \approx n D/g \mu_{\rm B}$, with $n = 0, 1, 2, 3, ...$. When the spin Hamiltonian contains transverse terms (for instance $E(S_x^2 - S_y^2)$), the level crossings can be avoided level crossings. The spin $S$ is in resonance between two states when the local longitudinal field is close to an avoided level crossing. The energy gap, the so-called tunnel splitting $\Delta$, can be tuned by a transverse field (Fig. 1) via the $S_xH_x$ and $S_yH_y$ Zeeman terms. In the case of the transverse term $E(S_x^2 - S_y^2)$, it was shown that $\Delta$ oscillates with a period given by [@Garg93] $$\mu_0\Delta H = \frac {2 k_{\rm B}}{g \mu_{\rm B}} \sqrt{2 E (E + D)} \label{eq_Garg}$$ The oscillations are explained by constructive or destructive interference of quantum spin phases (Berry phases) of two tunnel paths [@Garg93] (Fig. 1). All measurements were performed using an array of micro-SQUIDs [@WW_ACP01]. The high sensitivity of this magnetometer allows the study of single crystals of SMMs with sizes of the order of 10 to 500 $\mu$m. The field can be applied in any direction by separately driving three orthogonal coils. The field was aligned using the transverse field method [@WW_PRB04]. Fig. 2 shows typical hysteresis loop measurements on a single crystal of the three Mn$_{12}$ samples. The effect of avoided level crossings can be seen in hysteresis loop measurements. When the applied field is near an avoided level crossing, the magnetization relaxes faster, yielding steps separated by plateaus. As the temperature is lowered, there is a decrease in the transition rate as a result of reduced thermally assisted tunneling. Below about $T_{\rm c}$ = 0.65 K, 0.5 K, 0.35 K, respectively for Mn$_{12}$, \[Mn$_{12}$\]$^{-}$, \[Mn$_{12}$\]$^{2-}$, the hysteresis loops become temperature independent, which suggests that the ground state tunneling is dominating. The field between two resonances allows an estimation of the anisotropy constant $D$, and values of $D \approx $ 0.64 K, 0.44 K, 0.42 K were determined (supposing $g = 2$), respectively for Mn$_{12}$, \[Mn$_{12}$\]$^{-}$, \[Mn$_{12}$\]$^{2-}$, in good agreement with other magnetization studies [@Chakov05]. 
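The diagonalisation mentioned above is straightforward to reproduce. The following NumPy script is only a minimal sketch (not the analysis used for the figures): it builds the $(2S+1)\times(2S+1)$ matrix of Eq. (1) for a field applied along the hard axis $x$, extracts the splitting of the lowest doublet, and prints the oscillation period predicted by Eq. (2). The parameter values are the \[Mn$_{12}$\]$^{-}$ values quoted above, and the conversion factor $\mu_{\rm B}/k_{\rm B}\approx 0.672$ K/T is assumed so that all energies are expressed in kelvin.

```python
import numpy as np

def spin_ops(S):
    """Spin matrices S_x, S_y, S_z in the |m> basis, m = S, S-1, ..., -S."""
    m = np.arange(S, -S - 1, -1)
    sz = np.diag(m)
    # <m+1|S_+|m> = sqrt(S(S+1) - m(m+1)) on the first superdiagonal
    sp = np.diag(np.sqrt(S * (S + 1) - m[1:] * (m[1:] + 1)), k=1)
    sx = (sp + sp.T) / 2
    sy = (sp - sp.T) / 2j
    return sx, sy, sz

def lowest_gap(S, D, E, Hx, g):
    """Splitting of the two lowest levels of Eq. (1); energies in kelvin, field in tesla."""
    muB_over_kB = 0.6717          # Bohr magneton / Boltzmann constant in K/T
    sx, sy, sz = spin_ops(S)
    H = -D * sz @ sz + E * (sx @ sx - sy @ sy) - g * muB_over_kB * Hx * sx
    ev = np.linalg.eigvalsh(H)
    return ev[1] - ev[0]

# Placeholder parameters: the [Mn12]^- values quoted in the text.
S, D, E, g = 19 / 2, 0.49, 0.047, 2.04
for Hx in np.linspace(0.0, 1.0, 11):          # transverse field along the hard axis, in tesla
    print(f"{Hx:4.1f} T   gap = {lowest_gap(S, D, E, Hx, g):.3e} K")
print("Eq. (2) period:", 2 * np.sqrt(2 * E * (E + D)) / (g * 0.6717), "T")
```

Scanning $H_x$ in this way reproduces, at the level of the toy Hamiltonian, the quenching and reappearance of the ground-doublet splitting with the period of Eq. (2).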
![Fraction of Mn$_{12}$ molecules which reversed their magnetization after the field was swept over the zero field resonance at a rate of 0.28 T/s and at several temperatures.[]{data-label="P_Mn12_n"}](P_Mn12_n.eps){width=".45\textwidth"} ![Fraction of \[Mn$_{12}$\]$^{-}$ molecules which reversed their magnetization after the field was swept over the zero field resonance at a rate of 0.28 T/s (a) at several temperatures and (b) at 1.7 K and two azimuth angles $\varphi$. The contribution of the fast-relaxing species is subtracted. The observed oscillations are direct evidence for quantum phase interference. The minimum of the tunnel rate at zero transverse field is due to Kramers spin parity.[]{data-label="P_Mn12_e"}](P_Mn12_e.eps){width=".45\textwidth"} ![Fraction of \[Mn$_{12}$\]$^{2-}$ molecules which reversed their magnetization after the field was swept over the zero field resonance at a rate of 0.28 T/s (a) at several temperatures and (b) at 0.1 K and two azimuth angles $\varphi$.[]{data-label="P_Mn12_2e"}](P_Mn12_2e.eps){width=".45\textwidth"} We have tried to use the Landau–Zener method [@Landau32; @Zener32] to measure the tunnel splitting as a function of transverse field as previously reported for Fe$_8$ [@WW_Science99]. However, the tunnel probability in the pure quantum regime (below $T_{\rm c}$) was too small for our measuring technique [@note2] for Mn$_{12}$ and \[Mn$_{12}$\]$^{-}$. We therefore studied the tunnel probability in the thermally activated regime [@WW_EPL00]. In order to measure the tunnel probability, the crystals of Mn$_{12}$ SMMs were first placed in a high negative field, yielding a saturated initial magnetization. Then, the applied field was swept at a constant rate of 0.28 T/s over the zero field resonance transitions and the fraction of molecules which reversed their spin was measured. In the case of very small tunnel probabilities, the field was swept back and forth over the zero field resonance until a larger fraction of molecules reversed their spin. A scaling procedure yields the probability of one sweep. This experiment was then repeated but in the presence of a constant transverse field. A typical result is presented in Fig. 3 for Mn$_{12}$ showing a monotonic increase of the tunnel probability. Measurements at different azimuth angles $\varphi$ (Fig. 1) did not show a significant difference. However, similar measurements on \[Mn$_{12}$\]$^{-}$ (Fig. 4) and \[Mn$_{12}$\]$^{2-}$ (Fig. 5) showed oscillations of the tunnel probability as a function of the magnetic field applied along the hard anisotropy axis $\varphi = 0^{\circ}$ whereas no oscillations are observed for $\varphi = 90^{\circ}$. These oscillations are due to topological quantum interference of two tunnel paths of opposite windings [@Garg93]. The measurements of \[Mn$_{12}$\]$^{2-}$ are similar to the result on the Fe$_8$ molecular cluster [@WW_Science99; @WW_EPL00]; however, those of \[Mn$_{12}$\]$^{-}$ show a minimum of the tunnel probability at zero transverse field. This is due to the spin-parity effect that predicts the absence of tunneling as a consequence of Kramers degeneracy [@note1]. The period of oscillation allows an estimation of the anisotropy constant $E$ (see Eq. 2) and values of $E \approx $ 0, 0.047 K, and 0.086 K were obtained for Mn$_{12}$, \[Mn$_{12}$\]$^{-}$, \[Mn$_{12}$\]$^{2-}$, respectively. 
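The "scaling procedure" used to extract a single-sweep probability is not spelled out above; one simple reading, assumed in the sketch below, is that successive sweeps act independently, so that a fraction $F = 1-(1-p)^N$ of the molecules has reversed after $N$ sweeps with per-sweep probability $p$. For completeness the sketch also evaluates the standard textbook Landau–Zener expression for the tunnel probability at an avoided crossing. Both the independence assumption and the use of this formula are illustrations added here, not taken from the measurements described above.

```python
import numpy as np

hbar = 1.0546e-34  # J s
muB = 9.274e-24    # J/T
kB = 1.3807e-23    # J/K

def per_sweep_probability(F, N):
    """Invert F = 1 - (1 - p)^N, assuming independent, identical sweeps."""
    return 1.0 - (1.0 - F) ** (1.0 / N)

def landau_zener_probability(delta_K, dm, dHdt, g=2.0):
    """Textbook Landau-Zener probability for an avoided crossing between levels
    m and m' (|m - m'| = dm), with tunnel splitting delta_K in kelvin and field
    sweep rate dHdt in T/s."""
    delta_J = delta_K * kB
    exponent = np.pi * delta_J**2 / (2 * hbar * g * muB * dm * dHdt)
    return 1.0 - np.exp(-exponent)

# Example: fraction F = 0.2 observed after N = 50 back-and-forth sweeps.
print(per_sweep_probability(0.2, 50))
# Example: a splitting of 1e-7 K swept at 0.28 T/s between m = -10 and m' = 10.
print(landau_zener_probability(1e-7, 20, 0.28))
```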
In conclusion, magnetization measurements of three molecular Mn$_{12}$ clusters with a spin ground state of $S = 10$ and $S = 19/2$ show resonance tunneling at avoided energy level crossings. The observed oscillations of the tunnel probability as a function of a transverse field are due to topological quantum phase interference of two tunnel paths of opposite windings. Spin-parity dependent tunneling is established by comparing the quantum phase interference of integer and half-integer spin systems. This work was supported by the EC-TMR Network ÒQuEMolNaÓ (MRTN-CT-2003-504880), CNRS and Rhone-Alpe funding. [10]{} R. Sessoli, H.-L. Tsai, A. R. Schake, S. Wang, J. B. Vincent, K. Folting, D. Gatteschi, G. Christou, and D. N. Hendrickson, J. Am. Chem. Soc. [**115**]{}, 1804 (1993). R. Sessoli, D. Gatteschi, A. Caneschi, and M. A. Novak, Nature [**365**]{}, 141 (1993). J. R. Friedman, M. P. Sarachik, J. Tejada, and R. Ziolo, Phys. Rev. Lett. [ **76**]{}, 3830 (1996). L. Thomas, F. Lionti, R. Ballou, D. Gatteschi, R. Sessoli, and B. Barbara, Nature (London) [**383**]{}, 145 (1996). C. Sangregorio, T. Ohm, C. Paulsen, R. Sessoli, and D. Gatteschi, Phys. Rev. Lett. [**78**]{}, 4645 (1997). S. M. J. Aubin, N. R. Dilley, M. B. Wemple, G. Christou, and D. N. Hendrickson, J. Am. Chem. Soc. [**120**]{}, 839 (1998). A. Caneschi, D. Gatteschi, C. Sangregorio, R. Sessoli, L. Sorace, A. Cornia, M. A. Novak, C. Paulsen, and W. Wernsdorfer, J. Magn. Magn. Mat. [**200**]{}, 182 (1999). D. J. Price, F. Lionti, R. Ballou, P.T. Wood, and A. K. Powell, Phil. Trans. R. Soc. Lond. A [**357**]{}, 3099 (1999). J. Yoo, E. K. Brechin, A. Yamaguchi, M. Nakano, J. C. Huffman, A.L. Maniero, L.-C. Brunel, K. Awaga, H. Ishimoto, G. Christou, and D. N. Hendrickson, Inorg. Chem. [**39**]{}, 3615 (2000). A. Garg, EuroPhys. Lett. [**22**]{}, 205 (1993). W.Wernsdorfer and R. Sessoli, Science [**284**]{}, 133 (1999). W. Wernsdorfer, M. Soler, G. Christou, and D.N. Hendrickson, J. Appl. Phys. [**1**]{}, 1 (2002). A. Garg, J. Math. Phys. [**39**]{}, 5166 (1998). A. Garg, Phys. Rev. Lett. [**83**]{}, 4385 (1999). A. Garg, Phys. Rev. B [**60**]{}, 6705 (1999). E. Kececioglu and A. Garg, Phys. Rev. B [**63**]{}, 064422 (2001). A. Garg, EuroPhys. Lett. [**50**]{}, 382 (2000). S.E. Barnes, cond-mat/9907257 [**0**]{}, 0 (1999). J. Villain and A. Fort, Euro. Phys. J. B [**17**]{}, 69 (2000). J.-Q. Liang, H.J.W. Mueller-Kirsten, D.K. Park, and F.-C. Pu, Phys. Rev. B [ **61**]{}, 8856 (2000). Sahng-Kyoon Yoo and Soo-Young Lee, Phys. Rev. B [**62**]{}, 3014 (2000). Sahng-Kyoon Yoo and Soo-Young Lee, Phys. Rev. B [**62**]{}, 5713 (2000). M. N. Leuenberger and D. Loss, Phys. Rev. B [**61**]{}, 12200 (2000). M. N. Leuenberger and D. Loss, Phys. Rev. B [**63**]{}, 054414 (2001). Rong L$\ddot{\rm u}$, Hui Hu, Jia-Lin Zhu, Xiao-Bing Wang, Lee Chang, and Bing-Lin Gu, Phys. Rev. B [**61**]{}, 14581 (2000). Rong L$\ddot{\rm u}$, Su-Peng Kou, Jia-Lin Zhu, Lee Chang, and Bing-Lin Gu, Phys. Rev. B [**62**]{}, 3346 (2000). Rong L$\ddot{\rm u}$, Jia-Lin Zhu, Yi Zhou, and Bing-Lin Gu, Phys. Rev. B [ **62**]{}, 11661 (2000). Y.-B. Zhang, J.-Q. Liang, H. J. W. M$\ddot{\rm u}$ller-Kirsten S.-P. Kou, X.-B. Wang, and F.-C. Pu, Phys. Rev. B [**60**]{}, 12886 (2000). Yan-Hong Jin, Yi-Hang Nie, J.-Q. Liang, Z.-D. Chen, W.-F. Xie, and F.-C. Pu, Phys. Rev. B [**62**]{}, 3316 (2000). E. M. Chudnovsky and X. Martines Hidalgo, EuroPhys. Lett. [**50**]{}, 395 (2000). W. Wernsdorfer, S. Bhaduri, C. Boskovic, G. Christou, and D.N. Hendrickson, Phys. Rev. 
B [**65**]{}, 180403 (2002). M. Enz and R. Schilling, J. Phys. C [**19**]{}, L711 (1986). J. L. Van Hemmen and S. S$\ddot{\rm u}$to, EuroPhys. Lett. [**1**]{}, 481 (1986). The Kramers theorem asserts that no matter how unsymmetric the crystal field, an ion possessing an odd number of electrons must have a ground state that is at least doubly degenerate, even in the presence of crystal fields and spin-orbit interactions \[H. A. Kramers, Proc. Acad. Sci. Amsterdam [**33**]{}, 959 (1930)\] The Kramers theorem can be found in standard textbooks on quantum mechanics L. D. Landau and E. M. Lifschitz, Quantum Mechanics (Pergamon, London, 1959). D. Loss, D. P. DiVincenzo, and G. Grinstein, Phys. Rev. Lett. [**69**]{}, 3232 (1992). J. von Delft and C. L. Henley, Phys. Rev. Lett. [**69**]{}, 3236 (1992). W. Wernsdorfer, R. Sessoli, and D. Gatteschi, EuroPhys. Lett. [**47**]{}, 254 (1999). N. E. Chakov, M. Soler, W. Wernsdorfer, K. A. Abboud, and G. Christou, in preparation [**0**]{}, 0 (2005). W. Wernsdorfer, Adv. Chem. Phys. [**118**]{}, 99 (2001). W. Wernsdorfer, N. E. Chakov, and G. Christou, Phys. Rev. B [**70**]{}, 132413 (2004). L. Landau, Phys. Z. Sowjetunion [**2**]{}, 46 (1932). C. Zener, Proc. R. Soc. London, Ser. A [**137**]{}, 696 (1932). As observed for Mn$_{12}$ acetate [@WW_EPL99], the crystals of Mn$_{12}$, \[Mn$_{12}$\]$^{-}$, \[Mn$_{12}$\]$^{2-}$ contain a small fraction of faster-relaxing species, which are probably molecules having a defect. The signals of these species were very large compared to the ground state relaxation rate of the major species. W. Wernsdorfer, A. Caneschi, R. Sessoli, D. Gatteschi, A. Cornia, V. Villar, and C. Paulsen, EuroPhys. Lett. [**50**]{}, 552 (2000).
{ "pile_set_name": "ArXiv" }
--- abstract: 'Real-space renormalization-group techniques for quantum systems can be divided into two basic categories — those capable of representing correlations following a simple boundary (or area) law, and those which are not. I discuss the scaling of the accuracy of gapped systems in the latter case and analyze the resultant spatial anisotropy. It is apparent that particular points in the system, which are somehow ‘central’ in the renormalization, have local quantities that are much closer to the exact results in the thermodynamic limit than the system-wide average. Numerical results from the tree-tensor network and tensor renormalization-group approaches for the 2D transverse-field Ising model and 3D classical Ising model, respectively, clearly demonstrate this effect.' author: - 'Andrew J. Ferris' bibliography: - '../bib/andy.bib' title: 'The area law and real-space renormalization' --- Introduction ============ Solving large, quantum mechanical systems is very challenging, primarily because the dimension of the Hilbert space describing a system with many components grows exponentially with the number of components. Direct approaches to such problems, for instance by exact diagonalization, quickly become intractable, even for relatively small 2D and 3D quantum systems. Quantum Monte Carlo (QMC) is another direct approach (exact up to statistical error), but the sign problem causes difficulties for many systems of interest — such as fermionic, frustrated, or dynamical problems. Thus, in order to garner meaningful information about large quantum systems, clever approximations need to be employed. In this vein, many analytic and numerical techniques have been developed over the last 80 years. In this paper, I will focus specifically on numerical real-space renormalization-group (RG) techniques, which can be applied to both quantum and classical problems. The process of renormalization takes a divide-and-conquer approach to the problem, by tackling different parts of the system (or Hilbert space) separately or in succession, carefully simplifying or compressing the pieces at each step. In real-space RG, the renormalization procedure groups together spatial regions of the system, referred to as blocks. At each step, two neighboring blocks are combined, and then simplified (for instance, by truncating the Hilbert space). As the renormalization proceeds, the blocks include larger and larger portions of the initial system, until the entire system of interest is encapsulated. If the RG reaches a fixed-point, we can say we have reached the thermodynamic limit. One of the most successful real-space RG algorithms is the density-matrix renormalization group (DMRG) [@White1992], which very accurately describes (quasi-)1D systems. In this approach, a single site is added to the block at each step, and the optimal Hilbert space (limited to some dimension $\chi$) for the combined system is determined. Working in reverse, the procedure generates a variational wave-function known as a matrix-product state (MPS) (see [Fig. \[fig:TNs\]]{} (a)) [@Oestlund1995a]. DMRG can then be thought of as an optimization algorithm that sweeps over the tensors in the MPS, targeting the state with lowest energy. ![(a) Depiction of an MPS (in the unitary gauge) pictured in terms of renormalized Hilbert spaces. Each tensor adds a physical site (bottom) and passes the Hilbert space (truncated to $\chi$) upwards. (b) The 1D TTN for 8 sites. Each tensor combines and renormalizes two neighboring blocks. 
(c) A single layer of the 2D TTN, where a 2$\times$2 square is renormalized into a single site, first by combining in the $x$ then $y$ directions. \[fig:TNs\]](TTNs.eps){width="0.85\columnwidth"} A similar approach is the tree-tensor network (TTN) [@Shi2006], in which neighboring blocks are successively combined, depicted in [Fig. \[fig:TNs\]]{} (b). In this direct coarse-graining approach, the physical volume of each block doubles at each step. This ansatz is used less frequently than MPS/DMRG because the numerical cost is higher for a given amount of system entanglement or accuracy ($\mathcal{O}(\chi^4)$ vs. $\mathcal{O}(\chi^3)$). The tensor network can easily be extended to higher-dimensional systems (see [Fig. \[fig:TNs\]]{} (c)) [@Tagliacozzo2009; @Murg2010; @Li2012]. The major problem of using the TTN (or MPS/DMRG [@White1998]) in two- or higher-dimensions is that they do not respect the area-law for entanglement entropy with fixed $\chi$ [@Tagliacozzo2009; @Stoudenmire2012]. Generally speaking, gapped phases of local Hamiltonians are expected to have ‘local’ correlations. Thus, the amount of entanglement between a large, contiguous block and the rest of the system should scale proportional to the boundary area separating the regions. On the other hand, wave-functions generated by MPS or TTN contain arbitrarily large blocks with bounded entanglement (depending on $\chi$), and therefore have poor overlap with the true ground state. The area law has motivated several tensor-network ansätze to describe higher dimensional systems, in an attempt to replicate the success of DMRG. One example is the projected entangled-pair state (PEPS) [@Nishino2001; @Maeshima2001; @Verstraete2004; @Murg2007; @Jordan2008; @Gu2008; @Jiang2008; @GarciaSaez2011], which can be thought of as a higher-dimensional generalization of MPS. The multi-scale entanglement renormalization ansatz (MERA) [@Vidal2007b; @Evenbly2009; @Cincio2008] adds additional local entanglement to the TTN, and in its simplest form (cf. [@Evenbly2012]) exactly replicates the area law in two- or higher-dimensional systems. The drawback of these approaches has been the large numerical cost, typically scaling as $\chi^{10}$ or greater. Fortunately, one expects that as computational power increases, the accuracy of these ansätze will increase superpolynomially with $\chi$ (for gapped phases). In the meantime, there is immediate demand for techniques with lower computational cost. Some approaches that have been tried recently include entangled plaquette states, and performing variational Monte Carlo over tensor network states [@Schuch2008; @Changlani2009; @Mezzacapo2009; @Marti2010; @Sandvik2007; @Wang2011; @Ferris2011b; @Ferris2011a]. Tensor network techniques that do not obey area laws have been used extensively in recent studies of 2D quantum systems, including DMRG in a cylindrical geometry [@Yan2011; @Stoudenmire2012; @Depenbrock2012], TTN [@Tagliacozzo2009], and direct approximate contractions of the 3D Suzuki-Trotter decomposition using tensor renormalization-group (TRG) and its variants [@Levin2007; @Gu2009; @Xie2009; @Zhao2010; @Xie2012]. In these approaches, a description of a locally-correlated state would require a bond-dimension that grows with system size (e.g. exponentially with cylinder width in 2D DMRG). 
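Common to all of these schemes is the elementary blocking step in which two renormalized blocks are merged and the combined space is truncated back to dimension $\chi$. The following NumPy sketch (purely illustrative, not code from this work) shows the generic density-matrix version of that truncation, of the kind underlying the traditional SVD-based optimisation of a TTN: the reduced density matrix of the pair of blocks is diagonalised and only the $\chi$ dominant eigenvectors are kept as the new isometry.

```python
import numpy as np

def merge_and_truncate(psi, chi):
    """One blocking step: psi has shape (chi1, chi2, d_env), i.e. the state of
    two neighboring blocks plus everything else. Returns the isometry of shape
    (chi1*chi2, chi) built from the chi dominant eigenvectors of the pair's
    reduced density matrix, the renormalized state, and the discarded weight."""
    chi1, chi2, d_env = psi.shape
    m = psi.reshape(chi1 * chi2, d_env)
    rho = m @ m.conj().T                # reduced density matrix of the pair
    evals, evecs = np.linalg.eigh(rho)  # eigenvalues in ascending order
    w = evecs[:, -chi:]                 # keep the chi largest
    return w, w.conj().T @ m, evals[:-chi].sum()

# Toy example: two chi = 4 blocks embedded in a random normalized state.
rng = np.random.default_rng(1)
psi = rng.standard_normal((4, 4, 64))
psi /= np.linalg.norm(psi)
w, psi_new, discarded = merge_and_truncate(psi, chi=4)
print(psi_new.shape, discarded)         # (4, 64) and the truncated weight
```

In an actual TTN the isometries are not obtained from a single state in this way but are optimised variationally; the sketch only fixes the meaning of "combining and truncating" used throughout the discussion.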
There are two possible approaches to take in these cases: (a) use a small, finite geometry while keeping track of all correlations; or (b) study a large system using an ansatz with insufficient entanglement or bond-dimension $\chi$. The first approach is typically used because finite-size effects are well-understood, while finite-$\chi$ effects are less clear in the severely undersaturated regime. The purpose of this paper is to analyze, in generality, the scaling of accuracy in approach (b). One reason for this to be important is that approach (b) is implicitly used in the 3D classical tensor renormalization-group (or (2+1D) quantum TRG using a 3D representation of imaginary-time evolution) — a promising technique that recently demonstrated accuracy competitive with large-scale Monte Carlo studies [@Xie2012]. We see that following a naïve approach, the global accuracy of (b) scales only logarithmically with $\chi$, and thus also logarithmic in the numerical cost. On the other hand, provided proper care is taken, the accuracy of local observables scales polynomially with $\chi$. This exponential improvement puts the approach (at least formally) on similar grounds to finite-size scaling of small systems using DMRG. The ‘trick’ here is to realize that the anisotropic structure of the renormalization means that not all sites are equal. Some sites are far away from the boundaries of the renormalization, and are able to share ample entanglement with their surrounding environment. In this paper, ideally located sites are called ‘center’ sites. The paper is structured as follows. In [Sec. \[sec:TTN\]]{}, the 2D tree-tensor network is analyzed in-detail, with arguments for the above scaling backed up by numerical results for the transverse-field Ising model on the square lattice. A similar analysis for the tensor renormalization-group in higher-dimensions is shown to hold in [Sec. \[sec:TRG\]]{}, with numerical evidence from the (classical) 3D Ising model (which is related to the (2+1)D quantum model). The paper concludes in [Sec. \[sec:conclusion\]]{} with an outlook on some possible future directions. Tree tensor network {#sec:TTN} =================== In this section we analyze the tree tensor network on an $L \times L$ square lattice. Here we use the 2-to-1 renormalization scheme depicted in [Fig. \[fig:TNs\]]{} (c), alternating course graining along the $x$ and $y$ dimensions. The cost of contracting the tensor network corresponding to the expectation value of a nearest-neighbor Hamiltonian, $\langle \Psi | \hat{H} | \Psi \rangle$, scales as $\mathcal{O}(\chi^4 L)$, and similarly for calculating the derivative of the energy with respect to the tensor variables. The wave function $| \Psi \rangle$ generated by the TTN contains blocks of size $l \times l$ (where $l = 2^n$, for some integer $n$) that contain bounded entanglement (Schmidt rank $\chi$) with the rest of the system. These blocks are highlighted in [Fig. \[fig:TTN\]]{}. In two-dimensions, this fails to saturate the area law, which demands that the entanglement entropy should scale linearly with the perimeter of the block, requiring a Schmidt rank scaling as $\exp(\alpha l)$, for some constant $\alpha$. ![(Color online) Blocks of different layers in the 2D TTN. The marked ‘center’ site at coordinates (6,6) is furthest from the block boundaries. This should be the site where the Hilbert space of its immediate environment is largest, and closest to the bulk, for all $\chi$. 
\[fig:TTN\]](center_site.eps){width="0.55\columnwidth"} We now proceed to analyze the structure of the wave-functions having minimal energy. For large enough system size $L$, the bond dimension will be insufficient to describe the entanglement of the entire system, i.e. $\chi < \exp(\alpha L)$, and thus the minimum energy wave-function must be distinct from the true ground state. On the other hand, $\chi$ will be sufficient to describe the entanglement *within* the smaller blocks. Errors will accumulate primarily because of the lack of entanglement *between* larger blocks. Somewhere between these two extremes there will be a critical block-size $l^{\ast}$, where smaller blocks have sufficient entanglement and are thus well-renormalized, whereas larger blocks do not possess large enough $\chi$. This is the point where $$\chi \sim \exp(\alpha l^\ast). \label{lstar}$$ Because the blocks grow exponentially in size as a function of layer $n$, and the required bond-dimension therefore grows doubly-exponentially in $n$, we expect the transition between sufficient entanglement and woefully inadequate $\chi$ to be quite sharp. To simplify the analysis, we compare this situation to a cluster-mean field theory. In this theory, the full Hilbert space of clusters of size $l^{\ast} \times l^{\ast}$ is included, while no entanglement exists between neighboring clusters. In practice, a single cluster is exactly-diagonalized in a self-consistent fashion with their boundary conditions, and the fixed-point corresponds to the lowest energy state in the cluster mean-field ansatz. The error of global quantities such as the total energy can be reasonably large when using this approach. Let’s assume that the system has correlation length $\xi$, and that the expectation value of a local quantity decays exponentially towards the (correct) bulk value away from the cluster boundaries. In the limit that $l^{\ast} > \xi$, some fraction of the system will be ‘close’ to the boundaries (closer than $\xi$) and display incorrect results, while the remaining ‘bulk’ fraction will display roughly the correct results. In $d$-dimensions, the fraction of ‘error’ sites scales as $2d \xi / l^{\ast}$, and thus the error of a global quantity in the 2D TTN scales as $$\text{global error} \propto \frac{ \xi}{l^{\ast}} \sim \frac{\xi \alpha}{ \ln \chi}$$ where [Eq. (\[lstar\])]{} was used in the second relation. The error of a global quantity, such as the total energy or magnetization, thus only decreases logarithmically with $\chi$ and thus computation effort. Although this scaling is quite poor, the ansatz takes some advantage from the localized entanglement in the system and is already a large improvement over exact diagonalization[@Tagliacozzo2009]. On top of this, we can estimate *local* quantities with greatly reduced error by understanding the structure of the ansatz. Points inside the well-renormalized, ‘bulk’ region are surrounded by an immediate environment that is a good approximation of the true ground state (provided, again, that $\l^{\ast} > \xi$). As mentioned earlier, in a gapped phase the effect of the boundary should decay exponentially, so in the centre of the region the error should scale as $\exp(-l^{\ast}/2\xi)$. This time, including [Eq. 
(\[lstar\])]{} gives $$\text{local error} \sim \exp \left( \frac{-l^{\ast}}{2\xi} \right) \sim \exp \left( \frac{-\ln \chi}{ 2 \xi \alpha} \right) \sim \chi^{-\beta} ,$$ for some $\beta$ that depends on the specifics of the system (notably, becoming smaller for more entangled systems or those with larger correlation length). Thus the scaling of error with computation effort is *polynomial* — an exponential improvement on the error of global quantities. If $\beta$ is very large, the method could become competitive with well-established techniques such as quantum Monte Carlo (where the statistical error scales as the inverse square root of computational effort, assuming no sign-problem). The performance will degrade significantly in systems with large amounts of entanglement, or close to critical points. To investigate the above numerically, we identify points in the system that are somehow ‘central’ to the renormalization, independent of system parameters or $\chi$. In the TTN, these are the points that are renormalized best with their environment, favoring no particular direction. In the 2D TTN, we define these to be the points that have been successively combined with the block above, to the left, below, to the right, *ad infinitum* (see [Fig. \[fig:TTN\]]{}). For any choice of $\chi$, such a point has the largest immediate environment whose sites are described by a sufficiently large Hilbert space, roughly equal in all directions. We have implemented the 2D TTN to study the spin-1/2 transverse-field Ising model on the square lattice, described by Hamiltonian $$\hat{H} = -\sum_{<i,j>} \hat{\sigma}^z_i \hat{\sigma}^z_j + h \hat{\sigma}^x_i, \label{TFI}$$ where $<\!\!i,j\!\!>$ denotes neighboring sites and $h$ is the strength of a transverse magnetic field. Because the cost of the algorithm scales as $\mathcal{O}(\chi^4 L)$, a moderate size $L=32$ is used in order to investigate a system that is much larger than can be described exactly, while small enough to allow a range of $\chi$ to be used. To minimize the energy, I have employed the time-dependent variational principle [@Haegeman2011] and used a unitary gauge, finding a significant speed-up compared to the traditional, SVD approach [@Tagliacozzo2009]. In [Fig. \[fig:xmag\]]{} the error of the magnetization in the $x$ direction (as compared to the best, center site estimate) is displayed. We can clearly observe a pattern of successive ‘windows’, where errors are displayed primarily on the *edges* of blocks. For larger $\chi$, the size of the windows grows while the overall error decreases — while the central sites always remain inside the smallest window. At the critical point, close to $h=3.05$, there is a large correlation length and the windows are somewhat blurred, and the errors are greater in magnitude for a given $\chi$. ![(Color online) Spatial variations in the magnetization in field direction at various values of $h$, as predicted by the lowest-energy TTN with the displayed bond-dimensions $\chi$. The color values represent differences to the prediction at $\chi=112$. In the gapped regions away from the critical point (near $h=3.05$), we observe a ‘windowing’ pattern, with window size growing with $\chi$, and rapid convergence at the central points of the renormalization (at $x,y=21$). When the correlation length is longer, the effect is blurred somewhat and convergence was not reached with a value of $\chi = 112$. 
\[fig:xmag\]](X_combined.eps){width="\columnwidth"} We compare the predicted value of $\langle \hat{\sigma}^x \rangle$ from the global, average value and the local value at the center site in [Fig. \[fig:xplots\]]{}. Away from criticality, we observe that the center site value converges to the quantum Monte Carlo prediction (using ALPS [@ALPS20; @ALPS13]) significantly faster than the global average. At criticality, both values appear to be converging with a slow, $1/\ln \chi$ scaling (extrapolation may be viable when $L$ approaches the thermodynamic limit). At $h=3.25$ the behavior is not monotonic because the wave-function switches from the ferromagnetic to disordered phase at intermediate $\chi$. ![(Color online) Predictions for the magnetization in the field direction with various magnetic field strengths $h$, as predicted by the lowest-energy TTN with given bond-dimension $\chi$. The red crosses correspond to system-wide averages, while the blue points correspond to the central sites of the renormalization. The latter converge much faster than the former, away from the critical value of $h\approx 3.05$. The dashed lines correspond to QMC results for the system at temperature $T=0.005$. In (a) the two lines represent the statistical uncertainty, while in (b–d) the error is comparable to the line width. \[fig:xplots\]](X_plots.eps){width="\columnwidth"} These results confirm the above analysis and show that it is viable to extract meaningful information in systems where the parameter $\chi$ is severely undersaturated. Tensor renormalisation group {#sec:TRG} ============================ A related tensor-based, real-space renormalization technique is the tensor renormalization group (TRG). In this family of methods, the tensor network corresponding to the Markov network of a classical thermal state (whose contraction gives the partition function) is successively contracted into larger and larger blocks. At each layer, the dimension of the tensors must be truncated to prevent the difficulty from growing exponentially. The process is depicted in [Fig. \[fig:TRG\]]{}. ![The contraction of the 2D Markov network (i.e. partition function) on the left is approximated by the TRG scheme on the right, from [@Xie2012]. In the ‘second renormalisation group’, the projectors (triangular tensors) are optimized to maximize the partition function. The above diagram can also be considered an ansatz for the probability distribution itself, by opening additional legs on the Markov network tensors (circles) corresponding to the state at that site. \[fig:TRG\]](TRG.eps){width="0.9\columnwidth"} The method is well-suited to contracting 2D tensor networks, such as thermal state of a 2D classical system, or the network that results from the Trotter decomposition of a 1D quantum density matrix. For non-critical systems, the fixed-point of this RG flow is well-understood [@Gu2009] in terms of ‘corner’ correlations. The TRG, and related ‘second renormalization group’ (SRG) methods are able to encapsulate the correlations of the 2D classical system quasi-exactly with fixed $\chi$. This technique has been successfully applied in a 3D classical (or 2D quantum) setting, achieving impressive accuracies (see e.g. Ref. [@Xie2012]). However, in these higher-dimensional settings, the edges of the 3D blocks grow in length as the renormalization proceeds, and strictly speaking $\chi$ would need to grow exponentially to encapsulate all the correlations in a (non-critical) system. 
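To make the starting point of such calculations concrete, the sketch below (an illustration only; the HOSRG scheme used for the 3D results below is substantially more involved) writes down the rank-4 tensor whose contraction over the square lattice gives the partition function of the 2D classical Ising model, and performs the kind of SVD-based splitting, in the style of Levin and Nave, by which the bond dimension is capped at $\chi$ during the coarse-graining.

```python
import numpy as np

def ising_tensor(beta, J=1.0):
    """Rank-4 tensor T[u, l, d, r] whose contraction over the square lattice
    gives the 2D classical Ising partition function (assumes beta*J > 0)."""
    B = np.array([[np.exp(beta * J), np.exp(-beta * J)],
                  [np.exp(-beta * J), np.exp(beta * J)]])  # bond Boltzmann weights
    w, V = np.linalg.eigh(B)                               # B = W @ W.T
    W = V @ np.diag(np.sqrt(w))
    return np.einsum('su,sl,sd,sr->uldr', W, W, W, W)      # sum over the site spin s

def split_and_truncate(T, chi):
    """Split T into two rank-3 tensors along one diagonal, keeping at most chi
    singular values (for the initial 2x2x2x2 tensor nothing is actually cut;
    the cap matters once the bond dimension has grown under iteration)."""
    d = T.shape[0]
    M = T.reshape(d * d, d * d)        # group (u, l) against (d, r)
    U, s, Vh = np.linalg.svd(M)
    k = min(chi, len(s))
    S1 = (U[:, :k] * np.sqrt(s[:k])).reshape(d, d, k)
    S2 = (np.sqrt(s[:k])[:, None] * Vh[:k, :]).reshape(k, d, d)
    return S1, S2

T = ising_tensor(beta=0.4)
S1, S2 = split_and_truncate(T, chi=4)
print(T.shape, S1.shape, S2.shape)
```

Iterating contractions of such split tensors, with the truncation above, is the essence of the plain 2D TRG; the second renormalization group additionally optimises the truncation so as to maximize the global partition function, as described in the caption of the figure above.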
In general, for this kind of blocking scheme in $d$ dimensions, the correlations that need to be accounted for at each level of the renormalisation grow as $L^{d-2}$. Thus, the TRG fails to account for what one might call a ‘corner’ law for correlations [@GuifrePrivate; @Riera2013], related to the quantum area law for entanglement entropy that manifests naturally in a $(d+1)$-dimensional quantum theory, where one dimension is time. The structure of the TRG is strikingly similar to the TTN and one might suppose that a similar analysis in the limit of large 3D systems with insufficient $\chi$ might hold. Numerical evidence in the 3D Ising model appears to support this claim. I implemented the 3D higher-order SRG (HOSRG) approach from Ref. [@Xie2012], using an SVD-update [@Tagliacozzo2009; @Evenbly2007] to optimize the tensors to maximize the (global) partition function. The computational effort scales as $\mathcal{O}(\chi^{11} \ln L)$, and we are able to study much larger systems, with $L = 2^{12}$. In [Fig. \[fig:srgplots\]]{} (a) we see a clear indication of the ‘corner’ errors in the local energy, in a similar fashion to the ‘window’ pattern in [Fig. \[fig:xmag\]]{}. In [Fig. \[fig:srgplots\]]{} (b) the average energy and center site energy are compared for different values of $\chi$, and we see that even close to the critical point (at $T\approx4.1$) the local quantity converges to the Monte Carlo prediction much more rapidly. ![(Color online) Predictions for the 3D Ising model at $T=4$ (less than the critical temperature of $T\approx 4.1$) using the HOSRG approach. (a) The bond-energy of the $z$-bonds through a cross-section of the $x$–$y$ plane, where the value at the center site labeled $x,y=21$ has been subtracted. The 3D renormalization scheme displays errors predominantly on the corners of the renormalized regions. (b) The energy (per-site) for different values of $\chi$, compared to classical Monte Carlo results. The red crosses correspond to system-wide averages, while the blue points correspond to the central sites of the renormalization. Similar to the quantum TTN, the latter converge much faster than the former. The dashed line corresponds to Monte Carlo simulations of a 48$\times$48$\times$48 lattice. \[fig:srgplots\]](srg_3d_ising_ZZ.eps "fig:"){width="0.5\columnwidth"}![(Color online) Predictions for the 3D Ising model at $T=4$ (less than the critical temperature of $T\approx 4.1$) using the HOSRG approach. (a) The bond-energy of the $z$-bonds through a cross-section of the $x$–$y$ plane, where the value at the center site labeled $x,y=21$ has been subtracted. The 3D renormalization scheme displays errors predominantly on the corners of the renormalized regions. (b) The energy (per-site) for different values of $\chi$, compared to classical Monte Carlo results. The red crosses correspond to system-wide averages, while the blue points correspond to the central sites of the renormalization. Similar to the quantum TTN, the latter converge much faster than the former. The dashed line corresponds to Monte Carlo simulations of a 48$\times$48$\times$48 lattice. \[fig:srgplots\]](srg_3d_ising_E.eps "fig:"){width="0.5\columnwidth"} Discussion {#sec:conclusion} ========== We have analyzed real-space renormalization procedures that are unable to account for local correlations in higher-dimensional quantum and classical systems. 
The anisotropy of tree-structured ansätze, such as TTN and TRG, can be taken advantage of to identify sites that are central to the renormalization, providing an (exponentially more) accurate description of local quantities. The scaling of accuracy with numerical cost is expected to be polynomial (for gapped systems), formally putting the technique on a similar footing to 2D DMRG of small systems and quantum Monte Carlo (though in practice the algorithms used here might not be as efficient). There has been some reluctance to use a severely under-correlated ansatz for a quantum wave-function, where recent focus has been on DMRG in small geometries such as narrow cylinders. On the other hand, the TRG approach has shown very promising results in 3D classical (and 2D quantum) systems, while there has been less discussion on the inherent inability to account for all correlations in a large system. In fact, it is possible that the central-site technique has been implemented in these studies in the past. From here, two possible directions to increase the effectiveness of real-space RG techniques in higher dimensions become apparent. The first would be to find more efficient algorithms in the under-correlated regime. For example, the HOSRG was a step in this direction [@Xie2012], compared to earlier 3D TRG algorithms. In the present work, it could be beneficial to replace the TTN with an MPS having a tree-like structure, investigated already in Ref. [@Xiang2001]. The cost would reduce to $\mathcal{O}(\chi^3 L^2)$, but one would still be limited to moderate system sizes. The second approach would be to use an ansatz that takes into account the correlation structure of the system. PEPS and MERA already exist to describe higher-dimensional quantum systems. In the realm of classical Markov networks, progress was made in Ref. [@Gu2009] to use a TRG-like approach to take account of all local correlations (for 2D classical systems). However, a more direct, MERA-like approach to 2D and 3D classical systems would represent a major advancement in this field. Acknowledgements {#acknowledgements .unnumbered} ---------------- I would like to thank Guifre Vidal and David Poulin for discussions. This work was supported by NSERC and FQRNT through the network INTRIQ, as well as the visitor programme at the Perimeter Institute for Theoretical Physics.
{ "pile_set_name": "ArXiv" }
--- abstract: 'We generalise Delhommé’s result that each tree-automatic ordinal is strictly below $\omega^{\omega^\omega}$ by showing that any tree-automatic linear ordering has ${\operatorname{FC}}$-rank strictly below $\omega^\omega$. We further investigate a restricted form of tree-automaticity and prove that every linear ordering which admits a tree-automatic presentation of branching complexity at most $k\in{\mathbb{N}}$ has ${\operatorname{FC}}$-rank strictly below $\omega^k$.' author: - Martin Huschenbett bibliography: - 'library.bib' title: 'The Rank of Tree-Automatic Linear Orderings' --- Introduction ============ In [@Del04], Delhommé showed that an ordinal $\alpha$ is string-automatic if, and only if, $\alpha<\omega^\omega$ and it is tree-automatic if, and only if, $\alpha<\omega^{\omega^\omega}$. Khoussainov, Rubin, and Stephan [@KRS05] extended his technique to prove that every string-automatic linear ordering has finite ${\operatorname{FC}}$-rank. Although it is commonly expected that every tree-automatic linear ordering has ${\operatorname{FC}}$-rank below $\omega^\omega$, this conjecture has not been verified yet.[^1] We close this gap by providing the missing proof (Theorem \[thm:main\]). As part of this, we give a full proof of Delhommé’s decomposition theorem for tree-automatic structures (Theorem \[thm:delhomme\]). Afterwards, we investigate a restricted form of tree-automaticity where the branching complexity of the trees involved is bounded. We show that each linear ordering which admits a tree-automatic presentation of branching complexity $k\in{\mathbb{N}}$ has ${\operatorname{FC}}$-rank below $\omega^k$ (Theorem \[thm:main\_bounded\_rank\]). As a consequence, we obtain that an ordinal $\alpha$ admits a tree-automatic presentation whose branching complexity is bounded by $k$ if, and only if, $\alpha<\omega^{\omega^k}$. Tree-Automatic Structures ========================= This section recalls the basic notions of tree-automatic structures (cf. [@BGR11; @Blu99]). Let $\Sigma$ be an alphabet. The set of all *(finite) words* over $\Sigma$ is denoted by $\Sigma^\star$ and the *empty word* by $\varepsilon$. A *tree domain* is a finite, prefix-closed subset $D\subseteq{{\{0,1\}}^\star}$. The *boundary* of $D$ is the set $\partial D=\set{ ud | u\in D,d\in{\{0,1\}},ud\not\in D}$ if $D$ is not empty and $\partial\emptyset=\{\varepsilon\}$ otherwise. A *$\Sigma$-tree* (or just *tree*) is a map $t\colon D\to\Sigma$ where ${\operatorname{dom}}(t)=D$ is a tree domain. The *empty tree* is the unique $\Sigma$-tree $t$ with ${\operatorname{dom}}(t)=\emptyset$. The set of all $\Sigma$-trees is denoted by ${T_{\Sigma\ifthenelse{\equal{\empty}{\empty}}{}{,\empty}}}$ and its subsets are called *(tree) languages*. 
For $t\in{T_{\Sigma\ifthenelse{\equal{\empty}{\empty}}{}{,\empty}}}$ and $u\in{\operatorname{dom}}(t)$ the *subtree* of $t$ rooted at $u$ is the tree $t{\mathord{\restriction} u}\in{T_{\Sigma\ifthenelse{\equal{\empty}{\empty}}{}{,\empty}}}$ defined by $${\operatorname{dom}}(t{\mathord{\restriction} u}) = \set{ v\in\{0,1\}^\star | uv\in{\operatorname{dom}}(t) } \quad\text{and}\quad (t{\mathord{\restriction} u})(v) = t(uv)\,.$$ For $u_1,\dotsc,u_n\in{\operatorname{dom}}(t)\cup\partial{\operatorname{dom}}(t)$ which are mutually no prefixes of each other and trees $t_1,\dotsc,t_n\in{T_{\Sigma\ifthenelse{\equal{\empty}{\empty}}{}{,\empty}}}$ we consider the tree $t[u_1/t_1,\dotsc,u_n/t_n]\in{T_{\Sigma\ifthenelse{\equal{\empty}{\empty}}{}{,\empty}}}$, Intuitively, $t[u_1/t_1,\dotsc,u_n/t_n]$ is obtained from $t$ by simultaneously replacing for each ${i=1,\dotsc,n}$ the subtree rooted a $u_i$ by $t_i$. Formally, $${\operatorname{dom}}\bigl(t[u_1/t_1,\dotsc,u_n/t_n]\bigr) = {\operatorname{dom}}(t)\setminus\bigl(\{u_1,\dotsc,u_n\}\{0,1\}^\star\}\bigr)\cup \bigcup_{1\leq i\leq n} \{u_i\}{\operatorname{dom}}(t_i)$$ and $$\bigl(t[u_1/t_1,\dotsc,u_n/t_n]\bigr)(u) = \begin{cases} t_i(v) & \text{if $u=u_iv$ for some (unique) ${i\in\{1,\dotsc,n\}}$}\,, \\ t(u) & \text{otherwise}\,. \end{cases}$$ A *(deterministic bottom-up) tree automaton* ${\mathcal{A}}=(Q,\iota,\delta,F)$ over $\Sigma$ consists of a finite set $Q$ of *states*, a *start state* $\iota\in Q$, a *transition function* $\delta\colon \Sigma\times Q\times Q\to Q$, and a set $F\subseteq Q$ of *accepting* states. For all $t\in{T_{\Sigma\ifthenelse{\equal{\empty}{\empty}}{}{,\empty}}}$, $u\in{\operatorname{dom}}(t)\cup\partial{\operatorname{dom}}(t)$, and maps $\rho\colon U\to Q$ with $U\subseteq\partial{\operatorname{dom}}(t)$ a state ${\mathcal{A}}(t,u,\rho)\in Q$ is defined recursively by $${\mathcal{A}}(t,u,\rho) = \begin{cases} \delta\bigl(t(u),{\mathcal{A}}(t,u0,\rho),{\mathcal{A}}(t,u1,\rho)\bigr) & \text{if $u\in{\operatorname{dom}}(t)$,} \\ \rho(u) & \text{if $u\in U$,} \\ \iota & \text{if $u\in\partial{\operatorname{dom}}(t)\setminus U$.} \end{cases}$$ The second parameter is omitted if $u=\varepsilon$ and the third one if $U=\emptyset$. Notice that ${{\mathcal{A}}(t,u)={\mathcal{A}}(t{\mathord{\restriction} u})}$. The tree language *recognised* by ${\mathcal{A}}$ is the set $$L({\mathcal{A}}) = \Set{ t\in{T_{\Sigma\ifthenelse{\equal{\empty}{\empty}}{}{,\empty}}} | {\mathcal{A}}(t)\in F }$$ of all trees which yield an accepting state at their root. A language $L\subseteq{T_{\Sigma\ifthenelse{\equal{\empty}{\empty}}{}{,\empty}}}$ is *regular* if it can be recognised by some tree automaton. Let $\Box\not\in\Sigma$ be a new symbol and $\Sigma_\Box=\Sigma\cup\{\Box\}$. The *convolution* of an $n$-tuple ${\bar t=(t_1,\dotsc,t_n)\in({T_{\Sigma\ifthenelse{\equal{\empty}{\empty}}{}{,\empty}}})^n}$ of trees is the tree $\otimes\bar t\in{T_{\Sigma_\Box^n\ifthenelse{\equal{\empty}{\empty}}{}{,\empty}}}$ defined by $${\operatorname{dom}}(\otimes\bar t)={\operatorname{dom}}(t_1)\cup\dotsb\cup{\operatorname{dom}}(t_n) \quad\text{and}\quad (\otimes\bar t)(u)=\bigl(t'_1(u),\dotsc,t'_n(u)\bigr)\,,$$ where $t'_i(u)=t_i(u)$ if $u\in{\operatorname{dom}}(t_i)$ and $t'_i(u)=\Box$ otherwise. A relation $R\subseteq({T_{\Sigma\ifthenelse{\equal{\empty}{\empty}}{}{,\empty}}})^n$ is *automatic* if the tree language $$\otimes R = \set{ \otimes\bar t | \bar t\in R }\subseteq{T_{\Sigma_\Box^n\ifthenelse{\equal{\empty}{\empty}}{}{,\empty}}}$$ is regular. 
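Although the paper works with these notions purely abstractly, the definitions are directly executable. The following Python sketch (an illustration added here, not part of the paper) encodes a $\Sigma$-tree as a dictionary from addresses in $\{0,1\}^\star$ to labels and implements the run ${\mathcal{A}}(t,u)$ of a deterministic bottom-up tree automaton (with $U=\emptyset$, so that missing children evaluate to the start state $\iota$) together with the convolution $\otimes\bar t$; the example automaton, which accepts the trees containing an even number of $a$-labelled nodes, is of course only a toy.

```python
def run(t, delta, iota, u=''):
    """A(t, u): children outside dom(t) evaluate to the start state iota."""
    if u not in t:
        return iota
    return delta(t[u], run(t, delta, iota, u + '0'), run(t, delta, iota, u + '1'))

def accepts(t, delta, iota, F):
    """t lies in L(A) iff the state at the root is accepting."""
    return run(t, delta, iota) in F

def convolution(trees, pad='#'):
    """Convolution of a tuple of trees: union of the domains, with missing
    positions padded by a fresh symbol (the Box of the text)."""
    dom = set().union(*(set(t) for t in trees))
    return {u: tuple(t.get(u, pad) for t in trees) for u in dom}

# Toy automaton over {a, b}: the state is the parity of the number of
# a-labelled nodes in the subtree, and even parity is accepting.
iota, F = 0, {0}
delta = lambda sigma, q0, q1: (q0 + q1 + (sigma == 'a')) % 2

t1 = {'': 'a', '0': 'b', '1': 'a', '00': 'b'}   # domain {eps, 0, 1, 00}
t2 = {'': 'b', '1': 'b'}
print(accepts(t1, delta, iota, F))              # True: two a-labelled nodes
print(convolution((t1, t2)))                    # labels in (Sigma_Box)^2
```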
We say a tree automaton *recognises* $R$ if it recognises $\otimes R$. A *(relational) signature* $\tau=({\mathcal{R}},{\operatorname{ar}})$ is a finite set ${\mathcal{R}}$ of *relation symbols* together with an *arity map* ${\operatorname{ar}}\colon{\mathcal{R}}\to{\mathbb{N}}_+$. A $\tau$-structure ${\mathfrak{A}}=\bigl(A;(R^{{\mathfrak{A}}})_{R\in{\mathcal{R}}}\bigr)$ consists of a set $A={\Vert {\mathfrak{A}}\Vert}$, its *universe*, and an ${\operatorname{ar}}(R)$-ary relation $R^{{\mathfrak{A}}}\subseteq A^{{\operatorname{ar}}(R)}$ for each $R\in{\mathcal{R}}$.[^2] Given a subset $B\subseteq A$, the *induced substructure* ${\mathfrak{A}}{\mathord{\restriction} B}$ is defined by $${\Vert {\mathfrak{A}}{\mathord{\restriction} B}\Vert} = B \quad\text{and}\quad R^{{\mathfrak{A}}{\mathord{\restriction} B}} = R^{{\mathfrak{A}}}\cap B^{{\operatorname{ar}}(R)}\ \text{for $R\in{\mathcal{R}}$.}$$ *First order logic* ${\mathsf{FO}}$ over $\tau$ is defined as usual and ${{\mathsf{FO}}(\exists^\infty)}$ is its extension by the “there exist infinitely many”-quantifier $\exists^\infty$. Writing $\phi(x_1,\dotsc,x_n)$ means that all free variables of the formula $\phi$ are among the $x_i$. For a formula $\phi(x_1,\dotsc,x_m,y_1,\dotsc,y_n)$ and a tuple $\bar b\in A^n$ we let $$\phi^{{\mathfrak{A}}}\bigl(\cdot,\bar b\bigr) = \Set{ \bar a\in A^m | {\mathfrak{A}}\models\phi\bigl(\bar a,\bar b\bigr) }\,.$$ If $n=0$ we simply write $\phi^{{\mathfrak{A}}}$ instead of $\phi^{{\mathfrak{A}}}(\cdot)$. A *tree-automatic presentation* of a $\tau$-structure ${\mathfrak{A}}$ is a tuple $\bigl({\mathcal{A}};({\mathcal{A}}_R)_{R\in{\mathcal{R}}}\bigr)$ of tree automata such that there exists a bijective *naming function* $\mu\colon A\to L({\mathcal{A}})$ with the property that ${\mathcal{A}}_R$ recognises $\mu(R^{{\mathfrak{A}}})$ for each $R\in{\mathcal{R}}$. A $\tau$-structure is *tree-automatic* if it admits a tree-automatic presentation. In the situation above, the structure $\mu({\mathfrak{A}})=\bigl(\mu(A);(\mu(R^{{\mathfrak{A}}}))_{R\in{\mathcal{R}}}\bigr)$ is isomorphic to ${\mathfrak{A}}$ and called a *tree-automatic copy* of ${\mathfrak{A}}$. Let ${\mathfrak{A}}$ be a tree-automatic structure, $\bar{{\mathcal{A}}}$ a tree-automatic presentation of ${\mathfrak{A}}$, $\mu$ the corresponding naming function, and $\phi(\bar x)$ an ${{\mathsf{FO}}(\exists^\infty)}$-formula over $\tau$. Then the relation $\mu(\phi^{{\mathfrak{A}}})$ is automatic and one can compute a tree automaton recognising it from $\bar{{\mathcal{A}}}$ and $\phi$. Every tree-automatic structure possesses a decidable ${{\mathsf{FO}}(\exists^\infty)}$-theory. Delhommé’s Decomposition Technique ================================== In this section, we present the decomposition technique Delhommé used to show that every tree-automatic ordinal is below $\omega^{\omega^\omega}$. Sum and Box Augmentations and the Decomposition Theorem ------------------------------------------------------- The central notions of Delhommé’s technique are sum augmentations and box augmentations. \[def:sum\_augmentation\] A $\tau$-structure ${\mathfrak{A}}$ is a *sum augmentation* of $\tau$-structures ${\mathfrak{B}}_1,\dotsc,\allowbreak{\mathfrak{B}}_n$ if there exists a finite partition $A=A_1\uplus\dotsb\uplus A_n$ of ${\mathfrak{A}}$ such that ${\mathfrak{A}}{\mathord{\restriction} A_i}\cong{\mathfrak{B}}_i$ for each ${i=1,\dotsc,n}$. 
\[ex:sum\_augmentation\] Let ${\mathfrak{B}}_1,\dotsc,{\mathfrak{B}}_n$ be linear orderings and ${\mathfrak{A}}$ a linearisation of the partial ordering ${\mathfrak{B}}_1\amalg\dotsb\amalg{\mathfrak{B}}_n=\bigl(\biguplus_{1\leq i\leq n} B_i;\preceq)$ with $x\preceq y$ iff $x,y\in B_i$ and $x\leq^{{\mathfrak{B}}_i} y$ for some $i$. Then ${\mathfrak{A}}$ is a sum augmentation of ${\mathfrak{B}}_1,\dotsc,{\mathfrak{B}}_n$. \[rem:sum\_augmentation\] Suppose a linear ordering ${\mathfrak{A}}=(A;\leq^{{\mathfrak{A}}})$ is a sum augmentation of ${\mathfrak{B}}_1,\dotsc,\allowbreak{\mathfrak{B}}_n$. First, each ${\mathfrak{B}}_i$ can be embedded into ${\mathfrak{A}}$ and hence is a linear ordering itself. Moreover, if ${\mathfrak{A}}$ is a well-ordering, then each ${\mathfrak{B}}_i$ is a well-ordering too. Second, ${\mathfrak{A}}$ is isomorphic to a linearisation of ${\mathfrak{B}}_1\amalg\dotsb\amalg{\mathfrak{B}}_n$. \[def:box\_augmentation\] A $\tau$-structure ${\mathfrak{A}}$ is a *box augmentation* of $\tau$-structures ${\mathfrak{B}}_1,\dotsc,\allowbreak{\mathfrak{B}}_n$ if there exists a bijection $f\colon B_1\times\dotsb\times B_n\to A$ such that for all ${j=1,\dotsc,n}$ and $\bar x\in \prod_{1\leq i\leq n,i\not=j} B_i$ the map $$f_{j,\bar x}\colon B_j\to A,b\mapsto (x_1,\dotsc,x_{j-1},b,x_{j+1},\dotsc,x_n)$$ is an embedding of ${\mathfrak{B}}_j$ into ${\mathfrak{A}}$. \[ex:box\_augmentation\] Let ${\mathfrak{B}}_1,\dotsc,{\mathfrak{B}}_n$ be linear orderings and ${\mathfrak{A}}$ a linearisation of the partial ordering ${\mathfrak{B}}_1\times\dotsb\times{\mathfrak{B}}_n=\bigl(B_1\times\dotsb\times B_n;\preceq)$ with $\bar x\preceq \bar y$ iff $x_i \leq^{{\mathfrak{B}}_i} y_i$ for all ${i=1,\dotsc,n}$. Then ${\mathfrak{A}}$ is a box augmentation of ${\mathfrak{B}}_1,\dotsc,{\mathfrak{B}}_n$. \[rem:box\_augmentation\] Suppose a linear ordering ${\mathfrak{A}}$ is a box augmentation of ${\mathfrak{B}}_1,\dotsc,\allowbreak{\mathfrak{B}}_n$. First, each ${\mathfrak{B}}_i$ can be embedded into ${\mathfrak{A}}$ and hence is a linear ordering itself. Moreover, if ${\mathfrak{A}}$ is a well-ordering, then each ${\mathfrak{B}}_i$ is a well-ordering too. Second, the bijection $f$ from Definition \[def:box\_augmentation\] above is an isomorphism between a linearisation of ${\mathfrak{B}}_1\times\dotsb\times{\mathfrak{B}}_n$ and ${\mathfrak{A}}$. Since the concept of box augmentations is too general for our purposes, we need to restrict it. In the following definition, an *$R$-colouring* of a $\tau$-structure ${\mathfrak{B}}$ is a map ${c\colon B^{{\operatorname{ar}}(R)}\to Q}$ into a finite set $Q$ such that $c(\bar t)\in c(R^{{\mathfrak{B}}})$ iff $\bar t\in R^{{\mathfrak{B}}}$ for all $\bar t\in B^{{\operatorname{ar}}(R)}$. \[def:tame\_box\_augmentation\] The box augmentation in Definition \[def:box\_augmentation\] is a *tame box augmentation* if for each $R\in{\mathcal{R}}$ the following condition holds: For every ${i=1,\dotsc,n}$ there exists an $R$-colouring $c_i\colon B_i^{{\operatorname{ar}}(R)}\to Q_i$ of ${\mathfrak{B}}_i$ such that the map $$ \prod\nolimits_{1\leq i\leq n} Q_i, \bigl(f(\bar x_1),\dotsc,f(\bar x_r)\bigr)\mapsto \bigl(c_i(x_{1,i},\dotsc,x_{r,i})\bigr){}_{{i=1,\dotsc,n}}$$ is an $R$-colouring of ${\mathfrak{A}}$. Suppose a linear ordering ${\mathfrak{A}}$ is a tame box augmentation of ${\mathfrak{B}}_1,\dotsc,\allowbreak{\mathfrak{B}}_n$. For each ${i=1,\dotsc,n}$ let $c_i\colon B_i^2\to Q_i$ be the corresponding $\leq$-colouring of ${\mathfrak{B}}_i$. 
Without loss of generality, assume that the $Q_i$ are all the same set, say $\{1,\dotsc,m\}$. For each ${i=1,\dotsc,n}$ consider the structure ${\mathfrak{C}}_i=\bigl(B_i;R_1^{{\mathfrak{C}}_i},\dotsc,R_m^{{\mathfrak{C}}_i}\bigr)$ with $R_j^{{\mathfrak{C}}_i}=c_i^{-1}(j)$. Then the $R_j^{{\mathfrak{C}}_i}$ form a finite partition of $B_i^2$ which is compatible with $\leq^{{\mathfrak{B}}_i}$. Finally, the ordering ${\mathfrak{A}}$ is a generalised product—in the sense of Feferman and Vaught—of the structures ${\mathfrak{C}}_1,\dotsc,{\mathfrak{C}}_n$ where only atomic formulae are used. More generally, the very essence of the notion of a tame box augmentation is to first partition all relations as well as their complements and to take a generalised product afterwards. \[rem:tame\_box\_augmentation\] If ${\mathfrak{A}}$ is a tame box augmentation of ${\mathfrak{B}}_1,\dotsc,{\mathfrak{B}}_n$ and ${X_i\subseteq B_i}$ for each $i$, then ${\mathfrak{A}}{\mathord{\restriction} f(X_1\times\dotsb\times X_n)}$ is a tame box augmentation of ${\mathfrak{B}}_1{\mathord{\restriction} X_1},\dotsc,{\mathfrak{B}}_n{\mathord{\restriction} X_n}$ via the bijection ${f{\mathord{\restriction} (X_1\times\dotsb\times X_n)}}$. In the situations of Definitions \[def:sum\_augmentation\], \[def:box\_augmentation\], and \[def:tame\_box\_augmentation\] we also say that the structures ${\mathfrak{B}}_1,\dotsc,{\mathfrak{B}}_n$ form a *sum decomposition* respectively a *(tame) box decomposition* of ${\mathfrak{A}}$. The decomposition theorem for tree-automatic structures is the following, whose proof is postponed to Section \[sec:proof\_thm:delhomme\]. \[thm:delhomme\] Let ${\mathfrak{A}}$ be a tree-automatic $\tau$-structure and $\phi(x,y_1,\dotsc,y_n)$ an ${{\mathsf{FO}}(\exists^\infty)}$-formula over $\tau$. Then there exists a finite set ${\mathcal{S}}_\phi^{{\mathfrak{A}}}$ of tree-automatic $\tau$-structures such that for all $\bar s\in A^n$ the structure ${\mathfrak{A}}{\mathord{\restriction} \phi^{{\mathfrak{A}}}(\cdot,\bar s)}$ is a sum augmentation of tame box augmentations of elements from ${\mathcal{S}}_\phi^{{\mathfrak{A}}}$. For now, suppose that ${\mathcal{C}}$ is a class of $\tau$-structures ranked by $\nu$, i.e., $\nu$ assigns to each structure ${\mathfrak{A}}\in{\mathcal{C}}$ an ordinal $\nu({\mathfrak{A}})$, its *$\nu$-rank*, which is invariant under isomorphism. An ordinal $\alpha$ is *$\nu$-sum-indecomposable* if for any structure ${\mathfrak{A}}\in{\mathcal{C}}$ with $\nu({\mathfrak{A}})=\alpha$ every sum decomposition ${\mathfrak{B}}_1,\dotsc,{\mathfrak{B}}_n$ of ${\mathfrak{A}}$ contains a component ${\mathfrak{B}}_i$ with ${\mathfrak{B}}_i\in{\mathcal{C}}$ and $\nu({\mathfrak{B}}_i)=\alpha$. Similarly, we define *$\nu$-(tame-)box-indecomposable* ordinals. Notice that every $\nu$-box-indecomposable ordinal is also $\nu$-tame-box-indecomposable. The following corollary is a direct consequence of Theorem \[thm:delhomme\]. \[cor:delhomme\_indecomposability\] Let ${\mathcal{C}}$ be a class of $\tau$-structures ranked by $\nu$, ${\mathfrak{A}}$ a tree-automatic $\tau$-structure, and $\phi(x,y_1,\dotsc,y_n)$ an ${{\mathsf{FO}}(\exists^\infty)}$-formula over $\tau$.
Then there are only finitely many ordinals $\alpha$ which are simultaneously $\nu$-sum-indecomposable as well as $\nu$-tame-box-indecomposable and admit a $\bar s\in A^n$ with ${\mathfrak{A}}{\mathord{\restriction} \phi^{{\mathfrak{A}}}(\cdot,\bar s)}\in{\mathcal{C}}$ and $\nu\bigl({\mathfrak{A}}{\mathord{\restriction} \phi^{{\mathfrak{A}}}(\cdot,\bar s)}\bigr)=\alpha$. Let ${\mathcal{S}}_\phi^{{\mathfrak{A}}}$ be the finite set of structures which exists by Theorem \[thm:delhomme\]. Consider an ordinal $\alpha$ which is $\nu$-sum-indecomposable as well as $\nu$-tame-box-indecomposable and admits a tuple $\bar s\in A^n$ with ${\mathfrak{A}}{\mathord{\restriction} \phi^{{\mathfrak{A}}}(\cdot,\bar s)}\in{\mathcal{C}}$ and $\nu\bigl({\mathfrak{A}}{\mathord{\restriction} \phi^{{\mathfrak{A}}}(\cdot,\bar s)}\bigr)=\alpha$. Then there exists a sum decomposition ${\mathfrak{B}}_1,\dotsc,{\mathfrak{B}}_m$ of ${\mathfrak{A}}{\mathord{\restriction} \phi^{{\mathfrak{A}}}(\cdot,\bar s)}$ such that each ${\mathfrak{B}}_i$ is a tame box augmentation of elements from ${\mathcal{S}}_\phi^{{\mathfrak{A}}}$. Since $\alpha$ is $\nu$-sum-indecomposable, there is an ${i_0\in\{1,\dotsc,m\}}$ such that ${\mathfrak{B}}_{i_0}\in{\mathcal{C}}$ and $\nu({\mathfrak{B}}_{i_0})=\alpha$. Moreover, there exists a tame box decomposition ${\mathfrak{C}}_1,\dotsc,{\mathfrak{C}}_n$ of ${\mathfrak{B}}_{i_0}$ such that ${\mathfrak{C}}_j\in{\mathcal{S}}_\phi^{{\mathfrak{A}}}$ for each ${j=1,\dotsc,n}$. As $\alpha$ is also $\nu$-tame-box-indecomposable, there is a ${j_0\in\{1,\dotsc,n\}}$ such that ${\mathfrak{C}}_{j_0}\in{\mathcal{C}}$ and $\nu({\mathfrak{C}}_{j_0})=\alpha$. In particular, ${\mathcal{S}}_\phi^{{\mathfrak{A}}}$ contains a structure ${\mathfrak{B}}$ with ${\mathfrak{B}}\in{\mathcal{C}}$ and $\nu({\mathfrak{B}})=\alpha$. Since ${\mathcal{S}}_\phi^{{\mathfrak{A}}}$ is finite, there are only finitely many ordinals $\alpha$ of the type under consideration. Tree-Automatic Ordinals ----------------------- In order to prove that every tree-automatic ordinal is strictly below $\omega^{\omega^\omega}$, we apply Corollary \[cor:delhomme\_indecomposability\] to the class of all well-orderings and rank each well-ordering ${\mathfrak{A}}$ by its order type ${\operatorname{tp}}({\mathfrak{A}})$. To identify the ${\operatorname{tp}}$-sum-indecomposable and ${\operatorname{tp}}$-box-indecomposable ordinals, we need the natural sum and product. Due to the Cantor normal form, every ordinal can be regarded as a polynomial in $\omega$ with natural numbers as coefficients and ordinals as exponents. Intuitively, the natural sum of two ordinals is formed by adding the corresponding polynomials and the natural product by multiplying the polynomials whereby exponents are added using the natural sum. Formally, let $\alpha=\sum_{i=1}^{i=n} \omega^{\gamma_i}k_i$ and $\beta=\sum_{i=1}^{i=n} \omega^{\gamma_i}\ell_i$ with $\gamma_1>\dotsb>\gamma_n\geq 0$ and $k_1,\dotsc,k_n,\ell_1,\dotsc,\ell_n\in{\mathbb{N}}$ be two ordinals in Cantor normal form. The *natural sum* $\alpha\oplus\beta$ and the *natural product* $\alpha\otimes\beta$ are defined by $$\alpha\oplus\beta = \sum\nolimits_{i=1}^{i=n} \omega^{\gamma_i}(k_i+\ell_i) \qquad\text{and}\qquad \alpha\otimes\beta = \bigoplus\nolimits_{i,j=1}^{i,j=n} \omega^{\gamma_i\oplus\gamma_j} k_i\ell_j\,.$$ Compared with the usual addition and multiplication of ordinals, both operations are commutative and strictly monotonic in both arguments and $\otimes$ distributes over $\oplus$.
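As a concrete illustration of these operations, the following small Python sketch computes $\oplus$ and $\otimes$ for ordinals below $\omega^\omega$; in this range the exponents in the Cantor normal form are natural numbers, so an ordinal can be stored simply as the list of its coefficients (the coefficient-list representation is chosen merely for illustration).

```python
# Ordinals below omega^omega in Cantor normal form, stored as coefficient lists:
# coeffs[i] is the coefficient of omega^i.

def natural_sum(a, b):
    """Natural sum: add the two polynomials coefficient-wise."""
    n = max(len(a), len(b))
    a = a + [0] * (n - len(a))
    b = b + [0] * (n - len(b))
    return [x + y for x, y in zip(a, b)]

def natural_product(a, b):
    """Natural product: multiply the polynomials; exponents are added."""
    if not a or not b:
        return []
    result = [0] * (len(a) + len(b) - 1)
    for i, ka in enumerate(a):
        for j, kb in enumerate(b):
            result[i + j] += ka * kb
    return result

# Example: alpha = omega^2*2 + omega*3 + 1 and beta = omega^2 + 5.
alpha, beta = [1, 3, 2], [5, 0, 1]
natural_sum(alpha, beta)      # [6, 3, 3]         = omega^2*3 + omega*3 + 6
natural_product(alpha, beta)  # [5, 15, 11, 3, 2] = omega^4*2 + omega^3*3 + omega^2*11 + omega*15 + 5
```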
The following theorem is an adaption of results in [@Car42] to our setting. \[thm:caruth\] Let $\alpha$ and $\beta_1,\dotsc,\beta_n$ be ordinals. 1. If $\alpha$ is a sum augmentation of $\beta_1,\dotsc,\beta_n$, then $\alpha \leq \beta_1\oplus\dotsb\oplus\beta_n$. 2. If $\alpha$ is a box augmentation of $\beta_1,\dotsc,\beta_n$, then $\alpha \leq \beta_1\otimes\dotsb\otimes\beta_n$. \[cor:indecomposable\_ordinals\] Let $\alpha$ be an ordinal. Then $\omega^\alpha$ is ${\operatorname{tp}}$-sum-indecomposable and $\omega^{\omega^\alpha}$ is ${\operatorname{tp}}$-box-indecomposable. Let $\beta_1,\dotsc,\beta_n$ be a sum decomposition of $\omega^\alpha$. Then $\beta_i\leq\omega^\alpha$ for each $i$. If $\beta_i<\omega^\alpha$ for all $i$, then $\beta_1\oplus\dotsb\oplus\beta_n<\omega^\alpha$. This contradicts Theorem \[thm:caruth\] (1). Now, let $\beta_1,\dotsc,\beta_n$ be a box decomposition of $\omega^{\omega^\alpha}$. Then $\beta_i\leq\omega^{\omega^\alpha}$ for each $i$. By contradiction, assume $\beta_i<\omega^{\omega^\alpha}$ for all $i$. Since $\omega^{\omega^\alpha}$ is a limit ordinal, there are $\gamma_i<\omega^\alpha$ with $\beta_i<\omega^{\gamma_i}$ and hence $$\beta_1\otimes\dotsb\otimes\beta_n < \omega^{\gamma_1\oplus\dotsb\oplus\gamma_n} < \omega^{\omega^\alpha}\,.$$ This contradicts Theorem \[thm:caruth\] (2). Finally, Corollaries \[cor:delhomme\_indecomposability\] and \[cor:indecomposable\_ordinals\] imply that any tree-automatic ordinal is strictly less than $\omega^{\omega^\omega}$. The main ingredient for the converse implication is the following lemma. \[lemma:ordinals\_are\_tree\_automatic\] For each $k\in{\mathbb{N}}$ the ordinal $\omega^{\omega^k}$ admits a tree-automatic presentation over a unary alphabet $\Sigma$. We proceed by induction on $k$. ##### Base case. {#base-case. .unnumbered} $k=0$.\ The map $\mu\colon\omega\to{T_{\Sigma\ifthenelse{\equal{\empty}{\empty}}{}{,\empty}}}$ which assigns to $n\in\omega$ the unique tree $\mu(n)$ with ${{\operatorname{dom}}\bigl(\mu(n)\bigr) = \{0\}^{<n}}$ can be used as naming function for a tree-automatic presentation of $\omega$. ##### Inductive step. {#inductive-step. .unnumbered} $k>0$.\ We regard $\omega^{\omega^k}$ as the length-lexicographically ordered set of all maps $f\colon\omega\to\omega^{\omega^{k-1}}$ which are zero almost everywhere. Let $\nu$ be the naming function corresponding to the tree-automatic presentation of $\omega^{\omega^{k-1}}$ which exists by induction. We define a map $\mu\colon\omega^{\omega^k}\to{T_{\Sigma\ifthenelse{\equal{\empty}{\empty}}{}{,\empty}}}$ by letting $\mu(f)$ be the unique tree with $${\operatorname{dom}}\bigl(\mu(f)\bigr) = \bigcup_{0\leq i<n} \bigl\{0^i\bigr\} \cup \bigl\{0^i1\}{\operatorname{dom}}\bigl(\nu\bigl(f(i)\bigr)\bigr)\,,$$ where $n\in\omega$ is minimal with $f(m)=0$ for all $m\geq n$. This map can be used as naming function for a tree-automatic presentation of $\omega^{\omega^k}$. \[cor:main\_ordinal\] An ordinal $\alpha$ is tree-automatic if, and only if, $$\alpha < \omega^{\omega^\omega}\,.$$ By contradiction, assume there exists a tree-automatic ordinal ${\alpha\geq\omega^{\omega^\omega}}$. Consider $\phi(x,y)=x\leq y\land x\not=y$. Clearly, $\phi^\alpha\bigl(\cdot,\beta)=\beta$ for every $\beta\in\alpha$. In particular, ${\operatorname{tp}}\bigl(\alpha{\mathord{\restriction} \phi^\alpha\bigl(\cdot,\omega^{\omega^d}\bigr)}\bigr)=\omega^{\omega^d}$ for each $d\in{\mathbb{N}}$. 
Since these ordinals $\omega^{\omega^d}$ are ${\operatorname{tp}}$-sum-indecomposable as well as ${\operatorname{tp}}$-box-indecomposable, this contradicts Corollary \[cor:delhomme\_indecomposability\]. Now, let $\alpha<\omega^{\omega^\omega}$ be some ordinal. There exists a $k\in{\mathbb{N}}$ such that $\alpha<\omega^{\omega^k}$. By Lemma \[lemma:ordinals\_are\_tree\_automatic\], $\omega^{\omega^k}$ is tree-automatic. Finally, $\alpha$ is ${\mathsf{FO}}$-definable with one parameter in $\omega^{\omega^k}$ and hence tree-automatic. Proof of the Decomposition Theorem {#sec:proof_thm:delhomme} ---------------------------------- We conclude this section by providing a proof of Theorem \[thm:delhomme\]. Let $\bigl({\mathcal{A}};({\mathcal{A}}_R)_{R\in{\mathcal{R}}}\bigr)$ be a tree-automatic presentation of ${\mathfrak{A}}$ with $L({\mathcal{A}})\subseteq{T_{\Sigma\ifthenelse{\equal{\empty}{\empty}}{}{,\empty}}}$. To keep notation simple, we assume that the corresponding naming function $\mu\colon A\to L({\mathcal{A}})$ is the identity, i.e., ${\mathfrak{A}}$ is identified with its tree-automatic copy $\mu({\mathfrak{A}})$. For $R\in{\mathcal{R}}$ let $Q_R$ be the set of states of ${\mathcal{A}}_R$. Moreover, let ${\mathcal{A}}_\phi$ be a tree automaton recognising $\phi^{{\mathfrak{A}}}$ and $Q_\phi$ its set of states. For each $t\in{T_{\Sigma\ifthenelse{\equal{\empty}{\empty}}{}{,\empty}}}$ and all $r\geq 1$ we put $\otimes_r t=\otimes(t,\dotsc,t)\in{T_{\Sigma_\Box^r\ifthenelse{\equal{\empty}{\empty}}{}{,\empty}}}$, where the convolution is made up of $r$ copies of $t$. We further define a tree $\boxtimes_n t=\otimes(t,\emptyset,\dotsc,\emptyset)\in{T_{\Sigma_\Box^{1+n}\ifthenelse{\equal{\empty}{\empty}}{}{,\empty}}}$, where the number of empty trees $\emptyset$ in the convolution is $n$. To simplify notation even more, we put $${\llbracket t\rrbracket}_\phi = {\mathcal{A}}_\phi(\boxtimes_n t) \qquad\text{and}\qquad {\llbracket t\rrbracket}_R = {\mathcal{A}}_R\bigl(\otimes_{{\operatorname{ar}}(R)} t\bigr)$$ for every $t\in{T_{\Sigma\ifthenelse{\equal{\empty}{\empty}}{}{,\empty}}}$ and $R\in{\mathcal{R}}$. Consider the set $$\Gamma = \prod_{R\in\{\phi\}\uplus{\mathcal{R}}} Q_R \times \prod_{R\in{\mathcal{R}}} 2^{Q_R}\,.$$ For each $\gamma=\bigl((q_R)_{R\in\{\phi\}\uplus{\mathcal{R}}},(P_R)_{R\in{\mathcal{R}}}\bigr)\in\Gamma$ we define a structure ${\mathfrak{S}}_\gamma$ by $${\Vert {\mathfrak{S}}_\gamma\Vert} = S_\gamma = \Set{ t\in{T_{\Sigma\ifthenelse{\equal{\empty}{\empty}}{}{,\empty}}} | \text{${\llbracket t\rrbracket}_\phi=q_\phi$ and ${\llbracket t\rrbracket}_R=q_R$ for each $R\in{\mathcal{R}}$} }$$ and $$R^{{\mathfrak{S}}_\gamma} = \Set{ \bar t\in S_\gamma^{{\operatorname{ar}}(R)} | {\mathcal{A}}_R(\otimes\bar t)\in P_R }\ \text{for $R\in{\mathcal{R}}$.}$$ Clearly, ${\mathfrak{S}}_\gamma$ is a tree-automatic copy of itself. Finally, we put $${\mathcal{S}}_\phi^{{\mathfrak{A}}} = \Set{ {\mathfrak{S}}_\gamma | \gamma\in\Gamma }\,.$$ Obviously, this set is finite. For the rest of this proof, we fix some parameters $\bar s=(s_1,\dotsc,s_n)\in A^n$ and put ${D=\bigcup_{1\leq i\leq n} {\operatorname{dom}}(s_i)}$.
The *$\bar s$-type* of a tree $t\in{T_{\Sigma\ifthenelse{\equal{\empty}{\empty}}{}{,\empty}}}$ is the tuple $${\operatorname{tp}}_{\bar s}(t)= \bigl(t{\mathord{\restriction} D},U,(\rho_R)_{R\in\{\phi\}\uplus{\mathcal{R}}}\bigr)\,,$$ where $t{\mathord{\restriction} D}\in{T_{\Sigma\ifthenelse{\equal{\empty}{\empty}}{}{,\empty}}}$ is the restriction of $t$ to the tree domain ${\operatorname{dom}}(t)\cap D$, $U={\operatorname{dom}}(t)\cap\partial D$, and $\rho_R\colon U\to Q_R,u\mapsto{\llbracket t{\mathord{\restriction} u}\rrbracket}_R$ for each $R\in\{\phi\}\uplus{\mathcal{R}}$. Observe that $$\otimes(t,\bar s) = \otimes(t{\mathord{\restriction} D},\bar s)\bigl[(u/\boxtimes_n t{\mathord{\restriction} u})_{u\in U}\bigr]$$ and hence $$\label{eq:tp_saturates_phi} {\mathcal{A}}_\phi\bigl(\otimes(t,\bar s)\bigr) = {\mathcal{A}}_\phi\bigl(\otimes(t{\mathord{\restriction} D},\bar s),\rho_\phi\bigr)\,,$$ i.e., whether $t\in\phi^{\mathfrak{A}}(\cdot,\bar s)$ is valid can be determined from ${\operatorname{tp}}_{\bar s}(t)$. Since $D$ is finite, there are only finitely many distinct $\bar s$-types. Consequently, the equivalence relation $\sim_{\bar s}$ on $T_\Sigma$ defined by $t\sim_{\bar s} t'$ iff ${\operatorname{tp}}_{\bar s}(t)={\operatorname{tp}}_{\bar s}(t')$ has finite index. Due to Eq. \[eq:tp\_saturates\_phi\], $\phi^{{\mathfrak{A}}}(\cdot,\bar s)$ is a union of $\sim_{\bar s}$-classes. Say $B_1,\dotsc,B_m\subseteq\phi^{{\mathfrak{A}}}(\cdot,\bar s)$ are these $\sim_{\bar s}$-classes, then ${\mathfrak{A}}{\mathord{\restriction} \phi^{{\mathfrak{A}}}(\cdot,\bar s)}$ is a sum augmentation of ${\mathfrak{A}}{\mathord{\restriction} B_1},\dotsc,{\mathfrak{A}}{\mathord{\restriction} B_m}$. Thus, it remains to show that ${\mathfrak{A}}{\mathord{\restriction} B}$ is a tame box augmentation of elements from ${\mathcal{S}}_\phi^{{\mathfrak{A}}}$ for each $\sim_{\bar s}$-class $B\subseteq\phi^{{\mathfrak{A}}}(\cdot,\bar s)$. Therefore, fix some $\sim_{\bar s}$-class $B\subseteq\phi^{{\mathfrak{A}}}(\cdot,\bar s)$, let $\vartheta=\bigl(t_D,U,(\rho_R)_{R\in\{\phi\}\uplus{\mathcal{R}}}\bigr)$ be the corresponding $\bar s$-type, and put ${\mathfrak{B}}={\mathfrak{A}}{\mathord{\restriction} B}$. For $u\in U$ we define $$\gamma(\vartheta,u) = \bigl((\rho_R(u))_{R\in\{\phi\}\uplus{\mathcal{R}}},(P_R(u))_{R\in{\mathcal{R}}}\bigr) \in\Gamma$$ by $$P_R(u) = \Set{ q\in Q_R | {\mathcal{A}}_R\bigl(\otimes_{{\operatorname{ar}}(R)} t_D,\rho_R[u\mapsto q]\bigr)\in F_R }\ \text{for $R\in{\mathcal{R}}$,}$$ where $F_R\subseteq Q_R$ is the set of accepting states of ${\mathcal{A}}_R$. Let $u_1,\dotsc,u_m$ be an enumeration of the elements of $U$ and put ${\mathfrak{C}}_i={\mathfrak{S}}_{\gamma(\vartheta,u_i)}$ for ${i=1,\dotsc,m}$. Next, we show that ${\mathfrak{B}}$ is a tame box augmentation of ${\mathfrak{C}}_1,\dotsc,{\mathfrak{C}}_m$. First, observe that $$f\colon C_1\times\dotsb\times C_m\to{T_{\Sigma\ifthenelse{\equal{\empty}{\empty}}{}{,\empty}}}, (x_1,\dotsc,x_m)\mapsto t_D[u_1/x_1,\dotsc,u_m/x_m]$$ is injective. Some $t\in{T_{\Sigma\ifthenelse{\equal{\empty}{\empty}}{}{,\empty}}}$ is contained in the image of $f$ if, and only if, $t{\mathord{\restriction} D}=t_D$, ${{\operatorname{dom}}(t)\cap\partial D=U}$, and $t{\mathord{\restriction} u_i}\in C_i$ for each ${i=1,\dotsc,m}$. The latter is equivalent to ${\operatorname{tp}}_{\bar s}(t)=\vartheta$ and hence $f$ is a bijection $f\colon C_1\times\dotsb\times C_m\to B$.
Fix some ${j=1,\dotsc,m}$ and $\bar x\in\prod_{1\leq i\leq m,i\not=j} C_i$ and let $$f_{j,\bar x}\colon C_j\to B, t\mapsto f(x_1,\dotsc,x_{j-1},t,x_{j+1},\dotsc,x_m)\,.$$ Consider $R\in{\mathcal{R}}$ and $r={\operatorname{ar}}(R)$. For all $\bar t\in C_j^r$ we have $$\otimes f_{j,\bar x}(\bar t) = (\otimes_r t_D)\bigl[(u_i/\otimes_r x_i)_{1\leq i\leq m,i\not=j}, u_j/\otimes\bar t\bigr]$$ and hence $${\mathcal{A}}_R\bigl(\otimes f_{j,\bar x}(\bar t)\bigr) = {\mathcal{A}}_R\bigl(\otimes_r t_D, \rho_R\bigl[u_j\mapsto {\mathcal{A}}_R(\otimes\bar t)\bigr]\bigr)\,.$$ This leads to the following chain of equivalences $$\begin{aligned} f_{j,\bar x}(\bar t)\in R^{{\mathfrak{B}}} &\quad\Longleftrightarrow\quad {\mathcal{A}}_R\bigl(\otimes f_{j,\bar x}(\bar t)\bigr)\in F_R \\ &\quad\Longleftrightarrow\quad {\mathcal{A}}_R\bigl(\otimes_r t_D, \rho_R\bigl[u_j\mapsto {\mathcal{A}}_R(\otimes\bar t)\bigr]\bigr)\in F_R \\ &\quad\Longleftrightarrow\quad {\mathcal{A}}_R(\otimes\bar t)\in P_R(u_j) \\ &\quad\Longleftrightarrow\quad \bar t\in R^{{\mathfrak{C}}_j}\,,\end{aligned}$$ which shows that ${\mathfrak{B}}$ is a box augmentation of ${\mathfrak{C}}_1,\dotsc,{\mathfrak{C}}_m$. It remains to show that this box augmentation is tame. Therefore, fix some $R\in{\mathcal{R}}$, put $r={\operatorname{ar}}(R)$, and notice that the map $$c_i\colon C_i^r\to Q_R,\bar t\mapsto {\mathcal{A}}_R(\otimes\bar t)$$ is an $R$-colouring of ${\mathfrak{C}}_i$ for each ${i=1,\dotsc,m}$. We have to show that $$c\colon B^r\to Q_R^m, \bigl(f(\bar x_1),\dotsc,f(\bar x_r)\bigr)\mapsto \bigl(c_i(x_{1,i},\dotsc,x_{r,i})\bigr){}_{1\leq i\leq m}$$ is an $R$-colouring of ${\mathfrak{B}}$. Consider the map $$h\colon Q_R^m\to Q_R, (q_1,\dotsc,q_m)\mapsto {\mathcal{A}}_R\bigl(\otimes_r t_D, \set{ u_i\mapsto q_i | 1\leq i\leq m }\bigr)\,.$$ For every $\bar t\in B^r$ we obtain $h\bigl(c(\bar t)\bigr)={\mathcal{A}}_R(\otimes\bar t)$ and hence $h\circ c$ is an $R$-colouring of ${\mathfrak{B}}$. Consequently, $c$ is an $R$-colouring of ${\mathfrak{B}}$ as well. Tree-Automatic Linear Orderings =============================== The objective of this section is to prove our main result, namely Theorem \[thm:main\], which states that every tree-automatic linear ordering has ${\operatorname{FC}}$-rank below $\omega^\omega$. Due to the fact that every countable linear ordering is a dense sum of scattered linear orderings, the proof is essentially an application of Corollary \[cor:delhomme\_indecomposability\] to the class of countable scattered linear orderings ranked by ${\operatorname{VD}}_*$, a variation of the ${\operatorname{FC}}$-rank. Since it is already known that every ordinal is ${\operatorname{VD}}_*$-sum-indecomposable [@KRS05], the major part of this section is devoted to identifying the ${\operatorname{VD}}_*$-tame-box-indecomposable ordinals. Linear Orderings and the ${\operatorname{FC}}$-rank --------------------------------------------------- A *(linear) ordering* is a structure ${\mathfrak{A}}=(A;\leq^{{\mathfrak{A}}})$ where $\leq^{{\mathfrak{A}}}$ is a *non-strict* linear order on $A$. Sometimes we use the corresponding *strict* linear order $<^{{\mathfrak{A}}}$. If ${\mathfrak{A}}$ is clear from the context we omit the superscript ${\mathfrak{A}}$. An *interval* in ${\mathfrak{A}}$ is a subset $I\subseteq A$ such that $x<z<y$ implies $z\in I$ for all $x,y\in I$ and $z\in A$.
For $x,y\in A$ the *closed interval* $[x,y]_{{\mathfrak{A}}}$ in ${\mathfrak{A}}$ is the set $\set{ z\in A | x\leq z\leq y}$ if $x\leq y$ and the set $\set{ z\in A | y\leq z\leq x}$ if $x>y$. A *condensation (relation)* on a linear ordering ${\mathfrak{A}}$ is an equivalence relation $\sim$ on $A$ such that each $\sim$-class is an interval of ${\mathfrak{A}}$. For two subsets $X,Y\subseteq A$ we write $X\ll Y$ if $x<y$ for all $x\in X$ and $y\in Y$. If $\sim$ is a condensation on ${\mathfrak{A}}$, the set $A/\mathord{\sim}$ of all $\sim$-classes is (strictly) linearly ordered by $\ll$. We denote the corresponding linear ordering by ${\mathfrak{A}}/\mathord{\sim}$. An example of a condensation is the relation $\sim$ with $x\sim y$ iff the closed interval $[x,y]_{{\mathfrak{A}}}$ in ${\mathfrak{A}}$ is finite. The ordering ${\mathfrak{A}}/\mathord{\sim}$ is obtained from ${\mathfrak{A}}$ by identifying points which are only finitely far away from each other. If this process is transfinitely iterated, it eventually becomes stationary. Intuitively, the ${\operatorname{FC}}$-rank of ${\mathfrak{A}}$ is the ordinal $\alpha$ counting the number of steps which are necessary to reach this fixed point. Let ${\mathfrak{A}}$ be a linear ordering. For each ordinal $\alpha$ a condensation $\sim_\alpha^{{\mathfrak{A}}}$ on ${\mathfrak{A}}$ is defined by transfinite induction: 1. $\sim_0^{{\mathfrak{A}}}$ is the identity relation on ${\mathfrak{A}}$, 2. for successor ordinals $\alpha=\beta+1$ let $x\sim_\alpha^{{\mathfrak{A}}} y$ iff the interval $[\tilde x,\tilde y]_{{\mathfrak{A}}/\mathord{\sim_\beta^{{\mathfrak{A}}}}}$ in ${\mathfrak{A}}/\mathord{\sim_\beta^{{\mathfrak{A}}}}$ is finite, where $\tilde x$ and $\tilde y$ are the $\sim_\beta^{{\mathfrak{A}}}$-classes of $x$ and $y$, and 3. for limit ordinals $\alpha$ let $x\sim_\alpha^{{\mathfrak{A}}} y$ iff $x\sim_\beta^{{\mathfrak{A}}} y$ for some $\beta<\alpha$. For each ordering ${\mathfrak{A}}$ there exists an ordinal $\alpha$ such that $\sim_\alpha^{{\mathfrak{A}}}$ and $\sim_\beta^{{\mathfrak{A}}}$ coincide for each $\beta\geq\alpha$. More precisely, every ordinal $\alpha$ whose cardinality is greater than that of ${\mathfrak{A}}$ has this property. Theorem 5.9 in [@Ros82] ascertains that if ${\mathfrak{A}}$ is countable then $\alpha$ can be chosen countable as well. The *${\operatorname{FC}}$-rank* of a linear ordering ${\mathfrak{A}}$, denoted by ${\operatorname{FC}}({\mathfrak{A}})$, is the least ordinal $\alpha$ such that $\sim_\alpha^{{\mathfrak{A}}}$ and $\sim_\beta^{{\mathfrak{A}}}$ coincide for each $\beta\geq\alpha$. For a linear ordering ${\mathfrak{A}}$ and a subset $B\subseteq A$ we simply write ${\operatorname{FC}}(B)$ for ${\operatorname{FC}}({\mathfrak{A}}{\mathord{\restriction} B})$. The following theorem is the main result of this article. \[thm:main\] Let ${\mathfrak{A}}$ be a tree-automatic linear ordering. Then $${\operatorname{FC}}({\mathfrak{A}})< \omega^\omega\,.$$ Since ${\operatorname{FC}}(\alpha)\leq\beta$ if, and only if, $\alpha\leq\omega^\beta$ for all countable ordinals $\alpha$ and $\beta$, Theorem \[thm:main\] above yields another proof of the fact that every tree-automatic ordinal is strictly less than $\omega^{\omega^\omega}$ (cf. Corollary \[cor:main\_ordinal\]).
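For instance, for the ordering $\zeta$ of the integers the condensation $\sim_1^{\zeta}$ identifies any two points, so ${\operatorname{FC}}(\zeta)=1$; for $\omega^2$ the condensation $\sim_1$ collapses each copy of $\omega$ to a point and $\omega^2/\mathord{\sim_1}\cong\omega$, whence ${\operatorname{FC}}(\omega^2)=2$, in accordance with the equivalence ${\operatorname{FC}}(\alpha)\leq\beta$ iff $\alpha\leq\omega^\beta$; and for the dense ordering of the rationals already $\sim_1$ equals $\sim_0$, so its ${\operatorname{FC}}$-rank is $0$.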
Scattered Linear Orderings and the ${\operatorname{VD}}$-rank ------------------------------------------------------------- *Throughout the rest of this paper, we consider only countable linear orderings.* A linear ordering ${\mathfrak{A}}$ is *scattered* if the ordering $(\mathbb{Q};<)$ of the rationals cannot be embedded into ${\mathfrak{A}}$, or equivalently, if there exists an ordinal $\alpha$ such that ${\mathfrak{A}}/\mathord{\sim_\alpha^{{\mathfrak{A}}}}$ contains exactly one element (cf. Chapter 5 in [@Ros82]). Examples of scattered orderings include the natural numbers $\omega=({\mathbb{N}};\leq)$, the reversed natural numbers $\omega^*=({\mathbb{N}};\geq)$, the integers $\zeta=(\mathbb{Z};\leq)$, and the finite linear orderings $\mathbf{n}=\bigl(\{1,\dotsc,n\};\leq\bigr)$ for $n\in{\mathbb{N}}$. Furthermore, every ordinal is scattered. For an ordering ${\mathfrak{I}}$ the *${\mathfrak{I}}$-sum* of an $I$-indexed family $({\mathfrak{A}}_i)_{i\in I}$ of orderings is the linear ordering $${\mathfrak{A}} = \sum\nolimits_{i\in{\mathfrak{I}}} {\mathfrak{A}}_i$$ defined by $A = \biguplus_{i\in I} A_i$ and $x\leq^{{\mathfrak{A}}} y$ iff $x,y\in A_i$ and $x\leq^{{\mathfrak{A}}_i} y$ for some $i\in I$ or $x\in A_i$ and $y\in A_j$ for some $i,j\in I$ with $i<^{{\mathfrak{I}}} j$. If ${\mathfrak{I}}$ is finite, say ${\mathfrak{I}}=\mathbf{n}$, we write ${\mathfrak{A}}_1+\dotsb+{\mathfrak{A}}_n$ for $\sum_{i\in\mathbf{n}} {\mathfrak{A}}_i$. Next, we introduce the class of very discrete linear orderings and its connection to the scattered linear orderings. For each countable ordinal $\alpha$ the class ${{\mathcal{VD}}}_\alpha$ of linear orderings is defined by transfinite induction: 1. ${{\mathcal{VD}}}_0 = \{\mathbf{0},\mathbf{1}\}$, and 2. for $\alpha>0$ the class ${{\mathcal{VD}}}_\alpha$ contains all finite sums, $\omega$-sums, $\omega^*$-sums, and $\zeta$-sums of elements from ${{\mathcal{VD}}}_{<\alpha}=\bigcup_{\beta<\alpha}{{\mathcal{VD}}}_\beta$. The class ${{\mathcal{VD}}}$ of *very discrete* linear orderings is the union of all classes ${{\mathcal{VD}}}_\alpha$. The *${\operatorname{VD}}$-rank* of some ${\mathfrak{A}}\in{{\mathcal{VD}}}$, denoted by ${\operatorname{VD}}({\mathfrak{A}})$, is the least ordinal $\alpha$ with ${\mathfrak{A}}\in{{\mathcal{VD}}}_\alpha$. The following result is due to Hausdorff; see Theorem 5.24 in [@Ros82]. A countable linear ordering ${\mathfrak{A}}$ is scattered if, and only if, it is contained in ${{\mathcal{VD}}}$. In case ${\mathfrak{A}}$ is scattered, $${\operatorname{FC}}({\mathfrak{A}}) = {\operatorname{VD}}({\mathfrak{A}})\,.$$ In order to formulate the intermediate steps of our proof of Theorem \[thm:main\], we need a slight variation of the ${\operatorname{VD}}$-rank [@KRS05]. The *${\operatorname{VD}}_*$-rank* of a scattered linear ordering ${\mathfrak{A}}$, denoted by ${\operatorname{VD}}_*({\mathfrak{A}})$, is the least ordinal $\alpha$ such that ${\mathfrak{A}}$ is a finite sum of elements from ${{\mathcal{VD}}}_\alpha$. The ${\operatorname{VD}}$-rank and the ${\operatorname{VD}}_*$-rank of a scattered linear ordering ${\mathfrak{A}}$ are closely related by the following inequality $$\label{eq:VD_inequality} {\operatorname{VD}}_*({\mathfrak{A}})\leq {\operatorname{VD}}({\mathfrak{A}})\leq{\operatorname{VD}}_*({\mathfrak{A}})+1\,.$$ The following lemma is very useful when reasoning about the ranks of scattered linear orderings.
\[lemma:VD\_subordering\] Let ${\mathfrak{A}}$ be a scattered linear ordering and $B\subseteq A$. Then $${\operatorname{VD}}({\mathfrak{A}}{\mathord{\restriction} B}) \leq {\operatorname{VD}}({\mathfrak{A}}) \qquad\text{and}\qquad {\operatorname{VD}}_*({\mathfrak{A}}{\mathord{\restriction} B}) \leq {\operatorname{VD}}_*({\mathfrak{A}})\,.$$ The first inequality is Lemma 5.14 in [@Ros82] and the second inequality is a trivial consequence of the first one. Sum and Box Augmentations of Scattered Linear Orderings ------------------------------------------------------- Every sum decomposition of a scattered linear ordering ${\mathfrak{A}}$ entirely consists of scattered linear orderings (cf. Remark \[rem:sum\_augmentation\]). The relationship between the ${\operatorname{VD}}_*$-ranks of ${\mathfrak{A}}$ and the components was established in [@KRS05]. \[prop:VD\_sum\] Let ${\mathfrak{A}}$ be a scattered linear ordering and a sum augmentation of ${\mathfrak{B}}_1,\dotsc,{\mathfrak{B}}_n$. Then $${\operatorname{VD}}_*({\mathfrak{A}}) = \max \bigl\{ {\operatorname{VD}}_*({\mathfrak{B}}_1),\dotsc,{\operatorname{VD}}_*({\mathfrak{B}}_n) \bigr\}\,.$$ \[cor:VD\_sum\_indecomposable\] Every countable ordinal is ${\operatorname{VD}}_*$-sum-indecomposable. As already mentioned, we are mainly interested in the ${\operatorname{VD}}_*$-tame-box-indecomposable ordinals. The main tool for identifying them is Proposition \[prop:VD\_box\] below whose proof is postponed to the end of this subsection. Notice that Remark \[rem:box\_augmentation\] implies that ${\mathfrak{B}}_1,\dotsc,{\mathfrak{B}}_n$ therein are scattered linear orderings. \[prop:VD\_box\] Let ${\mathfrak{A}}$ be a scattered linear ordering and a tame box augmentation of ${\mathfrak{B}}_1,\dotsc,{\mathfrak{B}}_n$. Then $${\operatorname{VD}}_*({\mathfrak{A}}) \leq {\operatorname{VD}}_*({\mathfrak{B}}_1)\oplus\dotsb\oplus{\operatorname{VD}}_*({\mathfrak{B}}_n)\,.$$ \[cor:VD\_box\_indecomposable\] Every countable ordinal of the shape $\omega^\alpha$ is ${\operatorname{VD}}_*$-tame-box-indecomposable. Let ${\mathfrak{A}}$ be a scattered linear ordering with ${\operatorname{VD}}_*({\mathfrak{A}})=\omega^\alpha$ and ${\mathfrak{B}}_1,\dotsc,{\mathfrak{B}}_n$ a tame box decomposition of ${\mathfrak{A}}$. Since each ${\mathfrak{B}}_i$ can be embedded into ${\mathfrak{A}}$, Lemma \[lemma:VD\_subordering\] yields ${{\operatorname{VD}}_*({\mathfrak{B}}_i)\leq\omega^\alpha}$. If ${\operatorname{VD}}_*({\mathfrak{B}}_i)<\omega^\alpha$ for each $i$, then $${\operatorname{VD}}_*({\mathfrak{B}}_1)\oplus\dotsb\oplus{\operatorname{VD}}_*({\mathfrak{B}}_n) < \omega^\alpha\,.$$ This contradicts Proposition \[prop:VD\_box\]. As a first step towards the proof of Proposition \[prop:VD\_box\] we provide two rather technical lemmas. \[lemma:ramsey\_sequence\] Let ${\mathfrak{A}}$ be a linear ordering without a greatest element and $c\colon A^2\to Q$ a $\leq$-colouring of ${\mathfrak{A}}$. Then there exist a strictly increasing, unbounded sequence $(a_i)_{i\in{\mathbb{N}}}$ in ${\mathfrak{A}}$ and a colour $q\in Q$ such that $c(a_i,a_j)=q$ for all $i,j\in{\mathbb{N}}$ with $i<j$. Since ${\mathfrak{A}}$ has no greatest element, there exists a strictly increasing and unbounded sequence $(x_i)_{i\in{\mathbb{N}}}$ in ${\mathfrak{A}}$. By Ramsey’s theorem for infinite, undirected, edge coloured graphs there exist an infinite set $H\subseteq{\mathbb{N}}$ and a colour $q\in Q$ such that $c(x_i,x_j)=q$ for all $i,j\in H$ with $i<j$.
Let $k_0<k_1<\dotsb$ be the increasing enumeration of all elements in $H$ and put $a_i=x_{k_i}$ for all $i\in{\mathbb{N}}$. Notice that the dual of this lemma holds as well and makes a statement about linear orderings without a least element and strictly decreasing, unbounded sequences. In the following lemma, the interval $(-\infty,a_0]_{{\mathfrak{A}}}$ denotes the set of all $a\in A$ with $a\leq a_0$. \[lemma:sequence\_split\] Let ${\mathfrak{A}}$ be an $\omega$-sum of elements from ${{\mathcal{VD}}}_{<\alpha}$ and $(a_i)_{i\in{\mathbb{N}}}$ an increasing sequence in ${\mathfrak{A}}$. Then $${\operatorname{VD}}_*\bigl((-\infty,a_0]_{{\mathfrak{A}}}\bigr)<\alpha \quad\text{and}\quad {\operatorname{VD}}_*\bigl((a_{k-1},a_k]_{{\mathfrak{A}}}\bigr)<\alpha\ \text{for all $k\geq 1$.}$$ Let ${\mathfrak{A}}=\sum_{i\in\omega} {\mathfrak{A}}_i$ with ${\mathfrak{A}}_i\in{{\mathcal{VD}}}_{<\alpha}$ for all $i\in\omega$. For each $k\in\omega$ there exists a unique $\ell\in\omega$ with $a_k\in A_\ell$. Then $(-\infty,a_k]_{{\mathfrak{A}}}\subseteq A_0\cup\dotsb\cup A_\ell$ and hence $${\operatorname{VD}}_*\bigl((-\infty,a_k]_{{\mathfrak{A}}}\bigr) \leq {\operatorname{VD}}_*({\mathfrak{A}}_0+\dotsb+{\mathfrak{A}}_\ell)<\alpha\,.$$ Moreover, for $k\geq 1$ we have ${\operatorname{VD}}_*\bigl((a_{k-1},a_k]_{{\mathfrak{A}}}\bigr)\leq{\operatorname{VD}}_*\bigl((-\infty,a_k]_{{\mathfrak{A}}}\bigr)<\alpha$. Again, the dual of this statement which speaks about $\omega^*$-sums and decreasing sequences holds true. Basically, the proof of Proposition \[prop:VD\_box\] proceeds by induction on $n$ and thus reduces to the case $n=2$. Proposition \[prop:VD\_box2\] slightly rephrases the claim for $n=2$. \[prop:VD\_box2\] Let $\alpha$ and $\beta$ be ordinals, ${\mathfrak{C}}$ a scattered linear ordering, and let ${\mathfrak{A}}$ and ${\mathfrak{B}}$ form a tame box decomposition of ${\mathfrak{C}}$ with ${\operatorname{VD}}_*({\mathfrak{A}})\leq\alpha$ and ${\operatorname{VD}}_*({\mathfrak{B}})\leq\beta$. Then $$\label{eq:VD_box2} {\operatorname{VD}}_*({\mathfrak{C}}) \leq \alpha\oplus\beta\,.$$ We proceed by induction on $\alpha$ and $\beta$. To keep notation simple, we assume that the map $f\colon A\times B\to C$ from the definition of box augmentation is the identity, i.e., $C=A\times B$ and ${\mathfrak{C}}$ is a linearisation of ${\mathfrak{A}}\times{\mathfrak{B}}$ (cf. Remark \[rem:box\_augmentation\]). Before delving into the induction, we perform a slight simplification. By definition, there exist ${\mathfrak{A}}_1,\dotsc,{\mathfrak{A}}_m\in{{\mathcal{VD}}}_\alpha$ and ${\mathfrak{B}}_1,\dotsc,{\mathfrak{B}}_n\in{{\mathcal{VD}}}_\beta$ such that ${\mathfrak{A}}={\mathfrak{A}}_1+\dotsb+{\mathfrak{A}}_m$ and ${\mathfrak{B}}={\mathfrak{B}}_1+\dotsb+{\mathfrak{B}}_n$. Since every $\zeta$-sum of linear orderings can be written as a sum of an $\omega$-sum and an $\omega^*$-sum, we can assume that none of the ${\mathfrak{A}}_i$ or ${\mathfrak{B}}_j$ is constructed as a $\zeta$-sum. Obviously, ${\mathfrak{C}}$ is a sum augmentation of the $m\cdot n$ orderings ${\mathfrak{C}}{\mathord{\restriction} (A_i\times B_j)}$. By Proposition \[prop:VD\_sum\], it suffices to show $${\operatorname{VD}}_*\bigl({\mathfrak{C}}{\mathord{\restriction} (A_i\times B_j)}\bigr)\leq \alpha\oplus\beta$$ for all $i$ and $j$. Since ${\mathfrak{C}}{\mathord{\restriction} (A_i\times B_j)}$ is a tame box augmentation of ${\mathfrak{A}}_i$ and ${\mathfrak{B}}_j$, it remains to show Eq. \[eq:VD\_box2\]
under the stronger assumptions that ${\operatorname{VD}}({\mathfrak{A}})\leq\alpha$, ${\operatorname{VD}}({\mathfrak{B}})\leq\beta$, and neither ${\mathfrak{A}}$ nor ${\mathfrak{B}}$ is constructed as a $\zeta$-sum. ##### Base case. {#base-case.-1 .unnumbered} $\alpha=0$ or $\beta=0$.\ If $\alpha=0$, then ${\mathfrak{A}}\cong\mathbf{1}$ and ${\mathfrak{C}}\cong{\mathfrak{B}}$. Thus, ${\operatorname{VD}}_*({\mathfrak{C}})={\operatorname{VD}}_*({\mathfrak{B}})\leq\alpha\oplus\beta$. Similarly, ${{\operatorname{VD}}_*({\mathfrak{C}})\leq\alpha\oplus\beta}$ if $\beta=0$. ##### Inductive step. {#inductive-step.-1 .unnumbered} $\alpha>0$ and $\beta>0$.\ If ${\mathfrak{A}}$ is a finite sum of elements from ${{\mathcal{VD}}}_{<\alpha}$, then ${\operatorname{VD}}_*({\mathfrak{A}})<\alpha$ and ${\operatorname{VD}}_*({\mathfrak{C}})<\alpha\oplus\beta$ by induction. Similarly, ${\operatorname{VD}}_*({\mathfrak{C}})<\alpha\oplus\beta$ if ${\mathfrak{B}}$ is a finite sum. It remains to show the claim under the assumption that ${\mathfrak{A}}$ and ${\mathfrak{B}}$ are $\omega$-sums or $\omega^*$-sums. We distinguish four cases. In each case, let $c_1\colon A^2\to Q_1$ and $c_2\colon B^2\to Q_2$ be $\leq$-colourings of ${\mathfrak{A}}$ and ${\mathfrak{B}}$ such that $$c\colon(A\times B)^2\to Q_1\times Q_2, \bigl((a_1,b_1),(a_2,b_2)\bigr)\mapsto \bigl(c_1(a_1,a_2),c_2(b_1,b_2)\bigr)$$ is a $\leq$-colouring of ${\mathfrak{C}}$. ##### Case 1. {#case-1. .unnumbered} ${\mathfrak{A}}$ is an $\omega$-sum of elements from ${{\mathcal{VD}}}_{<\alpha}$ and ${\mathfrak{B}}$ is an $\omega^*$-sum of elements from ${{\mathcal{VD}}}_{<\beta}$.\ By Lemma \[lemma:ramsey\_sequence\], there exist a strictly increasing, unbounded sequence $(a_i)_{i\in{\mathbb{N}}}$ in ${\mathfrak{A}}$ and a colour $q_1\in Q_1$ such that $c_1(a_i,a_j)=q_1$ for all $i,j\in{\mathbb{N}}$ with $i<j$. By the dual of Lemma \[lemma:ramsey\_sequence\], there exist a strictly decreasing, unbounded sequence $(b_i)_{i\in{\mathbb{N}}}$ in ${\mathfrak{B}}$ and a colour $q_2\in Q_2$ such that $c_2(b_i,b_j)=q_2$ for all $i,j\in{\mathbb{N}}$ with $i>j$. Depending on how $(a_0,b_0)$ compares to $(a_1,b_1)$ in ${\mathfrak{C}}$, we distinguish two cases. ##### Case 1.1. {#case-1.1. .unnumbered} $(a_0,b_0)<(a_1,b_1)$.\ Figure \[fig:case\_1.1\] depicts the idea behind the treatment of this case. The horizontal axis describes ${\mathfrak{A}}$ and increases from left to right, whereas the vertical axis outlines ${\mathfrak{B}}$ and grows from bottom to top. Within the grid, arrows point from smaller to greater elements.
*Figure \[fig:case\_1.1\] (diagram omitted): the grid $A\times B$ with the marked points $a_i$ and $b_j$ on the two axes, the band $Y_1$, and the stripes $X_0,X_1,X_2,\dotsc$ defined below.* Formally, let $$\begin{aligned} X_0 &= (-\infty,a_0]_{{\mathfrak{A}}}\times(-\infty,b_0)_{{\mathfrak{B}}} & X_k &= (a_{k-1},a_k]_{{\mathfrak{A}}}\times(-\infty,b_0)_{{\mathfrak{B}}}\ \text{for $k\geq 1$}\end{aligned}$$ and $$\begin{aligned} Y_1 &= A\times[b_0,\infty)_{{\mathfrak{B}}} & Y_2 &= \bigcup_{k\in{\mathbb{N}}} X_{2k} & Y_3 &= \bigcup_{k\in{\mathbb{N}}} X_{2k+1}\,.\end{aligned}$$ Since ${A\times B=Y_1\uplus Y_2\uplus Y_3}$, by Proposition \[prop:VD\_sum\], it suffices to show ${\operatorname{VD}}_*(Y_i)\leq\alpha\oplus\beta$ for $i=1,2,3$. Lemma \[lemma:sequence\_split\] and its dual yield $$\begin{aligned} {\operatorname{VD}}_*\bigl((-\infty,a_0]_{{\mathfrak{A}}}\bigr)&<\alpha & {\operatorname{VD}}_*\bigl((a_{k-1},a_k]_{{\mathfrak{A}}}\bigr)&<\alpha\ \text{for $k\geq 1$} & {\operatorname{VD}}_*\bigl([b_0,\infty)_{{\mathfrak{B}}}\bigr)<\beta\,.\end{aligned}$$ Together with the induction hypothesis this yields ${\operatorname{VD}}_*(X_k)<\alpha\oplus\beta$ for all $k\in{\mathbb{N}}$ as well as ${\operatorname{VD}}_*(Y_1)<\alpha\oplus\beta$. As a next step, we show that $$\label{eq:stripe_order} X_k\ll X_{k+2}\ \text{for all $k\in{\mathbb{N}}$.}$$ Therefore, let $(a,b)\in X_k$ and $(a',b')\in X_{k+2}$. Since the sequence of the $b_i$ is strictly decreasing and unbounded, there is an $\ell\geq1$ such that $b_\ell\leq b'$. The choice of the sequences $(a_i)_{i\in{\mathbb{N}}}$ and $(b_i)_{i\in{\mathbb{N}}}$ implies $$c\bigl((a_0,b_0),(a_1,b_1)\bigr) = (q_1,q_2) = c\bigl((a_k,b_0),(a_{k+1},b_\ell)\bigr)$$ and hence $(a_k,b_0)<(a_{k+1},b_\ell)$. Since ${\mathfrak{C}}$ is a linearisation of ${\mathfrak{A}}\times{\mathfrak{B}}$, we have $(a,b)<(a_k,b_0)$ and $(a_{k+1},b_\ell)<(a',b')$. Altogether, $$(a,b) < (a_k,b_0) < (a_{k+1},b_\ell) < (a',b')\,.$$ As a direct consequence of Eq. \[eq:stripe\_order\]
, we obtain $$\begin{aligned} {\mathfrak{C}}{\mathord{\restriction} Y_2} &= \sum_{k\in\omega} {\mathfrak{C}}{\mathord{\restriction} X_{2k}} & {\mathfrak{C}}{\mathord{\restriction} Y_3} &= \sum_{k\in\omega} {\mathfrak{C}}{\mathord{\restriction} X_{2k+1}}\,.\end{aligned}$$ Since every ${\mathfrak{C}}{\mathord{\restriction} X_{2k}}$ is a finite sum of elements from ${{\mathcal{VD}}}_{<\alpha\oplus\beta}$, ${\mathfrak{C}}{\mathord{\restriction} Y_2}$ is an $\omega$-sum of elements from ${{\mathcal{VD}}}_{<\alpha\oplus\beta}$ and hence ${\operatorname{VD}}_*(Y_2)\leq\alpha\oplus\beta$. Analogously, ${\operatorname{VD}}_*(Y_3)\leq\alpha\oplus\beta$. This completes Case 1.1. ##### Case 1.2. {#case-1.2. .unnumbered} $(a_0,b_0)>(a_1,b_1)$.\ This case is very similar to Case 1.1 and depicted in Figure \[fig:case\_1.2\]. To see this, let $$\begin{aligned} X_0 &= (a_0,\infty)_{{\mathfrak{A}}}\times[b_0,\infty)_{{\mathfrak{B}}} & X_k &= (a_0,\infty)_{{\mathfrak{A}}}\times[b_k,b_{k-1})_{{\mathfrak{B}}}\ \text{for $k\geq 1$}\end{aligned}$$ and $$\begin{aligned} Y_1 &= (-\infty,a_0]_{{\mathfrak{A}}}\times B & Y_2 &= \bigcup_{k\in{\mathbb{N}}} X_{2k} & Y_3 &= \bigcup_{k\in{\mathbb{N}}} X_{2k+1}\,.\end{aligned}$$ Again, we obtain ${\operatorname{VD}}_*(X_k)<\alpha\oplus\beta$ for all $k\in{\mathbb{N}}$ as well as ${\operatorname{VD}}_*(Y_1)<\alpha\oplus\beta$. Moreover, for each $k\in{\mathbb{N}}$ it holds that $X_k\gg X_{k+2}$ and hence $$\begin{aligned} {\mathfrak{C}}{\mathord{\restriction} Y_2} &= \sum_{k\in\omega^*} {\mathfrak{C}}{\mathord{\restriction} X_{2k}} & {\mathfrak{C}}{\mathord{\restriction} Y_3} &= \sum_{k\in\omega^*} {\mathfrak{C}}{\mathord{\restriction} X_{2k+1}}\,.\end{aligned}$$ Consequently, ${\operatorname{VD}}_*(Y_2),{\operatorname{VD}}_*(Y_3)\leq\alpha\oplus\beta$. This completes Case 1.2 and hence Case 1. *Figure \[fig:case\_1.2\] (diagram omitted): the grid $A\times B$ with the marked points $a_i$ and $b_j$, the band $Y_1$, and the stripes $X_0,X_1,X_2,\dotsc$ from Case 1.2.* ##### Case 2. {#case-2. .unnumbered} ${\mathfrak{A}}$ and ${\mathfrak{B}}$ both are $\omega$-sums.\ Consider the strictly increasing, unbounded sequences $(a_i)_{i\in{\mathbb{N}}}$ in ${\mathfrak{A}}$ and $(b_i)_{i\in{\mathbb{N}}}$ in ${\mathfrak{B}}$ which exist by Lemma \[lemma:ramsey\_sequence\]. Depending on how $(a_0,b_1)$ compares to $(a_1,b_0)$ in ${\mathfrak{C}}$, we distinguish two cases. ##### Case 2.1. {#case-2.1. .unnumbered} $(a_0,b_1)<(a_1,b_0)$.\ This case is treated similarly to Case 1.1 and depicted in Figure \[fig:case\_2.1\].
*Figure \[fig:case\_2.1\] (diagram omitted): the grid $A\times B$ with the marked points $a_i$ and $b_j$, the band $Y_1$, and the stripes $X_0,X_1,X_2,\dotsc$ used in Case 2.1.* ##### Case 2.2. {#case-2.2. .unnumbered} $(a_0,b_1)>(a_1,b_0)$.\ This case is symmetric to Case 2.1. ##### Case 3. {#case-3. .unnumbered} ${\mathfrak{A}}$ is an $\omega^*$-sum and ${\mathfrak{B}}$ is an $\omega$-sum.\ This case is symmetric to Case 1. ##### Case 4. {#case-4. .unnumbered} ${\mathfrak{A}}$ and ${\mathfrak{B}}$ both are $\omega^*$-sums.\ This case is dual to Case 2. This finishes the proof of Proposition \[prop:VD\_box2\]. Finally, we are in a position to perform the induction which proves Proposition \[prop:VD\_box\]. \[proof:prop:VD\_box\] We show the claim by induction on $n$. ##### Base case. {#base-case.-2 .unnumbered} $n=1$.\ Clearly, ${\mathfrak{A}}\cong{\mathfrak{B}}_1$ and hence ${\operatorname{VD}}_*({\mathfrak{A}})={\operatorname{VD}}_*({\mathfrak{B}}_1)$. ##### Inductive step. {#inductive-step.-2 .unnumbered} $n>1$.\ To simplify notation, we assume that ${\mathfrak{A}}$ is a linearisation of ${\mathfrak{B}}_1\times\dotsb\times{\mathfrak{B}}_n$. For each $i$ let $c_i\colon B_i^2\to Q_i$ be a $\leq$-colouring of ${\mathfrak{B}}_i$ such that $$c\colon(B_1\times\dotsb\times B_n)^2\to Q_1\times\dotsb\times Q_n, (\bar a,\bar b)\mapsto\bigl(c_1(a_1,b_1),\dotsc,c_n(a_n,b_n)\bigr)$$ is a $\leq$-colouring of ${\mathfrak{A}}$. We consider the relation $\sim$ on $B_1$ which is defined by $x\sim y$ iff $c_1(x,x)=c_1(y,y)$. This is an equivalence relation with at most $|Q_1|$ equivalence classes, say $X_1,\dotsc,X_m\subseteq B_1$ are these $\sim$-classes. Obviously, ${\mathfrak{A}}$ is a sum augmentation of the $m$ orderings ${\mathfrak{A}}{\mathord{\restriction} (X_i\times B_2\times\dotsb\times B_n)}$ for ${i=1,\dotsc,m}$. By Proposition \[prop:VD\_sum\], it suffices to show for each $i$ the inequality $$\label{eq:VD_prodn} {\operatorname{VD}}_*\bigl({\mathfrak{A}}{\mathord{\restriction} (X_i\times B_2\times\dotsb\times B_n)}\bigr)\leq {\operatorname{VD}}_*({\mathfrak{B}}_1)\oplus\dotsb\oplus{\operatorname{VD}}_*({\mathfrak{B}}_n)\,.$$ Therefore, define for each $x\in B_1$ a scattered linear ordering ${\mathfrak{C}}_x$ by ${\Vert {\mathfrak{C}}_x\Vert}=B_2\times\dotsb\times B_n$ and $\bar a\leq^{{\mathfrak{C}}_x}\bar b$ iff $(x,\bar a)\leq^{{\mathfrak{A}}}(x,\bar b)$.
Clearly, ${\mathfrak{C}}_x$ is a tame box augmentation of ${\mathfrak{B}}_2,\dotsc,{\mathfrak{B}}_n$ and hence $$\label{eq:VD_prodn-1} {\operatorname{VD}}_*({\mathfrak{C}}_x)\leq {\operatorname{VD}}_*({\mathfrak{B}}_2)\oplus\dotsb\oplus{\operatorname{VD}}_*({\mathfrak{B}}_n)$$ by induction. For $x,y\in B_1$ with $x\sim y$ and all $\bar a,\bar b\in B_2\times\dotsb\times B_n$ we have ${c\bigl((x,\bar a),(x,\bar b)\bigr)=c\bigl((y,\bar a),(y,\bar b)\bigr)}$ and hence $\bar a\leq^{{\mathfrak{C}}_x} \bar b$ iff $\bar a\leq^{{\mathfrak{C}}_y}\bar b$, i.e., ${\mathfrak{C}}_x={\mathfrak{C}}_y$. For any $\sim$-class $X_i\subseteq B_1$ and every $x\in X_i$ we obtain that ${\mathfrak{A}}{\mathord{\restriction} (X_i\times B_2\times\dotsb\times B_n)}$ is a tame box augmentation of ${\mathfrak{B}}_1{\mathord{\restriction} X_i}$ and ${\mathfrak{C}}_x$. Finally, Eq. \[eq:VD\_prodn\] follows from ${\operatorname{VD}}_*({\mathfrak{B}}_1{\mathord{\restriction} X_i})\leq{\operatorname{VD}}_*({\mathfrak{B}}_1)$, Eq. \[eq:VD\_prodn-1\], and Proposition \[prop:VD\_box2\]. Proof of the Main Result ------------------------ In order to conclude Theorem \[thm:main\] from Corollaries \[cor:delhomme\_indecomposability\], \[cor:VD\_sum\_indecomposable\], and \[cor:VD\_box\_indecomposable\], we need another auxiliary result. Statement (1) of the lemma below is in fact shown by the proof of Proposition 4.5 in [@KRS05]. \[lemma:scattered\_closed\_intervals\] Let ${\mathfrak{A}}$ be a linear ordering and $\alpha<{\operatorname{FC}}({\mathfrak{A}})$. 1. ${\mathfrak{A}}$ contains a scattered closed interval $I$ with ${\operatorname{FC}}(I)=\alpha+1$. 2. ${\mathfrak{A}}$ contains a scattered closed interval $I$ with ${\operatorname{VD}}_*(I)=\alpha$. We only show (2). By (1), there exists a closed scattered interval $I$ of ${\mathfrak{A}}$ with ${\operatorname{VD}}(I)={\operatorname{FC}}(I)=\alpha+1$. Since $I$ has a least and a greatest element, it is neither an $\omega$-sum nor an $\omega^*$-sum nor a $\zeta$-sum of elements from ${{\mathcal{VD}}}_{<\alpha+1}={{\mathcal{VD}}}_\alpha$. Thus, $I$ is a finite sum of elements from ${{\mathcal{VD}}}_\alpha$ and hence ${\operatorname{VD}}_*(I)\leq\alpha$. Due to Eq. \[eq:VD\_inequality\], ${\operatorname{VD}}_*(I)=\alpha$. Now, we are prepared to provide the missing proof of the main result. By contradiction, assume there exists a tree-automatic linear ordering ${\mathfrak{A}}$ with ${\operatorname{FC}}({\mathfrak{A}})\geq\omega^\omega$. Consider the formula $\phi(x,y_1,y_2) = y_1\leq x\land x\leq y_2$. By Lemma \[lemma:scattered\_closed\_intervals\], for each $d\in{\mathbb{N}}$ there exists a scattered closed interval $I=[b_1,b_2]_{{\mathfrak{A}}}$ in ${\mathfrak{A}}$ with $b_1\leq b_2$ and ${\operatorname{VD}}_*(I)=\omega^d$. Since $I=\phi^{{\mathfrak{A}}}(\cdot,b_1,b_2)$ and $\omega^d$ is ${\operatorname{VD}}_*$-sum-indecomposable as well as ${\operatorname{VD}}_*$-tame-box-indecomposable, this contradicts Corollary \[cor:delhomme\_indecomposability\].
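Let us remark that the bound in Theorem \[thm:main\] cannot be lowered: by Lemma \[lemma:ordinals\_are\_tree\_automatic\] the ordinal $\omega^{\omega^d}$ is tree-automatic for every $d\in{\mathbb{N}}$, and ${\operatorname{FC}}(\omega^{\omega^d})=\omega^d$, so the ${\operatorname{FC}}$-ranks of tree-automatic linear orderings are cofinal in $\omega^\omega$.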
${{\mathfrak{T}}_{2\ifthenelse{\equal{\empty}{\empty}}{}{,\empty}}}$-Free Tree-Automatic Presentations ====================================================================================================== In this section, we investigate a restricted form of tree-automaticity where only those tree-automatic presentations $\bigl({\mathcal{A}};({\mathcal{A}}_R)_{R\in{\mathcal{R}}}\bigr)$ are permitted for which the binary tree $$T({\mathcal{A}}) = T\bigl(L({\mathcal{A}})) = \bigcup_{t\in L({\mathcal{A}})} {\operatorname{dom}}(t)$$ is of bounded branching complexity—in some sense defined later.[^3] The main result of this section, namely Theorem \[thm:main\_bounded\_rank\], states that any linear ordering ${\mathfrak{A}}$ which admits a tree-automatic presentation whose branching complexity is bounded by $k\in{\mathbb{N}}$ satisfies ${{\operatorname{FC}}({\mathfrak{A}})<\omega^k}$. Binary Trees and the Cantor-Bendixson Rank ------------------------------------------ The *infinite full binary tree* is the set ${{\mathfrak{T}}_{2\ifthenelse{\equal{\empty}{\empty}}{}{,\empty}}}=\{0,1\}^\star$ whose nodes are ordered by the prefix-relation $\preceq$. A *binary tree* is a (possibly empty) prefix-closed subset $T\subseteq{{\mathfrak{T}}_{2\ifthenelse{\equal{\empty}{\empty}}{}{,\empty}}}$. The (isomorphism type of) the *subtree* rooted at $u\in T$ is $$T{\mathord{\restriction} u} = \Set{ v\in\{0,1\}^\star | uv\in T }\,.$$ A binary tree $T$ is *regular* if it is a regular language. Due to the Myhill-Nerode theorem, this is equivalent to the fact that $T$ has (up to isomorphism) only finitely many distinct subtrees $T{\mathord{\restriction} u}$. To every tree language $L\subseteq{T_{\Sigma\ifthenelse{\equal{\empty}{\empty}}{}{,\empty}}}$ we assign a binary tree $$T(L) = \bigcup_{t\in L} {\operatorname{dom}}(t)\,.$$ For every regular tree language $L\subseteq{T_{\Sigma\ifthenelse{\equal{\empty}{\empty}}{}{,\empty}}}$ the binary tree $T(L)$ is regular. Let ${\mathcal{A}}$ be a tree automaton recognising $L$. For each $u\in T(L)$ let $$Q(u) = \set{ {\mathcal{A}}(t,u) | t\in L }\,.$$ It is easy to see that $Q(u)=Q(v)$ implies $T(L){\mathord{\restriction} u}=T(L){\mathord{\restriction} v}$. Thus, $T(L)$ is regular. A binary tree $T$ is called *${{\mathfrak{T}}_{2\ifthenelse{\equal{\empty}{\empty}}{}{,\empty}}}$-free* if ${{\mathfrak{T}}_{2\ifthenelse{\equal{\empty}{\empty}}{}{,\empty}}}$ cannot be embedded into $T$, i.e., there is no injection $f\colon {{\mathfrak{T}}_{2\ifthenelse{\equal{\empty}{\empty}}{}{,\empty}}}\to T$ such that $u\preceq v$ iff $f(u)\preceq f(v)$ for all $u,v\in{{\mathfrak{T}}_{2\ifthenelse{\equal{\empty}{\empty}}{}{,\empty}}}$. An *infinite branch* of a binary tree $T$ is an infinite subset $P\subseteq T$ which is prefix-closed and linearly ordered by $\preceq$. The *derivative* of $T$ is the set $d(T)$ of all $u\in T$ which are contained in at least two distinct infinite branches of $T$. Clearly, $d(T)$ is a binary tree. For $n\in{\mathbb{N}}$ let $d^{(n)}(T)$ be the $n^{\mathrm{th}}$ derivation of $T$, i.e., $d^{(0)}(T)=T$ and $d^{(n)}(T)=d\bigl(d^{(n-1)}(T)\bigr)$ for $n>0$. Whenever $T$ is regular there exists an $n\in{\mathbb{N}}$ such that $d^{(n)}(T)=d^{(k)}(T)$ for all $k\geq n$ and $d^{(n)}(T)$ is finite precisely if $T$ is ${{\mathfrak{T}}_{2\ifthenelse{\equal{\empty}{\empty}}{}{,\empty}}}$-free [@KRS05]. Let $T$ be a regular, ${{\mathfrak{T}}_{2\ifthenelse{\equal{\empty}{\empty}}{}{,\empty}}}$-free binary tree. 
The *${\operatorname{CB}}_*$-rank* of $T$, denoted by ${\operatorname{CB}}_*(T)$, is the least $n\in{\mathbb{N}}$ such that $d^{(n)}(T)$ is finite.[^4] Clearly, $d(T{\mathord{\restriction} u}) = d(T){\mathord{\restriction} u}$ and hence ${\operatorname{CB}}_*(T{\mathord{\restriction} u})\leq{\operatorname{CB}}_*(T)$ for all $u\in T$. A tree-automatic presentation $\bigl({\mathcal{A}};({\mathcal{A}}_R)_{R\in{\mathcal{R}}}\bigr)$ is *${{\mathfrak{T}}_{2\ifthenelse{\equal{\empty}{\empty}}{}{,\empty}}}$-free* if $T\bigl(L({\mathcal{A}})\bigr)$ is ${{\mathfrak{T}}_{2\ifthenelse{\equal{\empty}{\empty}}{}{,\empty}}}$-free and then its *rank* is the ${\operatorname{CB}}_*$-rank of $T\bigl(L({\mathcal{A}})\bigr)$.[^5] Obviously, the structures which admit a ${{\mathfrak{T}}_{2\ifthenelse{\equal{\empty}{\empty}}{}{,\empty}}}$-free tree-automatic presentation of rank $0$ are precisely the finite structures. Furthermore, it can be shown that the structures which admit a presentation of rank at most $1$ are exactly the string-automatic structures.[^6] ${{\mathfrak{T}}_{2\ifthenelse{\equal{\empty}{\empty}}{}{,\empty}}}$-Free Tree-Automatic Presentation of Linear Orderings ------------------------------------------------------------------------------------------------------------------------- The following is the main result of this section. \[thm:main\_bounded\_rank\] Let ${\mathfrak{A}}$ be a linear ordering which admits a ${{\mathfrak{T}}_{2\ifthenelse{\equal{\empty}{\empty}}{}{,\empty}}}$-free tree-automatic presentation of rank $k\geq 1$. Then $${\operatorname{FC}}({\mathfrak{A}}) < \omega^k\,.$$ \[cor:main\_ordinal\_bounded\_rank\] An ordinal $\alpha$ admits a ${{\mathfrak{T}}_{2\ifthenelse{\equal{\empty}{\empty}}{}{,\empty}}}$-free tree-automatic presentation of rank at most $k$ if, and only if, $$\alpha < \omega^{\omega^k}\,.$$ As direct consequence of this corollary and Corollary \[cor:main\_ordinal\], every tree-automatic ordinal already admits a ${{\mathfrak{T}}_{2\ifthenelse{\equal{\empty}{\empty}}{}{,\empty}}}$-free tree-automatic presentation. In fact, Jain, Khoussainov, Schlicht, and Stephan [@JKS12] recently showed that every tree-automatic presentation of an ordinal—or more generally, of a scattered linear ordering—is ${{\mathfrak{T}}_{2\ifthenelse{\equal{\empty}{\empty}}{}{,\empty}}}$-free. The proof of Theorem \[thm:main\_bounded\_rank\] works by more detailed inspection of the proofs of Theorem \[thm:delhomme\], Corollary \[cor:delhomme\_indecomposability\], and Theorem \[thm:main\] in combination with the following lemma. \[lemma:anti\_chains\_bounded\] Let $T$ be a regular, ${{\mathfrak{T}}_{2\ifthenelse{\equal{\empty}{\empty}}{}{,\empty}}}$-free binary tree. Then there exists a constant $C\in{\mathbb{N}}$ such that any anti-chain $A\subseteq T$ contains at most $C$ elements $u$ with ${\operatorname{CB}}_*(T{\mathord{\restriction} u})={\operatorname{CB}}_*(T)$. If ${\operatorname{CB}}_*(T)=0$ then $T$ is finite and the claim is trivially satisfied. Thus, assume ${\operatorname{CB}}_*(T)=k>0$. Let $n\in{\mathbb{N}}$ be the *index* of $T$, i.e., the size of the set $\set{ T{\mathord{\restriction} u} | u \in T }$. We show that $C=2^n$ is a possible choice. By contradiction, suppose there is an anti-chain $A$ consisting of $2^n+1$ elements $u\in T$ satisfying ${{\operatorname{CB}}_*(T{\mathord{\restriction} u})=k}$. Let $B$ be the set of all $v\in T$ which are the longest common prefix of two distinct elements from $A$. Then $B$ contains exactly $2^n$ elements. 
For every $u\in A$ the set $d^{(k-1)}(T{\mathord{\restriction} u})=d^{(k-1)}(T){\mathord{\restriction} u}$ is infinite. By König’s lemma, there exists an infinite branch of $d^{(k-1)}(T)$ containing $u$. Thus, $B\subseteq d^{(k)}(T)$. For every $v\in d^{(k)}(T)$ it holds that $d^{(k)}(T){\mathord{\restriction} v}=d^{(k)}(T{\mathord{\restriction} v})$ and hence the index of $d^{(k)}(T)$ is at most $n$. Since $d^{(k)}(T)$ contains at least $2^n$ elements, a simple pumping argument shows that $d^{(k)}(T)$ is infinite. But this contradicts ${\operatorname{CB}}_*(T)=k$. Now, we are in a position to show the main result of this section. We show the claim by induction on $k\geq 1$. Therein, we use the induction hypothesis only in the following restricted form: Every scattered linear ordering ${\mathfrak{A}}$ which admits a ${{\mathfrak{T}}_{2\ifthenelse{\equal{\empty}{\empty}}{}{,\empty}}}$-free tree-automatic presentation of rank $k\geq0$ satisfies ${\operatorname{VD}}_*({\mathfrak{A}})<\omega^k$. For $k\geq 1$ this assertion easily follows from ${\operatorname{VD}}({\mathfrak{A}})={\operatorname{FC}}({\mathfrak{A}})<\omega^k$. ##### Base case. {#base-case.-3 .unnumbered} $k=0$.\ Since any structure which admits a ${{\mathfrak{T}}_{2\ifthenelse{\equal{\empty}{\empty}}{}{,\empty}}}$-free tree-automatic presentation of rank $0$ is finite, every such scattered linear ordering ${\mathfrak{A}}$ trivially satisfies ${\operatorname{VD}}_*({\mathfrak{A}})=0<\omega^0$. ##### Inductive step. {#inductive-step.-3 .unnumbered} $k\geq 1$.\ By contradiction, assume there exists a tree-automatic linear ordering ${\mathfrak{A}}$ which admits a ${{\mathfrak{T}}_{2\ifthenelse{\equal{\empty}{\empty}}{}{,\empty}}}$-free tree-automatic presentation $\bigl({\mathcal{A}};({\mathcal{A}}_R)_{R\in{\mathcal{R}}}\bigr)$ of rank $k$ and satisfies ${\operatorname{FC}}({\mathfrak{A}})\geq\omega^k$. To keep notation simple, we assume that the naming function $\mu\colon A\to L({\mathcal{A}})$ is the identity, i.e., ${\mathfrak{A}}$ is identified with its tree-automatic copy $\mu({\mathcal{A}})$. Let $C$ be the constant which exists by Lemma \[lemma:anti\_chains\_bounded\] for the binary tree $T(A)$. Moreover, let ${\mathcal{S}}_\phi^{{\mathfrak{A}}}$ be the set which is constructed in the proof of Theorem \[thm:delhomme\] from $\bar{{\mathcal{A}}}$ and the formula ${\phi(x,y_1,y_2) = y_1\leq x\land x\leq y_2}$. We show that ${\mathcal{S}}_\phi^{{\mathfrak{A}}}$ contains for each $n\in{\mathbb{N}}$ a scattered linear ordering ${\mathfrak{B}}$ with $\omega^{k-1} n<{\operatorname{VD}}_*({\mathfrak{B}})<\omega^k$. This contradicts the finiteness of ${\mathcal{S}}_\phi^{{\mathfrak{A}}}$ and proves the theorem. Therefore, consider some $n\in{\mathbb{N}}$. By Lemma \[lemma:scattered\_closed\_intervals\], there exists a scattered closed interval ${I=[a_1,a_2]_{{\mathfrak{A}}}}$ of ${\mathfrak{A}}$ with $a_1\leq a_2$ and ${\operatorname{VD}}_*(I)=\omega^{k-1}(nC+1)$. Now, we delve into the details of the proof of Theorem \[thm:delhomme\]. Since $I=\phi^{{\mathfrak{A}}}(\cdot,a_1,a_2)$ and $\omega^{k-1}(nC+1)$ is ${\operatorname{VD}}_*$-sum-indecomposable, there exists a $\sim_{(a_1,a_2)}$-class $B\subseteq I$ such that ${\operatorname{VD}}_*(B)=\omega^{k-1}(nC+1)$. Let $\vartheta=\bigl(t_D,U,(\rho_R)_{R\in\{\phi\}\uplus{\mathcal{R}}}\bigr)$ be the corresponding $(a_1,a_2)$-type, $u_1,\dotsc,u_r$ an enumeration of $U$, and ${\mathfrak{S}}_i={\mathfrak{S}}_{\gamma(\vartheta,u_i)}$ for each ${i=1,\dotsc,r}$. 
Notice that the ${\mathfrak{S}}_i$ are scattered linear orderings and form a tame box decomposition of ${\mathfrak{A}}{\mathord{\restriction} B}$. It is easy to see that $T(S_i)\subseteq T(A){\mathord{\restriction} u_i}$ and hence ${\operatorname{CB}}_*\bigl(T(S_i)\bigr)\leq k$ for each $i$. Since $U$ is an anti-chain in $T(A)$, equality holds true in at most $C$ cases. Without loss of generality, there exists a $p\leq C$ such that ${\operatorname{CB}}_*\bigl(T(S_i)\bigr)=k$ for $i\leq p$ and ${\operatorname{CB}}_*\bigl(T(S_i)\bigr)<k$ for $i>p$. By the restricted induction hypothesis, we obtain ${\operatorname{VD}}_*({\mathfrak{S}}_i)<\omega^{k-1}$ for $i>p$. If we had ${\operatorname{VD}}_*({\mathfrak{S}}_i)\leq\omega^{k-1}n$ for each ${i=1,\dotsc,p}$, then $$\underbrace{{\operatorname{VD}}_*({\mathfrak{S}}_1)\oplus\dotsb\oplus{\operatorname{VD}}_*({\mathfrak{S}}_p)}_{\leq\omega^{k-1}np}\oplus \underbrace{{\operatorname{VD}}_*({\mathfrak{S}}_{p+1})\oplus\dotsb\oplus{\operatorname{VD}}_*({\mathfrak{S}}_r)}_{<\omega^{k-1}} < \omega^{k-1}(nC+1)\,.$$ This would contradict Proposition \[prop:VD\_box\] and hence there exists a ${j\in\{1,\dotsc,p\}}$ with ${{\operatorname{VD}}_*({\mathfrak{S}}_j)>\omega^{k-1}n}$. Since ${\mathfrak{S}}_j$ can be embedded into ${\mathfrak{A}}{\mathord{\restriction} B}$, we further obtain $${\operatorname{VD}}_*({\mathfrak{S}}_j)\leq\omega^{k-1}(nC+1)<\omega^k\,.\qedhere$$ In order to verify Corollary \[cor:main\_ordinal\_bounded\_rank\] we still have to prove that every ordinal $\alpha<\omega^{\omega^k}$ admits a ${{\mathfrak{T}}_{2}}$-free tree-automatic presentation of rank at most $k$. The “only if”-part follows directly from Theorem \[thm:main\_bounded\_rank\] and we only need to show the “if”-part. For $k=0$ the claim is trivial since each ordinal $\alpha<\omega$ is finite. Thus, assume $k>0$ and consider some $\alpha<\omega^{\omega^k}$. There exists an $n\in{\mathbb{N}}$ such that $\alpha<\omega^{\omega^{k-1}n}$. The ordinal $\omega^{\omega^{k-1} n}$ can be regarded as the lexicographically ordered set of all $n$-tuples of elements from $\omega^{\omega^{k-1}}$. Let $\bar{{\mathcal{A}}}$ be the tree-automatic presentation of $\omega^{\omega^{k-1}}$ which was constructed in Lemma \[lemma:ordinals\_are\_tree\_automatic\] and $\nu\colon A\to{T_{\Sigma}}$ the corresponding naming function. A closer look at the induction in the proof of Lemma \[lemma:ordinals\_are\_tree\_automatic\] reveals that $\bar{{\mathcal{A}}}$ is ${{\mathfrak{T}}_{2}}$-free and of rank $k$. The map $\mu\colon\omega^{\omega^{k-1}n}\to{T_{\Sigma_\Box^n}}$ with $$\mu(\beta_1,\dotsc,\beta_n)=\otimes\bigl(\nu(\beta_1),\dotsc,\nu(\beta_n)\bigr)$$ can be used as a naming function for a ${{\mathfrak{T}}_{2}}$-free tree-automatic presentation of rank $k$ of $\omega^{\omega^{k-1}n}$. Finally, $\alpha$ is ${\mathsf{FO}}$-definable with one parameter in $\omega^{\omega^{k-1}n}$ and hence admits a ${{\mathfrak{T}}_{2}}$-free tree-automatic presentation of rank $k$ as well. [^1]: Recently, Jain, Khoussainov, Schlicht, and Stephan [@JKS12] independently from us obtained results which verify this conjecture as well. [^2]: By convention, structures are named in Fraktur and their universes by the same letter in Roman.
[^3]: Roughly speaking, the branching complexity is bounded if the infinite full binary tree cannot be embedded and is measured in terms of the Cantor-Bendixson rank. [^4]: In fact, ${\operatorname{CB}}_*$ is a variation of the Cantor-Bendixson rank which was adapted to trees in [@KRS05]. [^5]: In [@BGR11] the authors speak of *bounded-rank tree-automatic presentations*. Their notion of *rank* is defined differently, but can be shown to be equivalent to ours. [^6]: String-automatic structures are defined like tree-automatic structures but with finite words and finite automata instead of trees and tree automata.
--- author: - 'J. Peña-Rodríguez,' - 'J. Pisco-Guabave' - 'D. Sierra-Porta' - 'M. Suárez-Durán' - 'M. Arenas-Flórez' - 'L. M. Pérez-Archila' - 'J. D. Sanabria-Gómez' - 'H. Asorey' - 'L. A. Núñez.' bibliography: - 'MuTe\_bib.bib' title: 'Design and construction of MuTe: a hybrid Muon Telescope to study Colombian Volcanoes' --- Introduction {#sec:intro} ============ Muography or muon radiography is a non-invasive technology whose primary purpose is to obtain digital images from the density contrasts due to the different inner structures of objects by analyzing the atmospheric muon flux transmitted through them [@Kaiser2019; @Bonomi2020; @Bonechi2020]. Nowadays there are several emerging academic and commercial applications such as the detection of hidden materials in containers[@Blanpied2015], archaeological building scanning[@Morishima2017; @GomezEtal2016], nuclear plant inspection[@Fujii2013], nuclear waste monitoring, underground cavities[@Saracino2017], the overburden of railway tunnels[@ThompsonEtal2019] and vulcanology applications (see, e.g.,[@TanakaOlah2019] and references therein). In Colombia, there are more than a dozen active volcanoes representing significant risks to the nearby population[@Cortes2016; @Agudelo2016; @Munoz2017]. This motivated local research groups to explore possible applications of the muography technique[@AsoreyEtal2017B; @SierraPortaEtal2018; @PenaRodriguezEtal2018; @GuerreroEtal2019; @ParraAvila2019; @PenaRodriguez2019]. Muons are elementary particles, two hundred times heavier than electrons and with a lifetime of approximately $2.2~\mu$s. They are produced by the interaction of particles reaching the Earth’s atmosphere from galactic and extragalactic sources. These extraterrestrial impinging particles – called cosmic rays– generate showers of secondary particles with a significant presence of muons, produced by decaying charged pions and kaons. The energy spectrum of muons at sea level has a maximum at around $4$ GeV with a flux of $\sim 1$ cm$^{-2}$ min$^{-1}$ [@nakamura2010review]. Despite a great deal of work in these areas, some particular problems exist and are still being addressed today: - The low muon flux across the scanned object due to: (a) the muon flux diminishes with higher zenith angles; and (b) the flux is reduced ($\sim$two orders) in crossing a $1$ km path-length of standard rock [@groom2001muon; @groom2000passage]. - The background produced by charged particles: upward charged particles [@jourde2013experimental], Extensive Air Showers (EAS) [@nishiyama2014experimental; @Olah2017ICRC; @KUSAGAYA2015; @Bene2013; @Olh2017], and scattered low momentum muons ($< 1$ GeV/c) [@nishiyama2016monte; @Gomez2017; @Olh2018; @Olah2018Invest; @ambrosino2015joint]. These particles cause an overestimation of muon flux with a corresponding underestimation of the density distribution inside the volcano [@carbone2013experiment; @nishiyama2016monte]. The Colombian Muon Telescope [@AsoreyEtal2017B; @SierraPortaEtal2018] is a hybrid instrument suitable for different geophysical scenarios. MuTe employs a hodoscope manufactured of plastic scintillator bars to determine the direction of particles impinging the detector. Additionally, MuTe incorporates particle-identification techniques for reducing the background noise sources[@Bonechi2020; @PenaRodriguez2019]. A Water Cherenkov Detector (WCD) measures the energy loss of charged particles filtering the noise due to the soft-component of EAS (electrons and positrons), and of particles arriving simultaneously. 
Discrimination of fake events due to scattered and backwards muons is addressed using a picosecond Time-of-Flight system. In this paper, we report the main features of the Colombian muon telescope, which considers the signal-to-noise problems mentioned above. In section \[detector\] we present the MuTe features, while section \[mechanical\] describes the mechanical response of MuTe to environmental field conditions. Section \[daq\] examines the data acquisition system, trigger mechanism and power consumption of MuTe; while in section \[measurement\], we present the first MuTe flux data, with an estimation of the background obtained with the WCD. Finally, in section \[conclusions\] conclusions and final remarks are presented. The hybrid detector {#detector} =================== In this section, we present the most significant characteristics of our muon telescope, MuTe: the event tracking and the signal-to-background discrimination. Our hybrid instrument consists of two independent detectors: a scintillator hodoscope and a WCD. A similar hybrid measurement technique has been previously implemented in the Pierre Auger Observatory to study the composition of primary cosmic rays [@aab2017muon; @aab2016prototype]. Scintillator hodoscope: the tracking device ------------------------------------------- The MuTe hodoscope consists of two detection panels, each with $60$ scintillator bars, separated from each other by a configurable distance, which is typically $\sim 250$ cm. Each plastic scintillator strip is made of a polystyrene base of Styron $665$-W doped with $1\%$ of $2.5$-diphenyloxazole (PPO) and $0.03\%$ of $1.4$-bis-\[2-(5-phenyloxazolyl)\]-benzene (POPOP), co-extruded with a $0.25$ mm thick high reflectivity coating of $\text{TiO}_{\text{2}}$ [@PlaDalmau2003]. A $1.2$ mm diameter wavelength shifting fiber (Saint-Gobain BCF-92) is placed inside a co-extruded hole ($1.8$ mm diameter) in the scintillator strip. The fibre has a core refraction index of $1.42$, an absorption peak at $410$ nm, and an emission peak at $492$ nm. It captures the photons produced by the impinging charged particles and carries them to a Hamamatsu (S13360-1350CS) Silicon Photomultiplier (SiPM). The WLS fibre is attached to the SiPM sensitive area using a mechanical coupling (See figure \[fig:frame\]). The scintillator panels are build up as an array consisting each of $30$ horizontal (X) and $30$ vertical (Y) bars, each comprising $900$ pixels of $16$ cm$^2$ active area. The two X and Y scintillator layers are mounted inside a $0.9$ mm thick stainless steel box, with the SiPM electronics, the temperature/pressure sensors (HP03), and the coaxial cables for signal transmission. A second steel housing encloses the electronics readout, the ToF system, and the power supply, keeping all the components insulated and protected from environmental conditions. The hodoscope reconstructs the arrival direction of muons by taking into account the pair of pixels activated in each panel. The total aperture angle and the angular resolution of the telescope varies by changing the distance between the scintillator panels. 
The distance between the detector and the volcanic structure as well as the separation of the panels define the spatial resolution of this telescope as $$\Delta x=L\times\Delta\theta=L\times \arctan{\frac{2dD}{D^2+4d^2i(i+1)}},$$ where $\Delta\theta$ is the angular resolution, $L$ is the distance to the target, $D$ is the separation between the panels, $d$ is pixel width, and $i$ represents the $i$th illuminated pixel. For instance, for inter-panel distances of $150$ cm, $200$ cm and $250$ cm, the total angular aperture is $1.3$ rad, $1.1$ rad and $0.9$ rad giving angular resolutions of $53$ mrad, $40$ mrad and $32$ mrad, respectively. Taking into account a distance to the volcano of $900$ m, from the previous angular resolutions we obtain spatial resolutions of $48$ m, $36$ m and $28$ m respectively. The multiple scattering of muons in the rock and the air worsens the effective spatial resolution of the telescope. The maximum scattering angle estimated is $\sim~1.5^{\circ}$ taking into account the minimum energy needed by muons to cross standard rock path-lengths between $10$ m and $1000$ m. Nevertheless, muons must have extra energy to reach the detector after leaving the scanned object. Those emerging from the scanned object with an energy $>1$ GeV have a scattering angle $\sim~1^{\circ}$ [@Suarez2019], generating a blurring of $\sim$ $31.4$ m for $L=900$ m. The exposure time of MuTe to achieve a sensible contrast is $\sim$100 days [@AsoreyEtal2017B]. ![Mounting and coupling details for one detection panel. (Left) Mechanical assembly of the scintillator bar, the Saint-Gobain BCF-92 WLS fiber and the Hamamatsu S13360-1350CS SiPM. (Right) Scintillator panel with the SiPM electronics front-end and signal transmission cables (coaxial RG-174U).[]{data-label="fig:frame"}](Figures/panel.png){width="1\columnwidth"} Telescope acceptance -------------------- The acceptance of the instrument affects the measured particle flux depending on the telescope’s geometric parameters: number of pixels in the panel ($N_x~\times~N_y$), pixel size ($d$) and inter-panel distance ($D$). The number of detected muons $N(\varrho)$ [@LesparreEtal2010], can be defined as $$N(\varrho)=\Delta T~\times~\mathcal{T}\times I(\varrho), \label{Nmuons}$$ where $I(\varrho)$ is the integrated flux (measured in cm$^{-2}$ sr$^{-1}$ s$^{-1}$), $\mathcal{T}$ the acceptance function (measured in cm$^{2}$ sr), and $\Delta T$ the recording time. The integrated flux depends on the opacity ($\varrho$): the amount of matter crossed by the muons. ![Angular resolution (left), and acceptance function (right) for the MuTe hodoscope with $N_x=N_y=30$, $d=4$ cm and $D=250$ cm versus incoming direction. The maximum solid angle is $1.024~\times~10^{-3}$ sr for perpendicular trajectories where the acceptance rises up to $3.69$ cm$^{2}$ sr.[]{data-label="fig:acceptance"}](Figures/Solid_angle.png "fig:"){width="0.48\columnwidth"} ![Angular resolution (left), and acceptance function (right) for the MuTe hodoscope with $N_x=N_y=30$, $d=4$ cm and $D=250$ cm versus incoming direction. The maximum solid angle is $1.024~\times~10^{-3}$ sr for perpendicular trajectories where the acceptance rises up to $3.69$ cm$^{2}$ sr.[]{data-label="fig:acceptance"}](Figures/Aceptancia.png "fig:"){width="0.48\columnwidth"} For a particular trajectory $r_{m,n}$ displayed by a pair of illuminated pixels on both panels, one can calculate the solid angle $\delta\Omega(r_{m,n})$ and the detection area $S(r_{m,n})$. 
All pairs of pixels with the same relative position, [$m=i-k$, $n=j-l$]{}, share the same direction, $r_{m,n}$, and the same $\delta\Omega(r_{m,n})$. This means that directions normal to the hodoscope plane have the largest detection area, while directions crossing corner-to-corner have a smaller solid angle and detection surface. The acceptance is obtained [@LesparreEtal2010] by multiplying the detection area by the angular resolution, $$\mathcal{T}(r_{m,n})=S(r_{m,n})\times \delta\Omega(r_{m,n}).$$ A hodoscope with two matrices of $N_x\times N_y$ pixels has $(2N_x-1)\times(2N_y-1)$ discrete directions $r_{m,n}$, spanning a solid angle $\Omega$. Our telescope, equipped with $900$ pixels in each panel, is able to reconstruct $3481$ discrete directions. In figure \[fig:acceptance\], we show the angular resolution and acceptance function for the MuTe hodoscope with $N_x=N_y=30$ scintillator bars, pixel size ($d=4$ cm) and $D=250$ cm. The total angular aperture of the telescope with that configuration is roughly $50^{\circ}$ ($0.9$ rad) with a maximum solid angle of $1.024\times 10^{-3}$ sr at $r_{0,0}$ corresponding to the largest acceptance of $\approx 3.69$ cm$^{2}$ sr. The MuTe acceptance may be compared with other muon telescopes mentioned in the literature: a) $N_x=N_y=12$, $d=7$ cm and $D=100$ cm; with an acceptance $\mathcal{T}=30$ cm$^{2}$ sr and angular resolution $< 0.018$ sr [@UchidaTanakaTanaka2009space]. b) $N_x=N_y=16$, $d=5$ cm and $D=80$ cm; with an acceptance $\mathcal{T}=25$ cm$^{2}$ sr and angular resolution $< 0.015$ sr [@LesparreEtal2010]. Time-of-Flight: momentum measurement ------------------------------------ Time-of-Flight methods have been applied in muography to distinguish backward moving particles from the incident ones [@jourde2013experimental]. Particles entering from the rear side of the detector represent roughly $44\%$ of the background noise for zenith angles above $81^{\circ}$ [@nishiyama2016monte]. MuTe performs ToF measurements for identifying backward particles as well as low momentum ($< 1$ GeV/c) muons which are scattered by the volcano and contribute to the background noise. The MuTe ToF system was implemented on a Field Programmable Gate Array (FPGA) employing a Time-to-Digital Converter with a time resolution of $\sim 40$ ps, which measures the time lapse of the crossing particles between the front and rear panels. Taking into account the ToF $t$ of a particle crossing the hodoscope along a trajectory of length $d$, and the particle identification provided by the WCD (distinguishing muons from electrons/positrons), we can estimate the particle momentum as follows $$p = \frac{m_0 c d}{\sqrt{c^2t^2-d^2}} \, ,$$ where $m_0$ is the rest mass of the charged particle ($105.65$ MeV/c$^2$ for muons and $0.51$ MeV/c$^2$ for electrons/positrons) and $c$ the speed of light. The uncertainty in the momentum estimation depends on the error of the ToF measurement and the error of the trajectory length, as $$\sigma_p^2 = \left( \frac{\partial p}{\partial t} \right)^2 \sigma_t^2 + \left( \frac{\partial p}{\partial d} \right)^2 \sigma_d^2 \, ,$$ where $\sigma_t$ and $\sigma_d$ are the uncertainties of the ToF and of the trajectory length, respectively. Measuring the momentum allows us to set a threshold of $1$ GeV/c, above which the influence of noise due to soft muons is negligible [@nishiyama2016monte; @nishiyama2014experimental; @Olh2018; @Olh2017; @ambrosino2015joint]. In order to establish such a cutoff, we calculate the ToF resolution requirement, which depends on the momentum resolution we want to achieve.
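To make this requirement concrete, a short numerical sketch based on the momentum relation above is given below, assuming a straight $2.5$ m flight path (the typical inter-panel separation quoted earlier); the numbers are purely indicative.

```python
import numpy as np

C    = 0.299792458    # speed of light in m/ns
M_MU = 105.65         # muon rest mass in MeV/c^2

def tof_ns(p, d=2.5):
    """ToF (ns) of a muon of momentum p (MeV/c) over a straight path d (m),
    obtained by inverting p = m0*c*d / sqrt(c^2 t^2 - d^2)."""
    return (d / C) * np.sqrt(1.0 + (M_MU / p) ** 2)

t09, t10, t11 = tof_ns(900.0), tof_ns(1000.0), tof_ns(1100.0)
print(f"t(0.9 GeV/c) = {t09:.4f} ns")
print(f"t(1.0 GeV/c) = {t10:.4f} ns")
print(f"t(1.1 GeV/c) = {t11:.4f} ns")
print(f"dt for -0.1/+0.1 GeV/c: {1e3*(t09-t10):.1f} ps / {1e3*(t10-t11):.1f} ps")
```

The resulting spread of roughly $8$-$11$ ps is consistent with the $\sim 10$ ps figure adopted in what follows.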
For perpendicular tracks to the hodoscope plane, we need a ToF resolution of $10$ ps to differentiate muons with momentum $>~1$ GeV/c with an error of $\pm~0.1$ GeV/c. The ToF measurements must be compensate for the signal delay depending on the length of the transmission lines and the impinging point of the particle in the scintillator panels. We found the signal is delayed $5.05$ ns/m in the transmission lines and $77$ ps/cm in the scintillator bars. The estimated ToF resolution of MuTe is $\sim138$ ps establishing a momentum threshold from $0.4\pm~0.1$ GeV/c. The pixel-related error is $\sim89$ ps taking into account the pixel spatial resolution ($4$ cm/$\sqrt{12}$) and the scintillator bar delay ($77$ ps/cm). This value increases to $\sim97$ ps by adding in quadrature the electronics resolution ($40$ ps). The effective coincidence timing resolution for the hodoscope is the panel time resolution multiplied by $\sqrt{2}$ [@Moses2010]. Water Cherenkov Detector: deposited energy measurement ------------------------------------------------------ Water Cherenkov Detectors, widely used in cosmic-ray observatories, have high acceptance, reasonable efficiency, and $\sim 100\%$ duty cycle. These devices, implemented with few cubic meters of water and with one or more photomultiplier tubes (PMTs), record the Cherenkov radiation produced by charged crossing particles moving with a velocity greater than the speed of light in water. They are sensitive to the muonic and electromagnetic component of air showers [@Auger2015], and also detect –indirectly– high energy photons by pair production ($\gamma \rightarrow$ e$^{\pm}$) [@allard2007detecting; @allard2008use; @allekotte2008surface]. The MuTe’s WCD is a $3.2$ mm thick stainless steel cube of $1.2$ m sides, coated inside with Tyvek diffuser sheets, which enhance the reflectivity for the Cherenkov photons. An eight inch PMT (Hamamatsu R5912) with a quantum efficiency of $22\%$ at $390$ nm– acts as the photosensitive device. The number of photons detected by the PMT can be associated with the energy deposited by the crossing particle, allowing us to differentiate between muons and the electromagnetic component of EAS (photons, electrons, and positrons) [@Billoir2014]. The EM component is one of the most important noise sources in muography [@KUSAGAYA2015; @Nishiyama2014Noise; @Marteau2012Noise]. At ground level, the most probable muons ($\sim 4$ GeV) can traverse the whole WCD losing up to $240$ MeV ($2$ MeV/cm along $120$ cm) for perpendicular trajectories; while the most probable electrons ($\sim 20$ MeV) stop in $10$ cm of water losing $2$ MeV/cm [@groom2001muon; @groom2000passage; @lohmann1985energy; @olive2014passage; @Vasquez2018; @Motta2018]. The WCD detects charged particles coming from all directions due to its $2\pi$ acceptance with a deposited energy resolution of $\sim 0.72$ MeV and a measuring range from $50$ MeV (which is the typical electronic noise) up to $\sim 1.5$ GeV (which is the largest value that does not saturate the electronics readout).This property allows the MuTe to monitor the local variations of the secondary particle flux over time [@Leon2018] and also, to distinguish particles coming from the volcano direction by coincidence with the hodoscope trigger. Mechanical response {#mechanical} =================== Structural design ----------------- As shown in figure \[fig:Structure\], the hodoscope, the WCD, the electronic readout, and the central monitoring server are all mounted on a sturdy metallic structure. 
The frame consists of a $4.2$ m $\times~2.8$ m $\times~1.8$ m parallelepiped-shaped structure constructed of steel angles ASTM A-36 of $3.2$ mm thickness and mechanically attached with screws ($0.5$ inch diameter). The telescope can be raised up to $15^{\circ}$ with respect to the horizontal. ![MuTe lateral view. The WCD, placed on the centre of mass of the structure, benefits the angular elevation (maximum $15^{\circ}$, in $3^{\circ}$ steps). The inter-panel distance can vary from $40$ cm up to $250$ cm. The electronics front-end boxes and the PMT housing are isolated from rain and humidity.[]{data-label="fig:Structure"}](Figures/Detector.eps){width="1\columnwidth"} It is necessary to study the MuTe’s mechanical behaviour in the volcanic environment. In the following section, we analyze the stress load and the vibration of the instrument, by taking into account it’s structural design as well as the mechanical stresses due to tremors and wind conditions. Such simulations, using Finite Element Analysis (FEA), predict the physical behaviour of the instrument with linear/non-linear, and static/dynamic analysis capabilities. We employed the <span style="font-variant:small-caps;">Solidworks 3D CAD Modeling Software</span> with the package <span style="font-variant:small-caps;">Solidworks Simulation</span> for the structural analysis. Vibration analysis and tremor response -------------------------------------- We calculated the natural frequencies and vibration patterns of the instrument under the external influences of wind sources and typical volcano seismic activity which could affect the integrity of the telescope. Volcanic tremors and internal movements can be distinguished[@mcnutt1992volcanic; @londono2001spectral; @langer2006automatic; @chouet2003volcano]: - volcano-tectonic earthquakes associated with fracturing that occur in response to stress changes in the active areas due to fluid movements with frequency peaks between $2$ and $15$ Hz. - long period tremors with frequencies of $1$ to $2$ Hz, attributed to pressure changes in cracks, cavities and ducts. -------------- ------------------- ------------------------ -- -- [**Frequency**]{} [**Max. vibration**]{} [**Mode**]{} [**(Hertz)**]{} [**(Hertz)**]{} 1 1.6 0.01272 2 5.0149 0.0113 3 5.6445 0.0303 4 7.5633 0.0361 5 7.5702 0.0166 -------------- ------------------- ------------------------ -- -- : Vibration analysis of the instrument. The first column indicates the natural frequencies of the MuTe structure., while the second one shows the maximum vibration reaction when the structure is under resonance.[]{data-label="Table_nat_frec1"} As can be seen in table \[Table\_nat\_frec1\], the instrument undergoes negligible mechanical affectation due to displacements caused by tremors or other manifestations, inherent to volcanic environments having a frequency range from $1.6$ Hz to $7.5$ Hz. The structure’s reaction (vibration frequencies not exceeding $0.04$ Hz) guarantees its structural integrity against seismic events triggered by volcanic activity. Static and wind load -------------------- The aim of the static load analysis is to simulate the behaviour of the instrument against deformations that may lead to structure failure. The MuTe structure is mostly ASTM A-36 steel. The possible primary loads for the structure arise from two sources: the water volume inside the WCD ($\sim 1728$ Kg) and the metal frames for the scintillator panels ($\sim 70$ Kg each). The simulation mesh was $2.6~\times~10^6$ finite elements with $15 \pm 5$ mm size. 
Figure \[fig:stress\] (left) displays the simulation results with displacements ranging from $0$ mm up to $3.29$ mm with the maximum peak stress under the WCD. However, such deformations do not represent any considerable mechanical problem for the instrument. ![Left and right plates illustrate the stress graph resulting from the static and dynamic –wind action– load analysis, respectively. The maximum material deflection is about $3.29$ mm in the rear part of the WCD, which is under high pressure due to the water weight. The maximum wind pressure occurs in the front of the scintillator panels suffering a mechanical displacement about $1.63$ mm. To determine dynamical wind loads on the instrument, we used meteorological data from IDEAM. The maximum wind speed reported is $30$ m/s, with an occurrence probability of $4\%$[]{data-label="fig:stress"}](Figures/stress_graph.png "fig:"){width="0.48\columnwidth"} ![Left and right plates illustrate the stress graph resulting from the static and dynamic –wind action– load analysis, respectively. The maximum material deflection is about $3.29$ mm in the rear part of the WCD, which is under high pressure due to the water weight. The maximum wind pressure occurs in the front of the scintillator panels suffering a mechanical displacement about $1.63$ mm. To determine dynamical wind loads on the instrument, we used meteorological data from IDEAM. The maximum wind speed reported is $30$ m/s, with an occurrence probability of $4\%$[]{data-label="fig:stress"}](Figures/stress_graph_wind.png "fig:"){width="0.48\columnwidth"} To determine dynamical wind loads on the instrument, we use meteorological data from IDEAM and the maximum wind speed reported is $30$ m/s, with an occurrence probability of $4\%$. The right panel in figure \[fig:stress\] illustrates the structure stress due to wind load. The mechanical structure suffers displacements up to $1.63$ mm in the frontal part of the scintillator panel. However, this displacement is negligible, and the instrument will not experience significant deformations. Heat dissipation in the structure --------------------------------- We simulated the temperature distribution based on the thermal inputs (heat loads) and outputs (heat losses) by considering the conduction, convection and thermal irradiation due to the detector environment. Such processes include environmental temperature, solar radiation, wind cooling, and the heat dissipated by the electronics. This thermal analysis allowed us to understand the heat transfer along the structure and how it may affect the detector components. Instrument safety and reliability in the field is an essential factor to consider since the characteristics of many components (SiPMs, scintillation bars, etc.) depend on temperature. We performed the thermal structure analysis using the heat module of <span style="font-variant:small-caps;">Solidworks Modeling Software</span> with the parameters shown in Table \[instr\_mat\]. Again, IDEAM provided the average temperature, radiation and wind speed on the observation place. In figure \[fig:temp\_graph\] we see that the areas of maximum temperature in the detector, reaching $\sim 60^{\circ}$C, at the centre of the scintillation panels where the solar radiation heats a large surface, and an average of $23^{\circ}$C in the remaining structure. 
The WCD is a good heat dissipator due to its large metallic area and water content ($\sim 1.7$ m$^3$) attaining a maximum temperature of $40^{\circ}$C; while the front side of both panels have a lower temperature than the rear side since the wind flow generates a cooling process by convection. ----------------------- -------------------------- ------------------------ ----------------------------- Structure Material: AISI 1020 Sky temperature -10 $^{\circ}$C Model type: Linear elastic isotropic Electronic box WCD 5.2 W Thermal conductivity: 47 W/(m K) Gen. electronic box 12.5 W Specific heat: 420 J/(kg K) Electronic box Scint. 12.3 W Density: 7900 kg/m$^3$ Sun radiation 4500 Wh m$^{-2}$ day$^{-1}$ Cherenkov medium: Water Convection coeficient 10 W/(m$^2$ K) Model type: Linear elastic isotropic Mean enviroment temp. 16 $^{\circ}$C Thermal conductivity: 0.61 W/(m K) Base water temperature 10 $^{\circ}$C Specific heat: 4200 J/(kg K) Density: 1000 kg/m$^3$ ----------------------- -------------------------- ------------------------ ----------------------------- : Instrument materials and data used in the MuTe thermal analysis implemented by using the heat module of <span style="font-variant:small-caps;">Solidworks Modeling Software</span>. The simulation input consists of the heat transfer properties of the MuTe metallic structure, as well as, the heat sources surrounding the instrument which have environmental origin or caused by the electronics functioning. []{data-label="instr_mat"} ![Temperature distribution in the MuTe. The maximum heat area ($60^{\circ}$C) is on the rear side of the scintillator panels due to the solar radiation, while the front sides are cooled by the wind convection. The WCD has its heat-dissipation mechanism due to its water volume content.[]{data-label="fig:temp_graph"}](Figures/MuTe_Temp.png){width="1\columnwidth"} The temperature distribution allows us to identify the critical heat areas in the detector structure. Consequently, we can enhance heat dissipation, by shading the instrument from solar radiation as well as allowing air circulation for cooling by convection. Electronics readout {#daq} =================== The MuTe electronics has two main –independent but synchronized– readout systems: one for the hodoscope and one for the WCD. ![Diagram of a single scintillator panel. Signals from the SiPMs are read out by the MAROC3 board whose slow control parameters are handled by a Raspberry Pi 2. All the detected events are time-stamped and sent via ethernet to the central monitoring server. The master trigger measures the ToF of the crossing particle and notifies the event truthfulness.[]{data-label="fig:scintillatordetector1"}](Figures/DAQ.eps){width="0.8\columnwidth"} In the hodoscope, $120$ SiPMs Hamamatsu S13360-1350CS, with a gain of $\sim 10^6$ and a photo-detection efficiency of $40\%$ at $450$ nm– detect the light signals coming from the scintillator bars. Each SiPM has a pre-conditioning electronics for amplifying ($\times 92$) and enhancing the signal-to-noise ratio before the transmission. A multi-channel ASIC MAROC3 from Omega discriminates the $60$ signals after making a gain adjustment to reduce the bar response variability. An FPGA Cyclone III sets the MAROC3 slow control parameters (channel gains and discrimination thresholds) from Altera. We set a discrimination threshold of $8$ photo-electrons taking into account previous analysis of dark count, cross-talk and after-pulse of the SiPM S13360-1350CS [@Villafrades2020]. 
The SCB Raspberry Pi 2 records the data from the scintillation panels when a coincidence condition is fulfilled (See section \[trigger\]). Environmental data (temperature, barometric pressure, and power consumption) are also recorded for post-processing, status monitoring, and calibration procedures. On the other hand, the SBC controls the SiPMs bias voltage depending on the temperature via the programmable power supply C11204. The recorded events are individually time-stamped with a resolution of $10$ ns and synchronized using the PPS (Pulse Per Second) signal from a Venus GPS. A general diagram of the electronics readout for a single scintillator panel is shown in figure \[fig:scintillatordetector1\]. ![The diagram of the WCD DAQ system. The PMT and the bias electronics are inside the WCD. A 10 bits ADC digitizes signals from the PMT anode and last-dynode and stored in a hard disk join with temperature and barometric pressure data. An FPGA sets the acquisition parameters and the event time-stamp[]{data-label="fig:WCD"}](Figures/WCDDAQ.eps){width="0.9\columnwidth"} In the WCD, a PMT R5912 detects the Cherenkov light from the charged particles crossing the water volume. The PMT is biased through a tapered resistive chain by a high-voltage power supply EMCO C20 spanning $0$ to $2000$ V. The pulses from the anode and the last dynode –amplified $20$ times– are independently digitized by two $10$ bits ADCs with a sampling frequency of $40$ MHz. A $12-$sample vector stores the pulse shape in each channel, when the signal amplitude exceeds the discrimination threshold ($\sim 100$ ADC bins). Then, a temporal label with $25$ ns resolution concatenates the event information. The timestamp is synchronized with the PPS signal from a GPS Motorola OnCore. Temperature and barometric pressure data are also recorded for the off-line analysis and data correction. An FPGA Nexys II handles the tasks of thresholding, base-line correction, temporal labelling, and temperature-pressure recording. A third ADC channel digitizes a NIM signal coming from the hodoscope when an in-coincidence event occurs (See \[trigger\]). The acquisition parameters of the WCD DAQ system (shown in figure \[fig:WCD\] with discrimination thresholds and the PMT bias voltage) are set by an SCB Cubieboard 2. All the data is stored locally on an external hard disk. A local server collects data from the WCD and the hodoscope every $12$ hours for carrying out an *in-situ* analysis. The results are sent via the GSM network to a remote server, which updates the MuTe status on a monitoring web page. An intranet system connects the local server, the hodoscope, and the WCD, as shown in figure \[fig:power\]. Additionally, the MuTe enables a wireless access point for working locally from a laptop or any other mobile device. Triggering system {#trigger} ----------------- The MuTe triggering system determines event coincidence between the hodoscope and the WCD [@PenaRodriguez2019]. The Trigger $T1$ is individually enabled for rear and front hodoscope panels when the pulse amplitude exceeds the discrimination threshold value. This trigger signal splits into three sublevels: the Trigger P1-P2 for cross-checking events in-coincidence and ToF measurements, the Trigger P1 for starting the data transmission from the MAROC 3A to the SCB, and the Ext-Trigger for holding the information inside the MAROC 3A while it is read. 
To identify the position of the activated pixel, MuTe counts only the events activating a vertical and a horizontal bar per panel, called trigger T2. Coincident events between the front and the rear panel in a time window ($7$ to $12$ ns) classify as crossing particles, determine the trigger T3, and estimate the particle flux across the hodoscope. The coincidence window takes into account the time needed by a particle travelling at the speed of light through two paths: the shortest ($2.5$ m) and the longest ($3.5$ m). When the WCD detects a particle, the trigger T4 is activated. Next, trigger T5 determines coincidence of the events between the hodoscope and the WCD; this trigger is also called hybrid trigger (T5 = T3 **AND** T4). The NIM Trigger signal is digitized by the third ADC channel of the WCD for labelling the events in-coincidence with the hodoscope. All the time delays due to the transmission of the signals are considered for data analysis. ![General diagram of MuTe. The local server manages the data coming from the hodoscope and the WCD by intranet while a hard disk stores all the obtained information. Then, MuTe sends its operational status via GSM towards a remote server, and a WiFi connection link for local testing. The whole detector consumes $41.4$ W being the local server the major power load, dissipating ($\sim 12$ W) due to the operation of the hard disk, the intranet router, and the GSM transceiver.[]{data-label="fig:power"}](Figures/Total2.eps){width="0.8\columnwidth"} Power consumption and operating autonomy ---------------------------------------- Electrical power independence is a crucial parameter for our MuTe detector because it has to operate autonomously on a distant location. We designed a photovoltaic system taking into account the power requirement of all detector components. MuTe power supply has four photovoltaic panels of $100$W ($18$ V, $5.56$ A). This panel array provides the instrument with six days of continuous operation, which is the maximum number of consecutive cloudy days that occurred in the last $22$ years[^1]. Appendix \[ApA\] details the estimation of the power capacity and autonomy of the Colombian Muon Telescope and figure \[fig:power\] displays the power consumption data: $\sim 24$ W for the hodoscope, $\sim 5.2$ W for the WCD and $\sim 12.2$ W for the central monitoring server. The two hard disks –used to store $470$ MB per hour of data from the hodoscope and the WCD– are the devices with greater power consumption, but they provide almost six months of data-storage autonomy. First measurements {#measurement} ================== In the first measurements, the MuTe hodoscope was operated in the vertical direction recording the muon flux during $15$ hours. The average counting rate was $\sim~836.3$ event/h with a discrimination threshold of $8$ photo-electrons (MIP $\sim~16$ pe). The inter-panel distance was $134$ cm, the angular aperture $82^{\circ}$ and the maximum acceptance of $12.83$ cm$^{2}$ sr. To reconstruct the particle trajectories and the flux crossing through the hodoscope, we apply a four-bar activation condition: a pair XY in the front panel and a pair XY in the rear one. ![Particle count data recorded by the hodoscope operating in the vertical direction during $15$ hours with a separation of $134$ cm between panels. The vertical flux was $\sim 10.7~\times~10^{-3}$ cm$^{-2}$ sr$^{-1}$ s$^{-1}$. 
As expected, the flux decreases while the zenith angle increases: for a zenith angle of $41^{\circ}$ the flux is around $4.5~\times~10^{-3}$ cm$^{-2}$ sr$^{-1}$ s$^{-1}$, a half order of magnitude lower than the flux maximum.[]{data-label="fig:hits_15"}](Figures/Hits_15h.png "fig:"){width="0.48\columnwidth"} ![Particle count data recorded by the hodoscope operating in the vertical direction during $15$ hours with a separation of $134$ cm between panels. The vertical flux was $\sim 10.7~\times~10^{-3}$ cm$^{-2}$ sr$^{-1}$ s$^{-1}$. As expected, the flux decreases while the zenith angle increases: for a zenith angle of $41^{\circ}$ the flux is around $4.5~\times~10^{-3}$ cm$^{-2}$ sr$^{-1}$ s$^{-1}$, a half order of magnitude lower than the flux maximum.[]{data-label="fig:hits_15"}](Figures/Flux.png "fig:"){width="0.49\columnwidth"} In figure \[fig:hits\_15\], we display the number of hits and the particle flux recorded by the hodoscope. The maximum count was $\sim$ 67 for straight trajectories ($\theta_x=\theta_y=0^{\circ}$). The number of counts decreases for non-perpendicular trajectories due to the hodoscope acceptance and the muon flux, which is modulated by the zenith angle ($\cos^2 \theta$). The estimated flux reaches a maximum of $10.7~\times~10^{-3}$ cm$^{-2}$sr$^{-1}$s$^{-1}$ which is comparable with the flux of $9~\times~10^{-3}$ cm$^{-2}$ sr$^{-1}$ s$^{-1}$, reported in reference [@Lesparre2012] . The variance in the flux histogram can be reduced by increasing the acquisition time. Later, the MuTe was set outdoors pointing in the horizontal direction (0$^{\circ}$ elevation) as shown in figure \[fig:WCDHod\]. The WCD and the hodoscope each detect individually but synchronized in time. The in-coincidence flux between both is two orders of magnitude lower than the events recorded by the WCD (see figure \[fig:WCDHod\_rate\]) representing only 2$\%$. This reduction in flux is due, mainly, to: a two order of magnitude decrease in the muon flux between the maximum at 0° zenith and at quasi-horizontal angles; and, furthermore, the angular acceptance of the WCD is roughly 2$\pi$ while that of the hodoscope is only a fraction due to its geometry. ![The MuTe setup for the first field measurements. The detector is pointing towards the horizon with an elevation angle of $0^{\circ}$. The aperture of the hodoscope $\theta_H$ is $50^{\circ}$ for a separation distance between panels of $250$ cm. The aperture of the whole detector (WCD + hodoscope) $\theta_C$ is roughly $32^{\circ}$.[]{data-label="fig:WCDHod"}](Figures/Acceptance.eps){width="0.7\columnwidth"} The energy deposited in the WCD (blue) and for the in-coincidence events between both WCD and hodoscope figure \[fig:WCDHod\_rate\], emerge from three main sources: muons, electron/positrons and multiple particle events. The muonic component represents roughly $33.6\%$ of the events ($180$ MeV$<~E_{loss}~<~400$ MeV), the electromagnetic $36$% ($E_{loss}~<~180$ MeV), and the multiple particle $30.4\%$ ($E_{loss}~>~400$ MeV) of the histogram. ![Energy deposited in the WCD for the omnidirectional events (blue) and for the in-coincidence events between both WCD and the hodoscope (red). The dashed line represents the deposited energy of the vertical muons (VEM) which is estimated to be $240$ MeV taking into account muon losses ($\sim 2$ MeV/cm in water). 
The first hump corresponds to the energy deposited by electrons, positrons and gammas while above $400$ MeV events correspond to multiple particles.[]{data-label="fig:WCDHod_rate"}](Figures/WCDHod.eps){width="0.7\columnwidth"} These results show that the background (electromagnetic and multiple particles) is comparable to the signal, even greater taking into account that soft muons have not been extracted from the muonic component. On the other hand, multiple particle background made up by several particles temporally correlated, e.g. inclined cosmic showers impacting the detector [@Bonechi2019], become more significant comparable in magnitude to the electromagnetic and muonic humps. ![In-coincidence event rate detected by the front (blue), rear (green) and the WCD (red). The WCD rate is $50\%$ lower than the detected by the panels due to due to its smaller acceptance angle than that of the hodoscope. The dashed line indicates the expected WCD rate taking into account the ratio between the angular apertures $\theta_H$ and $\theta_C$.[]{data-label="fig:RateWCDH"}](Figures/HodWCDRate.eps){width="0.8\columnwidth"} The aperture angle of the hodoscope $\theta_H$ at an inter-panel distance of $2.5$ m is around $50^{\circ}$, and the aperture of the whole detector (WCD + hodoscope) $\theta_C$ is roughly $32^{\circ}$. This means that several trajectories with high inclination angle will not be detected by the WCD, identifying only $\sim 62\%$ of the hodoscope events. In figure \[fig:RateWCDH\], we show the coincidence rate detected by the hodoscope planes and the WCD during $14$ hours. The mean rate for the panels is around $3.2$ events/s and for the WCD we have just $1.5$ events/s, which is $\approx 50\%$ the rate of the hodoscope and it is lower than expected ($\sim 2$ events/s) due to the detection efficiency. ![Preliminary ToF measurements. The left side shows the ToF of single crossing particles (blue) and the time difference of particles impinging individually each panel (red). The right side shows the ToF distribution (mean $\sim$9.3ns) for single crossing particles.[]{data-label="fig:ToF"}](Figures/signal_back.eps "fig:"){width="0.48\columnwidth"} ![Preliminary ToF measurements. The left side shows the ToF of single crossing particles (blue) and the time difference of particles impinging individually each panel (red). The right side shows the ToF distribution (mean $\sim$9.3ns) for single crossing particles.[]{data-label="fig:ToF"}](Figures/ToF.eps "fig:"){width="0.48\columnwidth"} In figure \[fig:ToF\], we present preliminary results of ToF measurements. The left plate shows the ToF of single particles crossing the hodoscope (blue) and the time difference between two particles impinging individually each panel (red). The probability that two particles impinge individually each hodoscope panel creating a ToF signal like a single particle is negligible ($\sim~0.05\%$) under $200$ ns. The right side shows ToF details for single particles. The mean ToF ($\sim9.3$ ns) coincides with the expected range ($8.3$-$11.6$ ns) whose limits were defined as the ToF of a relativistic particle crossing the shortest ($2.5$ m) and the largest ($3.5$ m) hodoscope path. The signal delay in the scintillator bars enlarges the ToF measured range ($2.53$-$20.9$ ns). 
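As a minimal cross-check of these numbers, the purely geometric limits of the ToF window and the broadening introduced by the bar-propagation delay can be reproduced directly (an illustrative sketch only):

```python
C = 0.299792458                      # speed of light in m/ns

d_min, d_max = 2.5, 3.5              # m, shortest and longest hodoscope paths
print(f"geometric ToF window: {d_min/C:.2f} - {d_max/C:.2f} ns")   # ~8.34 - 11.67 ns

# Up to ~120 cm of scintillator bar per panel at 77 ps/cm adds a channel-dependent
# offset of up to ~9.2 ns, consistent with the quoted 20.9 ns upper edge.
bar_offset = 0.077 * 120             # ns
print(f"maximum bar-propagation offset: {bar_offset:.1f} ns")
print(f"upper edge including offset: {d_max/C + bar_offset:.1f} ns")
```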
Some final remarks {#conclusions} ================== In this paper, we presented the structural –mechanical and thermal– simulations and the first calibration measurements of a hybrid muon telescope (scintillator hodoscope + WCD) designed to implement muography in the volcanoes of the Colombian Andes. Our instrument includes a hodoscope made by a pair of detection panels of plastic scintillator bars with an angular resolution of $32$ mrad for an inter-panel distance of $250$ cm. Furthermore, our design also incorporates particle identification techniques to filter the most common background noise sources in muography. A water Cherenkov detector allows to reduce noise signals coming from the soft-component of EAS (electrons and positrons), and multiple particle events using energy loss estimation. The WCD also detects fluctuations in the cosmic ray background at the observation place. Additionally, a picosecond Time-of-Flight system measures the direction and momentum of incident charged particles allowing the removal of backward and low momentum scattered muons. We estimate that MuTe can discriminate muons below $0.4~\pm~0.1$ GeV/c taking into account the $138$ ps ToF resolution. The background noise due to the electromagnetic component of EAS was estimated to be $36\%$ of the collected data, while events corresponding to backward, forward and low momentum muons are $\sim 33.6\%$. As displayed in figure \[fig:WCDHod\_rate\], the estimated multiple particle background was $30.4\%$ and the two-particle cases are the most probable in comparison with events involving several simultaneous particles. Such events release an average energy of $480$ MeV in the WCD. The integrated flux recorded by the hodoscope pointing at $90^{\circ}$ with an aperture of $82^{\circ}$ drops drastically about two orders of magnitude compared to the total flux registered by the WCD at the observation point. Such flux reduction allows the multiple particle background to become more significant, decreasing the detector signal-to-background ratio. After a complete analysis of the MuTe mechanical response corresponding to vibrations and tremors ranging from $1.6$ Hz to $7.5$ Hz, we found that MuTe structure would not undergo severe affectation. On the other hand, through the thermal simulations, we obtained that the maximum temperature in the hodoscope under the extreme Machín volcano environmental conditions was $60^{\circ}$C at the rear side of the scintillation panels. An average wind speed of $30$ m/s generates a convection process in the front side of the panels, causing a temperature drop to $23^{\circ}$C. The WCD temperature, regulated by its water content, reaches at most 40$^{\circ}$C. This thermal analysis was considered for optimizing the SiPM operation [@PenaRodriguez2020]. The angular resolution of MuTe ($32$ mrad) is similar to other experiments such as TOMUVOL ($8.7$ mrad) [@Crloganu2013], MU-RAY ($15$ mrad) [@Ambrosino2014], MURAVES ($8$ mrad) [@Cimmino2017], and DIAPHANE ($100$ mrad) [@Lesparre2012]. Moreover, for the filtering of backwards muons, our ToF system has a better resolution ($\sim 138$ ps) compared to MURAVES ($400$ ps) and DIAPHANE ($1$ ns) [@jourde2013experimental]. Our instrument incorporates particle identification techniques based on energy loss and momentum measurements to remove the background noise caused by low momentum muons, multiple particle events and electron/positrons. 
We gratefully acknowledge the observations, suggestions and criticisms for the anonymous referees, improving the precision, presentation and clarity of the present work. The authors express our gratitude for the financial support of Departamento Administrativo de Ciencia, Tecnología e Innovación of Colombia (ColCiencias) under contract FP44842-082-2015 and to the Programa de Cooperación Nivel II (PCB-II) MINCYT-CONICET-COLCIENCIAS 2015, under project CO/15/02. We are also very gratefull to LAGO and to the Pierre Auger Collaboration for their continuous support. The simulations in this work were partially possible due to the computational support of the Red Iberoamericana de Computación de Altas Prestaciones (RICAP, 517RT0529), co-funded by the Programa Iberoamericano de Ciencia y Tecnología para el Desarrollo (CYTED) under its Thematic Networks Call. We also thank the permanent cooperation from the Universidad Industrial de Santander (SC3UIS) High Performance and Scientific Computing Centre. Finally, we would like to acknowledge the Vicerrectoría Investigación y Extensión Universidad Industrial de Santander for its permanent sponsorship. Finally, DSP would like to thank the School of Physics, the Grupo de Investigación en Relatividad y Gravitación, Grupo Halley and Vicerrectoría Investigación y Extensión of the Universidad Industrial de Santander for the warm hospitality during my post-doctoral fellowship. Estimation of MuTe power storage capacity {#ApA} ========================================= The storage capacity required by the system in a day ($C_A$) is calculated using the modified Roger equation (\[cap\_alma\]) [@messenger2017photovoltaic], i.e. $$C_a=\frac{E_c(1+F_s)}{\eta_{pb}\, \eta_{cdb} \, \eta_{rc} \, \eta_{pc}\, D_{b}} \, , \label{cap_alma}$$ where $E_c$ is the load energy considering DC/DC converters, $F_s$ the scaling factor, $\eta_{pb}$ the efficiency of conductors, $\eta_{cdb}$ the efficiency of batteries, $\eta_{rc}$ the battery charge and discharge efficiency, $\eta_{pc}$ the charge controller efficiency, and $D_{b}$ corresponds to the battery depth of discharge. $E_c$ defined as follows $$E_c=E_x+\frac{E_{\gamma}}{\eta_{dcdc}} \, , \label{two}$$ where $E_{x}$ is the load energy which does not require DC/DC converters, $E_{\gamma}$ the load energy requiring DC/DC converters and, $\eta_{dcdc}$ the efficiency of the DC/DC converters. From (\[two\]) with $E_{x}=58.57$ Wh and $E_{\gamma}=746.24$ Wh, we have $E_c=844.08$ Wh. Thus, by using $\eta_{pb}=0.97$, $\eta_{cdb}=0.95$, $\eta_{rc}=0.95$, $\eta_{pc}=0.98$, $D_{b}=0.8$ and $F_s=0.2$ (20% oversizing), we obtained the value of $C_a=1472.75$ Wh, while the total storage capacity with which the system will count can be obtained by means of $$C_{at}=\frac{C_aD_a}{V_{nb}} \, ,$$ here $D_a$ is the total number of autonomy days and $V_{nb}$, is the nominal voltage of the battery bank. In this case, we set six days of autonomy with a nominal voltage of $12$ V, resulting in a total storage capacity of $C_{at}=736.38$ Ah. According to the criteria and environmental conditions presented above, we found the need for four batteries of $205$ Ah with a weight of $65$ Kg per battery and a discharge depth of $80\%$. [^1]: We use meteorological information (irradiance, temperature, and cloudiness) from NASA satellites <https://eosweb.larc.nasa.gov/> and from the Colombia Meteorology and Hydrology National Institute, i.e. in Spanish *Instituto de Hidrología, Meteorología y Estudios Ambientales* (IDEAM) <http://atlas.ideam.gov.co>
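For reference, the battery-bank sizing of Appendix \[ApA\] can be reproduced with a few lines. The DC/DC-converter efficiency $\eta_{dcdc}$ is not quoted explicitly in the text and is assumed here to be $0.95$, so the results agree with the quoted values only to within a fraction of a percent.

```python
# Sketch of the Appendix ApA battery sizing (load energy, daily capacity, total capacity).
eta_pb, eta_cdb, eta_rc, eta_pc = 0.97, 0.95, 0.95, 0.98   # efficiencies quoted in the text
D_b, F_s = 0.8, 0.2                                        # depth of discharge, 20% oversizing
E_x, E_gamma = 58.57, 746.24                               # Wh, loads without / with DC/DC converters
eta_dcdc = 0.95                                            # assumed value (not quoted explicitly)

E_c  = E_x + E_gamma / eta_dcdc                            # load energy
C_a  = E_c * (1.0 + F_s) / (eta_pb * eta_cdb * eta_rc * eta_pc * D_b)
C_at = C_a * 6 / 12.0                                      # six autonomy days, 12 V battery bank

print(f"E_c  = {E_c:8.2f} Wh   (text:  844.08 Wh)")
print(f"C_a  = {C_a:8.2f} Wh   (text: 1472.75 Wh)")
print(f"C_at = {C_at:8.2f} Ah   (text:  736.38 Ah)")
```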
--- abstract: 'We calculate the density distributions of protons and neutrons for $^{40,42,44,48}Ca$ in the framework of relativistic mean field (RMF) theory with the NL3 and G2 parameter sets. The microscopic proton-nucleus optical potential for the $p+^{40}Ca$ system is evaluated from the Dirac NN-scattering amplitude and the density of the target nucleus using the Relativistic-Love-Franey and McNeil-Ray-Wallace parametrizations. We then estimate the scattering observables, such as the elastic differential scattering cross-section, the analysing power and the spin observables, within the relativistic impulse approximation. We compare the results with the experimental data for some selected cases and find that the choice of density as well as of the scattering-matrix parametrization is crucial for the theoretical prediction.' address: - 'School of Physics, Sambalpur University, Jyotivihar-768 019, India' - 'Institute of Physics, Sachivalaya Marg, Bhubaneswar-751 005, India' author: - 'M. Bhuyan and S. K. Patra' title: Effects of density and parametrization on scattering observables --- Explaining nuclear structure with the tools of nuclear reactions is one of the most intriguing and challenging problems of Nuclear Physics, both in theory and in the laboratory. So far, the elastic Nucleon-Nucleus scattering reaction is more interesting than the Nucleus-Nucleus one at laboratory energy $E_{lab} \simeq$ 1000 MeV. The Nucleon-Nucleus interaction provides a fruitful source to determine the nuclear structure and a clear path toward the formation of exotic nuclei in the laboratory. One of the theoretical methods to study this type of reaction is the Relativistic Impulse Approximation (RIA). Over a wide energy range, the conventional impulse approximation [@fedd63; @mahu78] reproduces quantitatively the main features of quasi-elastic scattering for medium mass nuclei [@bala68; @glau55]. The observables of the elastic scattering reaction depend not only on the energy of the incident particle but also on the kinematic parameters as well as on the density distributions of the target nucleus. In the present letter, our motivation is to calculate the nucleon-nucleus elastic differential scattering cross-section ($\frac{d\sigma}{d\Omega}$) and other quantities, like the optical potential ($U_{opt}$), the analysing power ($A_y$) and the spin observables ($Q-$value), taking as input the relativistic mean field (RMF) and the recently proposed effective field theory motivated relativistic mean field (E-RMF) densities. The RMF and E-RMF densities are obtained from the most successful NL3 [@lala97] and advanced G2 [@tang96] parameter sets, respectively. As representative cases, we fold these target densities with the NN-amplitude of a 1000 MeV proton projectile using the Relativistic-Love-Franey (RLF) and McNeil-Ray-Wallace (MRW) parametrizations [@neil83] for $^{40,42,44,48}Ca$ in our calculations. The RMF and E-RMF theories are well documented [@tang96; @patra01a; @patra91] and for completeness we outline here very briefly the formalisms for finite nuclei. The energy density functional of the E-RMF model for finite nuclei is written as [@ser97; @fur96], $$\begin{aligned} \mathcal{E}(\mathbf{r}) = \sum_\alpha \varphi_\alpha^\dagger \Bigg\{ -i \mbox{\boldmath$\alpha$} \!\cdot\! \mbox{\boldmath$\nabla$} + \beta (M - \Phi) + W + \nonumber \\ \frac{1}{2}\tau_3 R + \frac{1+\tau_3}{2} A - \frac{i}{2M} \beta \mbox{\boldmath$\alpha$}\!\cdot\!
(f_v \mbox{\boldmath$\nabla$} W + \frac{1}{2}f_\rho\tau_3 \mbox{\boldmath$\nabla$} \nonumber \\ R + \lambda \mbox{\boldmath$\nabla$} A ) + \frac{1}{2M^2}\left (\beta_s + \beta_v \tau_3 \right ) \Delta A \Bigg\} \varphi_\alpha \nonumber \\ \null + \left ( \frac{1}{2} + \frac{\kappa_3}{3!}\frac{\Phi}{M} + \frac{\kappa_4}{4!}\frac{\Phi^2}{M^2}\right ) \frac{m_{s}^2}{g_{s}^2} \Phi^2 - \frac{\zeta_0}{4!} \frac{1}{ g_{v}^2 } W^4 \nonumber \\[3mm] \null + \frac{1}{2g_{s}^2}\left( 1 + \alpha_1\frac{\Phi}{M}\right) \left( \mbox{\boldmath $\nabla$}\Phi\right)^2 - \frac{1}{2g_{v}^2}\left( 1 +\alpha_2\frac{\Phi}{M}\right) \nonumber \\ \left( \mbox{\boldmath $\nabla$} W \right)^2 \null - \frac{1}{2}\left(1 + \eta_1 \frac{\Phi}{M} + \frac{\eta_2}{2} \frac{\Phi^2 }{M^2} \right) \frac{{m_{v}}^2}{{g_{v}}^2} W^2 - \frac{1}{2g_\rho^2} \nonumber \\ \left( \mbox{\boldmath $\nabla$} R\right)^2 - \frac{1}{2} \left( 1 + \eta_\rho \frac{\Phi}{M} \right) \frac{m_\rho^2}{g_\rho^2} R^2 \nonumber \\ \null - \frac{1}{2e^2}\left( \mbox{\boldmath $\nabla$} A\right)^2 + \frac{1}{3g_\gamma g_{v}}A \Delta W + \frac{1}{g_\gamma g_\rho}A \Delta R ,\end{aligned}$$ where the index $\alpha$ runs over all occupied states $\varphi_\alpha (\mathbf{r})$ of the positive energy spectrum, $\Phi \equiv g_{s} \phi_0(\mathbf{r})$, $W \equiv g_{v} V_0(\mathbf{r})$, $R \equiv g_{\rho}b_0(\mathbf{r})$ and $A \equiv e A_0(\mathbf{r})$. The terms with $g_\gamma$, $\lambda$, $\beta_{s}$ and $\beta_{v}$ take care of the effects related to the electromagnetic structure of the pion and the nucleon (see Ref. [@fur96]). The energy density contains tensor couplings, and scalar-vector and vector-vector meson interactions, in addition to the standard scalar self-interactions $\kappa_{3}$ and $\kappa_{4}$. Thus, the E-RMF formalism can be interpreted as a covariant formulation of density functional theory as it contains all the higher order terms in the Lagrangian, obtained by expanding it in powers of the meson fields. The terms in the Lagrangian are kept finite by adjusting the parameters. Further insight into the concepts of the E-RMF model can be obtained from Ref. [@fur96]. It may be noted that the standard RMF Lagrangian is obtained from that of the E-RMF by ignoring the vector-vector and scalar-vector cross interactions, and hence does not need a separate discussion. In each of the two formalisms (E-RMF and RMF), the set of coupled equations is solved numerically by a self-consistent iteration method and the baryon, scalar, isovector, proton, neutron and tensor densities are calculated. For the numerical procedure and the detailed equations for the ground-state properties of finite nuclei, we refer the reader to Refs. [@patra91; @patra01a]. The densities obtained from RMF (NL3) [@lala97] and E-RMF (G2) [@tang96] are used for folding with the NN-scattering amplitude at $E_{lab}=1000$ MeV, which gives the proton-nucleus complex optical potential for the RMF and E-RMF formalisms. The RIA involves mainly two steps of calculation [@furn87; @pard83]. First, five Lorentz covariant functions multiply the so-called Fermi invariant Dirac matrices (the NN-scattering amplitudes). These NN-amplitudes are then folded with the target proton and neutron densities to produce a first-order complex optical potential $U_{opt}$.
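Schematically, the folding in the second step amounts to multiplying each invariant amplitude by the momentum-space form factor of the corresponding target density (a first-order $t\rho$-type construction). The sketch below only illustrates this step: it uses an illustrative two-parameter Fermi profile in place of the RMF/E-RMF densities and a single placeholder constant in place of the five energy-dependent RLF/MRW amplitudes written out next.

```python
import numpy as np

def fermi_density(r, c=3.60, a=0.55):
    """Illustrative two-parameter Fermi radial profile (radii in fm); NOT the RMF/E-RMF density."""
    return 1.0 / (1.0 + np.exp((r - c) / a))

r  = np.linspace(1e-4, 12.0, 4000)                    # fm
dr = r[1] - r[0]
rho = fermi_density(r)
rho *= 40.0 / np.sum(4.0 * np.pi * r**2 * rho * dr)   # normalise to A = 40 nucleons

def form_factor(q):
    """rho(q) = 4*pi * int r^2 rho(r) j0(q r) dr, with j0(x) = sin(x)/x."""
    j0 = np.sinc(q * r / np.pi)
    return np.sum(4.0 * np.pi * r**2 * rho * j0 * dr)

F0 = 1.0 - 0.5j   # placeholder scalar NN amplitude standing in for the RLF/MRW sets
for q in (0.0, 0.5, 1.0, 2.0):                        # fm^-1
    fr = F0 * form_factor(q)
    print(f"q = {q:3.1f} fm^-1   rho(q) = {form_factor(q):7.2f}   "
          f"F*rho(q) = {fr.real:7.2f} {fr.imag:+7.2f}i")
```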
The invariant NN-scattering operator ${\cal F}$ can be written in terms of five complex functions (the five terms involved in the proton-proton (pp) and proton-neutron (pn) scattering) as follows: $$\begin{aligned} {\cal F(q,E)}={\cal F}^{S} +{\cal F}^{V}\gamma^{\mu}_{(0)}\gamma_{(1)\mu} +{\cal F}^{PS}\gamma^{5}_{(0)}\gamma^{5}_{(1)}\nonumber\\ +{\cal F}^{T}\sigma^{\mu\nu}_{(0)}\sigma_{(1)\mu\nu} +{\cal F}^{A}\gamma^{5}_{(0)}\gamma^{\mu}_{(0)}\gamma^{5}_{(1)}\gamma_{(1)\mu},\end{aligned}$$ where (0) and (1) denote the incident and struck nucleons, respectively. The amplitude of each ${\cal F}^{L}$ is a complex function of the Lorentz invariants [*T*]{} and [*S*]{}, with ${\it E}=E_{lab}$ and [*q*]{} the four-momentum. We refer the readers to Refs. [@bunu10; @pery86; @fox89; @dock87; @bri77; @ser84; @mann84; @har86; @ser87] for the detailed expressions. Then the Dirac optical potential ${\it U}_{opt}(q, E)$ can be written as $$\begin{aligned} {\it U}_{opt}(q, E) = \frac{-4\pi ip}{M}\langle\psi\vert \sum_{n=1}^{A}e^{iq\cdot x(n)}{\cal F}(q, E; n)\vert\psi\rangle,\end{aligned}$$ where ${\cal F}$ is the scattering operator, ${\it p}$ is the momentum of the projectile in the nucleon-nucleus center-of-mass frame and $\vert\psi\rangle$ is the nuclear ground state wave function of the A-particle system. Finally, using the Numerov algorithm, the obtained wave function is matched to the Coulomb scattering solution as a boundary condition at $r\rightarrow \infty$, and we get the scattering observables from the scattering amplitude, which are defined as: $$\begin{aligned} \frac{d\sigma}{d\Omega}\equiv\vert A(\theta)\vert ^{2}+\vert B(\theta)\vert ^{2},\\ A_{y}\equiv\frac{2Re[A^{*}(\theta)B(\theta)]}{d\sigma /d\Omega},\\ Q\equiv\frac{2Im[A(\theta)B^{*}(\theta)]}{d\sigma /d\Omega}.\end{aligned}$$ Now we present our calculated results for the neutron and proton density distributions obtained from the RMF and E-RMF formalisms [@patra01a]. Then we evaluate the scattering observables using these densities in the relativistic impulse approximation, which involves the following two steps: in the first step we generate the complex NN-interaction from the Lorentz invariant matrix ${\cal F}^L(q,E)$ as defined in Eq. (2). Then the interaction is folded with the ground state target nuclear density for the RLF and MRW parameters [@neil83] separately, and we obtain the nucleon-nucleus complex optical potential $U_{opt}(q,E)$ for both parametrisations. It is to be noted that the pairing interaction is taken care of using the Pauli blocking approximation. In the second step, we solve for the wave function of the scattering state utilising the optical potential prepared in the first step by the well-known Numerov algorithm [@koon86]. The result is matched to the non-relativistic Coulomb scattering solution at large radial distances, which yields the scattering amplitude and the other observables [@thy68]. In the present paper we calculate the density distributions of protons and neutrons for $^{40,42,44,48}$Ca with the NL3 and G2 parameter sets. From the densities we evaluate the optical potential and the other scattering observables, and some representative cases are presented in Figures $1-3$. ![[*(upper panel): The neutrons and protons density distribution for $^{40}Ca$ with NL3 and G2 parameter sets. (lower panel) (a) the Dirac optical potential for $p+^{40}Ca$ system using RMF (NL3) and E-RMF (G2) densities with RLF parametrisation, (b) same as (a), but for MRW parametrisation.
The projectile proton with $E_{lab}=1000$ MeV is taken.*]{}](Fig1.eps){width="1.0\columnwidth"} In Fig. 1, the proton and neutron density distributions for $^{40}Ca$ using the NL3 and G2 parameter sets (upper panel) and the optical potential obtained with the RLF and MRW parametrisations for $p+^{40}Ca$ at 1000 MeV proton energy (lower panel) are shown. From the figure it is noticed that there is no significant difference in the densities for the RMF and E-RMF parameter sets. However, a careful inspection shows a small enhancement in the central density (0-1.6 fm) for the NL3 set. On the other hand, the densities obtained from G2 extend to larger distances towards the tail part of the density distribution. As the optical potential is a complex function, with real and imaginary parts for both the scalar and vector components, we have displayed those values in the lower panel of Fig. 1. Unlike the proton and neutron density distributions of the upper panel, here we find a large difference in $U_{opt}(q,E)$ between the RLF and MRW parametrisations. Further, the $U_{opt}(q,E)$ value of either RLF or MRW differs significantly depending on the NL3 or G2 force parameters. That means the optical potential is sensitive not only to RLF or MRW but also to the use of the NL3 or G2 parameter sets. Inspecting the figure, it is clear that the extremum magnitudes of the real and imaginary parts of the scalar potential are -442.2 and 113.6 MeV for RLF (G2) and -372.4 and 109.1 MeV for RLF (NL3). The same values for the MRW parametrisation are -219.8 and 32.8 MeV with G2 and -175.1 and 33.2 MeV with NL3 sets. In the case of the vector potential, the extremum values of the real and imaginary parts are 361.3 and -179.2 MeV for RLF (G2) and 279.2 and -164.8 MeV for RLF (NL3), whereas with the MRW parametrisation these appear at 128.1 and -87.4 MeV in G2 and 99.2 and -76.6 MeV in NL3. From this large variation in the magnitude of the scalar and vector potentials, it is clear that the predicted results not only depend on the input target density, but are also highly sensitive to the kinematics of the reaction dynamics. A further analysis of the results for the optical potential with NL3 and G2 suggests that the $U_{opt}$ value extends to a larger distance in NL3 than in G2. For example, with RLF the central part of $U_{opt}$ with G2 is more expanded than with NL3 and ends at $r\sim 6 fm$, whereas the optical potential persists till $r\sim 8 fm$ in NL3. A similar situation also holds for the MRW parametrisation. This nature of the potential suggests the applicability of NL3 over the G2 force parameter. This is because in the case of NL3 the soft-core interaction between the projectile and the target nucleon is more effective. ![[*The elastic differential scattering cross-section ($\frac{d\sigma}{d\Omega}$) as a function of scattering angle $\theta_{cm}$(deg) for $^{40,42,44,48}Ca$ using both RLF and MRW parametrisations. The value of $\frac{d\sigma}{d\Omega}$ is shown for RMF (NL3) and E-RMF (G2) densities.* ]{}](Fig2.eps){width="1.0\columnwidth"} In Fig. 2, we have plotted the elastic scattering cross-section of the proton with $^{40,42,44,48}Ca$ at laboratory energy $E_{lab}=$1000 MeV using the densities obtained in the NL3 and G2 parameter sets with both the RLF and MRW parametrisations. The experimental data [@expt78] are also given for comparison. The superiority of RLF over MRW at lower energies ($E_{lab}\leq 400$ MeV) is reported in Refs. [@neil83; @horo90]; however, MRW shows better results at energies $E_{lab} > 400$ MeV.
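For completeness, the step from the scattering amplitudes to the plotted observables is purely algebraic. The short Python sketch below evaluates the definitions given above ($\frac{d\sigma}{d\Omega}$, $A_y$ and $Q$) for a pair of toy amplitudes; the amplitudes are placeholders and not the Numerov solutions behind Figs. 2 and 3.

```python
import numpy as np

# Observables from the two c.m. amplitudes A(theta), B(theta) defined in the text:
#   dsigma/dOmega = |A|^2 + |B|^2,
#   A_y = 2 Re[A* B] / (dsigma/dOmega),
#   Q   = 2 Im[A B*] / (dsigma/dOmega).
def observables(A, B):
    dsdo = np.abs(A)**2 + np.abs(B)**2
    Ay   = 2.0 * np.real(np.conj(A) * B) / dsdo
    Q    = 2.0 * np.imag(A * np.conj(B)) / dsdo
    return dsdo, Ay, Q

# toy amplitudes, for illustration only (not the p+Ca solutions)
theta = np.deg2rad(np.linspace(1.0, 30.0, 60))
A = (10.0 - 4.0j) * np.exp(-8.0 * theta**2)
B = (1.5 + 0.8j) * theta * np.exp(-6.0 * theta**2)
dsdo, Ay, Q = observables(A, B)
```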
In the present case, our incident energy is 1000 MeV, for which MRW matches the experimental values better. This is consistent with the optical potential as well (see Fig. 1). From the differential cross-sections for both the NL3 and G2 densities with the MRW parametrization, it is clearly seen that $\frac{d\sigma}{d\Omega}$ with the NL3 density is closer to the experimental data, which underlines the importance not only of the parametrization (RLF or MRW) but also of choosing a proper density input for the reaction dynamics. Analysing the elastic differential cross-section along the isotopic chain of Ca from A=40 to 48, the calculated results improve with increasing mass number of the target. ![[*(a) The calculated values of analysing power $A_y$ as a function of scattering angle $\theta_{cm}$(deg) for $^{40}Ca$ (b) The spin observable $Q-$value as a function of scattering angle $\theta_{cm}$(deg) for $^{40}Ca$. In both (a) and (b), the RLF and MRW parametrisations are used with RMF (NL3) and E-RMF (G2) densities.* ]{}](Fig3.eps){width="1.0\columnwidth"} The analysing power for the $p+^{40}Ca$ composite system is calculated from the general formulae given in eqns. (4) and (5) and the results are shown in Fig. 3 for RLF and MRW. The $A_y$ and $Q-$values obtained with the NL3 and G2 sets almost match each other for both RLF and MRW. However, comparing the results between RLF and MRW, they differ significantly. Again, with RLF we get a small oscillation of $A_y$ and $Q$ for the G2 set with increasing scattering angle $\theta_{c.m.}^0$, which does not appear for the NL3 set. There is a rotation of the $Q-$value from the positive to the negative direction when we calculate with the MRW parametrization, which does not appear in the case of the RLF parametrization. This rotation indicates a promising path towards the formation of exotic nuclei in the laboratory. In summary, we calculate the density distributions of protons and neutrons for $^{40,42,44,48}Ca$ by using the RMF (NL3) and E-RMF (G2) parameter sets. We find similar density distributions for protons and neutrons in both sets, with a small difference in the central region. This small difference in the densities makes a significant difference in the prediction of the optical potential, the elastic differential cross-section, the analysing power and the spin observable for $p+Ca$ systems. The predicted results are also highly sensitive to the kinematic parametrizations of the reaction dynamics, RLF and MRW. That means the differential scattering cross-section and the scattering observables depend strongly on the input density and on the choice of parametrisation. [9]{} Faddeev in: Trudy matematicheskogo instituta im. V. A. Steklova, Akad. Nauk SSSR, Moscow [**69**]{} (1963) 369. C. Mahaux, Proc. Conf. on Microscopic optical potentials, (1978) Hamburg p-1. V. V. Balashov and J. V. Meboniya, Nucl. Phys. A [**107**]{} (1968) 369. R. J. Glauber, Phys. Rev. [**100**]{} (1955) 242. G. A. Lalazissis, J. König, and P. Ring, Phys. Rev. C **55** (1997) 540. R. J. Furnstahl, B. D. Serot, and H. B. Tang, Nucl. Phys. A [**615**]{} (1997) 441; R. J. Furnstahl, and B. D. Serot, Nucl. Phys. A [**671**]{} (2000) 447. J. A. McNeil, L. Ray, and S. J. Wallace, Phys. Rev. C [**27**]{} (1983) 2123. M. Del Estal, M. Centelles, X. Viñas, and S. K. Patra, Phys. Rev. C [**63**]{} (2001) 044321; M. Del Estal, M. Centelles, X. Viñas, and S. K. Patra, Phys. Rev. C [**63**]{} (2001) 044314; S. K. Patra, M. Del Estal, M. Centelles, and X. Viñas, Phys. Rev. C [**63**]{} (2001) 024311; P. Arumugam, B. K. Sharma, P. K. Sahu, S. K. Patra, Tapas Sil, M. Centelles, and X. Viñas, Phys. Lett.
[**B601**]{} (2004) 51. S. K. Patra, and C. R. Praharaj, Phys. Rev. C **44** (1991) 2552; Y. K. Gambhir, P. Ring, and A. Thimet, Ann. Phys. (N.Y.) **198** (1990) 132. B. D. Serot, and J. D. Walecka, Int. J. Mod. Phys. E [**6**]{} (1997) 515. R. J. Furnstahl, B. D. Serot, and H. B. Tang, Nucl. Phys. A [**598**]{} (1996) 539. R. J. Furnstahl, C. E. Price, and G. E. Walker, Phys. Rev. C [**36**]{} (1987) 2590. J. A. McNeil, J. R. Shepard, and S. J. Wallace, Phys. Rev. Lett. [**50**]{} (1983) 1439. M. Bhuyan and S. K. Patra, Phys. Rev. C, in preparation. R. J. Perry, Phys. Lett. [**182B**]{} (1986) 269. W. R. Fox, Nucl. Phys. A [**495**]{} (1989) 463. D. P. Murdock, [*Proton Scattering as a Probe of Relativity in Nuclei*]{}, Ph.D. Thesis, MIT, (1987). F. A. Brieva, and J. R. Rook, Nucl. Phys. A [**291**]{} (1977) 317. C. J. Horowitz, and B. D. Serot, Phys. Lett. [**137B**]{} (1984) 287. R. Machleidt, and R. Brockmann, Phys. Lett. [**149B**]{} (1984) 283. B. ter Haar and R. Malfliet, Phys. Lett. [**172B**]{} (1986) 10; Phys. Rev. Lett. [**56**]{} (1986) 1237. C. J. Horowitz, and B. D. Serot, Nucl. Phys. A [**464**]{} (1987) 613; Phys. Rev. Lett. [**86**]{} (1986) 760 (E). S. E. Koonin, [*Computational Physics*]{}, Benjamin, Reading, MA (1986). I. E. McCarthy, [*Introduction to Nuclear Theory*]{}, Wiley, New York (1968). G. Bruge, International Report D.Ph-N/ME/78-1, CEN Saclay, (1978). C. J. Horowitz, D. P. Murdock and B. D. Serot, Indiana University Report No. IU/NTC 90-01.
--- abstract: 'An original optical tweezers using one or two chemically etched fiber nano-tips is developed. We demonstrate optical trapping of 1 micrometer polystyrene spheres at optical powers down to 2 mW. Harmonic trap potentials were found in the case of dual fiber tweezers by analyzing the trapped particle position fluctuations. The trap stiffness was deduced using three different models. Consistent values of up to 1 fN/nm were found. The stiffness linearly decreases with decreasing light intensity and increasing fiber tip-to-tip distance.' address: | Univ. Grenoble Alpes, Inst NEEL, F-38042 Grenoble, France\ CNRS, Inst NEEL, F-38042 Grenoble, France author: - 'Jean-Baptiste Decombe,\* Serge Huant, and Jochen Fick' title: 'Single and dual fiber nano-tip optical tweezers: trapping and analysis' --- [10]{} A. Ashkin, J. M. Dziedzic, J. E. Bjorkholm, and S. Chu, “Observation of a single-beam gradient force optical trap for dielectric particles,” Opt. Lett. **11**, 288–290 (1986). O. G. Helleso, P. Lovhaugen, A. Z. Subramanian, J. S. Wilkinson, and B. S. Ahluwalia, “Surface transport and stable trapping of particles and cells by an optical waveguide loop,” Lab Chip **12**, 3436–3440 (2012). C. Renaut, J. Dellinger, B. Cluzel, T. Honegger, D. Peyrade, E. Picard, F. d. Fornel, and E. Hadji, “Assembly of microparticles by optical trapping with a photonic crystal nanocavity,” Appl. Phys. Lett. **100**, 101103 (2012). K. Wang, E. Schonbrun, P. Steinvurzel, and K. B. Crozier, “Scannable plasmonic trapping using a gold stripe,” Nano Lett. **10**, 3506–3511 (2010). W. Zhang, L. Lina Huang, C. Santschi, and O. J. F. Martin, “Trapping and sensing 10 nm metal nanoparticles using plasmonic dipole antennas,” Nano Lett. **10**, 1006–1011 (2010). Y. Tanaka and K. Sasaki, “Optical trapping through the localized surface-plasmon resonance of engineered gold nanoblock pairs,” Opt. Express **19**, 17462–17468 (2011). Y. Pang and R. Gordon, “Optical trapping of a single protein,” Nano Lett. **12**, 402–406 (2011). J. B. Black, D. Luo, and S. K. Mohanty, “Fiber-optic rotation of micro-scale structures enabled microfluidic actuation and self-scanning two-photon excitation,” Appl. Phys. Lett. **101**, 221105 (2012). S. Valkai, L. Oroszi, and P. Ormos, “Optical tweezers with tips grown at the end of fibers by photopolymerization,” Appl. Optics **48**, 2880–2883 (2009). E. R. Lyons and G. J. Sonek, “Confinement and bistability in a tapered hemispherically lensed optical fiber trap,” Appl. Phys. Lett. **66**, 1584–1586 (1995). T. Numata, A. Takayanagi, Y. Otani, and N. Umeda, “Manipulation of metal nanoparticles using fiber-optic laser tweezers with a microspherical focusing lens,” Japanese J. Appl. Phys. **45**, 359–363 (2006). A. L. Barron, A. K. Kar, T. J. Aspray, A. J. Waddie, M. R. Aghizadeh, and H. T. Bookey, “Two dimensional interferometric optical trapping of multiple particles and [E]{}scherichia coli bacterial cells using a lensed multicore fiber,” Opt. Express **21**, 13199–13207 (2013). Z. Liu, C. Guo, J. Yang, and L. Yuan, “Tapered fiber optical tweezers for microscopic particle trapping: fabrication and application,” Opt. Express **14**, 12510–12516 (2006). Z. Liu, L. Wang, P. Liang, Y. Zhang, J. Yang, and L. Yuan, “Mode division multiplexing technology for single-fiber optical trapping axial-position adjustment,” Opt. Lett. **38**, 2617–2620 (2013). S. K. Mondal, S. S. Pal, and P. Kapur, “Optical fiber nano-tip and 3[D]{} bottle beam as non-plasmonic optical tweezers,” Opt. 
Express **20**, 16180–16185 (2012). M. Michihata, T. Hayashi, D. Nakai, and Y. Takaya, “Microdisplacement sensor using an optically trapped microprobe based on the interference scale,” Rev. Sci. Instrum. **81**, 015107 (2010). K. Berg-Sørensen and H. Flyvberg, “Power spectrum analysis for optical tweezers,” Rev. Sci. Instrum. **75**, 594–612 (2004). G. M. Gibson, J. Leach, S. Keen, A. J. Wright, and M. J. Padgett, “Measuring the accuracy of particle position and force in optical tweezers using high-speed video microscopy,” Opt. Express **16**, 14561–14570 (2008). J.-B. Decombe, W. Schwartz, C. Villard, H. Guillou, J. Chevrier, S. Huant, and J. Fick, “Living cell imaging by far-field fibered interference scanning optical microscopy,” Opt. Express **19**, 2702–2710 (2011). N. Chevalier, Y. Sonnefraud, J. F. Motte, S. Huant, and K. Karrai, “Aperture-size-controlled optical fiber tips for high-resolution optical microscopy,” Rev. Sci. Instrum. **77**, 063704 (2006). J. B. Decombe, J. F. Bryche, J. F. Motte, J. Chevrier, S. Huant, and J. Fick, “Transmission and reflection characteristics of metal-coated optical fiber tip pairs,” Appl. Optics **52**, 6620–6625 (2013). Y. Tanaka, A. Sanada, and K. Sasaki, “Nanoscale interference patterns of gap-mode multipolar plasmonic fields,” Sci. Rep. **2**, 764 (2012). A. Reveaux, G. Dantelle, D. Decanini, A.-M. Haghiri-Gosnet, T. Gacoin, and J.-P. Boilot, “Synthesis of [YAG]{}:[C]{}e/[T]{}i[O]{}2 nanocomposite films,” Opt. Mater. **33**, 1124–1127 (2011). B. Masenelli, O. Mollet, O. Boisron, B. Canut, G. Ledoux, J.-M. Bluet, P. Mélinon, C. Dujardin, and S. Huant, “Y[AG]{}:[C]{}e nanoparticle lightsources,” Nanotechnology **24**, 165703 (2013). Introduction ============ Optical tweezers are now well-established since the pioneering work of A. Ashkin [@ADB+86]. This non-contact technique is of large interest in many scientific domains such as biochemistry, physics and medicine. The application of optical near-field tweezers allows to lower the trapping intensities or to trap smaller particles. Cavities of dielectric [@HLS+12] or photonic waveguides [@RDC+12] have been used to collect and trap micro-particles. Exploiting the strong optical field gradient of surface plasmon cavities allowed the realization of stable nanoparticle trapping. Different geometries such as strips [@WSS+10], dipole antennas [@ZLS+10], nano-block pairs [@TS11] or double nano-holes [@PG11] were successfully applied. The use of optical fibers attracts increasing attention as highly flexible tools for particle trapping. Fiber-based optical tweezers do not require substrates or bulky high numerical aperture objectives. They provide easy access to the trapped particle, which is useful for the implementation of further manipulation or characterization elements. Examples of tweezers with two facing optical fibers includes micro-fluidic actuators using two cleaved fibers [@BLM12], tweezers based on fiber tips grown by photo-polymerization [@VOO09] or using tapered lensed fiber tips [@LS95]. Single fiber tip tweezers were realized with a micro-lensed cleaved fibers [@NTO+06], a multicore lensed fiber [@BKA+13], single- or multi-mode chemically etched fiber tip [@LGY+06; @LWL+13] or by gradient index chemically etched fiber tips creating 3D bottle beams [@MPK12]. The measurement of the optical forces acting on the trapped particles is of paramount interest. It allows to determine the trapping efficiency. 
Moreover, due to the very weak optical forces, it can be used for high sensitivity force or displacement sensing [@MHN+10]. Different theoretical approaches allow to determine the optical forces of an optical tweezers from the trapped particle position fluctuations. In [@BF04], K. Berg-S[ø]{}rensen and H. Flyvberg present an overview of these models. High frequency quadrant positions sensors are currently used for particle position recording. It was, however, shown that CMOS camera videos with frame rates of some hundred Hz are sufficient for the accurate force measurements [@GLK+08]. In the present paper, we report on optical trapping of micrometer size dielectric particles using one or two bare optical fiber nano-tips. These chemically etched fiber tips with nanometer size apex were already used for high resolution optical scanning microscopy [@DSV+11]. The trapping efficiency at different light powers and fiber distances is evaluated by analyzing the experimental data within three different models that find very consistent results. Experimental ============ Optical fiber tips are elaborated by chemical etching in aqueous hydrofluoric acid of standard pure silica core single mode optical fibers (S630-HP, Nufern)  [@CSM+06]. The obtained fiber tips are reproducible with smooth surfaces, full angle of about $15^{\circ}$, and apex diameters of 60 nm \[Fig. \[fig.SEM\]\]. ![Scanning electron microscope images of an etched fiber tip.\[fig.SEM\]](fig_SEM1.eps "fig:"){width="5.cm"} ![Scanning electron microscope images of an etched fiber tip.\[fig.SEM\]](fig_SEM2.eps "fig:"){width="5.cm"} The scheme of the optical tweezers set-up is shown on Fig. \[fig.setup\]. A 808 nm single mode pigtailed laser diode with a maximum output power of 250 mW (LU0808M250, Lumics) is linearly polarized and split by a polarized beam splitter. The relative intensity in the 2 arms is controlled by an half-wave plate. In each arm, the beam is split again with a 90/10 beam splitter. 90% of the light is coupled into the optical fiber tips whereas the other 10% go directly in a photodiode to give the laser reference signal. The reflected intensity is measured by an amplified Si-photodiode (New Focus 2001) placed after the beam splitter. This “back signal” is a superposition of the transmitted light from the second, opposing fiber tip and the reflection from the tip and, if applicable, the trapped particle. ![Scheme of the experimental set-up. \[fig.setup\]](fig_setup.eps "fig:"){width="10.cm"}\ Fiber tips are mounted on two sets of $xyz$ translation stages : piezoelectric translation stages (PI P620) with sub-nanometer resolution and $50~\mu$m range, and inertial piezoelectric translation stages (Mechonics MS 30) with $\approx 30$ nm step size and up to 2.5 cm range. Transverse transmission intensity maps are recorded by scanning one fiber tip in a plane perpendicular to the tip axes. The entire set-up is controlled by homemade software in the LabView environment. A homemade microscope with a $\times50$ objective (Mitutoyo) coupled to a CMOS camera (Thorlabs) is used for visualizing the trapped particles. The objective long working distance (13.9 mm) allows easy particle observation. The main drawbacks are, however, its low resolution of about 0.5 $\mu$m and its small focus depth of 1.1 $\mu$m. The camera allows a full frame size of $1280 \times 1024$ pixels with 58 nm/pixel resolution and frame rates of 11 fps. The frame rate can be boosted to 115 fps by reducing the frame size to $130 \times 90$ pixels. 
For higher frame rates the limited CMOS sensitivity results in low contrast images. The fluid chamber consists of an o-ring placed in between two glass slides and cut in two parts in order to insert the fiber tips. The system is sealed using vacuum grease allowing to work several hours in a stable system, without evaporation. The chamber is fixed on a set of translation and rotation stages to allow easy alignment in respect to the fiber tips. Commercial 1 $\mu$m polystyrene spheres (Corpuscular) are used for trapping in aqueous suspension with 0.125 mg/mL concentration. Light power at the end of the fiber tips is measured by means of a power meter. Transmission measurements between two identical fiber tips indicate that the fiber-to-fiber transmission is about 3 times more efficient in water than in air. Thus, for the trapping measurements a correction factor of 1.73 is applied to the measured intensity. All values given in this paper are measured and corrected intensities at the end of one single fiber tip. An automatic particle tracking software is developed in the open source Scilab environment in order to exploit trapping videos with more than $3\times 10^5$ frames. The program detects the center of the particle surface in each frame. The estimated resolution of 50 nm is of the order of the pixel size, but below the microscope resolution. ![Transmission spot width/ waist (a) and intensity (b) as a function of fiber tip-to-tip distance. Insert : transverse transmission intensity map in air ($d=1~\mu$m). \[fig.waist\]](fig_trans_w.eps "fig:"){width="6.cm"} ![Transmission spot width/ waist (a) and intensity (b) as a function of fiber tip-to-tip distance. Insert : transverse transmission intensity map in air ($d=1~\mu$m). \[fig.waist\]](fig_trans_I.eps "fig:"){width="6.cm"} Results and discussion ====================== Optical fiber tip emission properties ------------------------------------- The light emission properties of the fiber tips are studied by transverse transmission maps. In the measurement, the laser beam is injected in one fiber and the transmitted light is collected by scanning the second, identical fiber tip in the transverse plane. The obtained transmission maps show circular Gaussian-shaped intensity spots \[insert Fig. \[fig.waist\](a)\]. To obtain the emission spot width of a single fiber tip, the recorded transmission maps have to be corrected by the capture function of the second fiber tip. For two identical fiber tips with Gaussian emission profiles the corrected waist $w$ is calculated by $w=(\tilde w^2-\frac{1}{2}w_0^2)^{1/2}$ with $\tilde w$ the as-measured waist and $w_{0}$ the measured waist at smallest tip-to-tip distances [@DBM+13]. Measurements are performed for tip-to-tip distances between 1 and 25 $\mu$m in air and in water \[Fig. \[fig.waist\]\]. At smallest distances, the minimal measured waists are $\sim$900 nm in both media. As shown previously, sub-wavelength spot sizes can only be obtained with metalized fiber tips [@DBM+13]. The actual beam size is, however, of the same order that the trapped particles. The waist increases linearly with tip-to-tip distance. In air, the full angle beam divergence is 30$^{\circ}$, corresponding to a numerical aperture of 0.25. This result is confirmed by far-field angular measurements using an optical goniometer. Due to the higher refractive index the beam is much less divergent in water. In this case, the emission angle is $\sim$ 8$^{\circ}$ with N.A. = 0.07. 
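As a small numerical illustration of this deconvolution and of how the divergence and numerical aperture follow from the linear growth of the waist, the following Python sketch applies $w=(\tilde w^2-\frac{1}{2}w_0^2)^{1/2}$ and a linear fit of $w(d)$; the waist values are made-up placeholders of roughly the magnitude quoted above, not the measured data.

```python
import numpy as np

# Deconvolve the measured waist and estimate the beam divergence / N.A.:
# w = sqrt(w_tilde^2 - w0^2/2); a linear fit of w(d) gives the half-angle,
# and N.A. = n*sin(theta_half).  The numbers below are illustrative only.
d  = np.array([1, 5, 10, 15, 20, 25], dtype=float)   # tip-to-tip distance (um)
wt = np.array([0.95, 2.1, 3.4, 4.8, 6.1, 7.5])       # measured waists (um)
w0 = wt[0]                                           # waist at smallest distance

w = np.sqrt(wt**2 - 0.5 * w0**2)                     # corrected single-tip waist
slope, intercept = np.polyfit(d, w, 1)               # w ~ slope*d + intercept
theta_half = np.arctan(slope)                        # half-angle divergence
n = 1.0                                              # refractive index: 1.0 in air, 1.33 in water
print("full angle (deg):", np.degrees(2 * theta_half), " N.A.:", n * np.sin(theta_half))
```

With $n=1.33$ and the water data, the same fit would return the correspondingly smaller divergence quoted above.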
The transmission intensity is decreasing with fiber tip-to-tip distance with a linear behavior in the $I^{-1/2}$ plot. As already stated, the intensity is about three times higher in water than in air. ![Optical microscope images of a trapped 1 $\mu$m polystyrene sphere. (a) Laser off. (b) Laser on (9 mW) ([Media 1](Media 1)). \[fig.t1f\]](fig_trap_1f.eps "fig:"){width="7.cm"}\ Single fiber tip tweezers ------------------------- Transient trapping for few minutes of 1 $\mu$m spheres is obtained with one single fiber tip. In this configuration the attractive optical gradient force ($F_{grad}$) attracts the particle towards the optical axes at the end of the fiber tip \[Fig. \[fig.t1f\] and [Media 1](Media 1)\]. Because of the Brownian motion the particle can temporarily leave this stable trapping position and be ejected by the repulsive optical scattering force ($F_{scat}$). ![(a) Evolution of particle position (blue dots) and velocity (red). The line corresponds to the calculated velocity curve. (b) Optical force as a function of particle position for 4 different laser powers. The inset shows the optical force as a function of laser intensity at a particle position of 0$~\mu$m . \[fig.t1ff\]](fig_trap_1f_v.eps "fig:"){width="6.cm"} ![(a) Evolution of particle position (blue dots) and velocity (red). The line corresponds to the calculated velocity curve. (b) Optical force as a function of particle position for 4 different laser powers. The inset shows the optical force as a function of laser intensity at a particle position of 0$~\mu$m . \[fig.t1ff\]](fig_trap_1f_F.eps "fig:"){width="6.cm"}\ The observation of the particle speed during ejection allows to deduce the optical forces using Newton’s second law : $$\label{eq.newton} F_{opt}+F_{stokes}=m\dot{v}$$ with $m$ and $v$ the particle mass and speed. The Stokes force is defined by $F_{stokes}= -\gamma_0 v$, with $\gamma_0=6\pi a\eta$ the friction coefficient, $a$ the sphere radius, and $\eta$ the dynamic viscosity of water. $F_{opt}= F_{grad}+F_{scat}$ is the total optical force. In the case of particle ejection we suppose that $F_{grad}\ll F_{scat}$. The scattering force is scaling with the light intensity which itself is decreasing by ${1/x^2}$ with tip-to-tip distance \[Fig. \[fig.waist\](b)\]. Now, Eq. \[eq.newton\] becomes $m\ddot x +\gamma_0 \dot v - F_0/x^2=0$ with $F_0$ the maximal optical force, which is the only free parameter. This equation can be solved by numerical differential equation solvers. Particle ejection events are observed for different light intensities. The particle position and speed as a function of time are obtained from the recorded videos using our tracking software \[Fig. \[fig.t1ff\](a)\]. The particles are first accelerated by the strong optical forces, before being slowed down by the Stokes drag. Finally, particles comes to a standstill as the optical force becomes negligible. $F_0$ is determinate by adjusting the calculated velocity curve to the experimental data. The experimentally observed particle acceleration is slower than predicted by our model. This is mainly due to the neglect of the gradient force near the fiber tip. The accordance with theory is good during the particle deceleration phase. The deduced optical forces are in the 0.4 – 0.7 pN range. They are linearly scaling with the light intensity \[Fig. \[fig.t1ff\](b)\]. Using one fiber tip, spheres can only be trapped very close to the tip apex where the gradient force dominates the repulsive scattering force. 
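For reference, a minimal sketch of this one-parameter fit is given below (Python). The scattering force is written here as $F_{scat}=F_0(x_0/x)^2$ so that $F_0$ carries units of force and is the maximal force at the release point $x_0$; the "experimental" velocity trace is synthetic, and the particle mass, radius and viscosity are nominal values, so the script only illustrates the procedure rather than the actual data analysis.

```python
import numpy as np
from scipy.integrate import solve_ivp
from scipy.optimize import curve_fit

# One-parameter ejection model (gradient force neglected):
#   m x'' + gamma0 x' = F0 * (x0/x)^2,  F0 = maximal force at the release point x0.
a      = 0.5e-6                          # sphere radius (m)
eta    = 1.0e-3                          # water viscosity (Pa s)
gamma0 = 6 * np.pi * eta * a             # Stokes friction, ~9.4e-9 N s/m
m      = 1050.0 * 4/3 * np.pi * a**3     # polystyrene sphere mass (kg)
x0     = 1.0e-6                          # assumed release position (m)

def velocity_model(t, F0):
    def rhs(_, y):
        x, v = y
        return [v, (F0 * (x0 / x)**2 - gamma0 * v) / m]
    # LSODA because the small inertial term makes the equation stiff (m/gamma0 ~ 60 ns)
    sol = solve_ivp(rhs, (t[0], t[-1]), [x0, 0.0], t_eval=t,
                    method="LSODA", rtol=1e-8, atol=1e-12)
    return sol.y[1]

# stand-in for the tracked velocity trace (true F0 = 0.5 pN plus noise)
t_exp = np.linspace(0.0, 0.2, 50)
v_exp = velocity_model(t_exp, 0.5e-12) + 2e-6 * np.random.randn(t_exp.size)

F0_fit, _ = curve_fit(velocity_model, t_exp, v_exp, p0=[1.0e-12])
print("fitted maximal optical force: %.2f pN" % (F0_fit[0] * 1e12))
```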
Thus the particle can stick to the fiber due to strong Van der Waals forces. In the present case particles leave the fiber tip shortly after the laser is switched off. Dual fiber nano-tips tweezers ----------------------------- 1 $\mu$m spheres are also stably trapped between two fiber tips. In the dual-beam configuration when the fibers are well aligned coaxially, the particle is confined on the common optical axis by the transverse gradient forces and stabilized at a point on the optical axis where the two repulsive scattering forces are canceled. Experimentally, fiber tips are aligned by scanning one fiber relative to the second one as previously discussed. Then they are placed at the maximum transmission intensity position. Stable trapping for several hours is observed for one sphere with light power of 3.5 - 10 mW and fiber tip-to-tip distances of 6 to 17 $\mu$m \[Fig. \[trap\](a)\]. The sphere position along the optical axis can be controlled by varying the relative optical intensity injected into both fibers. By increasing the intensity in the left hand side fiber, the trapped particle moves to the right \[Figs. \[trap\](c) and \[trap\](d)\]. We were also able to trap 2 and 3 spheres at the same time during few minutes \[see Fig. \[trap\](b) and [Media 2](Media 2)\]. ![Trapping of one (a) and two (b) spheres with 2 fiber nano-tips ([Media 2](Media 2)) ($I=6$ mW, $d= 17~\mu$m). (c)-(d) Control of the particle position by modifying the relative light intensities injected in the two facing nano-tips. \[trap\]](fig_trap_2f_1p.eps "fig:"){width="4.5cm"} ![Trapping of one (a) and two (b) spheres with 2 fiber nano-tips ([Media 2](Media 2)) ($I=6$ mW, $d= 17~\mu$m). (c)-(d) Control of the particle position by modifying the relative light intensities injected in the two facing nano-tips. \[trap\]](fig_trap_2f_2p.eps "fig:"){width="4.5cm"}\ ![Trapping of one (a) and two (b) spheres with 2 fiber nano-tips ([Media 2](Media 2)) ($I=6$ mW, $d= 17~\mu$m). (c)-(d) Control of the particle position by modifying the relative light intensities injected in the two facing nano-tips. \[trap\]](fig_trap_2f_1pL.eps "fig:"){width="4.5cm"} ![Trapping of one (a) and two (b) spheres with 2 fiber nano-tips ([Media 2](Media 2)) ($I=6$ mW, $d= 17~\mu$m). (c)-(d) Control of the particle position by modifying the relative light intensities injected in the two facing nano-tips. \[trap\]](fig_trap_2f_1pR.eps "fig:"){width="4.5cm"}\ Besides the observation by the microscope, the back signal reflected by the fibers tips gives valuable information about the trapping events \[Fig. \[fig.back\]\]. Particle trapping results in significant intensity modulations. This effect is even more pronounced for trapping of multiple particles. The relation between the back signal and particle trapping can be investigated with a particle which is trapped for some seconds in the central position between the fibers before oscillating between two metastable positions \[Figs. \[fig.back\](b)–\[fig.back\](d)\] and [Media 3](Media 3)). In the case of stable trapping strong oscillations with main frequency peaks at 10 Hz are observed. When the particle starts oscillating a second strong peak at 25 Hz appears. The comparison between the particle position obtained from the video and the back signal, shows clear dips of the back signal when the particle changes its metastable position. The observed influence of particle trapping on the back signal is very useful for future nanoparticle trapping experiments. 
The back signal can thus replace the direct optical visualization which is impossible for nanoparticles. ![Back signal as a function of the trap state: (a) zero, two, and one trapped particle. (b) No particle (red), stable trapping (green), and metastable trapping (blue). (b) Fast Fourier Transform of these 3 states (lines are Lorentzian best fits). (d) Comparison between back signal (red) and particle position (blue) for metastable particle trapping ([Media 3](Media 3)). \[fig.back\]](fig_backsig1.eps "fig:"){width="6.cm"} ![Back signal as a function of the trap state: (a) zero, two, and one trapped particle. (b) No particle (red), stable trapping (green), and metastable trapping (blue). (b) Fast Fourier Transform of these 3 states (lines are Lorentzian best fits). (d) Comparison between back signal (red) and particle position (blue) for metastable particle trapping ([Media 3](Media 3)). \[fig.back\]](fig_backsig2.eps "fig:"){width="6.cm"}\ ![Back signal as a function of the trap state: (a) zero, two, and one trapped particle. (b) No particle (red), stable trapping (green), and metastable trapping (blue). (b) Fast Fourier Transform of these 3 states (lines are Lorentzian best fits). (d) Comparison between back signal (red) and particle position (blue) for metastable particle trapping ([Media 3](Media 3)). \[fig.back\]](fig_backsig_fft.eps "fig:"){width="6.cm"} ![Back signal as a function of the trap state: (a) zero, two, and one trapped particle. (b) No particle (red), stable trapping (green), and metastable trapping (blue). (b) Fast Fourier Transform of these 3 states (lines are Lorentzian best fits). (d) Comparison between back signal (red) and particle position (blue) for metastable particle trapping ([Media 3](Media 3)). \[fig.back\]](fig_backsig_tr2.eps "fig:"){width="6.cm"} Theoretical description of trapping efficiency ---------------------------------------------- There are three main models to deduce the trapping efficiency from trapped particle position fluctuations. They are related to different fluctuation properties: the analysis of position oscillation power spectra, the position autocorrelation, and the position statistics. For sake of clarity, the models will be described for a motion along only one dimension ($x$). Their generalization to two or three dimensions is straightforward. The motion of a sphere in a harmonic trapping potential can be described by [@BF04]: $$\label{eq.eou} m\ddot x(t)+\gamma_0\dot x(t)+\kappa x(t)=(2k_BT\gamma_0)^{1/2}\xi(t)$$ with $x(t)$ the trajectory of the Brownian particle, $m$ its mass, $a$ its radius, $\gamma_0=6\pi\eta a$ the friction coefficient deduced from Stokes’s law, $-\kappa x(t)$ the harmonic force from the trap and $(2k_BT\gamma_0)^{1/2}\xi(t)$ the Brownian force at temperature T. The characteristic time for loss of kinetic energy through friction ($m/\gamma_0\approx6\times10^{-8}$ s) is much shorter that our experimental time resolution at 100 Hz sampling rate. Consequently we can neglect the first term in Eq. \[eq.eou\]. Performing a Fourier transform, the position power spectrum can then be approximated by the Lorentzian: $$\label{eq.lor} P_k=\frac{2k_bT}{\gamma_0(f_c^2+f_k^2)}$$ with $f_c=\kappa /2\pi \gamma_0$ the corner frequency and $f_k$ the oscillation frequency from the Fourier transform. The power spectra shows two distinct regimes \[Fig. \[fig.ec\](a)\]. 
For frequencies below the corner frequency ($f_k \ll f_c$) the power spectra is constant ($P^{\Downarrow }=8\pi^2k_BT\gamma_0/\kappa^2$) and directly dependent on the trapping stiffness $\kappa$. At high frequencies ($f_k \gg f_c$) the Brownian motion is dominant and the power spectrum is independent from $\kappa$ ($P^{\Uparrow}=2k_BT/\gamma_0 f_k^2$). ![Power spectra (a) and autocorrelation (b) of transversal position of a trapped particle for different light intensities ($d=11.5 \mu$m). Lines are best fits to the experimental data. Insert: plot of the particle position fluctuations.\[fig.ec\]](fig_eou.eps "fig:"){width="6.cm"} ![Power spectra (a) and autocorrelation (b) of transversal position of a trapped particle for different light intensities ($d=11.5 \mu$m). Lines are best fits to the experimental data. Insert: plot of the particle position fluctuations.\[fig.ec\]](fig_corr.eps "fig:"){width="6.cm"} These two regimes are also visible in the autocorrelation function. In fact the residual Brownian motion $\langle x^2\rangle$ of the particle is given by the equipartition of energy [@BF04; @GLK+08]: $$\frac{1}{2} k_BT=\frac{1}{2}\kappa\langle x^2\rangle.$$ Since the oscillator is significantly over damped, the autocorrelation of the particle position is described by a single exponential decay of time constant $\tau_0=1/2\pi f_c$ \[Fig. \[fig.ec\](b)\]. Finally, the probability density $P$ of finding the particle in the potential well $U$ at a certain position $x$ can be described using Boltzmann statistics [@TSS12]: $$P(x)=\frac{1}{Z}e^{\frac{-U(x)}{k_B T}}$$ with $Z$ the partition function. For harmonic trapping potentials the trap stiffness can be directly obtained by fitting the probability density to the Gaussian function $P(r)=exp(-\kappa x^2/2k_B T)$ \[Fig. \[fig.prob\]\]. This last model is the most versatile, as it is not restricted to harmonic potentials. Moreover it gives direct access to the trapping potential. This point is of great interest for traps with multiple (meta-)stable trapping positions. The first two models are based on a frequency analysis of the particle position fluctuations. Valuable results require position detection at frequencies above the corner frequency $f_c$ of the trapped particle. In the present case $f_c$ is below 10 Hz. The CMOS camera readout frequency of $\approx100$ fps is thus sufficient. For higher trap stiffnesses this condition can, however, be a serious limitation. ![Transverse position distribution of the trapped particle for different light intensities (a) and fiber tip-to-tip distances (b) and the corresponding trap potentials (c),(d). \[fig.prob\]](fig_prob_I.eps "fig:"){width="6.cm" height="4.5cm"} ![Transverse position distribution of the trapped particle for different light intensities (a) and fiber tip-to-tip distances (b) and the corresponding trap potentials (c),(d). \[fig.prob\]](fig_prob_d.eps "fig:"){width="6.cm" height="4.5cm"}\ ![Transverse position distribution of the trapped particle for different light intensities (a) and fiber tip-to-tip distances (b) and the corresponding trap potentials (c),(d). \[fig.prob\]](fig_prob_UI.eps "fig:"){width="6.cm" height="4.5cm"} ![Transverse position distribution of the trapped particle for different light intensities (a) and fiber tip-to-tip distances (b) and the corresponding trap potentials (c),(d). 
\[fig.prob\]](fig_prob_Ud.eps "fig:"){width="6.cm" height="4.5cm"}\ Experimental trapping efficiencies ---------------------------------- The experimental particle position fluctuations are obtained using our tracking program. Videos of typical length of 5 minutes and over $3\times 10^5$ frames ensure good statistics \[insert Fig. \[fig.ec\](a)\]. The position for the transverse and longitudinal positions are determined separately, thus allowing to calculate the corresponding forces independently. Two series of trapping experiments are conducted in one go using a pair of identical fiber tips. The first series is recorded at a fixed fiber distance of 11.5 $\mu$m and light powers between 2.6 and 9.0 mW. The seconds series is recorded at fixed power of 6.2 mW and fiber tip-to-tip distances of 6 to 28 $\mu$m. Their transverse probability density functions are displayed on Figs. \[fig.prob\](a)–\[fig.prob\](b). The experimental curves fits very well to the Gaussian function. Depending on light intensity and tip-to-tip distance, the calculated trap stiffnesses $\kappa_t$ are of the order 0.1 to 1 fN/nm and 0.05 to 0.2 fN/nm for the transverse and longitudinal direction, respectively \[Fig. \[fig.stiff\]\]. The trapping forces increase linearly with increasing light intensity and decreasing tip-to-tip distance. The important difference between the longitudinal and transversal trapping forces is due to the non-conformity of the optical trap. The scattering forces from the two counter-propagating beams push the particle in two opposite directions whereas the gradient forces pull it into the same direction. Consequently the sphere is strongly stabilized in the transverse direction. A control video of a particle fixed on a substrate is recorded to evaluate the noise of the experimental set. The obtained probability distribution is of Gaussian shape with a width of the order of 50 nm. This value is used for the correction of the measured trap stiffness \[Fig. \[fig.stiff\]\]. The correction becomes significant for higher power transverse trapping, with its narrow position distribution. ![Trap stiffness along transversal and longitudinal directions as a function of laser power at the end of each fibers (a) and as a function of fiber tip-to-tip distance (b).[]{data-label="fig.stiff"}](fig_kappa_I.eps "fig:"){width="6.cm"} ![Trap stiffness along transversal and longitudinal directions as a function of laser power at the end of each fibers (a) and as a function of fiber tip-to-tip distance (b).[]{data-label="fig.stiff"}](fig_kappa_d.eps "fig:"){width="6.cm"} The potential wells calculated from the position distributions are shown on Fig. \[fig.prob\]. With the exception of the low intensity case, the potentials are of clear parabolic shape. This point justifies the subsequent application of the power spectra and autocorrelation models for the determination of the trapping forces. The results of these models will be presented only for the series with a fixed tip-to-tip distance. The autocorrelation functions of the transverse position fluctuations are shown on Fig. \[fig.ec\](b). The corner frequencies $f_c$ are obtained by fitting to single exponential decay curves. The autocorrelation function decreases faster for stronger trapping with, for example, $f_c= 7.07$ and 1.53 Hz for light intensities of 9.0 and 2.6 mW respectively. The corresponding trapping stiffnesses are calculated using $\gamma_0=9.44 \times 10^{-9}$ Ns/m for 1 $\mu$m particles in water at room temperature \[Fig. \[fig.stiff\]\]. 
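A compact numerical sketch of the three estimators, applied to a single position trace (in metres, sampled at the camera rate), is given below. The Lorentzian fit is done on the raw periodogram for brevity, whereas in practice the spectrum is blocked or averaged before fitting [@BF04]; the function names are ad hoc and not part of the analysis software used here.

```python
import numpy as np
from scipy.optimize import curve_fit

kB, T, gamma0 = 1.380649e-23, 295.0, 9.44e-9   # gamma0 for a 1 um sphere in water

def kappa_equipartition(x):
    """Boltzmann/equipartition: kB*T = kappa*<x^2>."""
    x = x - x.mean()
    return kB * T / np.var(x)

def kappa_autocorrelation(x, fs, nlags=50):
    """<x(t)x(t+s)> ~ <x^2> exp(-s/tau0), with tau0 = gamma0/kappa = 1/(2*pi*fc)."""
    x = x - x.mean()
    ac = np.array([np.mean(x[: x.size - k] * x[k:]) for k in range(nlags)])
    lags = np.arange(nlags) / fs
    tau0 = curve_fit(lambda s, tau: np.exp(-s / tau), lags, ac / ac[0], p0=[0.02])[0][0]
    return gamma0 / tau0

def kappa_power_spectrum(x, fs):
    """One-sided Lorentzian P(f) = A/(fc^2 + f^2); kappa = 2*pi*gamma0*fc."""
    x = x - x.mean()
    f = np.fft.rfftfreq(x.size, 1.0 / fs)[1:]
    P = (2.0 / (fs * x.size)) * np.abs(np.fft.rfft(x))[1:]**2   # periodogram
    p0 = [5.0, 25.0 * np.median(P[f < 2.0])]                    # crude starting values
    fc = abs(curve_fit(lambda f, fc, A: A / (fc**2 + f**2), f, P, p0=p0)[0][0])
    return 2.0 * np.pi * gamma0 * fc
```

Applied separately to the transverse and longitudinal traces, the three functions return the stiffness directly in N/m (1 fN/nm = $10^{-6}$ N/m).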
Finally, the transverse spectral densities are plotted on Fig. \[fig.ec\](a). The agreement with the best numerical fit to Eq. \[eq.eou\] is very good in the low frequency regime. A slight shift is, however, observed for frequencies above the corner frequency. As stated before, our model does not contain any free fitting parameter in this frequency regime. We suggest that the observed discrepancy is caused by the limited performance of our CMOS camera at these frequencies. The $\kappa$ values obtained by the low frequency fits are included in Fig. \[fig.stiff\](a). The agreement of the values deduced by the three presented models is very good for the weaker longitudinal trapping direction. The agreement remains satisfactory in the transverse direction. In this case the correction made for the Boltzmann model becomes, however, significant. Conclusions =========== Optical fiber tips with nanometer size apex are used for trapping experiments in single fiber and dual fiber optical tweezers. In the single fiber tip geometry, 1 $\mu$m size dielectric particles in water are trapped for some seconds before being ejected by the scattering forces of about 0.4 – 0.7 pN. Stable trapping of the same particles is observed in the dual fiber tip tweezers for light intensities down to 2.6 mW. Trap stiffnesses of up to 1 fN/nm are deduced from particle position fluctuation traces using three different theoretical approaches. The results of the three models are in very good agreement. The Boltzmann statistics model is, however, found to be the most appropriate as it allows one to deal with non-harmonic traps and to include corrections for experimental set-up noise. The trap stiffness is about 2.5 times higher in the transverse direction than in the longitudinal direction. It increases linearly with light intensity and decreases with tip-to-tip distance. The presented results are promising for future nanoparticle trapping experiments. In this respect, preliminary trapping of single fluorescent YAG particles [@RDD+11; @MMB+13] of deep sub-micron size with our dual nano-tip tweezers confirms the potential of this trapping scheme. Acknowledgments {#acknowledgments .unnumbered} =============== Funding for this project was provided by the French National Research Agency in the framework of the FiPlaNT project (ANR-12-BS10-002). The authors thank Jean-François Motte for the elaboration of the applied fiber tips. Helpful discussions with G. Colas des Francs, G. Dantelle, and T. Gacoin are gratefully acknowledged.
--- abstract: | Let $\mathds{k}$ be an algebraically closed field. We give a complete isomorphism classification of non-connected pointed Hopf algebras of dimension $16$ with $\operatorname{char}\mathds{k}=2$ that are generated by group-like elements and skew-primitive elements. It turns out that there are infinitely many classes (up to isomorphism) of pointed Hopf algebras of dimension 16. In particular, we obtain infinitely many new examples of non-commutative non-cocommutative finite-dimensional pointed Hopf algebras. [**Keywords:**]{} Nichols algebra; Pointed Hopf algebra; Positive characteristic; Lifting method. address: 'School of Mathematical Sciences, Shanghai Key Laboratory of PMMP,East China Normal University, Shanghai 200241, China' author: - Rongchuan Xiong title: 'On non-connected pointed Hopf algebras of dimension 16 in characteristic $2$' --- \[section\] \[defi\][Theorem]{} \[defi\][Lemma]{} \[defi\][Proposition]{} \[defi\][Corollary]{} \[defi\][Remark]{} \[section\] \[maint\][Theorem]{} Introduction ============ Let $\mathds{k}$ be an algebraically closed field of positive characteristic. It is a difficult question to classify Hopf algebras over ${\mathds{k}}$ of a given dimension. Indeed, the complete classifications have been done only for prime dimensions (see [@NW18]). One may obtain partial classification results by determining Hopf algebras with some properties. To date, pointed ones are the class best classified. Let $p,q,r$ be distinct prime numbers and ${\operatorname{char}}{\mathds{k}}=p$. G. Henderson classified cocommutative connected Hopf algebras of dimension less than or equal to $p^3$ [@Hen95]; X. Wang classified connected Hopf algebras of dimension $p^2$ [@W1] and pointed ones with L. Wang [@WW]; V. C. Nguyen, L. Wang and X. Wang determined connected Hopf algebras of dimension $p^3$ [@NWW1; @NWW2]; Nguyen-Wang [@NW] studied the classification of non-connected pointed Hopf algebras of dimension $p^3$ and classified coradically graded ones; motivated by [@SO; @NW], the author gave a complete classification of pointed Hopf algebras of dimension $pq$, $pqr$, $p^2q$, $2q^2$, $4p$ and pointed Hopf algebras of dimension $pq^2$ whose diagrams are Nichols algebras. It should be mentioned that S. Scherotzke classified finite-dimensional pointed Hopf algebras whose infinitesimal braidings are one-dimensional and the diagrams are Nichols algebras [@S]; N. Hu, X. Wang and Z. Tong constructed many examples of pointed Hopf algebras of dimension $p^n$ for some $n\in{\mathds{N}}$ via quantizations of the restricted universal enveloping algebras of the restricted modular simple Lie algebras of Cartan type, see [@HW07; @HW11; @THW15; @TH16]; C. Cibils, A. Lauve and S. Witherspoon constructed several examples of finite-dimensional pointed Hopf algebras whose diagrams are Nichols algebras of Jordan type [@CLW]; N. Andruskiewitsch, et al. constructed some examples of finite-dimensional coradically graded pointed Hopf algebras whose diagram are Nichols algebras of non-diagonal type [@AAH19], which extends the work in [@CLW]. Until now, it is still an open question to give a complete classification of non-connected pointed Hopf algebras of dimension $p^3$ or pointed ones of dimension $pq^2$ whose diagrams are not Nichols algebras for odd prime numbers $p,q$. In this paper, we study the classification of non-connected pointed Hopf algebras over ${\mathds{k}}$ of dimension $16$ that are generated by group-like elements and skew-primitive elements. Indeed, S. Caenepeel, S. 
Dăscălescu and S. Raianu classified all pointed complex Hopf algebras of dimension $16$ [@CDR00]. We mention that the classification of pointed Hopf algebras $H$ over ${\mathds{k}}$ with $(\dim H, {\operatorname{char}}{\mathds{k}})=1$ yields similar isomorphism classes as in the case of characteristic zero. Therefore, we deal with pointed Hopf algebras of dimension $16$ with ${\operatorname{char}}{\mathds{k}}=2$. The strategy follows the ideas in [@AS98b], that is, the so-called lifting method. Let $H$ be a finite dimensional Hopf algebra such that the coradical $H_0$ is a Hopf subalgebra, then ${\operatorname{gr}}H$, the graded coalgebra of $H$ associated to the coradical filtration, is a Hopf algebra with projection onto the coradical $H_0$. By [@R85 Theorem 2], there exists a connected graded braided Hopf algebra $R=\oplus_{n=0}^{\infty}R(n)$ in ${}^{H_0}_{H_0}\mathcal{YD}$ such that ${\operatorname{gr}}H\cong R\sharp H_0$. We call $R$ and $R(1)$ the *diagram* and *infinitesimal braiding* of $H$, respectively. Furthermore, the diagram $R$ is coradically graded and the subalgebra generated by $V$ is the so-called Nichols algebra ${\mathcal{B}}(V)$ over $V:=R(1)$, which plays a key role in the classification of pointed complex Hopf algebras. In particular, pointed Hopf algebras are generated by group-like elements and skew-primitive elements if and only if the diagrams are Nichols algebras. See [@AS02] for details. By means of the lifting method [@AS98b], we classify all non-connected Hopf algebras of dimension $16$ with ${\operatorname{char}}{\mathds{k}}=2$ whose diagrams are Nichols algebras. See Theorem \[thm:16-diagram-Nichols-algebra\] for the classification results. Contrary to the case of characteristic zero, there exist infinitely many isomorphism classes, which provides a counterexample to Kaplansky’s 10-th conjecture, and there are infinitely many classes of pointed Hopf algebras of dimension $16$ with non-abelian coradicals. Besides, we also classify pointed Hopf algebras of dimension $p^4$ with some properties, see e.g. Theorem \[thm:p4-x1y0z0\]. In particular, we obtain infinitely many new examples of non-commutative non-cocommutative finite-dimensional pointed Hopf algebras. The paper is organized as below: In section \[secPre\], we introduce necessary notations and materials that we will need to study pointed Hopf algebras in positive characteristic. In section \[sec:p4\], we study pointed Hopf algebras of dimension $p^4$ with some properties. In section \[sec:16\], we classify non-connected pointed Hopf algebras of dimension $16$ whose diagrams are Nichols algebras. The classification of pointed ones whose diagrams are not Nichols algebras is much more difficult and requires different techniques, such as the Hochschild cohomology of coalgebras (see e.g. [@SO; @NW; @WZZ]). We shall treat them in a subsequent work. Preliminaries {#secPre} ============= Conventions {#conventions .unnumbered} ----------- We work over an algebraically closed field ${\mathds{k}}$ of positive characteristic. Denote by ${\operatorname{char}}{\mathds{k}}$ the characteristic of ${\mathds{k}}$, by ${\mathds{N}}$ the set of natural numbers, and by $C_n$ the cyclic group of order $n$. ${\mathds{k}}^{\times}={\mathds{k}}-\{0\}$. Given $n\geq k\geq 0$, ${\mathbb{I}}_{k,n}=\{k,k+1,\ldots,n\}$. Let $C$ be a coalgebra. Then the set ${\mathbf{G}}(C):=\{c\in C\mid \Delta(c)=c\otimes c,\ \epsilon(c)=1\}$ is called the set of *group-like* elements of $C$. 
For any $g,h\in{\mathbf{G}}(C)$, the set ${\mathcal{P}}_{g,h}(C):=\{c\in C\mid \Delta(c)=c\otimes g+h\otimes c\} $ is called the space of $(g,h)$-*skew primitive elements* of $C$. In particular, the linear space ${\mathcal{P}}(C):={\mathcal{P}}_{1,1}(C)$ is called the set of *primitive elements*. Unless otherwise stated, “pointed" refers to “nontrivial pointed" in our context. Our references for Hopf algebra theory are [@R11]. Yetter-Drinfeld modules and bonsonizations ------------------------------------------ Let $H$ be a Hopf algebra with bijective antipode. A left *Yetter-Drinfeld module* $M$ over $H$ is a left $H$-module $(M,\cdot)$ and a left $H$-comodule $(M,\delta)$ satisfying $$\begin{aligned} \label{eq-YD-Def} \delta(h\cdot v)=h_{(1)}v_{(-1)}S(h_{(3)})\otimes h_{(2)}\cdot v_{(0)}, \quad\forall v\in V,h\in H.\end{aligned}$$ Let ${}^{H}_{H}\mathcal{YD}$ be the category of Yetter-Drinfeld modules over $H$. Then ${}^{H}_{H}\mathcal{YD}$ is braided monoidal. For $V,W\in {}^{H}_{H}\mathcal{YD}$, the braiding $c_{V,W}$ is given by $$\begin{aligned} \label{equbraidingYDcat} c_{V,W}:V\otimes W\mapsto W\otimes V,\ v\otimes w\mapsto v_{(-1)}\cdot w\otimes v_{(0)},\ \forall\,v\in V, w\in W.\end{aligned}$$ In particular, $c:=c_{V,V}$ is a linear isomorphism satisfying the braid equation $(c\otimes\text{id})(\text{id}\otimes c)(c\otimes\text{id})=(\text{id}\otimes c)(c\otimes\text{id})(\text{id}\otimes c)$, that is, $(V,c)$ is a braided vector space. \[rmk:dimV=1\] Let $V\in{{}^{H}_{H}\mathcal{YD}}$ such that $\dim V=1$. Let $\{v\}$ be a basis of $V$. By definition, there is an algebra map $\chi:H\rightarrow{\mathds{k}}$ and $g\in{\mathbf{G}}(H)$ satisfying $$\begin{aligned} h_{(1)}\chi(h_{(2)})g=gh_{(2)}\chi(h_{(1)}),\end{aligned}$$ such that $\delta(v)=g\otimes v$, $h\cdot v=\chi(h)v$. Moreover, $g$ lies in the center of ${\mathbf{G}}(H)$. Suppose that $H={\mathds{k}}[G]$, where $G$ is a group. We write ${}_G^G\mathcal{YD}$ for the category of Yetter-Drinfeld modules over ${\mathds{k}}[G]$. Let $V\in{}_G^G\mathcal{YD}$. Then $V$ as a $G$-comodule is just a $G$-graded vector space $V:=\oplus_{g\in G}V_g$, where $V_g:=\{v\in V\mid \delta(v)=g\otimes v\}$. In this case, the condition is equivalent to the condition $g\cdot V_h\subset V_{ghg^{-1}}$. Assume in addition that the action of $G$ is diagonalizable, that is, $V=\oplus_{\chi\in \widehat{G}}V^{\chi}$, where $V^{\chi}:=\{v\in V\mid g\cdot v=\chi(g)v,\;\forall g\in G\}$. Then $$\begin{aligned} V=\oplus_{g\in G,\chi\in \widehat{G}}V_{g}^{\chi}, \text{ where }V_{g}^{\chi}=V_g\cap V^{\chi}.\end{aligned}$$ Let $G$ be a finite group. For any $g\in G$, we denote by $\mathcal{O}_g$ the conjugacy class of $g$, by $C_G(g)$ the isotropy subgroup of $g$ and by $\mathcal{O}(G)$ be the set of conjugacy classes of $G$. For any $\Omega\in\mathcal{O}(G)$, fix $g_{\Omega}\in\Omega$, then $G=\sqcup_{\Omega\in\mathcal{O}(G)}\mathcal{O}_{g_{\Omega}}$ is a decomposition of conjugacy classes of $G$. Let $\psi:{\mathds{k}}[C_G(g_{\Omega})]\rightarrow {\operatorname{End}}(V)$ be a representation of ${\mathds{k}}[C_G(g_{\Omega})]$, denoted by $(V,\psi)$. Then the induced module $M(g_{\Omega},\psi):={\mathds{k}}[G]\otimes_{{\mathds{k}}[C_G(g_{\Omega})]}V$ can be an object in ${}_G^G\mathcal{YD}$ by $$\begin{gathered} h\cdot (g\otimes v)=hg\otimes v, \quad \delta(g\otimes v)=gg_{\Omega}g^{-1}\otimes (g\otimes v),\quad h,g\in G, v\in V.\end{gathered}$$ In particular, $\dim M(g_{\Omega},\psi)=[G,C_G(g_{\Omega})]\times \dim V$. 
Furthermore, indecomposable objects in ${}_G^G\mathcal{YD}$ are indexed by the pairs $(V,\psi)$, see e.g. [@M; @Witherspoon]. [@M; @Witherspoon]\[thm:indecomposable-object-YD-over-groups\] $M(g_{\Omega},\psi)$ is an indecomposable object in ${}_G^G\mathcal{YD}$ if and only if $(V,\psi)$ is an indecomposable ${\mathds{k}}[C_G(g_{\Omega})]$-module. Furthermore, any indecomposable object in ${}_G^G\mathcal{YD}$ is isomorphic to $M(g_{\Omega},\psi)$ for some $\Omega\in\mathcal{O}(G)$ and indecomposable ${\mathds{k}}[C_G(g_{\Omega})]$-module $(V,\psi)$. Let $C_{p^s}:=\langle g\rangle$ and ${\operatorname{char}}{\mathds{k}}=p$. Then the $p^s$ non-isomorphic indecomposable $C_{p^s}$-modules consist of $r$-dimensional modules $V_r={\mathds{k}}\{v_1,v_2,\cdots,v_r\}$ for $r\in{\mathbb{I}}_{1,p^s}$, whose module structure given by $$\begin{gathered} g\cdot v_1=v_1,\quad g\cdot v_{m}=v_{m}+v_{m-1},\quad 1<m\leq r.\end{gathered}$$ The following well-known result follows directly by Theorem \[thm:indecomposable-object-YD-over-groups\]. See e.g.[@DC] for details. \[pro:indecomposable-object-cyclic-p-group\] Let $C_{p^s}:=\langle g\rangle$ and ${\operatorname{char}}{\mathds{k}}=p$. The indecomposable objects in ${}_{C_{p^s}}^{C_{p^s}}\mathcal{YD}$ consist of $r$-dimensional objects $M_{i,r}:=M(g^i,V_r)={\mathds{k}}\{v_1,v_2,\cdots,v_r\}$ for $r\in{\mathbb{I}}_{1,p^s}$, $i\in{\mathbb{I}}_{0,p^s-1}$, whose Yetter-Drinfeld module structure given by $$\begin{gathered} g\cdot v_1=v_1,\quad g\cdot v_{m}=v_{m}+v_{m-1},\quad 1<m\leq r;\quad \delta(v_n)=g^i\otimes v_n,\quad n\in{\mathbb{I}}_{1,r}.\end{gathered}$$ Let $R$ be a braided Hopf algebra in ${}^{H}_{H}\mathcal{YD}$. We write $\Delta_R(r)=r^{(1)}\otimes r^{(2)}$ for the comultiplication to avoid confusions. The *bosonization or Radford biproduct* $R\sharp H$ of $R$ by $H$ is a Hopf algebra over ${\mathds{k}}$ defined as follows: $R\sharp H=R\otimes H$ as a vector space, and the multiplication and comultiplication are given by the smash product and smash coproduct, respectively: $$\begin{aligned} (r\sharp g)(s\sharp h)=r(g_{(1)}\cdot s)\sharp g_{(2)}h,\quad \Delta(r\sharp g) =r^{(1)}\sharp (r^{(2)})_{(-1)}g_{(1)}\otimes (r^{(2)})_{(0)}\sharp g_{(2)}.\end{aligned}$$ Clearly, the map $\iota:H\rightarrow R\sharp H, h\mapsto 1\sharp h,\ h\in H$ is injective and the map $\pi:R\sharp H\rightarrow H,r\sharp h\mapsto \epsilon_R(r)h,\ r\in R, h\in H$ is surjective such that $\pi\circ\iota={\operatorname{id}}_H$. Furthermore, $R=(R\sharp H)^{coH}=\{x\in R\sharp H\mid ({\operatorname{id}}\otimes\pi)\Delta(x)=x\otimes 1\}$. Conversely, if $A$ is a Hopf algebra and $\pi:A\rightarrow H$ is a bialgebra map admitting a bialgebra section $\iota:H\rightarrow A$ such that $\pi\circ\iota={\operatorname{id}}_H$, then $A\simeq R\sharp H$, where $R=A^{coH}$ is a braided Hopf algebra in ${}^{H}_{H}\mathcal{YD}$. See [@R11] for details. Braided vector spaces and Nichols algebras ------------------------------------------ We follows [@AS02] to introduce the definition of Nichols algebras. Let $(V, c)$ be a braided vector space. Then the tensor algebra $T(V)=\oplus_{n\geq 0}T^n(V):=\oplus_{n\geq 0}V^{\otimes n}$ admits a connected braided Hopf algebra structure with the comultiplication determined by $\Delta(v)=v\otimes 1+1\otimes v$ for any $v\in V$. The braiding can be extended to $c : T(V)\otimes T(V)\rightarrow T(V)\otimes T(V)$ in the usual way. 
Then the braided commutator is defined by $$\begin{aligned} [x,y]_c=xy-m_{T(V)}\cdot c(x\otimes y),\quad x, y \in T(V).\end{aligned}$$ Let $\mathbb B_n$ be the braid group presented by generators $(\tau_j)_{j \in {\mathbb{I}}_{1,n-1}}$ with the defining relations $$\begin{aligned} \label{eq:braid-rel} \tau_i\tau_j = \tau_j\tau_i \ \text{ for } |i-j|\geq 2,\quad \tau_i \tau_{i+1}\tau_i = \tau_{i+1}\tau_i\tau_{i+1} \ \text{ for } i\in{\mathbb{I}}_{1,n-2}.\end{aligned}$$ Then there is a natural representation $\varrho_n$ of $\mathbb B_n$ on $T^n(V)$ for $n \geq 2$ given by $$\varrho_n: \tau_j \mapsto c_j:={\operatorname{id}}_{V^{\otimes (j-1)}} \otimes c \otimes {\operatorname{id}}_{V^{\otimes (n - j-1)}}.$$ Let $M_n: \mathbb{S}_n \rightarrow \mathbb B_n$ be the (set-theoretical) Matsumoto section, which preserves the length and satisfies $M_n(s_j) = \tau_j$. Then the *quantum symmetrizer* $\Omega_n:V^{\otimes n}\rightarrow V^{\otimes n}$ is defined by $$\begin{aligned} \Omega_n = \sum_{\sigma \in \mathbb{S}_n} \varrho_n (M_n(\sigma)).\end{aligned}$$ For instance, $\Omega_2={\operatorname{id}}+c$. \[defi-Nicholsalgebra\] Let $(V, c)$ be a braided vector space. The Nichols algebra ${\mathcal{B}}(V)$ is defined by $$\begin{aligned} {\mathcal{B}}(V) = T(V) / {\mathcal{J}}(V),\quad \text{where }{\mathcal{J}}(V) = \oplus_{n\geq 2}{\mathcal{J}}^n(V) \text{ and }{\mathcal{J}}^n(V) = \ker \Omega_n.\end{aligned}$$ Indeed, ${\mathcal{J}}(V)$ coincides with the largest homogeneous ideal of $T(V)$ generated by elements of degree at least $2$ that is also a coideal. Moreover, ${\mathcal{B}}(V)=\oplus_{n\geq 0}{\mathcal{B}}^n(V)$ is a connected ${\mathds{N}}$-graded Hopf algebra. A braided vector space $(V, c)$ of rank $m$ is said to be of diagonal type if there exists a basis $\{x_i\}_{i\in{\mathbb{I}}_{1,m}}$ such that $c(x_i\otimes x_j)=q_{ij}x_j\otimes x_i$ for $q_{ij}\in{\mathds{k}}^{\times}$. Rank 2 and 3 Nichols algebras of diagonal type with finite PBW-generators were classified in [@WH; @W]. A braided vector space $(V,c)$ of rank $m>1$ is said to be of Jordan type, denoted by $\mathcal{V}(s,m)$, if there exists a basis $\{x_i\}_{i\in{\mathbb{I}}_{1,m}}$ such that $$\begin{gathered} c(x_i\otimes x_1)=sx_1\otimes x_i,\quad \text{and}\quad c(x_i\otimes x_j)=(sx_j+x_{j-1})\otimes x_i,\quad i\in{\mathbb{I}}_{1,m},j\in{\mathbb{I}}_{2,m}.\end{gathered}$$ Let ${\operatorname{char}}{\mathds{k}}=p$. Then it is easy to see that $\dim{\mathcal{B}}(\mathcal{V}(1,m))\geq p^m$. See e.g. [@AAH19] for details. An ${\mathds{N}}$-graded Hopf algebra $R=\oplus_{n\geq 0} R(n)$ in ${}^{H}_{H}\mathcal{YD}$ is a Nichols algebra if and only if $(1)$ $R(0)\cong{\mathds{k}}$, $(2)$ ${\mathcal{P}}(R)=R(1)$, $(3)$ $R$ is generated as an algebra by $R(1)$. Recall that an object in the category of Yetter-Drinfeld modules is a braided vector space. [@T00 Theorem 5.7]\[pro-Nichols-YD-Realization\] Let $(V,c)$ be a rigid braided vector space. Then ${\mathcal{B}}(V)$ can be realized as a braided Hopf algebra in ${{}^{H}_{H}\mathcal{YD}}$ for some Hopf algebra $H$. By Definition \[defi-Nicholsalgebra\] and Proposition \[pro-Nichols-YD-Realization\], ${\mathcal{B}}(V)$ depends only on $(V, c_{V,V})$ and the same braided vector space can be realized in ${{}^{H}_{H}\mathcal{YD}}$ in many ways and for many $H$’s.

Several lemmas and propositions
-------------------------------

We introduce some important techniques in positive characteristic. For more details, we refer to [@J; @NWW1; @NWW2; @NW; @S] and references therein.
Let $({\text{ad}_L\,}x)(y) := [x, y]$ and $(x)({\text{ad}_R\,}y)=[x, y]$. The following propositions are very useful in positive characteristic. [@J]\[proJ\] Let $A$ be any associative algebra over a field. For any $a, b\in A$, $$\begin{gathered} ({\text{ad}_L\,}a)^p(b)=[a^p,b],\quad ({\text{ad}_L\,}a)^{p-1}(b)=\sum_{i=0}^{p-1}a^{i}ba^{p-1-i};\\ (a)({\text{ad}_R\,}b)^{p}=[a,b^p],\quad (a)({\text{ad}_R\,}b)^{p-1}=\sum_{i=0}^{p-1}b^{p-1-i}ab^i.\end{gathered}$$ Furthermore, $$\begin{aligned} (a + b)^p=a^p+b^p+\sum_{i=1}^{p-1}s_i(a,b),\end{aligned}$$ where $is_i(a,b)$ is the coefficient of $\lambda^{i-1}$ in $(a)({\text{ad}_R\,}\lambda a+b)^{p-1}$, $\lambda$ an indeterminate. \[pqlem1\] Let $A$ be an associative algebra over ${\mathds{k}}$ with generators $g$, $x$, subject to the relations $g^n=1, gx-xg= g(1-g)$. Assume that ${\operatorname{char}}{\mathds{k}}=p>0$ and $p\mid n$. Then (1) : $g^ix=xg^i+ig^{i}-ig^{i+1}$. In particular, $g^px=xg^p$. (2) : [@NW Lemma5.1(1)] $(g)({\text{ad}_R\,}x)^{p-1}=g-g^p$, $(g)({\text{ad}_R\,}x)^{p}=[g,x]$. (3) : $({\text{ad}_L\,}x)^{p-1}(g)=g-g^p$, $[x^p, g]=({\text{ad}_L\,}x)^p(g)=[x,g]$. \[pqlem2\][@NW] Let ${\operatorname{char}}{\mathds{k}}=p>0$, $k\in{\mathds{N}}-\{0\}$ and $\mu\in{\mathbb{I}}_{1,pk-1}$. Let $A$ be an associative algebra generated by $g$, $x$, $y$. Assume that the relations $$\begin{gathered} g^{pk}=1, \quad gx-xg=\lambda_1(g-g^2),\quad gy-yg=\lambda_2(g-g^{\mu+1}),\\x^p-\lambda_1x=0,\quad y^p-\lambda_2y=0,\quad xy-yx+\mu\lambda_1 y-\lambda_2x=\lambda_3(1-g^{\mu+1}),\end{gathered}$$ hold in $A$ for some $\lambda_1,\lambda_2\in{\mathbb{I}}_{0,1},\lambda_3\in{\mathds{k}}$. Then (1) : $(x)({\text{ad}_R\,}y)^n=\lambda_2^{n-1}(x)({\text{ad}_R\,}y)-\lambda_3\sum_{i=0}^{n-2}\lambda_2^i(g^{\mu+1})({\text{ad}_R\,}y)^{n-1-i}$. In particular, if $k=1$, then $(x)({\text{ad}_R\,}y)^p=\lambda_2^{p-1}(x)({\text{ad}_R\,}y)$. (2) : $({\text{ad}_L\,}x)^n(y)=(-\mu\lambda_1)^{n-1}({\text{ad}_L\,}x)(y)-\lambda_3\sum_{i=0}^{n-2}(-\mu\lambda_1)^i({\text{ad}_L\,}x)^{n-1-i}(g^{\mu+1})$. In particular, if $k=1$, then $({\text{ad}_L\,}x)^p(y)=(-\mu\lambda_1)^{p-1}({\text{ad}_L\,}x)(y)$. The following lemma extends [@NW Proposition 3.9] \[lem:R-V\] Let ${\operatorname{char}}{\mathds{k}}=p$ and $G$ be a group of order $p^m$ for $m\in{\mathbb{I}}_{1,3}$. Let $V\in{}_{G}^{G}\mathcal{YD}$ such that $\dim V>4-m$. Then $\dim {\mathcal{B}}(V)>p^{4-m}$. The proof follows the same lines of [@NW Proposition 3.9]. Now we introduce the following proposition, which is useful to determine when a coalgebra map is one-one. [@R11 Proposition 4.3.3]\[pro:R11-4.3.3\] Let $C, D$ be coalgebras over ${\mathds{k}}$ and $f: C \rightarrow D$ is a coalgebra map. Assume that $C$ is pointed. Then the following are equivalent: - $f$ is one-one. - For any $g,h\in{\mathbf{G}}(C)$, $f|_{{\mathcal{P}}_{g,h}(C)}$ is one-one. - $f|_{C_1}$ is one-one. On pointed Hopf algebras of dimension $p^4$ {#sec:p4} =========================================== Let $p$ be a prime number and ${\operatorname{char}}{\mathds{k}}=p$. We study pointed Hopf algebras of dimension $p^4$ with some properties, which will be used to obtain our main results. In particular, we obtain some classification results of pointed Hopf algebras of dimension $p^4$ with some properties. We mention that N. Andruskiewitsch and H. J. Schneider classified pointed complex Hopf algebra of $p^4$ for an odd prime $p$ [@AS00b]; S. Caenepeel, S. Dăscălescu and S. 
Raianu classified all pointed complex Hopf algebras of dimension $16$ [@CDR00]; and the Hopf subalgebra of dimension $p^3$ have already appeared in [@NW]. \[lem:cyclic-groups-dimV=2\] Let ${\operatorname{char}}{\mathds{k}}=p$, $C_{p^s}:=\langle g\rangle$ and $V$ be an object in ${}_{C_{p^s}}^{C_{p^s}}\mathcal{YD}$ such that $\dim{\mathcal{B}}(V)=p^2$. Then $\dim V=2$. Furthermore, - If ${\mathcal{B}}(V)$ is of diagonal type, then $V\cong M_{i,1}\oplus M_{j,1}$ for $i,j\in{\mathbb{I}}_{0,p^s-1}$ or $M_{k,2}$ for $p\mid k\in{\mathbb{I}}_{0,p^s-1}$ and hence ${\mathcal{B}}(V)\cong{\mathds{k}}[x,y]/(x^p,y^p)$. - If ${\mathcal{B}}(V)$ is not of diagonal type, then $p>2$, $V\cong M_{i,2}$ for $p\nmid i\in{\mathbb{I}}_{1,p^s-1}$ and hence ${\mathcal{B}}(V)\cong {\mathds{k}}\langle x,y\rangle/(x^p,y^p,yx-xy+\frac{1}{2}x^2)$. Observe that $\dim{\mathcal{B}}(V)=p$ if $\dim V=1$. Then by [@NW Proposition 3.9], $\dim V=2$. By Proposition \[pro:indecomposable-object-cyclic-p-group\], $V\cong M_{i,1}\oplus M_{j,1}$ for $i,j\in{\mathbb{I}}_{0,p^s-1}$ or $M_{k,2}$ for $k\in{\mathbb{I}}_{0,p^s-1}$. Assume that $V\cong M_{i,1}\oplus M_{j,1}$ for $i,j\in{\mathbb{I}}_{0,p^s-1}$. Then $V$ is of diagonal type with trivial braiding, which implies that ${\mathcal{B}}(V)\cong{\mathds{k}}[x,y]/(x^p,y^p)$. Assume that $V\cong M_{k,2}:={\mathds{k}}\{v_1,v_2\}$ for $k\in{\mathbb{I}}_{0,p^s-1}$. Then the braiding of $V$ is $$\begin{aligned} c(\left[\begin{array}{ccc} x\\y \end{array}\right]\otimes\left[\begin{array}{ccc} x~y \end{array}\right])= \left[\begin{array}{ccc} x\otimes x & (y+kx)\otimes x \\ x\otimes y & (y+kx)\otimes y \end{array}\right]. \end{aligned}$$ If $p\mid k$, then $V$ is of diagonal type with trivial braiding and hence ${\mathcal{B}}(V)\cong{\mathds{k}}[x,y]/(x^p,y^p)$. If $p\nmid k$, then $V$ is of Jordan type and hence by [@CLW Theorem 3.1 and 3.5], $p>2$ and ${\mathcal{B}}(V)\cong {\mathds{k}}\langle x,y\rangle/(x^p,y^p,yx-xy+\frac{1}{2}x^2)$. Let $G$ be a finite group and $V\in{}_G^G\mathcal{YD}$. If $\dim V=2$, then by [@NW Proposition 3.3], $V$ is either of diagonal type or of Jordan type. \[lem:cyclic-groups-dimV=3\] Let ${\operatorname{char}}{\mathds{k}}=p$, $C_p:=\langle g\rangle$ and $V$ be a decomposable object in ${}_{C_p}^{C_p}\mathcal{YD}$ such that $\dim{\mathcal{B}}(V)=p^3$. Then $\dim V=3$. Furthermore, - If ${\mathcal{B}}(V)$ is of diagonal type, then $V\cong M_{i,1}\oplus M_{j,1}\oplus M_{k,1}$ for $i,j,k\in{\mathbb{I}}_{0,p-1}$ or $M_{0,2}\oplus M_{0,1}$ and hence ${\mathcal{B}}(V)\cong{\mathds{k}}[x,y,z]/(x^p,y^p,z^p)$. - If ${\mathcal{B}}(V)$ is not of diagonal type, then $p>2$, $V\cong M_{i,2}\oplus M_{0,1}$ for $i\in{\mathbb{I}}_{1,p-1}$ and hence ${\mathcal{B}}(V)\cong {\mathds{k}}\langle x,y,z\rangle/(x^p,y^p,z^p,yx-xy+\frac{1}{2}x^2,[x,z],[y,z])$. By Lemma \[lem:R-V\], $\dim V<4$. If $\dim V=1$, then ${\mathcal{B}}(V)\cong{\mathds{k}}[x]/(x^p)$ and hence $\dim{\mathcal{B}}(V)=p$. If $\dim V=2$, then $V\cong M_{i,1}\oplus M_{j,1}$ or $M_{i,2}$ for $i,j\in{\mathbb{I}}_{0,p-1}$ and hence $V$ is of diagonal type or of Jordan type. Then by [@NW Proposition 3.7], $\dim{\mathcal{B}}(V)=p^2$ or $16$. Consequently, $\dim V=3$. Observe that $V$ is a decomposable object in ${}_{C_p}^{C_p}\mathcal{YD}$. Then $V\cong M_{i,1}\oplus M_{j,1}\oplus M_{k,1}$ or $M_{i,2}\oplus M_{j,1}$ for $i,j,k\in{\mathbb{I}}_{0,p-1}$. Assume that $V\cong M_{i,1}\oplus M_{j,1}\oplus M_{k,1}$ for $i,j,k\in{\mathbb{I}}_{0,p-1}$. 
Then $V$ is of diagonal type with trivial braiding and hence ${\mathcal{B}}(V)\cong{\mathds{k}}[x,y,z]/(x^p,y^p,z^p)$. Assume that $V\cong M_{i,2}\oplus M_{j,1}:={\mathds{k}}\{x,y\}\oplus{\mathds{k}}\{z\}$ for $i,j\in{\mathbb{I}}_{0,p-1}$. Then the braiding of $V$ is $$\begin{aligned} c(\left[\begin{array}{ccc} x\\y\\z\end{array}\right]\otimes\left[\begin{array}{ccc} x~y~z\end{array}\right])= \left[\begin{array}{ccc} x\otimes x & (y+ix)\otimes x & z \otimes x\\ x\otimes y & (y+ix)\otimes y & z\otimes y\\ x\otimes z & (y+jx)\otimes z& z\otimes z \end{array}\right]. \end{aligned}$$ If $i=0=j$, then $V$ has trivial braiding and hence ${\mathcal{B}}(V)\cong{\mathds{k}}[x,y,z]/(x^p,y^p,z^p)$. If $i=0$ and $j\neq 0$, then $V$ is not of diagonal type, which also appeared in [@AAH19 7.1]. We claim that $\dim{\mathcal{B}}(V)>p^3$. Indeed, if $p>2$, then by [@AAH19 Proposition 7.1], $\dim{\mathcal{B}}(V)=2^pp^2$; if $p=2$, then the proof follows the same lines. Indeed, it is easy to show that $\{x^iy^j[z,x]^kz^l\}_{i,j,k,l\in{\mathbb{I}}_{0,1}}$ is linearly independent in ${\mathcal{B}}(V)$. If $i\neq 0$ and $j\neq 0$, then without loss of generality, we assume that $i=1$. In this case, $V$ is not of diagonal type, which also appeared in [@AAH19]. If $p=2$, then by [@CLW Theorem 3.1], $\dim{\mathcal{B}}(V)>16$, a contradiction. If $p>2$, then by [@AAH19], $\dim{\mathcal{B}}(V)>p^3$, a contradiction. Consequently, if $V$ is not of diagonal type, then $p>2$ and $V\cong M_{i,2}\oplus M_{0,1}$ for $i\in{\mathbb{I}}_{1,p-1}$. Clearly, $c^2={\operatorname{id}}$ if and only if $j=0$. Hence by [@G Theorem 2.2], ${\mathcal{B}}(V)\cong{\mathcal{B}}(M_{i,2})\otimes{\mathcal{B}}(M_{0,1})$. If $p=2$, then by Proposition \[pro:indecomposable-object-cyclic-p-group\], the objects of dimension greater than 2 in ${}_{C_2}^{C_2}\mathcal{YD}$ must be decomposable in ${}_{C_2}^{C_2}\mathcal{YD}$. \[lem:p4-x1y1z1\] Let $p$ be a prime number and ${\operatorname{char}}{\mathds{k}}=p$. Let $H$ be a pointed Hopf algebra over ${\mathds{k}}$ of dimension $p^4$. Assume that ${\operatorname{gr}}H={\mathds{k}}[g,x,y,z]/(g^p-1,x^p,y^p,z^p)$ with $g\in{\mathbf{G}}(H)$ and $x,y,z\in{\mathcal{P}}_{1,g}(H)$. Then the defining relations of $H$ are $$\begin{gathered} g^p=1,\quad gx-xg=\lambda_1g(1-g),\quad gy-yg=\lambda_2g(1-g),\quad gz-zg=\lambda_3g(1-g),\\ x^p-\lambda_1x=0,\quad y^p-\lambda_2y=0,\quad z^p-\lambda_3z=0,\quad xy-yx-\lambda_2x+\lambda_1y=\lambda_4(1-g^2),\\ xz-zx-\lambda_3x+\lambda_1z=\lambda_5(1-g^2),\quad yz-zy-\lambda_3y+\lambda_2z=\lambda_6(1-g^2),\end{gathered}$$ for some $\lambda_1,\lambda_2,\lambda_3\in{\mathbb{I}}_{0,1},~\lambda_4,\lambda_5,\lambda_6\in{\mathds{k}}$ with ambiguity conditions $$\begin{aligned} \lambda_2\lambda_5=\lambda_3\lambda_4+\lambda_1\lambda_6.\end{aligned}$$ It follows by a direct computation that $$\begin{aligned} \Delta(gx-xg)=(gx-xg)\otimes g+g^2\otimes (gx-xg)\Rightarrow gx-xg\in{\mathcal{P}}_{g,g^2}(H)\cap H_0.\end{aligned}$$ Hence $gx-xg=\lambda_1g(1-g)$ for some $\lambda_1\in{\mathds{k}}$. By rescaling $x$, we can take $\lambda_1\in{\mathbb{I}}_{0,1}$. Then by Proposition \[proJ\] and Lemma \[pqlem1\], $$\begin{aligned} \Delta(x^p)=(x\otimes 1+g\otimes x)^p=x^p\otimes 1+1\otimes x^p +\lambda_1(g-1)\otimes x,\end{aligned}$$ which implies that $x^p-\lambda_1x\in{\mathcal{P}}(H)$. Since ${\mathcal{P}}(H)=0$, it follows that $x^p-\lambda_1x=0$ in $H$.
Similarly, we have $$\begin{aligned} gy-yg=\lambda_2g(1-g),\quad y^p-\lambda_2y=0,\quad\lambda_2\in{\mathbb{I}}_{0,1};\\ gz-zg=\lambda_3g(1-g),\quad z^p-\lambda_3z=0,\quad\lambda_3\in{\mathbb{I}}_{0,1}.\end{aligned}$$ Then a direct computation shows that $$\begin{aligned} \Delta(xy-yx)=(xy-yx)\otimes 1+\lambda_2g(1-g)\otimes x-\lambda_1g(1-g)\otimes y+g^2\otimes (xy-yx),\end{aligned}$$ which implies that $xy-yx-\lambda_2x+\lambda_1y\in{\mathcal{P}}_{1,g^2}(H)$. Since ${\mathcal{P}}_{1,g^2}(H)={\mathds{k}}\{1-g^2\}$, it follows that $xy-yx-\lambda_2x+\lambda_1y=\lambda_4(1-g^2)$ for some $\lambda_4\in{\mathds{k}}$. Similarly, we have $$\begin{aligned} xz-zx-\lambda_3x+\lambda_1z=\lambda_5(1-g^2),\quad yz-zy-\lambda_3y+\lambda_2z=\lambda_6(1-g^2),\end{aligned}$$ for some $\lambda_5,\lambda_6\in{\mathds{k}}$. To show that $\dim H=p^4$ via the Diamond Lemma [@B], it suffices to show that the following ambiguities $$\begin{gathered} a^pb=a^{p-1}(ab), \quad a(b^p)=(ab)b^{p-1},\quad b<a,~\text{and}~a,b\in\{g,x,y,z\},\\ (ab)c=a(bc),\quad c<b<a~\text{and}~a,b,c\in\{g,x,y,z\}, \end{gathered}$$ are resolvable with respect to the order $z<y<x<g$. By Lemma \[pqlem1\], $[g, x^p]=(g)({\text{ad}_R\,}x)^p=\lambda_1^{p-1}[g,x]$ and $[g^p, x]=pg^{p-1}[g,x]=0$. Then a direct computation shows that the ambiguities $(g^{p})x=g^{p-1}(gx)$ and $g (x^p)=(gx)x^{p-1}$ are resolvable. Similarly, $(g^{p})a=g^{p-1}(ga)$ and $g (a^p)=(ga)a^{p-1}$ are resolvable for $a\in\{y,z\}$. By Lemma \[pqlem2\], $[x,y^p]=(x)({\text{ad}_R\,}y)^p=\lambda_2^{p-1}[x,y]$ and $[x^p,y]=({\text{ad}_L\,}x)^p(y)=(-\lambda_1)^{p-1}[x,y]$. Then a direct computation shows that the ambiguities $(x^{p})y=x^{p-1}(xy)$ and $x(y^p)=(xy)y^{p-1}$ are resolvable. Similarly, the ambiguities $a^pb=a^{p-1}(ab)$ and $a(b^p)=(ab)b^{p-1}$ are resolvable for $b<a$, $a,b\in\{x,y,z\}$. Now we claim that the ambiguity $g(xy)=(gx)y$ is resolvable. Indeed, $$\begin{aligned} g(xy)&=g(yx+\lambda_2x-\lambda_1y+\lambda_4(1-g^2))=(gy)x+\lambda_2gx-\lambda_1gy+\lambda_4g(1-g^2)\\ &=y(gx)+\lambda_2gx+\lambda_2xg-\lambda_2g^2x+\lambda_4g(1-g^2)-\lambda_1yg\\ &=yxg-\lambda_1yg^2+2\lambda_2xg+\lambda_1\lambda_2(g-g^2)-\lambda_2xg^2-2\lambda_1\lambda_2g^2(1-g)+\lambda_4g(1-g^2)\\ &=xyg+\lambda_2x(g-g^2)+\lambda_1yg+\lambda_1\lambda_2(g-g^2)-\lambda_1yg^2-\lambda_1\lambda_2g^2(1-g)\\ &=x(gy)+\lambda_1gy-\lambda_1g^2y=(gx)y.\end{aligned}$$ Similarly, $g(xz)=(gx)z$ and $g(yz)=(gy)z$ are resolvable. We claim that the ambiguity $x(yz)=(xy)z$ imposes $\lambda_2\lambda_5=\lambda_3\lambda_4+\lambda_1\lambda_6$.
Indeed, $$\begin{aligned} (xy)z&=[yx+\lambda_2x-\lambda_1y+\lambda_4(1-g^2)]z=y(xz)+\lambda_2xz-\lambda_1yz+\lambda_4z-\lambda_4g^2z\\ &=(yz)x+\lambda_3yx-2\lambda_1yz+\lambda_2xz+\lambda_5y(1-g^2)+\lambda_4z(1-g^2)-2\lambda_3\lambda_4g^2(1-g)\\ &=zyx+2\lambda_3yx+\lambda_2[x,z]-2\lambda_1yz+\lambda_5y(1-g^2)+\lambda_6x(1-g^2)+\lambda_4z(1-g^2)\\&\quad -2\lambda_1\lambda_6g^2(1-g)-2\lambda_3\lambda_4g^2(1-g)\\ &=zyx+2\lambda_3yx+\lambda_2\lambda_3x+\lambda_1\lambda_2z-2\lambda_1zy-2\lambda_1\lambda_3y+\lambda_5y(1-g^2)\\&\quad +\lambda_6x(1-g^2)+\lambda_4z(1-g^2)+\lambda_2\lambda_5(1-g^2)-2\lambda_1\lambda_6(1-g^2)\\&\quad -2\lambda_1\lambda_6g^2(1-g)-2\lambda_3\lambda_4g^2(1-g);\end{aligned}$$ $$\begin{aligned} x(yz)&=x(zy+\lambda_3y-\lambda_2z+\lambda_6(1-g^2))=(xz)y+\lambda_3xy-\lambda_2xz+\lambda_6x(1-g^2)\\ &=z(xy)+2\lambda_3xy-\lambda_1zy+\lambda_5(1-g^2)y-\lambda_2xz+\lambda_6x(1-g^2)\\ &=zyx-\lambda_2[x,z]-2\lambda_1zy+2\lambda_3xy+\lambda_4z(1-g^2)+\lambda_6x(1-g^2)\\&\quad+\lambda_5y(1-g^2)-2\lambda_2\lambda_5g^2(1-g)\\ &=zyx+2\lambda_3yx+\lambda_2\lambda_3x+\lambda_1\lambda_2z-2\lambda_1zy-2\lambda_1\lambda_3y+\lambda_5y(1-g^2)\\&\quad +\lambda_6x(1-g^2)+\lambda_4z(1-g^2)+2\lambda_3\lambda_4(1-g^2) -2\lambda_2\lambda_5g^2(1-g)- \lambda_2\lambda_5 (1-g^2).\end{aligned}$$ The Hopf subalgebras of $H$ in Lemma \[lem:p4-x1y1z1\] generated by $g,x,y$ appeared in [@NW] as examples of pointed Hopf algebras over ${\mathds{k}}$ of dimension $p^3$. \[thm:p4-x1y0z0\] Let $p$ be a prime number and ${\operatorname{char}}{\mathds{k}}=p$. Let $H$ be a pointed Hopf algebra over ${\mathds{k}}$ of dimension $p^4$. Assume that ${\operatorname{gr}}H={\mathds{k}}[g,x,y,z]/(g^p-1,x^p,y^p,z^p)$ with $g\in{\mathbf{G}}(H)$, $x\in{\mathcal{P}}_{1,g}$ and $y,z\in{\mathcal{P}}(H)$. 
Then $H$ is isomorphic to one of the following Hopf algebras: (1) : $H_1(\lambda):={\mathds{k}}\langle g,x,y,z\rangle/(g^p-1,[g,x],[g,y],[g,z],[x,y]-\lambda x,[x,z],[y,z]-z,x^p,y^p-y,z^p)$, (2) : ${\mathds{k}}\langle g,x,y,z\rangle/(g^p-1,[g,x],[g,y],[g,z],[x,y]-x,[x,z]-(1-g),[y,z]-z,x^p,y^p-y,z^p)$, (3) : ${\mathds{k}}\langle g,x\rangle/(g^p-1,gx-xg-g(1-g),x^p-x)\otimes {\mathds{k}}\langle y,z\rangle/(y^p-y,z^p,[y,z]-z)$, (4) : ${\mathds{k}}[ g,x]/(g^p-1, x^p)\otimes {\mathds{k}}[y,z]/(y^p-y,z^p-z)$, (5) : $H_2(\lambda):={\mathds{k}}[ g,x,y,z]/(g^p-1, x^p-y-\lambda z,y^p-y,z^p-z)$, (6) : ${\mathds{k}}\langle g,x,y,z\rangle/(g^p-1,[g,x],[g,y],[g,z],[x,y]-x,[x,z],[y,z],x^p,y^p-y,z^p-z)$, (7) : ${\mathds{k}}\langle g,x,y,z\rangle/(g^p-1,[g,x],[g,y],[g,z],[x,y]-x,[x,z],[y,z],x^p- z,y^p-y,z^p-z)$, (8) : $H_3(\lambda,\gamma):={\mathds{k}}\langle g,x,y,z\rangle/(g^p-1,[g,x]-g(1-g),[g,y],[g,z],[x,y],[x,z],[y,z],x^p-x-\lambda y-\gamma z,y^p-y,z^p-z)$, (9) : ${\mathds{k}}[g,x]/(g^p-1,x^p)\otimes{\mathds{k}}[y,z]/(y^p-y,z^p)$, (10) : ${\mathds{k}}[g,x,y,z]/(g^p-1,x^p-z,y^p-y,z^p)$, (11) : ${\mathds{k}}[g,x,y,z]/(g^p-1,x^p-y,y^p-y,z^p)$, (12) : ${\mathds{k}}[g,x,y,z]/(g^p-1,x^p-y-z,y^p-y,z^p)$, (13) : ${\mathds{k}}\langle g,x,y,z\rangle/(g^p-1,[g,x],[g,y],[g,z],[x,y],[x,z]-(1-g),[y,z],x^p, y^p-y,z^p)$, (14) : ${\mathds{k}}\langle g,x,y,z\rangle/(g^p-1,[g,x],[g,y],[g,z],[x,y],[x,z]-(1-g),[y,z],x^p- y, y^p-y,z^p)$, (15) : ${\mathds{k}}\langle g,x,y,z\rangle/(g^p-1,[g,x],[g,y],[g,z],[x,y]-x,[x,z],[y,z],x^p, y^p-y,z^p)$, (16) : ${\mathds{k}}\langle g,x,y,z\rangle/(g^p-1,[g,x],[g,y],[g,z],[x,y]-x,[x,z],[y,z],x^p-z, y^p-y,z^p)$, (17) : $H_4(\lambda,i):={\mathds{k}}\langle g,x,y,z\rangle/(g^p-1,[g,x]-g(1-g),[g,y],[g,z],[x,y],[x,z],[y,z],x^p-x-\lambda y-iz,y^p-y,z^p)$, for $i\in{\mathbb{I}}_{0,1}$, (18) : $H_5(\lambda):={\mathds{k}}\langle g,x,y,z\rangle/(g^p-1,[g,x]-g(1-g),[g,y],[g,z],[x,y],[x,z]-(1-g),[y,z],x^p-x-\lambda y, y^p-y,z^p)$, (19) : ${\mathds{k}}[g,x]/(g^p-1,x^p)\otimes{\mathds{k}}[y,z]/(y^p-z,z^p)$, (20) : ${\mathds{k}}[g,x,y,z]/(g^p-1,x^p-z,y^p-z,z^p)$, (21) : ${\mathds{k}}[g,x,y,z]/(g^p-1,x^p-y,y^p-z,z^p)$, (22) : ${\mathds{k}}\langle g,x,y,z\rangle/(g^p-1,[g,x],[g,y],[g,z],[x,y]-(1-g),[y,z],[x,z],x^p,y^p-z,z^p)$, (23) : ${\mathds{k}}\langle g,x,y,z\rangle/(g^p-1,[g,x],[g,y],[g,z],[x,y]-(1-g),[y,z],[x,z],x^p- z,y^p-z,z^p)$, (24) : ${\mathds{k}}\langle g,x\rangle/(g^p-1,gx-xg-g(1-g),x^p-x)\otimes{\mathds{k}}[y,z]/(y^p-z,z^p)$, (25) : ${\mathds{k}}\langle g,x,y,z\rangle/(g^p-1,gx-xg-g(1-g),[g,y],[g,z],[x,y],[x,z],[y,z],x^p-x-z,y^p-z,z^p)$, (26) : ${\mathds{k}}\langle g,x,y,z\rangle/(g^p-1,gx-xg-g(1-g),[g,y],[g,z],[x,y],[x,z],[y,z],x^p-x-y,y^p-z,z^p)$, (27) : $H_6(\lambda):={\mathds{k}}\langle g,x,y,z\rangle/(g^p-1,gx-xg-g(1-g),[g,y],[g,z],[x,y]-(1-g),[x,z],[y,z],x^p-x-\lambda z,y^p-z,z^p)$, (28) : ${\mathds{k}}[g,x]/(g^p-1,x^p)\otimes{\mathds{k}}[y,z]/(y^p,z^p)$, (29) : ${\mathds{k}}\langle g,x\rangle/(g^p-1,gx-xg-g(1-g),x^p-x)\otimes{\mathds{k}}[y,z]/(y^p,z^p)$, (30) : ${\mathds{k}}[g,x,y,z]/(g^p-1,x^p-y, y^p,z^p)$, (31) : ${\mathds{k}}\langle g,x,y,z\rangle/(g^p-1,gx-xg=g(1-g),[g,y],[g,z],[x,y],[x,z],[y,z],x^p-x-y,y^p,z^p)$, (32) : ${\mathds{k}}\langle g,x,y,z\rangle/(g^p-1,[g,x],[g,y],[g,z],[x,y]-(1-g),[x,z],[y,z],x^p,y^p,z^p)$, (33) : ${\mathds{k}}\langle g,x,y,z\rangle/(g^p-1,[g,x],[g,y],[g,z],[x,y]-(1-g),[x,z],[y,z],x^p-z,y^p,z^p)$, (34) : ${\mathds{k}}\langle g,x,y,z\rangle/(g^p-1,gx-xg=g(1-g),[g,y],[g,z],[x,y]-(1-g),[x,z],[y,z],x^p-x,y^p,z^p)$, (35) : ${\mathds{k}}\langle 
g,x,y,z\rangle/(g^p-1,gx-xg=g(1-g),[g,y],[g,z],[x,y]-(1-g),[x,z],[y,z],x^p-x-z,y^p,z^p)$, Furthermore, for $\lambda,\gamma\in{\mathds{k}}$, - $H_1(\lambda)\cong H_1(\gamma)$, if and only if, $\lambda=\gamma$; - $H_2(\lambda)\cong H_2(\gamma)$, if and only if, there exist $\alpha_1,\alpha_2,\beta_1,\beta_2\in{\mathds{k}}$ satisfying $\alpha_i^p-\alpha_i=0=\beta_i^p-\beta_i$ for $i\in{\mathbb{I}}_{1,2}$ such that $(\alpha_1+\beta_1\lambda)\gamma=(\alpha_2+\beta_2\lambda)$ and $\alpha_1\beta_2-\alpha_2\beta_1\neq 0$; - $H_3(\lambda,\gamma)\cong H_3(\mu,\nu)$, if and only if, there exist $\alpha_i,\beta_i\in{\mathds{k}}$ satisfying $\alpha_i^p-\alpha_i=0=\beta_i^p-\beta_i$ for $i\in{\mathbb{I}}_{0,1}$ such that $\alpha_1\beta_2-\alpha_2\beta_1\neq 0$ and $\lambda\alpha_1+\gamma\beta_1=\mu$, $\lambda\alpha_2+\gamma\beta_2=\nu$; - $H_4(\lambda,i)\cong H_4(\gamma,j)$, if and only if, there is $\alpha\neq 0\in{\mathds{k}}$ satsifying $\alpha^p=\alpha$ such that $\lambda\alpha=\gamma$ and $i=j$; - $H_5(\lambda)\cong H_5(\gamma)$, if and only if, there is $\alpha\neq 0\in{\mathds{k}}$ satsifying $\alpha^p=\alpha$ such that $\lambda\alpha=\gamma$; - $H_6(\lambda)= H_6(\gamma)$, if and only if, $\lambda=\gamma$. Similar to the proof of Lemma \[lem:p4-x1y1z1\], we have $$\begin{gathered} gx-xg=\lambda_1g(1-g),\quad gy-yg=0,\quad gz-zg=0,\\ x^p-\lambda_1x\in{\mathcal{P}}(H),\quad y^p\in{\mathcal{P}}(H),\quad z^p\in{\mathcal{P}}(H),\\ xy-yx\in{\mathcal{P}}_{1,g}(H),\quad xz-zx\in{\mathcal{P}}_{1,g}(H),\quad yz-zy\in{\mathcal{P}}(H).\end{gathered}$$ for some $\lambda_1\in{\mathbb{I}}_{0,1}$. Since ${\mathcal{P}}(H)={\mathds{k}}\{y,z\}$ and ${\mathcal{P}}_{1,g}(H)={\mathds{k}}\{x,1-g\}$, it follows that $$\begin{gathered} x^p-\lambda_1x=\mu_1y+\mu_2z,\quad y^p=\mu_3y+\mu_4z,\quad z^p=\mu_5y+\mu_6z,\\ xy-yx=\nu_1x+\nu_2(1-g),\quad xz-zx=\nu_3x+\nu_4(1-g),\quad yz-zy=\nu_5y+\nu_6z,\end{gathered}$$ for some $\mu_1,\cdots,\mu_6,\nu_1,\cdots,\nu_6\in{\mathds{k}}$. 
It follows by Lemmas \[pqlem1\]–\[pqlem2\] that $$\begin{aligned} [g,x^p]&=(g)({\text{ad}_R\,}x)^p=[g,x],\quad g^px=xg^p,\\ [x^p,y]&=({\text{ad}_L\,}x)^p(y)=-\nu_2({\text{ad}_L\,}x)^{p-1}(g)=\nu_2\lambda_1(1-g), \\ [x^p,z]&=({\text{ad}_L\,}x)^p(z)=-\nu_4({\text{ad}_L\,}x)^{p-1}(g)=\nu_4\lambda_1(1-g),\\ [x,y^p]&=(x)({\text{ad}_R\,}y)^p=\nu_1(x)({\text{ad}_R\,}y)^{p-1}=\nu_1^{p-1}[x,y]=\nu_1^px+\nu_1^{p-1}\nu_2(1-g),\\ [x,z^p]&=(x)({\text{ad}_R\,}z)^p=\nu_3(x)({\text{ad}_R\,}z)^{p-1}=\nu_3^{p-1}[x,z]=\nu_3^px+\nu_3^{p-1}\nu_4(1-g),\\ [y^p,z]&=({\text{ad}_L\,}y)^p(z)=\nu_6({\text{ad}_L\,}y)^{p-1}(z)=\nu_6^{p-1}[y,z]=\nu_6^{p-1}\nu_5y+\nu_6^pz,\\ [y,z^p]&=(y)({\text{ad}_R\,}z)^p=\nu_5(y)({\text{ad}_R\,}z)^{p-1}=\nu_5^{p-1}[y,z]=\nu_5^py+\nu_5^{p-1}\nu_6z.\end{aligned}$$ Then the verification of $(a^p)b=a^{p-1}(ab)$ for $a,b\in\{g,x,y,z\}$ and $(gx)y=g(xy), g(xz)=(gx)z$ amounts to the conditions $$\begin{gathered} \lambda_1\nu_1=\mu_2\nu_5=\mu_2\nu_6=0,\quad \lambda_1\nu_3=\mu_1\nu_5=\mu_1\nu_6=0, \\ \mu_3\nu_1+\mu_4\nu_3=\nu_1^p,~ \mu_3\nu_2+\mu_4\nu_4=\nu_1^{p-1}\nu_2,~\mu_5\nu_1+\mu_6\nu_3=\nu_3^p,~ \mu_5\nu_2+\mu_6\nu_4=\nu_3^{p-1}\nu_4, \\ \mu_3\nu_5=\nu_6^{p-1}\nu_5,\quad \mu_3\nu_6=\nu_6^{p},\quad \mu_6\nu_5=\nu_5^{p},\quad \mu_6\nu_6=\nu_5^{p-1}\nu_6, \\ \mu_1\nu_1+\mu_2\nu_3=0=\mu_1\nu_2+\mu_2\nu_4,\quad \mu_4\nu_5=\mu_4\nu_6=\mu_5\nu_5=\mu_5\nu_6=0.\end{gathered}$$ Finally, the verification of $(xy)z=x(yz)$ amounts to the conditions $$\begin{gathered} \nu_1\nu_5=\nu_3\nu_6=0,\quad \nu_2\nu_3+\nu_2\nu_5+\nu_4\nu_6=\nu_1\nu_4.\end{gathered}$$ By the Diamond lemma, $\dim H=p^4$. Let $L$ be the subalgebra of $H$ generated by $y,z$. It is clear that $L$ is a Hopf subalgebra of $H$. Indeed, $L\cong U^L({\mathcal{P}}(H))$, where $U^L({\mathcal{P}}(H))$ is a restricted universal enveloping algebra of ${\mathcal{P}}(H)$. Then by [@W1 Proposition A.3], $L$ is isomorphic to one of the following Hopf algebras 1. ${\mathds{k}}\langle y,z\rangle/(y^p-y,z^p,[y,z]-z)$, 2. ${\mathds{k}}[y,z]/(y^p-y,z^p-z)$, 3. ${\mathds{k}}[y,z]/(y^p-y,z^p)$, 4. ${\mathds{k}}[y,z]/(y^p-z,z^p)$, 5. ${\mathds{k}}[y,z]/(y^p,z^p)$. Moreover, $H\cong L+{\mathds{k}}\langle g,x\rangle$. **Case (a).** Assume that $L$ is isomorphic to the Hopf algebra described in $(a)$. Without loss of generality, we can assume that $\mu_3-1=0=\mu_4=\mu_5=\mu_6$ and $\nu_5=0=\nu_6-1$. Then $\mu_1=0=\mu_2$, $\lambda_1\nu_1=0=\nu_3$, $\nu_1^p=\nu_1$, $\nu_4=\nu_1\nu_4$, $\nu_2=\nu_1^{p-1}\nu_2$ and we can take $\nu_4\in{\mathbb{I}}_{0,1}$ by rescaling $z$. If $\lambda_1=0=\nu_4$, then we can take $\nu_2=0$. Indeed, if $\nu_1=0$, then $\nu_2=0$, otherwise we can take $\nu_2=0$ via the linear translation $x:=x+a(1-g)$ satisfying $\nu_1a=\nu_2$. Hence $H\cong H_1(\nu_1)$ described in $(1)$. If $\lambda_1=0=\nu_4-1$, then $\nu_1=1$ and we can take $\nu_2=0$ via the linear translation $x:=x+\nu_2(1-g)$, which gives one class of $H$ described in $(2)$. If $\lambda_1=1$, then $\nu_1=0=\nu_2=\nu_4$, which gives one class of $H$ described in $(3)$. **Claim:** $H_1(\lambda)\cong H_1(\gamma)$ for $\lambda,\gamma\in{\mathds{k}}$, if and only if, $\lambda=\gamma$. Suppose that $\phi: H_{1}(\lambda)\rightarrow H_{1}(\gamma)$ for $\lambda,\gamma\in{\mathds{k}}$ is a Hopf algebra isomorphism. Write $g^{\prime},x^{\prime},y^{\prime},z^{\prime}$ to distinguish the generators of $H_{1}(\gamma)$. 
Observe that ${\mathcal{P}}_{1, g^{\prime} }(H_1(\gamma))={\mathds{k}}\{x^{\prime}\}\oplus {\mathds{k}}\{1- g^{\prime} \}$ and ${\mathcal{P}}(H_1(\gamma))={\mathds{k}}\{y^{\prime},z^{\prime}\}$. Then $$\begin{aligned} \label{eq:iso-1-0-0} \phi(g)=g^{\prime}, \quad \phi(x)=\alpha_1x^{\prime}+\alpha_2(1-g^{\prime}),\quad\phi(y)=\beta_1y^{\prime}+\beta_2z^{\prime},\quad \phi(z)=\gamma_1y^{\prime}+\gamma_2z^{\prime}, \end{aligned}$$ for some $\alpha_i,\beta_i,\gamma_i\in{\mathds{k}}$ and $i\in{\mathbb{I}}_{1,2}$. Applying $\phi$ to the relation $[y,z]-z=0$, we have $$\begin{aligned} \gamma_1=0,\quad (\beta_1-1)\gamma_2=0\quad \Rightarrow \quad \beta_1=1,\end{aligned}$$ since $\gamma_2\neq 0$ by the bijectivity of $\phi$. Then applying $\phi$ to the relation $[x,y]-\lambda x=0$, we have $$\begin{aligned} \lambda=\gamma.\end{aligned}$$ Conversely, it is easy to see that $ H_{1}(\lambda)\cong H_{1}(\gamma)$ if $\lambda=\gamma$. Similarly, we can also show that the Hopf algebras described in $(1)$–$(3)$ are pairwise non-isomorphic. Indeed, direct computations show that there are no elements $\alpha_i,\beta_i,\gamma_i\in{\mathds{k}}$ for $i\in{\mathbb{I}}_{1,2}$ such that the corresponding morphism is an isomorphism. **Case (b).** Assume that $L$ is isomorphic to the Hopf algebra described in $(b)$. Without loss of generality, we can assume that $\mu_3-1=0=\mu_4=\mu_5=\mu_6-1$ and $\nu_5=0=\nu_6$. Then $\lambda_1\nu_1=0=\lambda_1\nu_3$, $\nu_1^p=\nu_1$, $\nu_3^p=\nu_3$, $\nu_4=\nu_3^{p-1}\nu_4$, $\nu_2=\nu_1^{p-1}\nu_2$, $\mu_1\nu_1+\mu_2\nu_3=0=\mu_1\nu_2+\mu_2\nu_4$ and $\nu_2\nu_3=\nu_1\nu_4$. Hence we can take $\nu_1,\nu_3\in\{0,1\}$ by rescaling $y,z$. If $\lambda_1=0$ and $\nu_1=0=\nu_3$, then $\nu_2=0=\nu_4$ and we can take $\mu_1\in{\mathbb{I}}_{0,1}$ or $\mu_2\in{\mathbb{I}}_{0,1}$ by rescaling $x$. If $\mu_1=0=\mu_2$, then $H$ is isomorphic to the Hopf algebra described in $(4)$. If $\mu_1=1$, then $H\cong H_2(\mu_2)$ described in $(5)$. If $\mu_1=0$ and $\mu_2\neq 0$, then by rescaling $x$, we have $\mu_2=1$, and hence by swapping $x$ and $y$, $H\cong H_2(0)$. If $\lambda_1=0$ and $\nu_1-1=0=\nu_3$, then $\nu_4=0=\mu_1$, and we can take $\nu_2=0$ via the linear translation $x:=x+\nu_2(1-g)$. Moreover, we can take $\mu_2\in{\mathbb{I}}_{0,1}$ by rescaling $x$, which gives two classes of $H$ described in $(6)$–$(7)$. If $\lambda_1=0$ and $\nu_1=0=\nu_3-1$, then it can be reduced to the case $\lambda_1=0$ and $\nu_1-1=0=\nu_3$ by swapping $x$ and $y$. If $\lambda_1=0$ and $\nu_1=1=\nu_3$, then $\mu_1+\mu_2=0=\nu_2-\nu_4$ and hence we can take $\nu_2=0=\nu_4$ via the linear translation $x:=x+\nu_2(1-g)$. Therefore, it can be reduced to the case $\lambda_1=0$ and $\nu_1-1=0=\nu_3$ via the linear translation $z:=z-y$. If $\lambda_1=1$, then $\nu_1=0=\nu_3$ and hence $\nu_2=0=\nu_4$. Therefore $H\cong H_3(\mu_1,\mu_2)$ described in $(8)$. Similar to the proof of Case $(a)$, $H_2(\lambda)\cong H_2(\gamma)$ for $\lambda,\gamma\in{\mathds{k}}$, if and only if, there exist $\alpha_1,\alpha_2,\beta_1,\beta_2\in{\mathds{k}}$ satisfying $\alpha_i^p-\alpha_i=0=\beta_i^p-\beta_i$ for $i\in{\mathbb{I}}_{1,2}$ such that $(\alpha_1+\beta_1\lambda)\gamma=(\alpha_2+\beta_2\lambda)$ and $\alpha_1\beta_2-\alpha_2\beta_1\neq 0$. Moreover, $H_3(\lambda,\gamma)\cong H_3(\mu,\nu)$, if and only if, there exist $\alpha_i,\beta_i\in{\mathds{k}}$ satisfying $\alpha_i^p-\alpha_i=0=\beta_i^p-\beta_i$ for $i\in{\mathbb{I}}_{0,1}$ such that $\alpha_1\beta_2-\alpha_2\beta_1\neq 0$ and $\lambda\alpha_1+\gamma\beta_1=\mu$, $\lambda\alpha_2+\gamma\beta_2=\nu$.
The Hopf algebras from the different items are pairwise non-isomorphic. **Case (c).** Assume that $L$ is isomorphic to the Hopf algebra described in $(c)$. Without loss of generality, we can assume that $\mu_3-1=0=\mu_4=\mu_5=\mu_6=\nu_5=\nu_6$. Then $\lambda_1\nu_1=0=\nu_3$, $\nu_1=\nu_1^p$, $\nu_2=\nu_1^{p-1}\nu_2$, $\mu_1\nu_2+\mu_2\nu_4=0=\mu_1\nu_1=\nu_1\nu_4$ and we can take $\nu_1\in{\mathbb{I}}_{0,1}$ by rescaling $y$. If $\lambda_1=0=\nu_1$, then $\nu_2=0=\mu_2\nu_4$ and we can take $\nu_4\in{\mathbb{I}}_{0,1}$ by rescaling $z$. If $\nu_4=0$, then we can take $\mu_1,\mu_2\in{\mathbb{I}}_{0,1}$ by rescaling $x,z$, which gives four classes of $H$ described in $(9)$–$(12)$. If $\nu_4=1$, then $\mu_2=0$ and we can take $\mu_1\in{\mathbb{I}}_{0,1}$ by rescaling $x,z$. Indeed, if $\mu_1\neq 0$, then we can take $\mu_1=1$ via $x:=ax,z:=a^{-1}z$ satisfying $a^p=\mu_1$. Therefore $H$ is isomorphic to one of the Hopf algebras in $(13)$–$(14)$. If $\lambda_1=0=\nu_1-1$, then $\mu_1=0=\nu_4$ and we can take $\nu_2=0$ via the linear translation $x:=x+\nu_2(1-g)$. Hence we can take $\mu_2\in{\mathbb{I}}_{0,1}$ by rescaling $x$, which gives two classes of $H$ described in $(15)$–$(16)$. If $\lambda_1=1$, then $\nu_1=0=\nu_2=\mu_2\nu_4$ and we can take $\nu_4\in{\mathbb{I}}_{0,1}$ by rescaling $z$. If $\nu_4=0$, then we can take $\mu_2\in{\mathbb{I}}_{0,1}$ by rescaling $z$ and hence $H\cong H_4(\mu_1,\mu_2)$ described in $(17)$. If $\nu_4=1$, then $\mu_2=0$ and hence $H\cong H_5(\mu_1)$ described in $(18)$. Similar to the proof of Case $(a)$, $H_4(\lambda,i)\cong H_4(\gamma,j)$ if and only if there is $\alpha\neq 0\in{\mathds{k}}$ satisfying $\alpha^p=\alpha$ such that $\lambda\alpha=\gamma$ and $i=j$. $H_5(\lambda)\cong H_5(\gamma)$ if and only if there is $\alpha\neq 0\in{\mathds{k}}$ satisfying $\alpha^p=\alpha$ such that $\lambda\alpha=\gamma$. The Hopf algebras from different items are pairwise non-isomorphic. **Case (d).** Assume that $L$ is isomorphic to the Hopf algebra described in $(d)$. Without loss of generality, we can assume that $\mu_3=0=\mu_4-1=\mu_5=\mu_6=\nu_5=\nu_6$. Then $\nu_1=\nu_3=\nu_4=\mu_1\nu_2=0$. If $\lambda_1=0$, then $\nu_2\in{\mathbb{I}}_{0,1}$ by rescaling $x$. If $\nu_2=0$, then we can take $\mu_1\in{\mathbb{I}}_{0,1}$ by rescaling $x$. If $\mu_1=0$, then we can take $\mu_2\in{\mathbb{I}}_{0,1}$. If $\mu_1=1$, then we can take $\mu_2=0$ via the linear translation $y:=y+\mu_2z$. Therefore, we obtain three classes of $H$ described in $(19)$–$(21)$. If $\nu_2=1$, then $\mu_1=0$ and we can take $\mu_2\in{\mathbb{I}}_{0,1}$, which gives two classes of $H$ described in $(22)$–$(23)$. Indeed, if $\mu_2\neq 0$, then we can take $\mu_2=1$ via $x:=ax,y:=a^{-1}y,z:=a^{-p}z$ satisfying $a^{-2p}=\mu_2$. If $\lambda_1-1=0=\nu_2$, then we can take $\mu_1\in{\mathbb{I}}_{0,1}$ by rescaling $y,z$. If $\mu_1=0$, then we can take $\mu_2\in{\mathbb{I}}_{0,1}$ by rescaling $y,z$. If $\mu_1=1$, then we can take $\mu_2=0$ via the linear translation $y:=y+\mu_2z$. Therefore, we obtain three classes of $H$ described in $(24)$–$(26)$. If $\lambda_1=1$ and $\nu_2\neq 0$, then $\mu_1=0$ and we can take $\nu_2=1$ by rescaling $y,z$. Therefore, $H\cong H_6(\mu_2)$ described in $(27)$. Similar to the proof of Case $(a)$, $H_6(\lambda)\cong H_6(\gamma)$, if and only if, $\lambda=\gamma$. The Hopf algebras from different items are pairwise non-isomorphic. **Case (e).** Assume that $L$ is isomorphic to the Hopf algebra described in $(e)$.
Without loss of generality, we can assume that $\mu_3=\mu_4=\mu_5=\mu_6=\nu_5=\nu_6=0$. Then $\nu_1=0=\nu_3$, $\mu_1\nu_2+\mu_2\nu_4=0$ and we can take $\nu_2,\nu_4\in{\mathbb{I}}_{0,1}$ by rescaling $y,z$. If $\nu_2=0=\nu_4$ and $\mu_1=0=\mu_2$, then $H$ is isomorphic to one of the Hopf algebras described in $(28)$–$(29)$. If $\nu_2=0=\nu_4$ and $\mu_1\neq 0$ or $\mu_2\neq 0$, then $H$ is isomorphic to one of the Hopf algebras described in $(30)$–$(31)$. Indeed, if $\mu_1\neq 0$, then we can take $\mu_1=1$ and $\mu_2=0$ via the linear translation $y:=\mu_1y+\mu_2z$, $z:=z$; if $\mu_2\neq 0$, then we can take $\mu_1=1$ and $\mu_2=0$ via the linear translation $y:=\mu_1y+\mu_2z$, $z:=y$; If $\nu_2-1=0=\nu_4$, then $\mu_1=0$ and $\mu_2\in{\mathbb{I}}_{0,1}$ by rescaling $z$, which gives four classes of $H$ described in $(32)$–$(35)$. If $\nu_2=0=\nu_4-1$, then it can be reduced to the case $\nu_2-1=0=\nu_4$ by swapping $y$ and $z$. If $\nu_2=1=\nu_4$, then $\mu_1+\mu_2=0$ and hence it can be reduced to the case $\nu_2-1=0=\nu_4$ via the linear translation $z:=z-y$. Similar to the proof of Case (a), the Hopf algebras from different items are pairwise non-isomorphic. In Theorem \[thm:p4-x1y0z0\], there are six infinite families of Hopf algebras of dimension $p^4$, which constitute new examples of Hopf algebras. Moreover, the Hopf algebras described in $(1)$–$(2)$, $(6)$–$(8)$, $(13)$–$(18)$, $(22)$–$(23)$, $(25)$–$(27)$, $(31)$–$(35)$ are not tensor product Hopf algebras and constitute new examples of non-commutative and non-cocommutative pointed Hopf algebras. In particular, up to isomorphism, there are infinitely many Hopf algebras of dimension $p^4$ that are generated by group-like elements and skew-primitive elements. \[lem:p4-x1y1z0\] Let $p$ be a prime number and ${\operatorname{char}}{\mathds{k}}=p$. Let $H$ be a pointed Hopf algebra over ${\mathds{k}}$ of dimension $p^4$. Assume that ${\operatorname{gr}}H={\mathds{k}}[g,x,y,z]/(g^p-1,x^p,y^p,z^p)$ with $g\in{\mathbf{G}}(H)$, $x,y\in{\mathcal{P}}_{1,g}(H)$ and $z\in{\mathcal{P}}(H)$. Then the defining relations of $H$ have the following form $$\begin{gathered} g^p=1,\quad gx-xg=\lambda_1g(1-g),\quad gy-yg=\lambda_2g(1-g),\quad gz-zg=0,\\ x^p-\lambda_1x=\lambda_3z,\quad y^p-\lambda_2y=\lambda_4z,\quad z^p=\lambda_5z,\\ xz-zx=\gamma_1x+\gamma_2y+\gamma_3(1-g),\quad yz-zy=\gamma_4x+\gamma_5y+\gamma_6(1-g),\\ xy-yx-\lambda_2x+\lambda_1y=\left\{ \begin{array}{ll} \lambda_6z, & p=2, \\ \lambda_7(1-g^2), & p>2. \end{array} \right.\end{gathered}$$ for some $\lambda_1,\cdots,\lambda_7, \gamma_1,\cdots,\gamma_6\in{\mathds{k}}$. Suppose that $p=2$. 
Then the ambiguity conditions are given by $$\begin{gathered} \lambda_6\gamma_1=\lambda_3\gamma_4, \quad\lambda_6\gamma_2=\lambda_3\gamma_5,\quad \lambda_6\gamma_3=\lambda_3\gamma_6,\label{eq:x1y1z0-1}\\ \lambda_1\gamma_1=\lambda_2\gamma_2,\quad \lambda_6\gamma_2=0,\quad \lambda_1\gamma_4=\lambda_2\gamma_5,\quad \lambda_6\gamma_4=0,\label{eq:x1y1z0-2}\\ \lambda_6\gamma_4=\lambda_4\gamma_1,\quad \lambda_6\gamma_5=\lambda_4\gamma_2,\quad\lambda_6\gamma_6=\lambda_4\gamma_3,\label{eq:x1y1z0-3}\\ (\lambda_5-\gamma_1)\gamma_1+\gamma_2\gamma_4=(\lambda_5-\gamma_1)\gamma_2+\gamma_2\gamma_5=(\lambda_5-\gamma_1)\gamma_3+\gamma_2\gamma_6=0,\label{eq:x1y1z0-4}\\ (\lambda_5-\gamma_5)\gamma_4+\gamma_1\gamma_4=(\lambda_5-\gamma_5)\gamma_5+\gamma_2\gamma_4=(\lambda_5-\gamma_5)\gamma_6+\gamma_3\gamma_4=0,\label{eq:x1y1z0-5}\\ \lambda_3\gamma_1=\lambda_3\gamma_2=\lambda_3\gamma_3=0=\lambda_4\gamma_4=\lambda_4\gamma_5=\lambda_4\gamma_6,\label{eq:x1y1z0-6}\\ \lambda_6\gamma_1=\lambda_6\gamma_5.\label{eq:x1y1z0-7}\end{gathered}$$ Similar to the proof of Lemma \[lem:p4-x1y1z1\], we have $gx-xg=\lambda_1g(1-g)$, $gy-yg=\lambda_2g(1-g)$ and $gz-zg=0$ in $H$ for some $\lambda_1,\lambda_2\in{\mathbb{I}}_{0,1}$. Moreover, $x^p-\lambda_1x,y^p-\lambda_2y,z^p\in{\mathcal{P}}(H)$, $xy-yx-\lambda_2x+\lambda_1y\in{\mathcal{P}}_{1,g^2}(H)$ and $xz-zx,yz-zy\in{\mathcal{P}}_{1,g}(H)$. Since ${\mathcal{P}}(H)={\mathds{k}}\{z\}$ and ${\mathcal{P}}_{1,g}(H)={\mathds{k}}\{1-g,x,y\}$, it follows that $$\begin{gathered} x^p-\lambda_1x=\lambda_3z,\quad y^p-\lambda_2y=\lambda_4z,\quad z^p=\lambda_5z,\\ xz-zx=\gamma_1x+\gamma_2y+\gamma_3(1-g),\quad yz-zy=\gamma_4x+\gamma_5y+\gamma_6(1-g),\end{gathered}$$ for $\lambda_3,\lambda_4,\lambda_5,\gamma_1,\cdots,\gamma_6\in{\mathds{k}}$. If $g^2=1$, then $xy-yx-\lambda_2x+\lambda_1y\in{\mathcal{P}}(H)$ and hence $xy-yx-\lambda_2x+\lambda_1y=\lambda_6z$ for some $\lambda_6\in{\mathds{k}}$; otherwise, $xy-yx-\lambda_2x+\lambda_1y=\lambda_7(1-g^2)$ for some $\lambda_7\in{\mathds{k}}$. Assume that $p=2$. Then it follows by a direct computation that $$\begin{aligned} [x,[x,y]]-[x^2,y]=\lambda_6[x,z]-\lambda_3[z,y],\\ [x,[x,z]]-[x^2,z]=\gamma_2[x,y]+\gamma_3[g,x]-\lambda_1[x,z],\\ [[x,y],y]-[x,y^2]=\lambda_6[y,z]-\lambda_4[x,z],\\ [[x,z],z]-[x,z^2]=\gamma_1[x,z]+\gamma_2[y,z]-\lambda_5[x,z],\\ [y,[y,z]]-[y^2,z]=\gamma_4[x,y]+\gamma_6[g,y]-\lambda_2[y,z],\\ [[y,z],z]-[y,z^2]=\gamma_4[x,z]+\gamma_5[y,z]-\lambda_5[y,z].\end{aligned}$$ Then the verification of $(a^2)b=a(ab)$ and $a(b^2)=(ab)b$ for $a,b\in\{g,x,y,z\}$ amounts to the conditions \[eq:x1y1z0-1\]–\[eq:x1y1z0-6\]. Finally, a direct computation shows that the ambiguities $(ab)c=a(bc)$ for $a,b,c\in\{g,x,y,z\}$ give the condition \[eq:x1y1z0-7\].
If $\mu=0$, then the defining relations of $H$ are $$\begin{gathered} g^p=1,\quad h^{p^n}=1,\quad gx-xg=\lambda_1g(1-g), \quad gy-yg=0,\\ hx-xh=\lambda_3h(1-g),\quad hy-yh=0,\\ x^p-\lambda_1x=\mu_1y,\quad y^p=\mu_2y,\quad xy-yx=\mu_3x+\mu_4(1-g),\end{gathered}$$ for $\lambda_1\in{\mathbb{I}}_{0,1},\lambda_3,\mu_1,\cdots,\mu_4\in{\mathds{k}}$ with ambiguity conditions $$\begin{aligned} \mu_1\mu_3=0=\mu_1\mu_4,\quad \mu_2\mu_3=\mu_3^p,\quad \mu_2\mu_4=\mu_3^{p-1}\mu_4,\quad \lambda_1\mu_3=0=\mu_3\lambda_3.\end{aligned}$$ If $\mu\neq 0$, then the defining relations are $$\begin{gathered} g^p=1,\quad h^{p^n}=1,\quad gx-xg=\lambda_1g(1-g), \quad gy-yg=\lambda_2g(1-g^{\mu}),\\ hx-xh=\lambda_3h(1-g),\quad hy-yh=\lambda_4h(1-g^{\mu}),\\ x^p-\lambda_1x=0,\quad y^p- \lambda_2y=0,\quad xy-yx+\mu\lambda_1y-\lambda_2x=\lambda_5(1-g^{\mu+1}).\end{gathered}$$ for $\lambda_1,\lambda_2\in{\mathbb{I}}_{0,1},\lambda_3,\cdots,\lambda_5\in{\mathds{k}}$ with ambiguity conditions $$\begin{aligned} \lambda_1\lambda_4(1-g^{\mu+1})=0=\lambda_2\lambda_3(1-g^{\mu+1}).\end{aligned}$$ By similar computations as before, we have $$\begin{gathered} gx-xg=\lambda_1g(1-g), \quad gy-yg=\lambda_2g(1-g^{\mu}),\\ hx-xh=\lambda_3h(1-g),\quad hy-yh=\lambda_4h(1-g^{\mu}),\\ x^p-\lambda_1x\in{\mathcal{P}}(H),\quad y^p-\mu^{p-1}\lambda_2y\in{\mathcal{P}}(H),\quad xy-yx+\mu\lambda_1y-\lambda_2x\in{\mathcal{P}}_{1,g^{\mu+1}}(H).\end{gathered}$$ for some $\lambda_1,\lambda_2\in{\mathbb{I}}_{0,1}$, $\lambda_3,\lambda_4\in{\mathds{k}}$. If $\mu=0$, then ${\mathcal{P}}(H)={\mathds{k}}\{y\}$ and ${\mathcal{P}}_{1,g}(H)={\mathds{k}}\{1-g,x\}$. Hence $$\begin{aligned} x^p-\lambda_1x=\mu_1y,\quad y^p=\mu_2y,\quad xy-yx=\mu_3x+\mu_4(1-g).\end{aligned}$$ for some $\mu_1,\cdots,\mu_4\in{\mathds{k}}$. The verification of $(x^p)x=x(x^p)$ and $(y^p)y=y(y^p)$ amounts to the conditions $$\begin{aligned} \mu_1\mu_3=0=\mu_1\mu_4.\end{aligned}$$ By induction, for any $n>1$, we have $(x)({\text{ad}_R\,}y)^n=\mu_3(x)({\text{ad}_R\,}y)^{n-1}$ and $({\text{ad}_L\,}x)^n(y)=(-\mu_4)({\text{ad}_L\,}x)^{n-1}(g)$. Then by Lemma \[pqlem1\], $$\begin{aligned} [x,y^p]&=\mu_2[x,y]=\mu_2\mu_3x+\mu_2\mu_4(1-g),\\ (x)({\text{ad}_R\,}y)^p&=\mu_3(x)({\text{ad}_R\,}y)^{p-1}=\mu_3^{p-1}[x,y]=\mu_3^{p}x+\mu_3^{p-1}\mu_4(1-g);\\ [x^p,y]&=\lambda_1[x,y]=\lambda_1\mu_3x+\lambda_1\mu_4(1-g),\\ ({\text{ad}_L\,}x)^p(y)&=-\mu_4({\text{ad}_L\,}x)^{p-1}(g)=-\mu_4\lambda_1^{p-1}(g-1)=\mu_4\lambda_1^{p-1}(1-g).\end{aligned}$$ Hence by Proposition \[proJ\], $[x,y^p]=(x)({\text{ad}_R\,}y)^p$ and $[x^p,y]=({\text{ad}_L\,}x)^p(y)$, which implies that $$\begin{aligned} \mu_2\mu_3=\mu_3^p,\quad \mu_2\mu_4=\mu_3^{p-1}\mu_4,\quad \lambda_1\mu_3=0.\end{aligned}$$ Finally, it follows by a direct computation that $a(xy)=(ax)y$ and $(gh)b=g(hb)$ for $a\in\{g,h\}, b\in\{x,y\}$ amounts to the conditions $$\begin{aligned} \mu_3\lambda_3=0=\mu_3\lambda_1.\end{aligned}$$ If $\mu\neq 0$, then ${\mathcal{P}}(H)=0$ and ${\mathcal{P}}_{1,g^{\mu+1}}(H)={\mathds{k}}\{1-g^{\mu+1}\}$. By Fermat’s little theorem, $\mu^{p-1}=1$. 
Hence $$\begin{aligned} x^p-\lambda_1x=0,\quad y^p- \lambda_2y=0,\quad xy-yx+\mu\lambda_1y-\lambda_2x=\lambda_5(1-g^{\mu+1}).\end{aligned}$$ The verification of $(hx)y=h(xy)$ amounts to the conditions $$\begin{aligned} \lambda_1\lambda_4(1-g^{\mu+1})=0=\lambda_2\lambda_3(1-g^{\mu+1}).\end{aligned}$$ Then using Lemmas \[pqlem1\] and \[pqlem2\], it follows by a direct computation that the ambiguities $a^{p-1}(ab)=(a^p)b$, $(ab)b^{p-1}=a(b^p)$ for $a,b\in\{g,x,y\}$ and $g(xy)=(gx)y$ are resolvable. By the Diamond lemma, $\dim H=p^{3+n}$. \[lem:p4-xg1yhu\] Let $p$ be a prime number and ${\operatorname{char}}{\mathds{k}}=p$. Let $H$ be a pointed Hopf algebra over ${\mathds{k}}$ of dimension $p^4$. Assume that ${\operatorname{gr}}H={\mathds{k}}[g,h,x,y]/(g^p-1,h^p-1,x^p,y^p)$ with $g,h\in{\mathbf{G}}(H)$, $x\in{\mathcal{P}}_{1,g}(H)$ and $y\in{\mathcal{P}}_{1,h^{\mu}}(H)$ for $\mu\in{\mathbb{I}}_{1,p-1}$. Then the defining relations of $H$ have the following form $$\begin{gathered} gx-xg=\lambda_1g(1-g),\quad hx-xh=\lambda_2h(1-g), \quad x^p-\lambda_1x=0,\\ gy-yg=\lambda_3g(1-h^{\mu}), \quad hy-yh=\lambda_4h(1-h^{\mu}),\quad y^p- \lambda_4y=0,\\ xy-yx-\lambda_3x+\mu\lambda_2y=\lambda_5(1-gh^{\mu}),\end{gathered}$$ for some $\lambda_1,\lambda_4\in{\mathbb{I}}_{0,1}$, $\lambda_2,\lambda_3,\lambda_5\in{\mathds{k}}$. Since $\mu\neq 0$, we have ${\mathcal{P}}(H)=0$ and ${\mathcal{P}}_{1,gh^{\mu}}(H)={\mathds{k}}\{1-gh^{\mu}\}$. By Fermat’s little theorem, $\mu^{p-1}=1$. By similar computations as before, we have $$\begin{aligned} gx-xg&=\lambda_1g(1-g),\quad hx-xh=\lambda_2h(1-g), \quad x^p-\lambda_1x=0,\\ gy-yg&=\lambda_3g(1-h^{\mu}), \quad hy-yh=\lambda_4h(1-h^{\mu}),\quad y^p- \lambda_4y=0,\end{aligned}$$ for some $\lambda_1,\lambda_4\in{\mathbb{I}}_{0,1}$, $\lambda_2,\lambda_3\in{\mathds{k}}$. Now we determine $\Delta(xy-yx)$. Observe that $h^{\mu}x=xh^{\mu}+\lambda_2\mu h^{\mu}(1-g)$. Then $$\begin{aligned} \Delta(xy-yx)&=(x\otimes 1+g\otimes x)(y\otimes 1+h^{\mu}\otimes y)-(y\otimes 1+h^{\mu}\otimes y)(x\otimes 1+g\otimes x)\\ &=(xy-yx)\otimes 1+(gy-yg)\otimes x-(h^{\mu}x-xh^{\mu})\otimes y+gh^{\mu}\otimes (xy-yx)\\ &=(xy-yx)\otimes 1+\lambda_3g(1-h^{\mu})\otimes x-\lambda_2\mu h^{\mu}(1-g)\otimes y+gh^{\mu}\otimes(xy-yx).\end{aligned}$$ One can check that $xy-yx-\lambda_3x+\mu\lambda_2y\in{\mathcal{P}}_{1,gh^{\mu}}(H)$, which implies that $$\begin{aligned} xy-yx-\lambda_3x+\mu\lambda_2y=\lambda_5(1-gh^{\mu})\end{aligned}$$ for some $\lambda_5\in{\mathds{k}}$.

Non-connected pointed Hopf algebras of dimension $16$ whose diagrams are Nichols algebras {#sec:16}
=========================================================================================

We classify non-connected pointed Hopf algebras of dimension $16$ whose diagrams are Nichols algebras. It turns out that there exist infinitely many such Hopf algebras up to isomorphism. \[lem:16-group-like\] Let $H$ be a pointed non-connected Hopf algebra over ${\mathds{k}}$ of dimension $16$. Then ${\mathbf{G}}(H)$ is isomorphic to the dihedral group $D_4$, the quaternion group $Q_8$, $C_8$, $C_4\times C_2$, $C_2\times C_2\times C_2$, $C_4$, $C_2\times C_2$ or $C_2$. By the Nichols–Zoeller theorem, $|{\mathbf{G}}(H)|$ must divide $16$. By the assumption, $|{\mathbf{G}}(H)|\in\{8,4,2\}$ and hence the lemma follows. Recall that $D_4:=\langle g,h\mid g^4=1,h^2=1,hg=g^3h\rangle$, $Q_8:=\langle g,h\mid g^4=1,hg=g^3h,g^2=h^2\rangle$.
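Since Yetter-Drinfeld modules over these groups are governed by conjugacy classes and centralizers (Theorem \[thm:indecomposable-object-YD-over-groups\]), we record the following standard group-theoretic facts, stated here only for the reader's orientation. Both $D_4$ and $Q_8$ have center $\{1,g^2\}$ and five conjugacy classes, $$\begin{aligned} \{1\},\quad \{g^2\},\quad \{g,g^3\},\quad \{h,g^2h\},\quad \{gh,g^3h\},\end{aligned}$$ with centralizers $C_{D_4}(g)=\langle g\rangle$, $C_{D_4}(h)=\langle g^2,h\rangle\cong C_2\times C_2$ and $C_{D_4}(gh)=\langle g^2,gh\rangle\cong C_2\times C_2$, while $C_{Q_8}(g)=\langle g\rangle$, $C_{Q_8}(h)=\langle h\rangle$ and $C_{Q_8}(gh)=\langle gh\rangle$ are cyclic of order $4$. In particular, by Remark \[rmk:dimV=1\], a group-like element supporting a one-dimensional Yetter-Drinfeld module over ${\mathds{k}}[D_4]$ or ${\mathds{k}}[Q_8]$ necessarily lies in $\{1,g^2\}$.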
Now we give a complete classification of non-connected pointed Hopf algebras of dimension $16$ whose diagrams are Nichols algebras. \[thm:16-diagram-Nichols-algebra\] Let $H$ be a non-trivial non-connected pointed Hopf algebras over ${\mathds{k}}$ of dimension $16$ whose diagram is a Nichols algebra. Then $H$ is isomorphic to one of the following Hopf algebras (1) : ${\mathds{k}}[D_4]\otimes{\mathds{k}}[x]/(x^2)$, (2) : ${\mathds{k}}[D_4]\otimes{\mathds{k}}[x]/(x^2-x)$, with $x\in{\mathcal{P}}(H)$; (3) : ${\mathds{k}}\langle g,h,x\rangle/(g^4-1,h^2-1,hg-g^3h,[g,x],[h,x],x^2)$, (4) : ${\mathds{k}}\langle g,h,x\rangle/(g^4-1,h^2-1,hg-g^3h,[g,x],[h,x]-h(1-g^2),x^2)$, (5) : ${\widetilde{H}}_1(\lambda):={\mathds{k}}\langle g,h,x\rangle/(g^4-1,h^2-1,hg-g^3h,[g,x]-g(1-g^2),[h,x]-\lambda h(1-g^2),x^2)$, for $\lambda\in{\mathds{k}}$, with $g,~h\in{\mathbf{G}}(H)$ and $x\in{\mathcal{P}}_{1,g^2}(H)$; moreover, - ${\widetilde{H}}_1(\lambda)\cong{\widetilde{H}}_1(\gamma)$ for $\lambda,\gamma\in{\mathds{k}}$, if and only if, $\lambda=\gamma+i$ for some $i\in{\mathbb{I}}_{0,1}$; (6) : ${\mathds{k}}[Q_8]\otimes{\mathds{k}}[x]/(x^2)$, (7) : ${\mathds{k}}[Q_8]\otimes{\mathds{k}}[x]/(x^2-x)$, with $x\in{\mathcal{P}}(H)$; (8) : ${\mathds{k}}\langle g,h,x\rangle/(g^4-1,hg-g^3h,g^2-h^2,[g,x],[h,x],x^2)$, (9) : ${\widetilde{H}}_2(\lambda):={\mathds{k}}\langle g,h,x\rangle/(g^4-1,hg-g^3h,g^2-h^2,[g,x]-g(1-g^2),[h,x]-\lambda h(1-g^2),x^2)$, for $\lambda \in{\mathds{k}}$, with $g,~h\in{\mathbf{G}}(H)$ and $x\in{\mathcal{P}}_{1,g^2}(H)$; moreover, - ${\widetilde{H}}_2(\lambda)\cong{\widetilde{H}}_2(\gamma)$ for $\lambda,\gamma\in{\mathds{k}}$, if and only if, $\lambda=\gamma+i$ or $(\lambda-j)(\gamma-i)=1$ for some $i,j\in{\mathbb{I}}_{0,1}$; (10) : ${\mathds{k}}[C_8]\otimes{\mathds{k}}[x]/(x^2)$, (11) : ${\mathds{k}}[C_8]\otimes{\mathds{k}}[x]/(x^2-x)$, with $x\in{\mathcal{P}}(H)$; (12) : ${\mathds{k}}[g,x]/(g^8-1,x^2)$, (13) : ${\mathds{k}}\langle g,x\rangle/(g^8-1,[g,x]-g(1-g^{\mu}),x^2-\mu x)$ for $\mu\in\{1,4\}$, with $g\in{\mathbf{G}}(H)$ and $x\in{\mathcal{P}}_{1,g^{\mu}}(H)$ for $\mu\in\{1,2,4\}$; (14) : ${\mathds{k}}[C_4\times C_2]\otimes{\mathds{k}}[x]/(x^2)$, (15) : ${\mathds{k}}[C_4\times C_2]\otimes{\mathds{k}}[x]/(x^2-x)$, with $x\in{\mathcal{P}}(H)$; (16) : ${\mathds{k}}[g,h,x]/(g^4-1,h^2-1, x^2)$, (17) : ${\mathds{k}}\langle g,h,x\rangle/(g^4-1,h^2-1,[g,h], [g,x], [h,x]-h(1-g^{\mu}),x^2)$, (18) : ${\widetilde{H}}_{3,\mu}(\lambda):={\mathds{k}}\langle g,h,x\rangle/(g^4-1,h^2-1,[g,h], [g,x]-g(1-g^{\mu}), [h,x]-\lambda h(1-g^{\mu}),x^2-\mu x)$ for $\lambda\in{\mathds{k}}$, with $g,h\in{\mathbf{G}}(H)$ and $x\in{\mathcal{P}}_{1,g^{\mu}}(H)$ for $\mu\in\{1,2\}$; (19) : ${\mathds{k}}[g,h,x]/(g^4-1,h^2-1,x^2)$, (20) : ${\mathds{k}}\langle g,h,x\rangle/(g^4-1,h^2-1,[g,h], [g,x]- g(1-h), [h,x],x^2)$, (21) : ${\widetilde{H}}_{4}(\lambda):={\mathds{k}}\langle g,h,x\rangle/(g^4-1,h^2-1, [g,h], [g,x]-\lambda g(1-h), [h,x]-h(1-h),x^2-x)$ for $\lambda\in{\mathds{k}}$, with $g,h\in{\mathbf{G}}(H)$ and $x\in{\mathcal{P}}_{1,h}(H)$; moreover, - ${\widetilde{H}}_{3,1}(\lambda)\cong{\widetilde{H}}_{3,1}(\gamma)$, if and only if, $\lambda=\gamma$; - ${\widetilde{H}}_{3,2}(\lambda)\cong{\widetilde{H}}_{3,2}(\gamma)$, if and only if, $\lambda=\gamma$ or $\lambda\gamma=\lambda+\gamma$; - ${\widetilde{H}}_4(\lambda)\cong{\widetilde{H}}_4(\gamma)$, if and only if, $\lambda=\gamma+i$ for $i\in{\mathbb{I}}_{0,1}$; (22) : ${\mathds{k}}[C_2\times C_2\times C_2]\otimes {\mathds{k}}[x]/(x^2)$, (23) : ${\mathds{k}}[C_2\times 
C_2\times C_2]\otimes {\mathds{k}}[x]/(x^2-x)$, with $x\in{\mathcal{P}}(H)$; (24) : ${\mathds{k}}[g,h,k,x]/(g^2-1,h^2-1,k^2-1,x^2)$, (25) : ${\widetilde{H}}_5(\lambda):={\mathds{k}}\langle g,h,k,x\rangle/(g^2-1,h^2-1,k^2-1, [g,h],[g,k],[h,k], [g,x], [h,x]-h(1-g), [k,x]-\lambda k(1-g),x^2)$ for $\lambda\in{\mathds{k}}$, (26) : ${\widetilde{H}}_6(\lambda,\gamma):={\mathds{k}}\langle g,h,k,x\rangle/(g^2-1,h^2-1,k^2-1, [g,h], [g,k], [h,k], [g,x]-g(1-g), [h,x]-\lambda h(1-g), [k,x]-\gamma k(1-g),x^2-x)$ for $\lambda,\gamma\in{\mathds{k}}$, with $g,h,k\in{\mathbf{G}}(H)$ and $x\in{\mathcal{P}}_{1,g}(H)$; moreover, - ${\widetilde{H}}_5(\lambda)\cong{\widetilde{H}}_5(\gamma)$, if and only if, $$\begin{aligned} \lambda\gamma=\lambda+\gamma,\quad\text{or }(1+\lambda)\gamma=1, \quad\text{or } \lambda=\gamma+i,\quad \text{or } 1+i\gamma=\lambda\gamma,\quad i\in{\mathbb{I}}_{0,1};\end{aligned}$$ - ${\widetilde{H}}_6(\lambda_1,\lambda_2)\cong{\widetilde{H}}_6(\gamma_1,\gamma_2)$, if and only if, there exist $q,r,\nu,\iota\in{\mathbb{I}}_{0,1}$ such that $$\begin{aligned} q\iota+r\nu=1,\quad q\gamma_1+r\gamma_2 =\lambda_1,\quad \nu\gamma_1+\iota\gamma_2=\lambda_2;\end{aligned}$$ (27) : ${\mathds{k}}[C_4]\otimes{\mathds{k}}[x,y]/(x^2,y^2)$, (28) : ${\mathds{k}}[C_4]\otimes{\mathds{k}}[x,y]/(x^2-x,y^2)$, (29) : ${\mathds{k}}[C_4]\otimes{\mathds{k}}[x,y]/(x^2-y,y^2)$, (30) : ${\mathds{k}}[C_4]\otimes{\mathds{k}}[x,y]/(x^2-x,y^2-y)$, (31) : ${\mathds{k}}[C_4]\otimes{\mathds{k}}\langle x,y\rangle/([x,y]-y,x^2-x,y^2)$, with $x,y\in{\mathcal{P}}(H)$; (32) : ${\mathds{k}}[g,x,y]/(g^4-1,x^2,y^2)$, (33) : ${\mathds{k}}\langle g,x,y\rangle/(g^4-1,[g,x],[g,y],x^2,y^2,[x,y]-(1-g))$, (34) : ${\mathds{k}}[g,x,y]/(g^4-1,x^2-x,y^2)$, (35) : ${\mathds{k}}\langle g,x,y\rangle/(g^4-1,[g,x],[g,y],x^2-x,y^2,[x,y]-y)$, (36) : ${\mathds{k}}\langle g,y\rangle/(g^4-1,[g,y]-g(1-g),y^2-y)\otimes{\mathds{k}}[x]/(x^2)$, (37) : ${\mathds{k}}\langle g,y\rangle/(g^4-1,[g,y]-g(1-g),y^2-y)\otimes{\mathds{k}}[x]/(x^2-x)$, with $g\in{\mathbf{G}}(H)$, $x\in{\mathcal{P}}(H)$ and $y\in{\mathcal{P}}_{1,g}(H)$; (38) : ${\mathds{k}}[g,y]/(g^4-1,y^2)\otimes{\mathds{k}}[x]/(x^2)$, (39) : ${\mathds{k}}\langle g,y\rangle/(g^4-1,[g,y]-g(1-g^2),y^2)\otimes{\mathds{k}}[x]/(x^2)$, (40) : ${\mathds{k}}[g,x,y]/(g^4-1, x^2,y^2-x)$, (41) : ${\mathds{k}}\langle g,x,y\rangle/(g^4-1,[g,x],[g,y]-g(1-g^2),x^2,y^2-x,[x,y])$, (42) : ${\mathds{k}}\langle g,x,y\rangle/(g^4-1,[g,x],[g,y],x^2,y^2,[x,y]-(1-g^2))$, (43) : ${\mathds{k}}\langle g,x,y\rangle/(g^4-1,[g,x],[g,y]-g(1-g^2),x^2,y^2,[x,y]-(1-g^2))$, (44) : ${\mathds{k}}[g,y]/(g^4-1,y^2)\otimes{\mathds{k}}[x]/(x^2-x)$, (45) : ${\mathds{k}}[g,x,y]/(g^4-1,x^2-x,y^2-x)$, (46) : ${\widetilde{H}}_{7}(\lambda):={\mathds{k}}\langle g,x,y\rangle/(g^4-1,[g,x],[g,y]-g(1-g^2),x^2-x,y^2-\lambda x,[x,y])$, (47) : ${\mathds{k}}\langle g,x,y\rangle/(g^4-1,[g,x],[g,y],x^2-x,y^2,[x,y]-y)$, (48) : ${\mathds{k}}\langle g,x,y\rangle/(g^4-1,[g,x],[g,y]-g(1-g^2),x^2-x,y^2,[x,y]-y)$, with $g\in{\mathbf{G}}(H)$, $x\in{\mathcal{P}}(H)$ and $y\in{\mathcal{P}}_{1,g^2}(H)$; - ${\widetilde{H}}_7(\lambda)\cong{\widetilde{H}}_7(\gamma)$, if and only if, $\lambda=\gamma$; (49) : ${\mathds{k}}[g,x,y]/(g^4-1,x^2,y^2)$, (50) : ${\mathds{k}}\langle g,x,y\rangle/(g^4-1,[g,x],[g,y],x^2,y^2,[x,y]-(1-g^2))$, (51) : ${\mathds{k}}\langle g,x,y\rangle/(g^4-1,[g,x]-g(1-g),[g,y],x^2-x,y^2,[x,y]+y)$, (52) : ${\mathds{k}}\langle g,x,y\rangle/(g^4-1,[g,x]-g(1-g),[g,y],x^2-x,y^2,[x,y]+y-(1-g^2))$, with $g\in{\mathbf{G}}(H)$, $x,y\in{\mathcal{P}}_{1,g}(H)$; (53) 
: ${\mathds{k}}[g,x,y]/(g^4-1,x^2,y^2)$, (54) : ${\mathds{k}}[g,x,y]/(g^4-1,x^2-y,y^2)$, (55) : ${\mathds{k}}\langle g,x,y\rangle/(g^4-1,[g,x],[g,y],x^2,y^2,[x,y]-(1-g^3))$, (56) : ${\mathds{k}}\langle g,x,y\rangle/(g^4-1,[g,x]-g(1-g),[g,y],x^2-x,y^2,[x,y])$, (57) : ${\mathds{k}}\langle g,x,y\rangle/(g^4-1,[g,x]-g(1-g),[g,y],x^2-x-y,y^2,[x,y])$, (58) : ${\mathds{k}}\langle g,x,y\rangle/(g^4-1,[g,x]-g(1-g),[g,y],x^2-x,y^2,[x,y]-(1-g^3))$, with $g\in{\mathbf{G}}(H)$, $x\in{\mathcal{P}}_{1,g}(H)$ and $y\in{\mathcal{P}}_{1,g^2}(H)$; (59) : ${\mathds{k}}[g,x,y]/(g^4-1,x^2,y^2)$, (60) : ${\mathds{k}}\langle g,x,y\rangle/(g^4-1,[g,x]-g(1-g),[g,y],x^2-x,y^2,[x,y]+y)$, (61) : ${\mathds{k}}\langle g,x,y\rangle/(g^4-1,[g,x]-g(1-g),[g,y]-g(1-g^3),x^2-x,y^2-y,[x,y]+y-x)$, with $g\in{\mathbf{G}}(H)$, $x\in{\mathcal{P}}_{1,g}(H)$ and $y\in{\mathcal{P}}_{1,g^3}(H)$; (62) : ${\mathds{k}}[g,x,y]/(g^4-1,x^2,y^2)$, (63) : ${\mathds{k}}\langle g,x,y\rangle/(g^4-1,[g,x]-g(1-g^2),[g,y],x^2,y^2,[x,y])$, with $g\in{\mathbf{G}}(H)$, $x,y\in{\mathcal{P}}_{1,g^2}(H)$; (64) : ${\mathds{k}}\langle g,x,y\rangle/(g^4-1,[g,x],gy-(y+x)g,[x,y],x^2,y^2)$ (65) : ${\mathds{k}}\langle g,x,y\rangle/(g^4-1,[g,x],gy-(y+x)g,[x,y],x^2-x,y^2)$ (66) : ${\mathds{k}}\langle g,x,y\rangle/(g^4-1,[g,x],gy-(y+x)g,[x,y],x^2-y,y^2)$ (67) : ${\mathds{k}}\langle g,x,y\rangle/(g^4-1,[g,x],gy-(y+x)g,[x,y],x^2-x,y^2-y)$ (68) : ${\mathds{k}}\langle g,x,y\rangle/(g^4-1,[g,x],gy-(y+x)g,[x,y]-y,x^2-x,y^2)$ with $g\in{\mathbf{G}}(H)$, $x,y\in{\mathcal{P}}(H)$; (69) : ${\mathds{k}}\langle g,x,y\rangle/(g^4-1,[g,x],gy-(y+x)g,[x,y],x^2,y^2)$, (70) : ${\mathds{k}}\langle g,x,y\rangle/(g^4-1,[g,x]-g(1-g^2),gy-(y+x)g,[x,y],x^2,y^2)$, with $g\in{\mathbf{G}}(H)$, $x,y\in{\mathcal{P}}_{1,g^2}(H)$; (71) : ${\mathds{k}}[C_2\times C_2]\otimes{\mathds{k}}[x,y]/(x^2,y^2)$, (72) : ${\mathds{k}}[C_2\times C_2]\otimes{\mathds{k}}[x,y]/(x^2-x,y^2)$, (73) : ${\mathds{k}}[C_2\times C_2]\otimes{\mathds{k}}[x,y]/(x^2-y,y^2)$, (74) : ${\mathds{k}}[C_2\times C_2]\otimes{\mathds{k}}[x,y]/(x^2-x,y^2-y)$, (75) : ${\mathds{k}}[C_2\times C_2]\otimes{\mathds{k}}\langle x,y\rangle/([x,y]-y,x^2-x,y^2)$, with $x,y\in{\mathcal{P}}(H)$; (76) : ${\mathds{k}}[g,h,x,y]/(g^2-1,h^2-1,x^2,y^2)$, (77) : ${\mathds{k}}[g,h,x,y]/(g^2-1,h^2-1,x^2-y,y^2)$, (78) : ${\mathds{k}}\langle g,h,x,y\rangle/(g^2-1,h^2-1,[g,h],[g,x],[h,x]-h(1-g),[g,y],[h,y],x^2,y^2,[x,y])$, (79) : ${\mathds{k}}\langle g,h,x,y\rangle/(g^2-1,h^2-1,[g,h],[g,x],[h,x]-h(1-g),[g,y],[h,y],x^2-y,y^2,[x,y])$, (80) : ${\mathds{k}}\langle g,h,x,y\rangle/(g^2-1,h^2-1,[g,h],[g,x],[h,x],[g,y],[h,y],x^2,y^2,[x,y]-(1-g))$, (81) : ${\mathds{k}}\langle g,h,x,y\rangle/(g^2-1,h^2-1,[g,h],[g,x],[h,x]-h(1-g),[g,y],[h,y],x^2,y^2,[x,y]-(1-g))$, (82) : ${\mathds{k}}[g,h,x,y]/(g^2-1,h^2-1,x^2,y^2-y)$ (83) : ${\mathds{k}}\langle g,h,x,y\rangle/(g^2-1,h^2-1,[g,h],[g,x],[h,x]-h(1-g),[g,y],[h,y],x^2,y^2-y,[x,y])$, (84) : ${\mathds{k}}\langle g,h,x,y\rangle/(g^2-1,h^2-1,[g,h],[g,x],[h,x],[g,y],[h,y],x^2-y,y^2-y,[x,y])$, (85) : ${\mathds{k}}\langle g,h,x,y\rangle/(g^2-1,h^2-1,[g,h],[g,x],[h,x]-h(1-g),[g,y],[h,y],x^2-y,y^2-y,[x,y])$, (86) : ${\mathds{k}}\langle g,h,x,y\rangle/(g^2-1,h^2-1,[g,h],[g,x],[h,x],[g,y],[h,y],x^2,y^2-y,[x,y]-x)$, (87) : ${\mathds{k}}\langle g,h,x,y\rangle/(g^2-1,h^2-1,[g,h],[g,x],[h,x],[g,y],[h,y],x^2-y,y^2-y,[x,y]-x)$, (88) : ${\widetilde{H}}_8(\lambda):= {\mathds{k}}\langle g,h,x\rangle/(g^2-1,h^2-1,[g,h],[g,x]-g(1-g),[h,x]-\lambda h(1-g),x^2-x)\otimes{\mathds{k}}[y]/(y^2)$, (89) : 
${\widetilde{H}}_{9}(\lambda):={\mathds{k}}\langle g,h,x,y\rangle/(g^2-1,h^2-1,[g,h],[g,x]-g(1-g),[h,x]-\lambda h(1-g),[g,y],[h,y],x^2-x-y,y^2,[x,y])$, (90) : ${\widetilde{H}}_{10}(\lambda):={\mathds{k}}\langle g,h,x,y\rangle/(g^2-1,h^2-1,[g,h],[g,x]-g(1-g),[h,x]-\lambda h(1-g),[g,y],[h,y],x^2-x,y^2,[x,y]-(1-g))$, (91) : ${\widetilde{H}}_{11}(\lambda,\gamma):={\mathds{k}}\langle g,h,x,y\rangle/(g^2-1,h^2-1,[g,h],[g,x]-g(1-g),[h,x]-\lambda h(1-g),[g,y],[h,y],x^2-x-\gamma y,y^2-y,[x,y])$, with $g,h\in{\mathbf{G}}(H)$, $x\in{\mathcal{P}}_{1,g}(H)$ and $y\in{\mathcal{P}}(H)$; - ${\widetilde{H}}_n(\lambda)\cong{\widetilde{H}}_n(\gamma)$ for $n\in{\mathbb{I}}_{8,10}$, if and only if, $\lambda=\gamma+i$ for $i\in{\mathbb{I}}_{0,1}$; - ${\widetilde{H}}_{11}(\lambda,\mu)\cong{\widetilde{H}}_{11}(\gamma,\nu)$ if and only if $\lambda=\gamma+i$ for $i\in{\mathbb{I}}_{0,1}$ and $\mu=\nu$; (92) : ${\mathds{k}}[g,h,x,y]/(g^2-1,h^2-1,x^2,y^2)$, (93) : ${\mathds{k}}\langle g,h,x,y\rangle/(g^2-1,h^2-1,[g,h],[g,x],[g,y],[h,x]-h(1-g),[h,y],x^2,y^2,[x,y])$, (94) : ${\widetilde{H}}_{12}(\lambda):={\mathds{k}}\langle g,h,x,y\rangle/(g^2-1,h^2-1,[g,h],[g,x]-g(1-g),[g,y],[h,x]-\lambda h(1-g),[h,y],x^2-x,y^2,[x,y]+y)$, (95) : ${\widetilde{H}}_{13}(\lambda):={\mathds{k}}\langle g,h,x,y\rangle/(g^2-1,h^2-1,[g,h],[g,x]-g(1-g),[g,y],[h,x]-\lambda h(1-g),[h,y]-h(1-g),x^2-x,y^2,[x,y]+y)$, with $g,h\in{\mathbf{G}}(H)$ and $x,y\in{\mathcal{P}}_{1,g}(H)$; moreover, - ${\widetilde{H}}_{n}(\lambda)\cong{\widetilde{H}}_{n}(\gamma)$ for $n\in{\mathbb{I}}_{12,13}$, if and only if, $\lambda=\gamma+i$ for $i\in{\mathbb{I}}_{0,1}$; (96) : ${\mathds{k}}[g,h,x,y]/(g^2-1,h^2-1,x^2,y^2)$, (97) : ${\mathds{k}}\langle g,h,x,y\rangle/(g^2-1,h^2-1,[g,h],[g,x],[g,y],[h,x],[h,y],x^2,y^2,[x,y]-(1-gh))$, (98) : ${\mathds{k}}\langle g,h,x,y\rangle/(g^2-1,h^2-1,[g,h],[g,x]-g(1-g),[g,y],[h,x],[h,y],x^2-x,y^2,[x,y])$, (99) : ${\mathds{k}}\langle g,h,x,y\rangle/(g^2-1,h^2-1,[g,h],[g,x]-g(1-g),[g,y],[h,x]-h(1-g),[h,y],x^2-x,y^2,[x,y]+y)$, (100) : ${\mathds{k}}\langle g,h,x,y\rangle/(g^2-1,h^2-1,[g,h],[g,x]-g(1-g),[g,y],[h,x],[h,y]-h(1-h),x^2-x,y^2-y,[x,y])$, (101) : ${\mathds{k}}\langle g,h,x,y\rangle/(g^2-1,h^2-1,[g,h],[g,x]-g(1-g),[g,y]-g(1-h),[h,x]-h(1-g),[h,y]-h(1-h),x^2-x,y^2-y,[x,y]-x+y)$, with $g,h\in{\mathbf{G}}(H)$, $x\in{\mathcal{P}}_{1,g}(H)$ and $y\in{\mathcal{P}}_{1,h}(H)$; (102) : ${\mathds{k}}\langle g,h,x,y\rangle/(g^2-1,h^2-1,[g,h],[g,x],gy-(y+x)g,[h,x],hy-(y+\lambda x)h,[x,y],x^2,y^2)$, (103) : ${\mathds{k}}\langle g,h,x,y\rangle/(g^2-1,h^2-1,[g,h],[g,x],gy-(y+x)g,[h,x],hy-(y+\lambda x)h,[x,y],x^2-x,y^2)$, (104) : ${\mathds{k}}\langle g,h,x,y\rangle/(g^2-1,h^2-1,[g,h],[g,x],gy-(y+x)g,[h,x],hy-(y+\lambda x)h,[x,y],x^2-y,y^2)$, (105) : ${\mathds{k}}\langle g,h,x,y\rangle/(g^2-1,h^2-1,[g,h],[g,x],gy-(y+x)g,[h,x],hy-(y+\lambda x)h,[x,y],x^2-x,y^2-y)$, (106) : ${\mathds{k}}\langle g,h,x,y\rangle/(g^2-1,h^2-1,[g,h],[g,x],gy-(y+x)g,[h,x],hy-(y+\lambda x)h,[x,y]-y,x^2-x,y^2)$, $\lambda\in{\mathds{k}}$, with $g,h\in{\mathbf{G}}(H)$, $x,y\in{\mathcal{P}}(H)$; (107) : ${\mathds{k}}\langle g,h,x,y\rangle/(g^2-1,h^2-1,[g,h],[g,x],[h,x],gy-(y+x)g,[h,y], [x,y],x^2,y^2)$, (108) : ${\mathds{k}}\langle g,h,x,y\rangle/(g^2-1,h^2-1,[g,h],[g,x],[h,x],gy-(y+x)g,[h,y]-h(1-h), [x,y]-x,x^2,y^2-y)$, with $g,h\in{\mathbf{G}}(H)$, $x,y\in{\mathcal{P}}_{1,h}(H)$; (109) : ${\mathds{k}}[C_2]\otimes{\mathds{k}}[x,y,z]/(x^2,y^2,z^2)$, (110) : ${\mathds{k}}[C_2]\otimes{\mathds{k}}[x,y,z]/(x^2-x,y^2-y,z^2-z)$, (111) : 
${\mathds{k}}[C_2]\otimes{\mathds{k}}[x,y,z]/(x^2-y,y^2-z,z^2)$, (112) : ${\mathds{k}}[C_2]\otimes{\mathds{k}}[x,y,z]/(x^2,y^2-z,z^2)$, (113) : ${\mathds{k}}[C_2]\otimes{\mathds{k}}[x,y,z]/(x^2,y^2,z^2-z)$, (114) : ${\mathds{k}}[C_2]\otimes{\mathds{k}}[x,y,z]/(x^2,y^2-y,z^2-z)$, (115) : ${\mathds{k}}[C_2]\otimes{\mathds{k}}[x,y,z]/(x^2-y,y^2,z^2-z)$, (116) : ${\mathds{k}}[C_2]\otimes{\mathds{k}}\langle x,y,z\rangle/([x,y]-z,[x,z],[y,z],x^2,y^2,z^2)$, (117) : ${\mathds{k}}[C_2]\otimes{\mathds{k}}\langle x,y,z\rangle/([x,y]-z,[x,z],[y,z],x^2,y^2,z^2-z)$, (118) : ${\mathds{k}}[C_2]\otimes{\mathds{k}}\langle x,y,z\rangle/([x,y]-y,[x,z],[y,z],x^2-x,y^2,z^2)$, (119) : ${\mathds{k}}[C_2]\otimes{\mathds{k}}\langle x,y,z\rangle/([x,y]-y,[x,z],[y,z],x^2-x,y^2-z,z^2)$, (120) : ${\mathds{k}}[C_2]\otimes{\mathds{k}}\langle x,y,z\rangle/([x,y]-y,[x,z],[y,z],x^2-x,y^2,z^2-z)$, (121) : ${\mathds{k}}[C_2]\otimes{\mathds{k}}\langle x,y,z\rangle/([x,y]-y,[x,z],[y,z],x^2-x,y^2-z,z^2-z)$, (122) : ${\mathds{k}}[C_2]\otimes{\mathds{k}}\langle x,y,z\rangle/([x,y],[x,z]=x,[y,z]=y,x^2,y^2,z^2-z)$, with $x,y\in{\mathcal{P}}(H)$; (123) : ${\mathds{k}}[g,x,y,z]/(g^2-1,x^2,y^2,z^2)$, (124) : ${\mathds{k}}\langle g,x,y,z\rangle/(g^4-1,[g,x]-g(1-g),[g,y],[g,z],[x,y]=y,[x,z]=z,[y,z],x^2-x,y^2,z^2)$, with $g\in{\mathbf{G}}(H)$ and $x,y,z\in{\mathcal{P}}_{1,g}(H)$; (125) : ${\mathds{k}}\langle g,x,y,z\rangle/(g^2-1,[g,x],[g,y],[g,z],[x,y]-z,[x,z],[y,z],x^2,y^2,z^2)$, (126) : ${\mathds{k}}\langle g,x,y,z\rangle/(g^2-1,[g,x],[g,y],[g,z],[x,y]-z,[x,z],[y,z],x^2,y^2,z^2-z)$, (127) : ${\mathds{k}}[g,x,y]/(g^2-1,x^2,y^2)\otimes{\mathds{k}}[z]/(z^2)$, (128) : ${\mathds{k}}\langle g,x,y,z\rangle/(g^2-1,[g,x],[g,y],[g,z],[x,y],[x,z]-(1-g),[y,z],x^2,y^2,z^2)$, (129) : ${\mathds{k}}\langle g,x,y,z\rangle/(g^2-1,[g,x],[g,y],[g,z],[x,y],[x,z]-y,[y,z],x^2,y^2,z^2)$, (130) : ${\mathds{k}}[g,x,y]/(g^2-1,x^2,y^2)\otimes{\mathds{k}}[z]/(z^2-z)$, (131) : ${\mathds{k}}\langle g,x,y,z\rangle/(g^2-1,[g,x],[g,y],[g,z],[x,y],[x,z]-x,[y,z],x^2,y^2,z^2-z)$, (132) : ${\mathds{k}}\langle g,x,y,z\rangle/(g^2-1,[g,x],[g,y],[g,z],[x,y],[x,z]-x,[y,z]-y,x^2,y^2,z^2-z)$, (133) : ${\mathds{k}}[g,x,y,z]/(g^2-1,x^2-z,y^2,z^2)$, (134) : ${\mathds{k}}[g,x,y,z]/(g^2-1,x^2-z,y^2,z^2-z)$, (135) : ${\mathds{k}}\langle g,x,y,z\rangle/(g^2-1,[g,x],[g,y],[g,z],[x,y]-z,[y,z],[x,z],x^2-z,y^2,z^2)$ (136) : ${\mathds{k}}\langle g,x,y,z\rangle/(g^2-1,[g,x],[g,y],[g,z],[x,y]-z,[y,z],[x,z],x^2-z,y^2,z^2-z)$ (137) : ${\mathds{k}}\langle g,x,y,z\rangle/(g^2-1,[g,x]-g(1-g),[g,y],[g,z],[x,y]-y-z,[x,z],[y,z],x^2-x,y^2,z^2)$, (138) : ${\mathds{k}}\langle g,x,y,z\rangle/(g^2-1,[g,x]-g(1-g),[g,y],[g,z],[x,y]-y-z,[x,z],[y,z],x^2-x,y^2,z^2-z)$, (139) : ${\mathds{k}}\langle g,x,y\rangle/(g^2-1,[g,x]-g(1-g),[g,y], [x,y]-y,x^2-x,y^2)\otimes{\mathds{k}}[z]/(z^2)$, (140) : ${\mathds{k}}\langle g,x,y,z\rangle/(g^2-1,[g,x]-g(1-g),[g,y],[g,z],[x,y]-y,[x,z],[y,z]-(1-g),x^2-x,y^2,z^2)$, (141) : ${\mathds{k}}\langle g,x,y,z\rangle/(g^2-1,[g,x]-g(1-g),[g,y],[g,z],[x,y]-y,[x,z]-(1-g),[y,z],x^2-x,y^2,z^2)$, (142) : ${\mathds{k}}\langle g,x,y,z\rangle/(g^2-1,[g,x]-g(1-g),[g,y],[g,z],[x,y]-y,[x,z]-y,[y,z],x^2-x,y^2,z^2)$, (143) : ${\mathds{k}}\langle g,x,y,z\rangle/(g^2-1,[g,x]-g(1-g),[g,y],[g,z],[x,y]-y,[x,z],[y,z],x^2-x-z,y^2,z^2)$, (144) : ${\mathds{k}}\langle g,x,y,z\rangle/(g^2-1,[g,x]-g(1-g),[g,y],[g,z],[x,y]-y,[x,z],[y,z]-y,x^2-x,y^2,z^2-z)$, (145) : ${\widetilde{H}}_{14}(\lambda):={\mathds{k}}\langle g,x,y,z\rangle/(g^2-1,[g,x]-g(1-g),[g,y],[g,z],[x,y]-y,[x,z],[y,z],x^2-x-\lambda z,y^2,z^2- 
z)$, (146) : ${\mathds{k}}\langle g,x,y,z\rangle/(g^2-1,[g,x]-g(1-g),[g,y],[g,z],[x,y]-y,[x,z],[y,z],x^2-x,y^2-z,z^2)$, (147) : ${\mathds{k}}\langle g,x,y,z\rangle/(g^2-1,[g,x]-g(1-g),[g,y],[g,z],[x,y]-y-z,[x,z],[y,z],x^2-x,y^2-z,z^2)$, (148) : ${\widetilde{H}}_{15}(\lambda):={\mathds{k}}\langle g,x,y,z\rangle/(g^2-1,[g,x]-g(1-g),[g,y],[g,z],[x,y]-y-\lambda z,[x,z],[y,z],x^2-x,y^2-z,z^2-z)$, with $g\in{\mathbf{G}}(H)$, $x,y\in{\mathcal{P}}_{1,g}(H)$ and $z\in{\mathcal{P}}(H)$; moreover, - ${\widetilde{H}}_{14}(\lambda)\cong{\widetilde{H}}_{14}(\gamma)$ or ${\widetilde{H}}_{15}(\lambda)\cong{\widetilde{H}}_{15}(\gamma)$, if and only if, $\lambda=\gamma$; (149) : ${\widetilde{H}}_{16}(\lambda):={\mathds{k}}\langle g,x,y,z\rangle/(g^2-1,[g,x],[g,y],[g,z],[x,y]-\lambda x,[x,z],[y,z]-z,x^2,y^2-y,z^2)$, (150) : ${\mathds{k}}\langle g,x,y,z\rangle/(g^2-1,[g,x],[g,y],[g,z],[x,y]-x,[x,z]-(1-g),[y,z]-z,x^2,y^2-y,z^2)$, (151) : ${\mathds{k}}\langle g,x\rangle/(g^2-1,[g,x]-g(1-g),x^2-x)\otimes {\mathds{k}}\langle y,z\rangle/(y^2-y,z^2,[y,z]-z)$, (152) : ${\mathds{k}}[ g,x]/(g^2-1, x^2)\otimes {\mathds{k}}[y,z]/(y^2-y,z^2-z)$, (153) : ${\widetilde{H}}_{17}(\lambda):={\mathds{k}}[ g,x,y,z]/(g^2-1, x^2-y-\lambda z,y^2-y,z^2-z)$, (154) : ${\mathds{k}}\langle g,x,y,z\rangle/(g^2-1,[g,x],[g,y],[g,z],[x,y]-x,[x,z],[y,z],x^2,y^2-y,z^2-z)$, (155) : ${\mathds{k}}\langle g,x,y,z\rangle/(g^2-1,[g,x],[g,y],[g,z],[x,y]-x,[x,z],[y,z],x^2- z,y^2-y,z^2-z)$, (156) : ${\widetilde{H}}_{18}(\lambda,\gamma):={\mathds{k}}\langle g,x,y,z\rangle/(g^2-1,[g,x]-g(1-g),[g,y],[g,z],[x,y],[x,z],[y,z],x^2-x-\lambda y-\gamma z,y^2-y,z^2-z)$, (157) : ${\mathds{k}}[g,x]/(g^2-1,x^2)\otimes{\mathds{k}}[y,z]/(y^2-y,z^2)$, (158) : ${\mathds{k}}[g,x,y,z]/(g^2-1,x^2-z,y^2-y,z^2)$, (159) : ${\mathds{k}}[g,x,y,z]/(g^2-1,x^2-y,y^2-y,z^2)$, (160) : ${\mathds{k}}[g,x,y,z]/(g^2-1,x^2-y-z,y^2-y,z^2)$, (161) : ${\mathds{k}}\langle g,x,y,z\rangle/(g^2-1,[g,x],[g,y],[g,z],[x,y],[x,z]-(1-g),[y,z],x^2, y^2-y,z^2)$, (162) : ${\mathds{k}}\langle g,x,y,z\rangle/(g^2-1,[g,x],[g,y],[g,z],[x,y],[x,z]-(1-g),[y,z],x^2- y, y^2-y,z^2)$, (163) : ${\mathds{k}}\langle g,x,y,z\rangle/(g^2-1,[g,x],[g,y],[g,z],[x,y]-x,[x,z],[y,z],x^2, y^2-y,z^2)$, (164) : ${\mathds{k}}\langle g,x,y,z\rangle/(g^2-1,[g,x],[g,y],[g,z],[x,y]-x,[x,z],[y,z],x^2-z, y^2-y,z^2)$, (165) : ${\widetilde{H}}_{19}(\lambda,i):={\mathds{k}}\langle g,x,y,z\rangle/(g^2-1,[g,x]-g(1-g),[g,y],[g,z],[x,y],[x,z],[y,z],x^2-x-\lambda y-iz,y^2-y,z^2)$, for $i\in{\mathbb{I}}_{0,1}$, (166) : ${\widetilde{H}}_{20}(\lambda):={\mathds{k}}\langle g,x,y,z\rangle/(g^2-1,[g,x]-g(1-g),[g,y],[g,z],[x,y],[x,z]-(1-g),[y,z],x^2-x-\lambda y, y^2-y,z^2)$, (167) : ${\mathds{k}}[g,x]/(g^2-1,x^2)\otimes{\mathds{k}}[y,z]/(y^2-z,z^2)$, (168) : ${\mathds{k}}[g,x,y,z]/(g^2-1,x^2-z,y^2-z,z^2)$, (169) : ${\mathds{k}}[g,x,y,z]/(g^2-1,x^2-y,y^2-z,z^2)$, (170) : ${\mathds{k}}\langle g,x,y,z\rangle/(g^2-1,[g,x],[g,y],[g,z],[x,y]-(1-g),[y,z],[x,z],x^2,y^2-z,z^2)$, (171) : ${\mathds{k}}\langle g,x,y,z\rangle/(g^2-1,[g,x],[g,y],[g,z],[x,y]-(1-g),[y,z],[x,z],x^2- z,y^2-z,z^2)$, (172) : ${\mathds{k}}\langle g,x\rangle/(g^2-1,[g,x]-g(1-g),x^2-x)\otimes{\mathds{k}}[y,z]/(y^2-z,z^2)$, (173) : ${\mathds{k}}\langle g,x,y,z\rangle/(g^2-1,[g,x]-g(1-g),[g,y],[g,z],[x,y],[x,z],[y,z],x^2-x-z,y^2-z,z^2)$, (174) : ${\mathds{k}}\langle g,x,y,z\rangle/(g^2-1,[g,x]-g(1-g),[g,y],[g,z],[x,y],[x,z],[y,z],x^2-x-y,y^2-z,z^2)$, (175) : ${\widetilde{H}}_{21}(\lambda):={\mathds{k}}\langle
g,x,y,z\rangle/(g^2-1,gx-xg-g(1-g),[g,y],[g,z],[x,y]-(1-g),[x,z],[y,z],x^2-x-\lambda z,y^2-z,z^2)$, (176) : ${\mathds{k}}[g,x]/(g^2-1,x^2)\otimes{\mathds{k}}[y,z]/(y^2,z^2)$, (177) : ${\mathds{k}}\langle g,x\rangle/(g^2-1,[g,x]-g(1-g),x^2-x)\otimes{\mathds{k}}[y,z]/(y^2,z^2)$, (178) : ${\mathds{k}}[g,x,y,z]/(g^2-1,x^2-y, y^2,z^2)$, (179) : ${\mathds{k}}\langle g,x,y,z\rangle/(g^2-1,[g,x]-g(1-g),[g,y],[g,z],[x,y],[x,z],[y,z],x^2-x-y,y^2,z^2)$, (180) : ${\mathds{k}}\langle g,x,y,z\rangle/(g^2-1,[g,x],[g,y],[g,z],[x,y]-(1-g),[x,z],[y,z],x^2,y^2,z^2)$, (181) : ${\mathds{k}}\langle g,x,y,z\rangle/(g^2-1,[g,x],[g,y],[g,z],[x,y]-(1-g),[x,z],[y,z],x^2-z,y^2,z^2)$, (182) : ${\mathds{k}}\langle g,x,y,z\rangle/(g^2-1,[g,x]-g(1-g),[g,y],[g,z],[x,y]-(1-g),[x,z],[y,z],x^2-x,y^2,z^2)$, (183) : ${\mathds{k}}\langle g,x,y,z\rangle/(g^2-1,[g,x]-g(1-g),[g,y],[g,z],[x,y]-(1-g),[x,z],[y,z],x^2-x-z,y^2,z^2)$, with $g\in{\mathbf{G}}(H)$, $x\in{\mathcal{P}}_{1,g}(H)$ and $y,z\in{\mathcal{P}}(H)$; moreover, - ${\widetilde{H}}_{16}(\lambda)\cong {\widetilde{H}}_{16}(\gamma)$ for $\lambda,\gamma\in{\mathds{k}}$, if and only if, $\lambda=\gamma$; - ${\widetilde{H}}_{17}(\lambda)\cong {\widetilde{H}}_{17}(\gamma)$ for $\lambda,\gamma\in{\mathds{k}}$, if and only if, there exist $\alpha_1,\alpha_2,\beta_1,\beta_2\in{\mathds{k}}$ satisfying $\alpha_i^2-\alpha_i=0=\beta_i^2-\beta_i$ for $i\in{\mathbb{I}}_{1,2}$ such that $(\alpha_1+\beta_1\lambda)\gamma=(\alpha_2+\beta_2\lambda)$ and $\alpha_1\beta_2-\alpha_2\beta_1\neq 0$; - ${\widetilde{H}}_{18}(\lambda,\gamma)\cong {\widetilde{H}}_{18}(\mu,\nu)$ if and only if, there exist $\alpha_i,\beta_i\in{\mathds{k}}$ satisfying $\alpha_i^2-\alpha_i=0=\beta_i^2-\beta_i$ for $i\in{\mathbb{I}}_{1,2}$ such that $\alpha_1\beta_2-\alpha_2\beta_1\neq 0$ and $\lambda\alpha_1+\gamma\beta_1=\mu$, $\lambda\alpha_2+\gamma\beta_2=\nu$; - ${\widetilde{H}}_{19}(\lambda,i)\cong {\widetilde{H}}_{19}(\gamma,j)$ if and only if $\lambda =\gamma$ and $i=j$; - ${\widetilde{H}}_{20}(\lambda)\cong {\widetilde{H}}_{20}(\gamma)$ or ${\widetilde{H}}_{21}(\lambda)\cong {\widetilde{H}}_{21}(\gamma)$, if and only if, $\lambda=\gamma$; (184) : ${\mathds{k}}\langle g,x,y,z\rangle/(g^2-1,[g,x],[g,y],gz-(z+y)g,[x,y],[x,z],[y,z],x^2,y^2,z^2)$, (185) : ${\mathds{k}}\langle g,x,y,z\rangle/(g^2-1,[g,x],[g,y],gz-(z+y)g,[x,y],[x,z],[y,z],x^2-x,y^2-y,z^2-z)$, (186) : ${\mathds{k}}\langle g,x,y,z\rangle/(g^2-1,[g,x],[g,y],gz-(z+y)g,[x,y],[x,z],[y,z],x^2-y,y^2-z,z^2)$, (187) : ${\mathds{k}}\langle g,x,y,z\rangle/(g^2-1,[g,x],[g,y],gz-(z+y)g,[x,y],[x,z],[y,z],x^2,y^2-z,z^2)$, (188) : ${\mathds{k}}\langle g,x,y,z\rangle/(g^2-1,[g,x],[g,y],gz-(z+y)g,[x,y],[x,z],[y,z],x^2,y^2,z^2-z)$, (189) : ${\mathds{k}}\langle g,x,y,z\rangle/(g^2-1,[g,x],[g,y],gz-(z+y)g,[x,y],[x,z],[y,z],x^2,y^2-y,z^2-z)$, (190) : ${\mathds{k}}\langle g,x,y,z\rangle/(g^2-1,[g,x],[g,y],gz-(z+y)g,[x,y],[x,z],[y,z],x^2-y,y^2,z^2-z)$, (191) : ${\mathds{k}}\langle g,x,y,z\rangle/(g^2-1,[g,x],[g,y],gz-(z+y)g,[x,y]-z,[x,z],[y,z],x^2,y^2,z^2)$, (192) : ${\mathds{k}}\langle g,x,y,z\rangle/(g^2-1,[g,x],[g,y],gz-(z+y)g,[x,y]-z,[x,z],[y,z],x^2,y^2,z^2-z)$, (193) : ${\mathds{k}}\langle g,x,y,z\rangle/(g^2-1,[g,x],[g,y],gz-(z+y)g,[x,y]-y,[x,z],[y,z],x^2-x,y^2,z^2)$, (194) : ${\mathds{k}}\langle g,x,y,z\rangle/(g^2-1,[g,x],[g,y],gz-(z+y)g,[x,y]-y,[x,z],[y,z],x^2-x,y^2-z,z^2)$, (195) : ${\mathds{k}}\langle g,x,y,z\rangle/(g^2-1,[g,x],[g,y],gz-(z+y)g,[x,y]-y,[x,z],[y,z],x^2-x,y^2,z^2-z)$, (196) : ${\mathds{k}}\langle
g,x,y,z\rangle/(g^2-1,[g,x],[g,y],gz-(z+y)g,[x,y]-y,[x,z],[y,z],x^2-x,y^2-z,z^2-z)$, (197) : ${\mathds{k}}\langle g,x,y,z\rangle/(g^2-1,[g,x],[g,y],gz-(z+y)g,[x,y],[x,z]=x,[y,z]=y,x^2,y^2,z^2-z)$, with $g\in{\mathbf{G}}(H)$, $x,y\in{\mathcal{P}}(H)$. By Theorem \[thm:16-diagram-Nichols-algebra\], there are 197 types of non-connected pointed Hopf algebras of dimension $16$ with ${\operatorname{char}}{\mathds{k}}=2$ whose diagrams are Nichols algebras. Up to isomorphism, there are infinitely many classes of such Hopf algebras. In particular, we obtain infinitely many new examples of non-commutative non-cocommutative pointed Hopf algebras. Let $H$ be a non-trivial non-connected pointed Hopf algebra of dimension $16$. By Lemma \[lem:16-group-like\], ${\mathbf{G}}(H)$ is isomorphic to $D_4$, $Q_8$, $C_8$, $C_4\times C_2$, $C_2\times C_2\times C_2$, $C_4$, $C_2\times C_2$ or $C_2$. We will subsequently prove Theorem \[thm:16-diagram-Nichols-algebra\] by a case by case discussion. In what follows, $R$ is the diagram of $H$ and $V:=R(1)$. Coradical of dimension $8$ -------------------------- Observe that $\dim H_0=8$. Then $\dim R=2$. By Lemma \[lem:R-V\], $\dim V=1$ with a basis $\{x\}$ satisfying $c(x\otimes x)=x\otimes x$. Therefore, $R\cong{\mathds{k}}[x]/(x^2)$. ### ${\mathbf{G}}(H)\cong D_4$. Observe that $\widehat{{\mathbf{G}}(H)}=\{\epsilon\}$ and $Z(D_4)=\{1,g^2\}$. Then by Remark \[rmk:dimV=1\], $x\in V_{g^{2\mu}}^{\epsilon}$ for $\mu\in{\mathbb{I}}_{0,1}$. Therefore, $$\begin{gathered} {\operatorname{gr}}H={\mathds{k}}\langle g,h,x\mid g^4=h^2=1,hg=g^3h, gx=xg,hx=xh,x^2=0\rangle,\end{gathered}$$ with $g,h\in{\mathbf{G}}(H)$ and $x\in{\mathcal{P}}_{1,g^{2\mu}}(H)$ for $\mu\in{\mathbb{I}}_{0,1}$. Now we determine the liftings of ${\operatorname{gr}}H$. By similar computations as before, we have $$\begin{aligned} gx-xg=\lambda_1g(1-g^{2\mu}),\quad hx-xh=\lambda_2h(1-g^{2\mu}),\quad x^2-2\mu\lambda_1x=x^2\in{\mathcal{P}}(H),\end{aligned}$$ for some $\lambda_1\in{\mathbb{I}}_{0,1}$, $\lambda_2\in{\mathds{k}}$. If $\mu=0$, then $gx-xg=0=hx-xh$ in $H$ and ${\mathcal{P}}(H)={\mathds{k}}\{x\}$, which implies that $x^2=\lambda_3x$ for some $\lambda_3\in{\mathds{k}}$. Observe that $H$ is the tensor product Hopf algebra between ${\mathds{k}}[D_4]$ and ${\mathds{k}}[x]/(x^2-\lambda_3x)$. Then $\dim H=16$. By rescaling $x$, we can take $\lambda_3\in{\mathbb{I}}_{0,1}$, which gives two classes of $H$ described in $(1)$–$(2)$. Clearly, they are non-isomorphic. If $\mu=1$, then ${\mathcal{P}}(H)=0$ and hence $x^2=0$ in $H$. Applying the Diamond Lemma [@B] to show that $\dim H=16$, it suffices to show that the following ambiguities $$\begin{aligned} (g^4)x=g^3(gx),\quad (h^2)x=h(hx),\quad (gh)x=g(hx),\end{aligned}$$ are resolvable with the order $x<h<g$. By Lemma \[pqlem1\], we have $[g^4,x]=0=[h^2,x]$ and hence the first two ambiguities are resolvable. Now we show that the ambiguity $(gh)x=g(hx)$ is resolvable: $$\begin{aligned} g(hx)&=g(xh+\lambda_2h(1-g^2))=(gx)h+\lambda_2gh(1-g^2)=xhg^3+(\lambda_1+\lambda_2)hg^3(1-g^2),\\ &=(xh+\lambda_2h(1-g^2))g^3+\lambda_1hg^3(1-g^2)=(hx)g^3+\lambda_1hg^3(1-g^2)=(hg^3)x=(gh)x.\end{aligned}$$ If $\lambda_1=0$, then by rescaling $x$, we can take $\lambda_2\in{\mathbb{I}}_{0,1}$, which gives two classes of $H$ described in $(3)$–$(4)$. If $\lambda_1=1$, then $H\cong{\widetilde{H}}_1(\lambda_2)$ described in $(5)$. 
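For the reader's convenience, we record the short computation (using only ${\operatorname{char}}{\mathds{k}}=2$) behind the claim $x^2\in{\mathcal{P}}(H)$ in the case $\mu=1$ above, that is, for $x\in{\mathcal{P}}_{1,g^2}(H)$ with $[g,x]=\lambda_1g(1-g^2)$: $$\begin{aligned}
[g^2,x]&=g[g,x]+[g,x]g=2\lambda_1g^2(1-g^2)=0,\\
\Delta(x^2)&=(x\otimes 1+g^2\otimes x)^2=x^2\otimes 1+(xg^2+g^2x)\otimes x+g^4\otimes x^2=x^2\otimes 1+1\otimes x^2,\end{aligned}$$ so that $x^2\in{\mathcal{P}}(H)=0$, as used above.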
Now we prove that ${\widetilde{H}}_1(\lambda)\cong{\widetilde{H}}_1(\gamma)$ for $\lambda,\gamma\in{\mathds{k}}$, if and only if, $\lambda=\gamma+i$ for some $i\in{\mathbb{I}}_{0,1}$. Observe that ${\operatorname{Aut}}(D_8)\cong D_8$ with generators $\psi_1,\psi_2$, where $$\begin{aligned} \psi_1(g)=g,\quad \psi_1(h)=gh;\quad \psi_2(g)=g^{-1},\quad\psi_2(h)=h.\end{aligned}$$ Suppose that $\phi:{\widetilde{H}}_1(\lambda)\rightarrow {\widetilde{H}}_1(\gamma)$ for $\lambda,\gamma\in{\mathds{k}}$ is a Hopf algebra isomorphism. Write $g^{\prime},h^{\prime},x^{\prime}$ to distinguish the generators of ${\widetilde{H}}_1(\gamma)$. Therefore, $\phi(g)\in\{g^{\prime},(g^{\prime})^3\},\phi(h)=(g^{\prime})^ih^{\prime}$ for $i\in{\mathbb{I}}_{0,3}$ and $\phi(x)\in{\mathcal{P}}_{1,\phi(g^2)}({\widetilde{H}}_1(\gamma))$. Note that spaces of the skew-primitive elements of ${\widetilde{H}}_1(\gamma)$ are trivial except ${\mathcal{P}}_{1,(g^{\prime})^2}({\widetilde{H}}_1(\gamma))={\mathds{k}}\{x^{\prime}\}\oplus {\mathds{k}}\{1-(g^{\prime})^2\}$. Then $\phi(x)=a(1-(g^{\prime})^2)+bx^{\prime}$ for some $a,b\neq 0\in{\mathds{k}}$. Applying $\phi$ to relation $gx-xg=g(1-g^2)$, then $$\begin{aligned} \phi(gx-xg-g(1-g^2))&=\phi(g)\phi(x)-\phi(x)\phi(g)-\phi(g)(1-(g^{\prime})^2)\\ &=b\phi(g)x^{\prime}-bx^{\prime}\phi(g)-\phi(g)(1-(g^{\prime})^2)=(b-1)\phi(g)(1-(g^{\prime})^2)=0.\end{aligned}$$ Therefore, $b=1$. Then applying $\phi$ to the relations $hx-xh=\lambda h(1-g^2)$, then we have $$\begin{aligned} \phi(h)x^{\prime}-x^{\prime}\phi(h)-\lambda\phi(h)(1-(g^{\prime})^2)=0.\end{aligned}$$ If $\phi(h)=(g^{\prime})^{2\mu}h^{\prime}$ for $\mu\in{\mathbb{I}}_{0,1}$, then $\phi(h)x^{\prime}-x^{\prime}\phi(h)=\gamma\phi(h)(1-(g^{\prime})^2)$ and hence $\gamma=\lambda$. If $\phi(h)=(g^{\prime})^{i}h^{\prime}$ for $i\in\{1,3\}$, then $\phi(h)x^{\prime}-x^{\prime}\phi(h)=(\gamma+1)\phi(h)(1-(g^{\prime})^2)$ and hence $\gamma+1=\lambda$. Consequently, we have $$\gamma=\lambda+i,\quad\text{for }i\in{\mathbb{I}}_{0,1}.$$ Conversely, for any $\lambda\in{\mathds{k}}$, $i\in{\mathbb{I}}_{0,1}$, let $\psi:{\widetilde{H}}_1(\lambda)\rightarrow {\widetilde{H}}_1(\lambda+i)$ be the algebra map given by $$\begin{aligned} \psi(g)=g^{\prime},\quad \psi(h)=(g^{\prime})^ih^{\prime},\quad \psi(x)=x^{\prime}+b(1-(g^{\prime})^2),\quad b\in{\mathds{k}}.\end{aligned}$$ Then it is easy to see that it is an epimorphism of Hopf algebras and $\psi|_{({\widetilde{H}}_1(\lambda))_1}$ is injective. Hence $\psi$ is a Hopf algebra isomorphism. ### ${\mathbf{G}}(H)\cong Q_8$. Observe that $\widehat{Q_8}=\{\epsilon\}$ and $Z(Q_8)=\{1,g^2\}$. Then by Remark \[rmk:dimV=1\], $x\in V_{g^{2\mu}}^{\epsilon}$ for $\mu\in{\mathbb{I}}_{0,1}$. Therefore, $$\begin{aligned} {\operatorname{gr}}H={\mathds{k}}\langle g,h,x\mid g^4=1,hg=g^3h, g^2=h^2, gx=xg,hx=xh,x^2=0\rangle,\end{aligned}$$ with $g,h\in{\mathbf{G}}(H)$ and $x\in{\mathcal{P}}_{1,g^{2\mu}}(H)$. Similar to the case ${\mathbf{G}}(H)\cong D_4$, the defining relations of $H$ are given by $$\begin{gathered} g^4=1,\quad hg=g^3h,\quad g^2=h^2,\\ gx-xg=\lambda_1g(1-g^{2\mu}),\quad hx-xh=\lambda_2h(1-g^{2\mu}),\quad x^2-\lambda_3x=0,\end{gathered}$$ for some $\lambda_1\in{\mathbb{I}}_{0,1}$, $\lambda_2\in{\mathds{k}}$ with ambiguity conditions $\lambda_3=0$ if $\mu=1$. If $\mu=0$, then $gx-xg=0=hx-xh$ in $H$. Observe that $H$ is the tensor product Hopf algebra between ${\mathds{k}}[Q_8]$ and ${\mathds{k}}[x]/(x^2-\lambda_3x)$. Then $\dim H=16$. 
By rescaling $x$, we can take $\lambda_3\in{\mathbb{I}}_{0,1}$, which gives two classes of $H$ described in $(6)$–$(7)$. If $\mu=1$, then it follows by a direct computation that the ambiguities $(g^4)x=g^3(gx)$, $(h^4)x=h^3(hx)$, $(gh)x=g(hx)$, are resolvable with the order $x<h<g$ and hence $\dim H=16$. If $\lambda_1=0$, then by rescaling $x$, we can take $\lambda_2\in{\mathbb{I}}_{0,1}$, which gives two classes of $H$ described in $(8)$–$(9)$. Indeed, if $\lambda_2=1$, then $H\cong{\widetilde{H}}_2(0)$ by swapping $g$ and $h$. If $\lambda_1=1$, then $H\cong{\widetilde{H}}_2(\lambda_2)$ described in $(9)$. Now we prove that ${\widetilde{H}}_2(\lambda)\cong{\widetilde{H}}_2(\gamma)$ for $\lambda,\gamma\in{\mathds{k}}$, if and only if, $\lambda=\gamma+i$ or $(\lambda-j)(\gamma-i)=1$ for $i,j\in{\mathbb{I}}_{0,1}$. Observe that ${\operatorname{Aut}}(Q_8)\cong S_4$ with generators $\psi_1,\psi_2,\psi_3$ where $$\begin{gathered} \psi_1(g)=g^{-1},\quad \psi_1(h)=gh;\quad \psi_2(g)=h,\quad \psi_2(h)=g;\quad \psi_3(g)=gh,\quad \psi_3(h)=g^2h.\end{gathered}$$ Suppose that $\phi:{\widetilde{H}}_2(\lambda)\rightarrow {\widetilde{H}}_2(\gamma)$ for $\lambda,\gamma\in{\mathds{k}}$ is a Hopf algebra isomorphism. Then $\phi|_{Q_8}:Q_8\rightarrow Q_8$ is an automorphism. Hence $\phi(g)\in\{g,g^3,h,g^2h,gh,g^3h\}$ and $\phi(h)\in \{g,g^3,h,g^2h,gh,g^3h\}-\{\phi(g),g^2\phi(g)\}$. Write $g^{\prime},h^{\prime},x^{\prime}$ to distinguish the generators of ${\widetilde{H}}_2(\gamma)$. Since spaces of skew-primitive elements of ${\widetilde{H}}_2(\gamma)$ are trivial except ${\mathcal{P}}_{1,(g^{\prime})^2}({\widetilde{H}}_2(\gamma))={\mathds{k}}\{x^{\prime}\}\oplus {\mathds{k}}\{1-(g^{\prime})^2\}$, $\phi(x)=a(1-(g^{\prime})^2)+bx^{\prime}$ for some $a,b\neq 0\in{\mathds{k}}$. If $\phi(g)=(g^{\prime})^{2\mu}g^{\prime}$ for $\mu\in{\mathbb{I}}_{0,1}$, then $\phi(h)=(g^{\prime})^{2\nu}(g^{\prime})^ih^{\prime}$ for $i,\nu\in{\mathbb{I}}_{0,1}$. Applying $\phi$ to the relations $gx-xg=g(1-g^2), hx-xh=\lambda h(1-g^2)$, we have $$a=1,\quad \lambda=\gamma+i.$$ If $\phi(g)=(g^{\prime})^{2\mu}h^{\prime}$ for $\mu\in{\mathbb{I}}_{0,1}$, then $\phi(h)=(g^{\prime})^{2\nu}g^{\prime}(h^{\prime})^i$ for $i,\nu\in{\mathbb{I}}_{0,1}$. Applying $\phi$ to the relations $gx-xg=g(1-g^2), hx-xh=\lambda h(1-g^2)$, we have $$a\gamma=1,\ a(1+i\gamma)=\lambda\quad \Rightarrow\quad (\lambda-i)\gamma=1.$$ If $\phi(g)=(g^{\prime})^{2\mu}g^{\prime}h^{\prime}$ for $\mu\in{\mathbb{I}}_{0,1}$, then $\phi(h)=(g^{\prime})^{2\nu}(g^{\prime})^i(h^{\prime})^j$ for $i,j,\nu\in{\mathbb{I}}_{0,1}$ satisfying $i+j=1$. Applying $\phi$ to the relations $gx-xg=g(1-g^2), hx-xh=\lambda h(1-g^2)$, we have $$a(1+\gamma)=1,~a(i+j\gamma)=\lambda\quad \Rightarrow (\lambda-j)(\gamma+1)=1.$$ Conversely, if $\lambda=\gamma+i$ or $(\lambda-j)(\gamma-i)=1$ for $i,j\in{\mathbb{I}}_{0,1}$, then we can build an algebra map $\psi:{\widetilde{H}}_2(\lambda)\rightarrow{\widetilde{H}}_2(\gamma)$ in the form of $\phi$, it is easy to see that $\psi$ is a Hopf algebra epimorphism and $\Psi|_{({\widetilde{H}}_2(\lambda))_1}$ is injective. Hence $\psi$ is a Hopf algebra isomorphism. Assume that ${\mathbf{G}}(H)\cong C_8$. Then $\widehat{C_8}=\{\epsilon\}$ and $Z(C_8)=C_8:=\langle g\rangle$. Then by Remark \[rmk:dimV=1\], $x\in V_{g^{\mu}}^{\epsilon}$ for $\mu\in{\mathbb{I}}_{0,7}$. Therefore, $$\begin{gathered} {\operatorname{gr}}H={\mathds{k}}\langle g, x\mid g^8=1, gx=xg, x^2=0\rangle,\end{gathered}$$ with $g \in{\mathbf{G}}(H)$ and $x\in{\mathcal{P}}_{1,g^{\mu}}(H)$. 
Then by a similar computation as before, we have $$\begin{aligned} gx-xg=\lambda_1g(1-g^{\mu}),\quad x^2-\mu\lambda_1x\in{\mathcal{P}}_{1,g^{2\mu}}(H),\quad \lambda_1\in{\mathbb{I}}_{0,1}.\end{aligned}$$ By [@AS00a Proposition 6.3] and [@BDG00 Theorem 2.2], up to isomorphism, we can take $\mu\in\{0,1,2,4\}$. If $\mu=0$, then $gx-xg=0$ in $H$ and ${\mathcal{P}}(H)={\mathds{k}}\{x\}$. Hence $x^2=\lambda_2x$ for $\lambda_2\in{\mathds{k}}$. Observe that $H\cong{\mathds{k}}[C_8]\otimes{\mathds{k}}[x]/(x^2-\lambda_2x)$. Then $\dim H=16$. By rescaling $x$, we can take $\lambda_2\in{\mathbb{I}}_{0,1}$, which gives two classes of $H$ described in $(10)$–$(11)$. Clearly, they are non-isomorphic. If $\mu\neq 0$, then ${\mathcal{P}}_{1,g^{2\mu}}(H)={\mathds{k}}\{1-g^{2\mu}\}$ and hence $x^2-\mu\lambda_1x=\lambda_3(1-g^{2\mu})$ for $\lambda_3\in{\mathds{k}}$. Then we take $\lambda_3=0$ via the linear translation $x:=x-a(1-g^{\mu})$ satisfying $a^2-\mu\lambda_1a=\lambda_3$. Indeed, it is easy to see that the linear translation is a Hopf algebra isomorphism. By Lemma \[pqlem1\], we have $[g,x^2]=0$, which implies that the ambiguity $(g^2)x=g(gx)$ is resolvable. By Proposition \[proJ\], $$\begin{aligned} [g,x^2]=[[g,x],x]=\lambda_1g(1-g^{\mu})-\lambda_1(\mu+1)g^{\mu+1}(1-g^{\mu}).\end{aligned}$$ Hence the ambiguity $g(x^2)=(gx)x$ imposes the condition $\lambda_1=0$ if $\mu=2$. Then by the Diamond Lemma, $\dim H=16$ with the ambiguity condition $\lambda_1=0$ if $\mu=2$. If $\lambda_1=0$, then $H$ is the Hopf algebra described in $(12)$. If $\lambda_1=1$, then $\mu\in\{1,4\}$ and $H$ is the Hopf algebra described in $(13)$. Obviously, the two Hopf algebras with $\mu=1$ and $\mu=4$ are non-isomorphic since they are not isomorphic as coalgebras. 

### ${\mathbf{G}}(H)\cong C_4\times C_2=\langle g\rangle\times \langle h\rangle$.

Observe that $\widehat{C_4\times C_2}=\{\epsilon\}$ and $Z(C_4\times C_2)=C_4\times C_2$. Then by Remark \[rmk:dimV=1\], $x\in V_{g^{\mu}h^{\nu}}^{\epsilon}$ for $\mu\in{\mathbb{I}}_{0,3},\nu\in{\mathbb{I}}_{0,1}$. Therefore, $$\begin{gathered} {\operatorname{gr}}H={\mathds{k}}\langle g,h,x\mid g^4=1,h^2=1,gh=hg, gx=xg, hx=xh, x^2=0\rangle,\end{gathered}$$ with $g,h\in{\mathbf{G}}(H)$ and $x\in{\mathcal{P}}_{1,g^{\mu}h^{\nu}}(H)$. Observe that ${\operatorname{Aut}}(C_4\times C_2)\cong D_4$ with generators $\psi_1,\psi_2$, where $$\begin{gathered} \psi_1(g)=gh,\quad \psi_1(h)=g^2h;\quad \psi_2(g)=gh,\quad \psi_2(h)=h.\end{gathered}$$ Then up to isomorphism, we can take $(\mu,\nu)\in\{(0,0),(1,0),(2,0),(0,1)\}$. By similar computations as before, we have $$\begin{aligned} gx-xg=\lambda_1g(1-g^{\mu}h^{\nu}),\quad hx-xh=\lambda_2h(1-g^{\mu}h^{\nu}),\end{aligned}$$ for some $\lambda_1,\lambda_2\in{\mathds{k}}$. Then $$\begin{aligned} \Delta(x^2)&=(x\otimes 1+g^{\mu}h^{\nu}\otimes x)^2=x^2\otimes 1+[g^{\mu}h^{\nu},x]\otimes x+g^{2\mu}\otimes x^2\\ &=x^2\otimes 1+(\mu\lambda_1+\nu\lambda_2)(g^{\mu}h^{\nu}-g^{2\mu})\otimes x+g^{2\mu}\otimes x^2.\end{aligned}$$ It is easy to see that $x^2-(\mu\lambda_1+\nu\lambda_2)x\in{\mathcal{P}}_{1,g^{2\mu}}(H)$. If $(\mu,\nu)=(0,0)$, then $gx=xg,hx=xh$ in $H$ and ${\mathcal{P}}(H)={\mathds{k}}\{x\}$, which implies that $x^2=\lambda_3x$ for $\lambda_3\in{\mathbb{I}}_{0,1}$. In this case, $H\cong{\mathds{k}}[C_4\times C_2]\otimes{\mathds{k}}[x]/(x^2-\lambda_3x)$, which gives two classes of $H$ described in $(14)$–$(15)$. 
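The linear translations of the form $x:=x-a(1-g^{\mu})$ used here (and repeatedly below) are compatible with the coalgebra structure; the key point is the following elementary computation, which only uses $1-g^{\mu}\in{\mathcal{P}}_{1,g^{\mu}}(H)$: $$\begin{aligned}
\Delta\big(x-a(1-g^{\mu})\big)&=x\otimes 1+g^{\mu}\otimes x-a\big((1-g^{\mu})\otimes 1+g^{\mu}\otimes(1-g^{\mu})\big)\\
&=\big(x-a(1-g^{\mu})\big)\otimes 1+g^{\mu}\otimes\big(x-a(1-g^{\mu})\big),\end{aligned}$$ so $x-a(1-g^{\mu})$ again lies in ${\mathcal{P}}_{1,g^{\mu}}(H)$; this is the main step in checking that such translations are Hopf algebra isomorphisms.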
If $(\mu,\nu)\in\{(1,0),(2,0)\}$, then ${\mathcal{P}}_{1,g^{2\mu}}(H)={\mathds{k}}\{1-g^{2\mu}\}$ and hence $x^2-\mu\lambda_1x=\lambda_3(1-g^{2\mu})$ for some $\lambda_3\in{\mathds{k}}$. We can take $\lambda_3=0$ via the linear translation $x:=x-a(1-g^{\mu})$ satisfying $a^2-\mu\lambda_1\mu=\lambda_3$. Similar to the case ${\mathbf{G}}(H)\cong C_8$, it follows by a direct computation that the ambiguities $(g^4)x=g^3(gx)$, $(h^2)x=h(hx)$, $g(x^2)=(gx)x$, $h(x^2)=(hx)x$ and $(gh)x=g(hx)$ are resolvable. Then by Diamond lemma, $\dim H=16$. By rescaling $x$, we can take $\lambda_1\in{\mathbb{I}}_{0,1}$. If $\lambda_1=0$, then by rescaling $x$, $\lambda_2\in{\mathbb{I}}_{0,1}$, which gives two classes of $H$ described in $(16)$–$(17)$. If $\lambda_1=1$, then $H\cong{\widetilde{H}}_{3,\mu}(\lambda_2)$ described in $(18)$. Obviously, ${\widetilde{H}}_{3,1}(\lambda)$ and ${\widetilde{H}}_{3,2}(\gamma)$ for any $\lambda,\gamma\in{\mathds{k}}$ are non-isomorphic since their coalgebra structure are not isomorphic. We claim that ${\widetilde{H}}_{3,1}(\lambda)\cong{\widetilde{H}}_{3,1}(\gamma)$, if and only if, $\lambda=\gamma$; ${\widetilde{H}}_{3,2}(\lambda)\cong{\widetilde{H}}_{3,2}(\gamma)$, if and only if, $\lambda=\gamma$ or $\lambda\gamma=\lambda+\gamma$. Suppose that $\phi:{\widetilde{H}}_{3,1}(\lambda)\rightarrow {\widetilde{H}}_{3,1}(\gamma)$ for $\lambda,\gamma\in{\mathds{k}}$ is a Hopf algebra isomorphism. Then $\phi|_{C_4\times C_2}:C_4\times C_2\rightarrow C_4\times C_2$ is an automorphism. Then $\phi(g)\in\{g,g^3,gh,g^3h\}$ and $\phi(h)\in\{h,g^2h\}$. Write $g^{\prime},h^{\prime},x^{\prime}$ to distinguish the generators of ${\widetilde{H}}_{3,1}(\gamma)$. Since spaces of skew-primitive elements of ${\widetilde{H}}_{3,1}(\gamma)$ are trivial except ${\mathcal{P}}_{1,g^{\prime}}({\widetilde{H}}_{3,1}(\gamma))={\mathds{k}}\{x^{\prime}\}\oplus {\mathds{k}}\{1-g^{\prime}\}$, it follows that $$\phi(g)=g^{\prime},\quad\phi(x)=a(1- g^{\prime} )+bx^{\prime}$$ for some $a,b\neq 0\in{\mathds{k}}$. Applying $\phi$ to the relations $gx-xg=g(1-g)$ and $x^2-x=0$, then we have $b=1$. Observe that $\phi(h)\in\{h^{\prime},(g^{\prime})^2h^{\prime}\}$. Applying $\phi$ to the relations $hx-xh=\lambda h(1-g)$, then we have $\gamma=\lambda$. Similarly, we have ${\widetilde{H}}_{3,2}(\lambda)\cong{\widetilde{H}}_{3,2}(\gamma)$, if and only if, $\lambda=\gamma$ or $\lambda\gamma=\lambda+\gamma$. If $(\mu,\nu)=(0,1)$, then ${\mathcal{P}}(H)=0$ and hence $x^2-\lambda_2x=0$. Then by rescaling $x$, $\lambda_2\in{\mathbb{I}}_{0,1}$. Similar to the last case, it follows by a direct computation that the ambiguities $(g^4)x=g^3(gx)$, $(h^2)x=h(hx)$, $g(x^2)=(gx)x$, $h(x^2)=(hx)x$ and $(gh)x=g(hx)$ are resolvable. Then by Diamond lemma, $\dim H=16$. If $\lambda_2=0$, then we can take $\lambda_1\in{\mathbb{I}}_{0,1}$, which gives two classes of $H$ described in $(19)$–$(20)$. If $\lambda_2=1$, then $H\cong{\widetilde{H}}_4(\lambda_1)$ described in $(21)$. Similar to the last case, ${\widetilde{H}}_4(\lambda)\cong{\widetilde{H}}_4(\gamma)$, if and only if, $\lambda=\gamma+i$ for $i\in{\mathbb{I}}_{0,1}$. ### Case ${\mathbf{G}}(H)\cong C_2\times C_2\times C_2$. Then $\widehat{C_{2}\times C_2\times C_2}=\{\epsilon\}$ and $Z(C_2\times C_2\times C_2)=C_2\times C_2\times C_2:=\langle g\rangle\times\langle h\rangle\times\langle k\rangle$. Then by Remark \[rmk:dimV=1\], $x\in V_{g^{\mu}h^{\nu}k^{\iota}}^{\epsilon}$ for $\mu,\nu,\iota\in{\mathbb{I}}_{0,1}$. 
Therefore, $$\begin{aligned} {\operatorname{gr}}H={\mathds{k}}\langle g,h,k, x\mid g^2=1,h^2=1,k^2=1, gx=xg,hx=xh,kx=xk, x^2=0\rangle,\end{aligned}$$ with $g,h,k\in{\mathbf{G}}(H)$ and $x\in{\mathcal{P}}_{1,g^{\mu}h^{\nu}k^{\iota}}(H)$. Then by a similar computation as before, we have $$\begin{gathered} gx-xg=\lambda_1g(1-g^{\mu}h^{\nu}k^{\iota}),\quad hx-xh=\lambda_2h(1-g^{\mu}h^{\nu}k^{\iota}),\quad kx-xk=\lambda_3k(1-g^{\mu}h^{\nu}k^{\iota}),\\ x^2-(\mu\lambda_1+\nu\lambda_2+\iota\lambda_3)x\in{\mathcal{P}}(H).\end{gathered}$$ for some $\lambda_1,\lambda_2,\lambda_3\in{\mathds{k}}$. Observe that $C_2\times C_2\times C_2$ is 2-torsion. Then we can take $(\mu,\nu,\iota)=(0,0,0),(1,0,0)$. If $(\mu,\nu,\iota)=(0,0,0)$, then $gx-xg=hx-xh=kx-xk=0$ in $H$ and ${\mathcal{P}}(H)={\mathds{k}}\{x\}$, which implies that $x^2=\lambda_4x$. By rescaling $x$, $\lambda_4\in{\mathbb{I}}_{0,1}$. Then $H\cong {\mathds{k}}[C_2\times C_2\times C_2]\otimes{\mathds{k}}[x]/(x^2-\lambda_4x)$, which gives two classes of $H$ described in $(22)$–$(23)$. If $(\mu,\nu,\iota)=(1,0,0)$, then ${\mathcal{P}}(H)=0$ and hence $x^2-\lambda_1x=0$ in $H$. It follows by a direct computation that the ambiguities $(a^2)b=a(ab)$ and $(ab)c=a(bc)$ for $a,b,c\in\{g,h,k,x\}$ are resolvable. By Diamond lemma, $\dim H=16$. By rescaling $x$, we can take $\lambda_1\in{\mathbb{I}}_{0,1}$. If $\lambda_1=0$, then we can take $\lambda_2\in{\mathbb{I}}_{0,1}$ by rescaling $x$. If $\lambda_2=0$, then we can also take $\lambda_3\in{\mathbb{I}}_{0,1}$, which gives two classes of $H$ described in $(24)$ and $(25)$. In fact, if $\lambda_3=1$, then $H\cong{\widetilde{H}}_5(0)$. If $\lambda_2=1$, then $H\cong{\widetilde{H}}_5(\lambda_3)$. If $\lambda_1=1$, then $H\cong{\widetilde{H}}_{6}(\lambda_2,\lambda_3)$ described in $(26)$. We claim that ${\widetilde{H}}_5(\lambda)\cong{\widetilde{H}}_5(\gamma)$, if and only if, $$\begin{aligned} \lambda\gamma=\lambda+\gamma,\quad\text{or }(1+\lambda)\gamma=1, \quad\text{or } \lambda=\gamma+i,\quad \text{or } 1+i\gamma=\lambda\gamma,\quad i\in{\mathbb{I}}_{0,1}.\end{aligned}$$ Suppose that $\phi:{\widetilde{H}}_5(\lambda)\rightarrow {\widetilde{H}}_5(\gamma)$ for $\lambda,\gamma\in{\mathds{k}}$ is a Hopf algebra isomorphism. Then $\phi|_{C_2\times C_2\times C_2}:C_2\times C_2\times C_2\rightarrow C_2\times C_2\times C_2$ is an automorphism. Write $g^{\prime},h^{\prime},x^{\prime}$ to distinguish the generators of ${\widetilde{H}}_5(\gamma)$. Since spaces of skew-primitive elements of ${\widetilde{H}}_5(\gamma)$ are trivial except ${\mathcal{P}}_{1, g^{\prime} }({\widetilde{H}}_5(\gamma))={\mathds{k}}\{x^{\prime}\}\oplus {\mathds{k}}\{1- g^{\prime} \}$, it follows that $$\phi(g)=g^{\prime},\quad \phi(x)=a(1- g^{\prime} )+bx^{\prime}$$ for some $a,b\neq 0\in{\mathds{k}}$. Let $\phi(h)=(g^{\prime})^p(h^{\prime})^q(k^{\prime})^r$ for $p,q,r\in{\mathbb{I}}_{0,1}$. Then applying $\phi$ to the relation $hx-xh=h(1-g)$, we have $$(q+r\gamma)b=1.$$ Let $\phi(k)=(g^{\prime})^{\mu}(h^{\prime})^{\nu}(k^{\prime})^{\iota}$ for $\mu,\nu,\iota\in{\mathbb{I}}_{0,1}$. Then applying $\phi$ to the relation $kx-xk=\lambda k(1-g)$, we have $$(\nu+\gamma\iota)b=\lambda.$$ Observe that $\phi|_{{\mathbf{G}}({\widetilde{H}}_{5}(\lambda))}$ is an isomorphism if and only if $q\iota+r\nu=1$. 
Hence by a case by case discussion, we have $$\begin{aligned} \lambda\gamma=\lambda+\gamma,\quad\text{or }(1+\lambda)\gamma=1, \quad\text{or } \lambda=\gamma+i,\quad \text{or } 1+i\gamma=\lambda\gamma,\quad i\in{\mathbb{I}}_{0,1}.\end{aligned}$$ Conversely, if $\lambda\gamma=\lambda+\gamma$, then let $\psi:{\widetilde{H}}_5(\lambda)\rightarrow{\widetilde{H}}_5(\gamma)$ be the algebra given by $$\begin{aligned} \psi(g)=g^{\prime},\quad \psi(h)=h^{\prime}k^{\prime},\quad \psi(k)=k^{\prime},\quad \psi(x)=(1-\lambda) x^{\prime};\end{aligned}$$ if $(1+\gamma)\lambda=1$, then let $\psi:{\widetilde{H}}_5(\lambda)\rightarrow{\widetilde{H}}_5(\gamma)$ be the algebra given by $$\begin{aligned} \psi(g)=g^{\prime},\quad \psi(h)=h^{\prime}k^{\prime},\quad \psi(k)=h^{\prime},\quad \psi(x)=\lambda x^{\prime};\end{aligned}$$ if $i+\gamma=\lambda$ for $i\in{\mathbb{I}}_{0,1}$, then let $\psi:{\widetilde{H}}_5(\lambda)\rightarrow{\widetilde{H}}_5(\gamma)$ be the algebra given by $$\begin{aligned} \psi(g)=g^{\prime},\quad \psi(h)=h^{\prime},\quad \psi(k)=(h^{\prime})^ik^{\prime},\quad \psi(x)= x^{\prime};\end{aligned}$$ if $1+i\gamma=\lambda\gamma$ for $i\in{\mathbb{I}}_{0,1}$, then let $\psi:{\widetilde{H}}_5(\lambda)\rightarrow{\widetilde{H}}_5(\gamma)$ be the algebra given by $$\begin{aligned} \psi(g)=g^{\prime},\quad \psi(h)=k^{\prime},\quad \psi(k)= h^{\prime} (k^{\prime})^i,\quad \psi(x)= \gamma^{-1}x^{\prime}.\end{aligned}$$ It follows by a direct computation that $\psi$ is a well-defined Hopf algebra epimorphism. Observe that $\psi|_{{\mathcal{P}}_{1,g}(H_5(\lambda))}$ is injective. Then $\psi$ is a Hopf algebra isomorphism. We claim that ${\widetilde{H}}_6(\lambda_1,\lambda_2)\cong{\widetilde{H}}_6(\gamma_1,\gamma_2)$, if and only if, there exists $q,r,\nu,\iota\in{\mathbb{I}}_{0,1}$ such that $$\begin{aligned} \label{eq:HH6-condition-iso} q\iota+r\nu=1,\quad q\gamma_1+r\gamma_2 =\lambda_1,\quad \nu\gamma_1+\iota\gamma_2=\lambda_2.\end{aligned}$$ Suppose that $\phi:{\widetilde{H}}_6(\lambda_1,\lambda_2)\rightarrow {\widetilde{H}}_6(\gamma_1,\gamma_2)$ for $\lambda_1,\lambda_2,\gamma_1,\gamma_2\in{\mathds{k}}$ is a Hopf algebra isomorphism. Similar to the last case, we have $$\phi(g)=g^{\prime},\quad \phi(x)=a(1- g^{\prime} )+bx^{\prime}$$ for some $a,b\neq 0\in{\mathds{k}}$. Applying $\phi$ to the relations $gx-xg=g(1-g), x^2-x=0$, we have $b=1$. Let $\phi(h)=(g^{\prime})^p(h^{\prime})^q(k^{\prime})^r$ and $\phi(k)=(g^{\prime})^{\mu}(h^{\prime})^{\nu}(k^{\prime})^{\iota}$ for $\mu,\nu,\iota\in{\mathbb{I}}_{0,1}$, $p,q,r\in{\mathbb{I}}_{0,1}$. Observe that $q\iota+r\nu=1$ since $\phi$ is an isomorphism. Then applying $\phi$ to the relations $hx-xh=\lambda_1h(1-g)$ and $kx-xk=\lambda_2k(1-g)$, we have $$q\gamma_1+r\gamma_2 =\lambda_1,\quad \nu\gamma_1+\iota\gamma_2=\lambda_2.$$ Conversely, if there exist $q,r,\nu,\iota$ satisfying conditions , then let $\psi:{\widetilde{H}}_6(\lambda_1,\lambda_2)\rightarrow{\widetilde{H}}_6(\gamma_1,\gamma_2)$ be the algebra defined by $$\begin{aligned} \psi(g)=g^{\prime},\quad \psi(h)= (h^{\prime})^q(k^{\prime})^r,\quad \phi(k)= (h^{\prime})^{\nu}(k^{\prime})^{\iota},\quad \psi(x)= x^{\prime}.\end{aligned}$$ It follows by a direct computation that $\psi$ is a well-defined Hopf algebra epimorphism. Observe that $\psi|_{{\mathcal{P}}_{1,g}(H_6(\lambda_1,\lambda_2))}$ is injective. Then $\psi$ is a Hopf algebra isomorphism. Coradical of dimension $4$ -------------------------- In this case, ${\mathbf{G}}(H)\cong C_4$ or $C_2\times C_2$. Then $\dim R=4$. 
Observe that $\widehat{{\mathbf{G}}(H)}=\{\epsilon\}$. Then there is an element $x\in V$ such that $c(x\otimes x)=x\otimes x$. Hence $\dim{\mathcal{B}}({\mathds{k}}\{x\})=2$. By assumption, $R\cong{\mathcal{B}}(V)$ and hence $\dim V>1$. If $\dim V>2$, then $\dim{\mathcal{B}}(V)>4$, a contradiction. Therefore, $\dim V=2$. Observe that $V$ is either of diagonal type or of Jordan type. If $V$ is of Jordan type, then by [@CLW Theorem 3.1], $\dim{\mathcal{B}}(V)=16>4$, a contradiction. Hence $V$ is of diagonal type. Moreover, $R\cong{\mathds{k}}[x,y]/(x^2,y^2)$. 

### ${\mathbf{G}}(H)\cong C_4:=\langle g\rangle$.

Then by Lemma \[lem:cyclic-groups-dimV=2\], $V\cong M_{i,1}\oplus M_{j,1}$ for $i,j\in{\mathbb{I}}_{0,3}$ or $M_{k,2}$ for $k\in\{0,2\}$. Assume that $V\cong M_{i,1}\oplus M_{j,1}$ for $i,j\in{\mathbb{I}}_{0,3}$, that is, $x\in V_{g^i}^{\epsilon},~y\in V_{g^j}^{\epsilon}$. Then $$\begin{aligned} {\operatorname{gr}}H:={\mathds{k}}\langle g,x,y\mid g^4=1,gx=xg,gy=yg,x^2=0, y^2=0,xy=yx\rangle,\end{aligned}$$ with $g\in{\mathbf{G}}(H)$, $x\in{\mathcal{P}}_{1,g^i}(H)$ and $y\in{\mathcal{P}}_{1,g^j}(H)$. Observe that ${\operatorname{Aut}}(C_4)\cong C_2$. Up to isomorphism, we can take $$(i,j)\in\{(0,0),(0,1),(0,2),(1,1),(1,2),(1,3),(2,2)\}.$$ By similar computations as before, we have $$\begin{gathered} gx-xg=\lambda_1g(1-g^i),\quad gy-yg=\lambda_2g(1-g^j),\\ x^2-i\lambda_1x\in{\mathcal{P}}_{1,g^{2i}}(H),\quad y^2-j\lambda_2y\in{\mathcal{P}}_{1,g^{2j}}(H),\\ xy-yx+\lambda_1jy-\lambda_2ix\in{\mathcal{P}}_{1,g^{i+j}}(H),\end{gathered}$$ for $\lambda_1,\lambda_2\in{\mathbb{I}}_{0,1}$. Assume that $(i,j)=(0,0)$. Then $gx=xg$, $gy=yg$ in $H$ and ${\mathcal{P}}(H)={\mathds{k}}\{x,y\}$. Then $$\begin{aligned} x^2=\mu_1x+\mu_2y,\quad y^2=\mu_3x+\mu_4y,\quad xy-yx=\mu_5x+\mu_6y,\end{aligned}$$ for some $\mu_1,\mu_2,\cdots,\mu_6\in{\mathds{k}}$. Observe that ${\mathcal{P}}(H)$ is a two-dimensional restricted Lie algebra and $H\cong{\mathds{k}}[C_4]\otimes U^L({\mathcal{P}}(H))$, where $U^L({\mathcal{P}}(H))$ is a restricted universal enveloping algebra. Then by [@W1 Theorem 7.4], we obtain five classes of $H$ described in $(27)$–$(31)$. Assume that $(i,j)=(0,1)$. Then ${\mathcal{P}}(H)={\mathds{k}}\{x\}$, ${\mathcal{P}}_{1,g}(H)={\mathds{k}}\{1-g,y\}$ and ${\mathcal{P}}_{1,g^2}(H)={\mathds{k}}\{1-g^2\}$. Hence $$x^2=\mu_1x, \quad y^2-\lambda_2y=\mu_2(1-g^2),\quad xy-yx=\mu_3y+\mu_4(1-g),$$ for some $\mu_1,\mu_2,\mu_3,\mu_4\in{\mathds{k}}$. We can take $\mu_1\in{\mathbb{I}}_{0,1}$ and $\mu_2=0$ by rescaling $x,y$ and via the linear translation $y:=y-a(1-g)$ satisfying $a^2-\lambda_2a=\mu_2$. Then it follows by a direct computation that $$\begin{aligned} [x,[x,y]]&=\mu_3[x,y]=\mu_3^2y+\mu_3\mu_4(1-g),\quad [x^2,y]=[\mu_1x,y]=\mu_1\mu_3y+\mu_1\mu_4(1-g),\\ [[x,y],y]&=-\mu_4[g,y]=-\mu_4\lambda_2g(1-g),\quad [x,y^2]=\lambda_2[x,y]=\lambda_2\mu_3y+\lambda_2\mu_4(1-g).\end{aligned}$$ By Proposition \[proJ\], $[x,[x,y]]=[x^2,y]$ and $[[x,y],y]=[x,y^2]$, which implies that $$\begin{aligned} (\mu_1-\mu_3)\mu_3=0,\quad (\mu_1-\mu_3)\mu_4=0,\quad \lambda_2\mu_3=0,\quad \lambda_2\mu_4=0.\end{aligned}$$ Then it is easy to verify that the ambiguities $(g^4)x=g^3(gx)$, $(g^4)y=g^3(gy)$, $(x^2)y=x(xy)$, $(xy)y=x(y^2)$, $(gx)y=g(xy)$, $(x^2)x=x(x^2)$ and $(y^2)y=y(y^2)$ are resolvable. By the Diamond Lemma, $\dim H=16$. If $\lambda_2=0=\mu_1$, then $\mu_3=0$ and we can take $\mu_4\in{\mathbb{I}}_{0,1}$ by rescaling $x$, which gives two classes of $H$ described in $(32)$ and $(33)$. 
If $\lambda_2=0=\mu_1-1$, then $\mu_3^2=\mu_3$ and $\mu_4=\mu_3\mu_4$ and hence we can take $\mu_3\in{\mathbb{I}}_{0,1}$ by rescaling $x$. If $\mu_3=0$, then $\mu_4=0$, which gives one class of $H$ described in $(34)$. If $\mu_3=1$, then we can take $\mu_4=0$ via the linear translation $y:=y-\mu_4(1-g)$, which gives one class of $H$ described in $(35)$. If $\lambda_2=1$, then $\mu_3=0=\mu_4$, which gives two classes of $H$ described in $(36)$–$(37)$. Assume that $(i,j)=(0,2)$. Then ${\mathcal{P}}(H)={\mathds{k}}\{x\}$, ${\mathcal{P}}_{1,g^2}(H)={\mathds{k}}\{1-g^2,y\}$. Hence $$\begin{aligned} x^2=\mu_1x,\quad y^2=\mu_2x,\quad xy-yx=\mu_3y+\mu_4(1-g^2).\end{aligned}$$ From $[x,[x,y]]=[x^2,y]$, $[[x,y],y]=[x,y^2]$, $(x^2)x=x(x^2)$ and $(y^2)y=y(y^2)$, we have $$\begin{aligned} (\mu_1-\mu_3)\mu_3=0,\quad (\mu_1-\mu_3)\mu_4=0,\quad \mu_2\mu_3=0=\mu_2\mu_4.\end{aligned}$$ Then it is easy to verify that the ambiguities $(g^4)x=g^3(gx)$, $(g^4)y=g^3(gy)$, $(x^2)y=x(xy)$, $(xy)y=x(y^2)$, $(gx)y=g(xy)$ are resolvable. By Diamond lemma, $\dim H=16$. By rescaling $x,y$, $\lambda_2,\mu_1\in{\mathbb{I}}_{0,1}$. If $\mu_1=0$, then $\mu_3=0$ and $\mu_2\mu_4=0$. If $\mu_2=0$, then we can take $\mu_4\in{\mathbb{I}}_{0,1}$ by rescaling $x$. If $\mu_4=0$, then we can take $\mu_2\in{\mathbb{I}}_{0,1}$. Therefore, $(\mu_2,\mu_4)$ admits three possibilities and $\lambda_2\in{\mathbb{I}}_{0,1}$, which gives six classes of $H$ described in $(38)$–$(43)$. If $\mu_1=1$, then $\mu_3^2=\mu_3$ and $\mu_4=\mu_3\mu_4$, which implies that $\mu_3\in{\mathbb{I}}_{0,1}$ by rescaling $x$. - If $\mu_3=0$, then $\mu_4=0$, which impies that $xy-yx=0$ in $H$. If $\lambda_2=0$, then by rescaling $y$, we can take $\mu_2\in{\mathbb{I}}_{0,1}$, which gives two classes of $H$ described in $(44)$–$(45)$. If $\lambda_2=1$, then $H\cong{\widetilde{H}}_7(\mu_2)$ described in $(46)$. - If $\mu_3=1$, then $\mu_2=0$, that is, $y^2=0$ in $H$. Hence we can take $\mu_4=0$ via the linear translation $y:=y-\mu_4(1-g^2)$. Indeed, it is easy to see that the translation is a well-defined Hopf algebra isomorphism. Therefore, we obtain two classes of $H$ described in $(47)$–$(48)$. Now we claim that ${\widetilde{H}}_7(\lambda)\cong{\widetilde{H}}_7(\gamma)$, if and only if, $\lambda=\gamma$. Suppose that $\phi:{\widetilde{H}}_7(\lambda)\rightarrow {\widetilde{H}}_7(\gamma)$ for $\lambda,\gamma\in{\mathds{k}}$ is a Hopf algebra isomorphism. Write $g^{\prime},x^{\prime},y^{\prime}$ to distinguish the generators of ${\widetilde{H}}_7(\gamma)$. Observe that spaces of skew-primitive elements of ${\widetilde{H}}_7(\gamma)$ are trivial except ${\mathcal{P}}_{1,(g^{\prime})^2}({\widetilde{H}}_7(\gamma))={\mathds{k}}\{y^{\prime}\}\oplus {\mathds{k}}\{1-(g^{\prime})^2\}$ and ${\mathcal{P}}({\widetilde{H}}_7(\gamma))={\mathds{k}}\{x^{\prime}\}$. Then $$\begin{aligned} \phi(g)=g^{\prime\,\pm 1},\quad \phi(x)=\alpha x^{\prime},\quad \phi(y)=a(1-(g^{\prime})^2)+by^{\prime}\end{aligned}$$ for some $\alpha\neq 0, a,b\neq 0\in{\mathds{k}}$. Applying $\phi$ to the relation $x^2-x=0$, we have $\alpha=1$. Applying $\phi$ to the relation $gy-yg=g(1-g^2)$, we have $b=1$. Then applying $\phi$ to the relation $y^2-\lambda x=0$, we have $$\begin{aligned} \phi(y^2-\lambda x)=(y^{\prime})^2-\lambda x^{\prime}=(\gamma-\lambda)x^{\prime}=0\quad \Rightarrow\quad \gamma=\lambda.\end{aligned}$$ Assume that $(i,j)=(1,1)$. Then ${\mathcal{P}}_{1,g}(H)={\mathds{k}}\{1-g,x,y\}$ and ${\mathcal{P}}_{1,g^2}={\mathds{k}}\{1-g^2\}$. 
Hence $$\begin{aligned} x^2-\lambda_1x=\mu_1(1-g^2),\quad y^2-\lambda_2y=\mu_2(1-g^2),\quad xy-yx+\lambda_1y-\lambda_2x=\mu_3(1-g^2),\end{aligned}$$ for $\mu_1,\mu_2,\mu_3\in{\mathds{k}}$. It follows by a direct computation that all ambiguities are resolvable and hence by the Diamond lemma, $\dim H=16$. We can take $\mu_1=0=\mu_2$ via the linear translation $x:=x-a(1-g)$, $y:=y-b(1-g)$ satisfying $a^2-\lambda_1a=\mu_2$ and $b^2-\lambda_2b=\mu_3$. If $\lambda_1=0$ or $\lambda_2=0$, then we can take $\mu_3\in{\mathbb{I}}_{0,1}$ by rescaling $x$ or $y$. If $\lambda_1=0=\lambda_2$, then $\mu_3\in{\mathbb{I}}_{0,1}$, which gives two classes of $H$ described in $(49)$–$(50)$. If $\lambda_1-1=0=\lambda_2$, then $\mu_3\in{\mathbb{I}}_{0,1}$, which gives two classes of $H$ described in $(51)$–$(52)$. If $\lambda_1=0=\lambda_2-1$, then $\mu_3\in{\mathbb{I}}_{0,1}$, which gives two classes of $H$ described in $(51)$–$(52)$ by swapping $x$ and $y$. If $\lambda_1=\lambda_2=1$, then $H$ is isomorphic to one of the Hopf algebras described in $(51)$–$(52)$. Indeed, in this case, consider the translation $y:=y+x+a(1-g)$ satisfying $a^2=\mu_3$, it is easy to see that $H$ is isomorphic to the Hopf algebras defined by $$\begin{aligned} {\mathds{k}}\langle g,x,y\mid g^4=1, [g,x]=g(1-g), [g,y]=0, x^2=x,y^2=0,[x,y]=y+(a+\mu_3)(1-g^2)\rangle.\end{aligned}$$ If $a+\mu_3=0$, then $H$ is isomorphic to the Hopf algebra described in $(51)$. If $a+\mu_3\neq 0$, then by rescaling $y$, $H$ is isomorphic to the Hopf algebra described in $(52)$. Assume that $(i,j)=(1,2)$. Then ${\mathcal{P}}(H)=0$, ${\mathcal{P}}_{1,g}(H)=\{1-g,x\}$, ${\mathcal{P}}_{1,g^2}(H)=\{1-g^2,y\}$ and ${\mathcal{P}}_{1,g^3}(H)={\mathds{k}}\{1-g^3\}$. Hence, $$\begin{aligned} x^2-\lambda_1x=\mu_1y+\mu_2(1-g^2),\quad y^2=0,\quad xy-yx-\lambda_2x=\mu_3(1-g^3),\end{aligned}$$ for some $\mu_1,\mu_2,\mu_3\in{\mathds{k}}$. The verification of the ambiguities $(a^2)b=a(ab)$ and $(ab)b=a(b^2)$ for all $a,b\in\{g,x,y\}$ and $(gx)y=g(xy)$ amount to the conditions $$\begin{aligned} \mu_1\lambda_2=0=\mu_1\mu_3,\quad \lambda_2=0.\end{aligned}$$ Then by Diamond lemma, $\dim H=16$. We can take $\mu_2=0$ via the linear translation $x:=x-a(1-g)$ satisfying $a^2-\lambda_1a=\mu_2$ and take $\mu_3\in{\mathbb{I}}_{0,1}$ by rescaling $y$. If $\lambda_1=0$, then we can take $\mu_1\in{\mathbb{I}}_{0,1}$ by rescaling $x$, which gives three classes of $H$ described in $(53)$–$(55)$. If $\lambda_1-1=0=\mu_3$, then by rescaling $y$, we can take $\mu_1\in{\mathbb{I}}_{0,1}$, which gives two classes of $H$ described in $(56)$–$(57)$. If $\lambda_1-1=0=\mu_3-1$, then $\mu_1=0$, which gives one class of $H$ described in $(58)$. Assume that $(i,j)=(1,3)$. Then ${\mathcal{P}}(H)=0$, ${\mathcal{P}}_{1,g}(H)={\mathds{k}}\{1-g,x\}$, ${\mathcal{P}}_{1,g^2}(H)={\mathds{k}}\{1-g^2\}$ and ${\mathcal{P}}_{1,g^3}(H)={\mathds{k}}\{1-g^3,y\}$. Hence $$\begin{aligned} x^2-\lambda_1x=\mu_1(1-g^2),\quad y^2-\lambda_2y=\mu_2(1-g^2),\quad [x,y]+\lambda_1y-\lambda_2x=0,\end{aligned}$$ for some $\mu_1,\mu_2\in{\mathds{k}}$. It follows by a direct computation that all ambiguities are resolvable and hence by Diamond lemma, $\dim H=16$. Then we can take $\mu_1=0=\mu_2$ via the linear translation $x:=x-a(1-g),~y:=y-b(1-g^3)$ satisfying $a^2-a\lambda_1=\mu_1,b^2-b\lambda_2=\mu_2$. Therefore, the structure of $H$ depends on $\lambda_1,\lambda_2\in{\mathbb{I}}_{0,1}$, denoted by $H(\lambda_1,\lambda_2)$. We claim that $H(0,1)\cong H(1,0)$. 
Indeed, consider the algebra map $\phi: H(0,1)\rightarrow H(1,0)$ given by $\phi(g)=g^3$, $\phi(x)=y$ and $\phi(y)=x$. It follows by a direct computation that $\phi$ is a Hopf algebra morphism. Obviously, $\phi$ is an epimorphism and $\phi|_{(H(0,1))_1}$ is injective. Therefore, $\phi$ is an isomorphism. It is easy to see that $H(0,0)$, $H(1,0)$ and $H(1,1)$ are pairwise non-isomorphic. Therefore, we obtain three classes of $H$ described in $(59)$–$(61)$. Assume that $(i,j)=(2,2)$. Then ${\mathcal{P}}(H)=0$. Hence $$\begin{aligned} x^2=0,\quad y^2=0,\quad xy-yx=0.\end{aligned}$$ Then it is easy to see that all ambiguities are resolvable and hence by the Diamond lemma, $\dim H=16$. Similar to the last case, we obtain two classes of $H$ described in $(62)$–$(63)$. Assume that $V\cong M_{2k,2}$ for $k\in{\mathbb{I}}_{0,1}$. Then $$\begin{gathered} {\operatorname{gr}}H:={\mathds{k}}\langle g,x,y\mid g^4=1,gx=xg,gy=(y+x)g,xy=yx,x^2=0,y^2=0\rangle,\end{gathered}$$ with $g\in{\mathbf{G}}({\operatorname{gr}}H)$ and $x,y\in{\mathcal{P}}_{1,g^{2k}}({\operatorname{gr}}H)$. By similar computations as before, we have $$\begin{gathered} gx-xg=\lambda_1(g-g^{2k+1}),\quad gy-(y+x)g=\lambda_2(g-g^{2k+1}),\quad xy-yx,x^2,y^2\in{\mathcal{P}}(H),\end{gathered}$$ for some $\lambda_1,\lambda_2\in{\mathds{k}}$. If $k=0$, then ${\mathcal{P}}(H)={\mathds{k}}\{x,y\}$, which implies that $$\begin{aligned} x^2=\alpha_1x+\alpha_2y,\quad y^2=\alpha_3x+\alpha_4y,\quad xy-yx=\alpha_5x+\alpha_6y,\end{aligned}$$ for some $\alpha_1,\cdots,\alpha_6\in{\mathds{k}}$. Observe that ${\mathcal{P}}(H)$ is a two-dimensional restricted Lie algebra and $H\cong{\mathds{k}}[C_4]\sharp U^L({\mathcal{P}}(H))$, where $U^L({\mathcal{P}}(H))$ is a restricted universal enveloping algebra. Then by [@W1 Theorem 7.4], we obtain five classes of $H$ described in $(64)$–$(68)$. If $k=1$, then ${\mathcal{P}}(H)=0$ and hence the defining relations of $H$ are $$\begin{gathered} gx-xg=\lambda_1(g-g^{3}),\quad gy-(y+x)g=\lambda_2(g-g^{3}),\quad xy-yx=x^2=y^2=0.\end{gathered}$$ The verification of the ambiguities $(a^2)b=a(ab)$ and $(ab)b=a(b^2)$ for all $a,b\in\{g,x,y\}$ and $(gx)y=g(xy)$ gives no ambiguity conditions. Then by the Diamond lemma, $\dim H=16$. We write $H(\lambda_1,\lambda_2):=H$ for convenience. **Claim:** $H(\lambda_1,\lambda_2)\cong H(\gamma_1,\gamma_2)$, if and only if, there exist $\alpha_1,\alpha_2,\beta_2\in{\mathds{k}}$ with $\alpha_2\neq 0$ such that $\alpha_2\gamma_1=\lambda_1$ and $\beta_2\gamma_1-\alpha_1+\alpha_2\gamma_2-\lambda_2=0$. Suppose that $\phi: H(\lambda_1,\lambda_2)\rightarrow H(\gamma_1,\gamma_2)$ for $\lambda_1,\lambda_2,\gamma_1,\gamma_2\in{\mathds{k}}$ is a Hopf algebra isomorphism. Write $g^{\prime},x^{\prime},y^{\prime}$ to distinguish the generators of $H(\gamma_1,\gamma_2)$. Then $$\begin{aligned} \phi(g)=g^{\prime\,\pm 1},\quad \phi(x)=\alpha_1(1-(g^{\prime})^2)+\alpha_2x^{\prime}+\alpha_3y^{\prime},\quad \phi(y)=\beta_1(1-(g^{\prime})^2)+\beta_2x^{\prime}+\beta_3y^{\prime}\end{aligned}$$ for some $\alpha_1,\alpha_2,\alpha_3,\beta_1,\beta_2,\beta_3\in{\mathds{k}}$. Applying $\phi$ to the relation $gx-xg=\lambda_1g(1-g^2)$, we have $\alpha_3=0=\alpha_2\gamma_1-\lambda_1$. Then applying $\phi$ to the relation $gy-(y+x)g=\lambda_2g(1-g^2)$, we have $$\begin{aligned} \beta_3=\alpha_2,\quad \beta_2\gamma_1-\alpha_1+\gamma_2\beta_3-\lambda_2=0.\end{aligned}$$ Then it is easy to check that $\phi$ is a well-defined bialgebra map. Since $\phi$ is an isomorphism, it follows that $\alpha_2\neq 0$. 
Consequently, the claim follows. By rescaling $x$, we can take $\lambda_1\in{\mathbb{I}}_{0,1}$. Then from the last claim, we have $H(\lambda_1,0)\cong H(\lambda_1,\lambda_2)$ for $\lambda_1\in{\mathbb{I}}_{0,1}$ and $H(0,0)\not\cong H(1,0)$. Consequently, we obtain two classes of $H$ described in $(69)$–$(70)$. ### ${\mathbf{G}}(H)\cong C_2\times C_2:=\langle g\rangle\times\langle h\rangle$. If $V$ is a decomposable object in ${}_{C_2\times C_2}^{C_2\times C_2}\mathcal{YD}$, then $V:={\mathds{k}}\{x,y\}$ must be the sum of two one-dimensional objects in ${}_{C_2\times C_2}^{C_2\times C_2}\mathcal{YD}$ such that $x\in V_{g^ih^j}^{\epsilon},~y\in V_{g^{\mu}h^{\mu}}^{\epsilon}$ for $i,j,\mu,\nu\in{\mathbb{I}}_{0,1}$. If $V$ is an indecomposable object in ${}_{C_2\times C_2}^{C_2\times C_2}\mathcal{YD}$, then by [@Ba] and Theorem \[thm:indecomposable-object-YD-over-groups\], $V:={\mathds{k}}\{x,y\}\in{}_{C_2\times C_2}^{C_2\times C_2}\mathcal{YD}$ by $$\begin{gathered} g\cdot x=x,\quad g\cdot y=y+x,\quad h\cdot x=x,\quad h\cdot y=y+\lambda x,\quad \lambda\in{\mathds{k}};\\ \delta(x)=g^kh^l\otimes x,\quad \delta(y)=g^kh^l\otimes y,\quad\text{for some }k,l\in{\mathbb{I}}_{0,1}.\end{gathered}$$ We claim that $(k,l,\lambda)\in\{(0,0,\lambda),(0,1,0),(1,1,1)\}$; otherwise, $V$ is of Jordan type, a contradication. Assume that $V$ is a decomposable object in ${}_{C_2\times C_2}^{C_2\times C_2}\mathcal{YD}$. Then $x\in V_{g^ih^j}^{\epsilon},~y\in V_{g^{\mu}h^{\mu}}^{\epsilon}$ for $i,j,\mu,\nu\in{\mathbb{I}}_{0,1}$. Without loss of generality, we may assume that $x,y\in V_{1}$, $x\in V_{g},y\in V_{g^i}$ for $i\in{\mathbb{I}}_{0,1}$ or $x\in V_{g},y\in V_{h}$. Assume that $x,y\in V_{1}^{\epsilon}$. Then $H\cong{\mathds{k}}[C_2\times C_2]\otimes U^L({\mathcal{P}}(H))$, where $U^L({\mathcal{P}}(H))$ is a restricted universal enveloping algebra of ${\mathcal{P}}(H)$. Then by [@W1 Theorem 7.4], we obtain five classes of $H$ described in $(71)$–$(75)$. Assume that $x\in V_{g}^{\epsilon}, y\in V_{1}^{\epsilon}$. Then by Lemma \[lem:p4-x1yu\], the defining relations of $H$ are $$\begin{gathered} g^2=1,\quad h^{2}=1,\quad gx-xg=\lambda_1g(1-g), \quad gy-yg=0,\\ hx-xh=\lambda_3h(1-g),\quad hy-yh=0,\\ x^2-\lambda_1x=\mu_1y,\quad y^2=\mu_2y,\quad xy-yx=\mu_3x+\mu_4(1-g),\end{gathered}$$ for $\lambda_1\in{\mathbb{I}}_{0,1},\lambda_3,\mu_1,\cdots,\mu_4\in{\mathds{k}}$ with ambiguity conditions $$\begin{aligned} \mu_1\mu_3=0=\mu_1\mu_4,\quad \mu_2\mu_3=\mu_3^2,\quad \mu_2\mu_4=\mu_3\mu_4,\quad \lambda_1\mu_3=0=\lambda_3\mu_3.\end{aligned}$$ By rescaling $y$, we can take $\mu_2\in{\mathbb{I}}_{0,1}$. If $\lambda_1=0=\mu_2$, then $\mu_3=0=\mu_1\mu_4$ and we can take $\lambda_3,\mu_4\in{\mathbb{I}}_{0,1}$ by rescaling $x,y$. If $\mu_4=0$, then by rescaling $y$, $\mu_1\in{\mathbb{I}}_{0,1}$. If $\mu_4\neq 0$, then $\mu_1=0$ and we can take $\mu_4=1$ by rescaling $y$. Therefore, we obtain six classes of $H$ described in $(76)$–$(81)$. If $\lambda_1=0=\mu_2-1$, then $\mu_3=\mu_3^2$, $(\mu_3-1)\mu_4=0$ and $\mu_1\mu_4=0=\lambda_3\mu_3$. We can take $\mu_3\in{\mathbb{I}}_{0,1}$ by rescaling $y$. If $\mu_3=0$, then $\mu_4=0$ and we can take $\lambda_3\in{\mathbb{I}}_{0,1}$ by rescaling $x$, which gives four classes of $H$ described in $(82)$–$(85)$. If $\mu_3=1$, then $\lambda_3=0$ and we can take $\mu_1\in{\mathbb{I}}_{0,1}$ by rescaling $x$. If $\mu_1=0$, then we can take $\mu_4=0$ via the linear translation $x:=x+\mu_4(1-g)$, which gives one class of $H$ described in $(86)$. 
If $\mu_1=1$, then $\mu_4=0$, which gives one class of $H$ described in $(87)$. If $\lambda_1-1=0=\mu_2$, then $\mu_3=0$ and $\mu_1\mu_4=0$. If $\mu_1=0=\mu_4$, then $H\cong {\widetilde{H}}_{8}(\lambda_3)$ described in $(88)$. If $\mu_1\neq 0$, then $\mu_4=0$ and we can take $\mu_1=1$ by rescaling $y$, which implies that $H\cong{\widetilde{H}}_{9}(\lambda_3)$ described in $(89)$. If $\mu_1=0$ and $\mu_4\neq 0$, then by rescaling $y$, $\mu_4=1$, which implies that $H\cong{\widetilde{H}}_{10}(\lambda_3)$ described in $(90)$. If $\lambda_1=\mu_2=1$, then $\mu_3=0=\mu_4$ and hence $H\cong{\widetilde{H}}_{11}(\lambda_3,\mu_1)$ described in $(91)$. **Claim:** ${\widetilde{H}}_n(\lambda)\cong{\widetilde{H}}_n(\gamma)$ for $n\in{\mathbb{I}}_{8,10}$, if and only if, $\lambda=\gamma+i$ for $i\in{\mathbb{I}}_{0,1}$; ${\widetilde{H}}_{11}(\lambda,\mu)\cong{\widetilde{H}}_{11}(\gamma,\nu)$ if and only if $\lambda=\gamma+i$ for $i\in{\mathbb{I}}_{0,1}$ and $\mu=\nu$. Suppose that $\phi:{\widetilde{H}}_8(\lambda)\rightarrow {\widetilde{H}}_8(\gamma)$ for $\lambda,\gamma\in{\mathds{k}}$ is a Hopf algebra isomorphism. Then $\phi|_{C_2\times C_2}:C_2\times C_2\rightarrow C_2\times C_2$ is an automorphism. Write $g^{\prime},h^{\prime},x^{\prime}$ to distinguish the generators of ${\widetilde{H}}_8(\gamma)$. Since spaces of skew-primitive elements of ${\widetilde{H}}_8(\gamma)$ are trivial except ${\mathcal{P}}_{1, g^{\prime} }({\widetilde{H}}_8(\gamma))={\mathds{k}}\{x^{\prime}\}\oplus {\mathds{k}}\{1- g^{\prime} \}$ and ${\mathcal{P}}({\widetilde{H}}_8(\gamma))={\mathds{k}}\{y^{\prime}\}$, it follows that $$\phi(g)=g^{\prime},\quad \phi(h)=(g^{\prime})^ih^{\prime},\quad \phi(x)=a(1- g^{\prime} )+bx^{\prime},\quad \phi(y)=cy^{\prime}$$ for some $a,b\neq 0,c\neq 0\in{\mathds{k}}$ and $i\in{\mathbb{I}}_{0,1}$. Then applying $\phi$ to the relations $gx-xg=g(1-g)$ and $hx-xh=\lambda h(1-g)$, we have $$b=1\quad b(i+\gamma)=\lambda\quad \Rightarrow\quad i+\gamma =\lambda .$$ Conversely, if $\lambda=\gamma+i$ for $i\in{\mathbb{I}}_{0,1}$, then consider the algebra map $\psi:{\widetilde{H}}_{8}(\lambda)\rightarrow{\widetilde{H}}_8(\gamma), g\rightarrow g, h\rightarrow g^ih, x\rightarrow x,y\rightarrow y$. It is easy to see that $\psi$ is a Hopf algebra epimorphism and $\psi|_{({\widetilde{H}}_8(\lambda)_1)}$ is injective. Therefore, ${\widetilde{H}}_8(\lambda)\cong{\widetilde{H}}_8(\gamma)$. Similarly, ${\widetilde{H}}_n(\lambda)\cong{\widetilde{H}}_n(\gamma)$ for $n\in{\mathbb{I}}_{9,10}$, if and only if, $\lambda=\gamma+i$ for $i\in{\mathbb{I}}_{0,1}$; ${\widetilde{H}}_{11}(\lambda,\mu)\cong{\widetilde{H}}_{11}(\gamma,\nu)$ if and only if $\lambda=\gamma+i$ for $i\in{\mathbb{I}}_{0,1}$ and $\mu=\nu$. Assume that $x,y\in V_{g}^{\epsilon}$. Then by Lemma \[lem:p4-x1yu\], the defining relations of $H$ are $$\begin{gathered} g^2=1,\quad h^{2}=1,\quad gx-xg=\lambda_1g(1-g), \quad gy-yg=\lambda_2g(1-g ),\\ hx-xh=\lambda_3h(1-g),\quad hy-yh=\lambda_4h(1-g ),\\ x^2-\lambda_1x=0,\quad y^2- \lambda_2y=0,\quad xy-yx+ \lambda_1y-\lambda_2x=0.\end{gathered}$$ for $\lambda_1,\lambda_2\in{\mathbb{I}}_{0,1},\lambda_3,\cdots,\lambda_5\in{\mathds{k}}$. If $\lambda_1=0=\lambda_2$, then we can take $\lambda_3,\lambda_4\in{\mathbb{I}}_{0,1}$ by rescaling $x,y$, which gives two classes of $H$ described in $(92)$–$(93)$. Let $H:=H(\lambda_3,\lambda_4)$ for convenience. 
Indeed, $H(1,0)\cong H(0,1)$ by swapping $x$ and $y$; $H(1,0)\cong H(1,1)$ via the Hopf algebra isomorphism $\phi:H(1,0)\rightarrow H(1,1)$ defined by $$\phi(g)=g,\quad \phi(h)=h,\quad \phi(x)=x,\quad \phi(y)=x+y.$$ Moreover, $H(0,0)$ and $H(1,0)$ are not isomorphic since $H(0,0)$ is commutative while $H(1,0)$ is not commutative. If $\lambda_1-1=0=\lambda_2$, then we can take $\lambda_4\in{\mathbb{I}}_{0,1}$ by rescaling $y$. If $\lambda_4=0$, then $H\cong{\widetilde{H}}_{12}(\lambda_3)$ described in $(94)$. If $\lambda_4=1$, then $H\cong{\widetilde{H}}_{13}(\lambda_3)$ described in $(95)$. If $\lambda_1=0=\lambda_2-1$, then $H$ is isomorphic to one of the Hopf algebras described in $(94)$–$(95)$ by swapping $x$ and $y$. If $\lambda_1=\lambda_2=1$, then $H$ is isomorphic to one of the Hopf algebras described in $(94)$–$(95)$. Indeed, consider the translation $y:=x+y$, it is easy to see that $H$ is isomorphic to the Hopf algebra defined by $$\begin{gathered} g^2=1,\quad h^{2}=1,\quad gx-xg=g(1-g), \quad gy-yg=0,\quad hx-xh=\lambda_3h(1-g),\\ hy-yh=(\lambda_3+\lambda_4)h(1-g ),\quad x^2-x=0,\quad y^2=0,\quad xy-yx+y=0.\end{gathered}$$ If $\lambda_3+\lambda_4=0$, then $H$ is isomorphic to the Hopf algebra described in $(94)$. If $\lambda_3+\lambda_4\neq 0$, then by rescaling $y$, $H$ is isomorphic to the Hopf algebra described in $(95)$. **Claim:** ${\widetilde{H}}_{n}(\lambda)\cong{\widetilde{H}}_{n}(\gamma)$ for $n\in{\mathbb{I}}_{12,13}$, if and only if, $\lambda=\gamma+i$ for $i\in{\mathbb{I}}_{0,1}$. Assume that $x\in V_{g}^{\epsilon}, y\in V_{h}^{\epsilon}$. Then by Lemma \[lem:p4-xg1yhu\], the defining relations of $H$ are $$\begin{gathered} gx-xg=\lambda_1g(1-g),\quad hx-xh=\lambda_2h(1-g), \quad x^2-\lambda_1x=0\\ gy-yg=\lambda_3g(1-h ), \quad hy-yh=\lambda_4h(1-h ),\quad y^2- \lambda_4y=0,\\ xy-yx-\lambda_3x+ \lambda_2y=\lambda_5(1-gh ).\end{gathered}$$ for some $\lambda_1,\lambda_4\in{\mathbb{I}}_{0,1}$, $\lambda_2,\lambda_3,\lambda_5\in{\mathds{k}}$. The verifications of $(a^2)b=a(ab),a(b^2)=(ab)b$ for $a,b\in\{g,h,x,y\}$ and $a(xy)=(ax)y$ for $a\in\{g,h\}$ amounts to the conditions $$\begin{aligned} (\lambda_1+\lambda_2)\lambda_3=(\lambda_1+\lambda_2)\lambda_2=(\lambda_1+\lambda_2)\lambda_5=0,\\ (\lambda_3-\lambda_4)\lambda_3=(\lambda_3-\lambda_4)\lambda_2=(\lambda_3-\lambda_4)\lambda_5=0.\end{aligned}$$ Then by the Diamond lemma, $\dim H=16$. If $\lambda_1=0=\lambda_4$, then $\lambda_2=0=\lambda_3$ and hence we can take $\lambda_5\in{\mathbb{I}}_{0,1}$ by rescaling $x$, which gives two classes of $H$ described in $(96)$–$(97)$. If $\lambda_1-1=0=\lambda_4$, then $\lambda_2^2=\lambda_2$, $\lambda_3=0$ and $(\lambda_2-1)\lambda_5=0$. Hence we can take $\lambda_2,\lambda_5\in{\mathbb{I}}_{0,1}$ by rescaling $x,y$. If $\lambda_2=0$, then $\lambda_5=0$, which gives one class of $H$ described in $(98)$. If $\lambda_2=1$, then we can take $\lambda_5=0$ via the linear translation $y:=y-\lambda_5(1-h)$, which gives one class of $H$ described in $(99)$. If $\lambda_1=0=\lambda_4-1$, then we obtain two classes of $H$ described in in $(98)$–$(99)$ via the linear translation $g:=h, h:=g, x:=y,y:=x$. If $\lambda_1=1=\lambda_4$, then $\lambda_2=\lambda_3\in{\mathbb{I}}_{0,1}$ and $(1+\lambda_2)\lambda_5=0$. If $\lambda_2=0=\lambda_3$, then $\lambda_5=0$, which gives one class of $H$ described in $(100)$. If $\lambda_2=\lambda_3=1$, then we can take $\lambda_5=0$ via the linear translation $y:=y-\lambda_5(1-h)$, which gives one class of $H$ described in $(101)$. 
Assume that $V$ is an indecomposable object in ${}_{C_2\times C_2}^{C_2\times C_2}\mathcal{YD}$. Then $ {\operatorname{gr}}H={\mathds{k}}\langle g,h,x,y\rangle$, subject to the relations $$\begin{gathered} g^2=h^2=x^2=y^2=1,[g,x]=[h,x]=[g,h]=0,gy=(y+x)g, hy=(y+\lambda x)h,\end{gathered}$$ with $g,h\in{\mathbf{G}}({\operatorname{gr}}H),x,y\in{\mathcal{P}}_{g^kh^l}({\operatorname{gr}}H)$, where $(k,l,\lambda)\in\{(0,0,\lambda),(0,1,0),(1,1,1)\}$. It is easy to see that ${\operatorname{gr}}H$ with $(k,l,\lambda)\in\{(0,1,0),(1,1,1)\}$ are isomorphic. Hence we can take $(k,l,\lambda)\in\{(0,0,\lambda),(0,1,0)\}$. By similar computations as before, we have $$\begin{gathered} gx-xg=\lambda_1g(1-g^kh^l),\quad gy-(y+x)g=\lambda_2g(1-g^kh^l);\\ hx-xh=\lambda_3h(1-g^kh^l),\quad hy-(y+\lambda x)h=\lambda_4h(1-g^kh^l).\end{gathered}$$ If $(k,l,\lambda)=(0,0,\lambda)$, then ${\mathcal{P}}(H)={\mathds{k}}\{x,y\}$ and $x^2,y^2,[x,y]\in{\mathcal{P}}(H)$. Hence $H\cong{\mathds{k}}[C_4]\sharp U^L({\mathcal{P}}(H)$, where $U^L({\mathcal{P}}(H))$ is a restricted universal enveloping algebra of ${\mathcal{P}}(H)$. Then by [@W1 Theorem 7.4], we obtain five classes of $H$ described in $(102)$–$(106)$. If $(k,l,\lambda)=(0,1,0)$, then it follows by a direct computation that $x^2-\lambda_3x,y^2-\lambda_4y,xy-yx-\lambda_4x+\lambda_3y\in{\mathcal{P}}(H)$. Therefore, the defining relations of $H$ are $$\begin{gathered} g^2=h^2=1,\quad gh=hg,\quad gx-xg=\lambda_1g(1-h),\quad gy-(y+x)g=\lambda_2g(1-h),\\ hx-xh=\lambda_3h(1-h),\quad hy-yh=\lambda_4h(1-h),\\ xy-yx-\lambda_4x+\lambda_3y=0,\quad x^2-\lambda_3x=0,\quad y^2-\lambda_4y=0.\end{gathered}$$ The verifications of $(a^2)b=a(ab),(ab)b=a(b^2)$ for $a,b\in\{g,h,x,y\}$ and $a(xy)=(ax)y$ for $a\in\{g,h\}$ amounts to the conditions $$\begin{gathered} \lambda_1=0=\lambda_3.\end{gathered}$$ By Diamond Lemma, $\dim H=16$. We can take $\lambda_2=0$ via the linear translation $x:=x+\lambda_2(1-h)$ and take $\lambda_4\in{\mathbb{I}}_{0,1}$ by rescaling $x,y$, which gives two classes of $H$ described in $(107)$–$(108)$. Coradical ${\mathds{k}}[C_2]$ ----------------------------- Then by Lemma \[lem:cyclic-groups-dimV=3\], $V\cong M_{i,1}\oplus M_{j,1}\oplus M_{k,1}$ for $i,j,k\in{\mathbb{I}}_{0,p-1}$ or $ M_{0,1}\oplus M_{0,2}$ and hence ${\mathcal{B}}(V)\cong{\mathds{k}}[x,y,z]/(x^p,y^p,z^p)$. Assume that $V\cong M_{i,1}\oplus M_{j,1}\oplus M_{k,1}$ for $i,j,k\in{\mathbb{I}}_{0,1}$. Then $$\begin{aligned} {\operatorname{gr}}H={\mathds{k}}\langle g,x,y,z\mid g^2=1,[g,x]=[g,y]=[g,z]=x^2=y^2=z^2=[x,y]=[x,z]=[y,z]=0\rangle,\end{aligned}$$ with $g\in{\mathbf{G}}(H)$, $x\in{\mathcal{P}}_{1,g^i}(H)$, $y\in{\mathcal{P}}_{1,g^j}(H)$ and $z\in{\mathcal{P}}_{1,g^k}(H)$. Up to isomorphism, we may assume that $(i,j,k)=(0,0,0)$, $(1,1,1)$, $(1,1,0)$ and $(1,0,0)$. Assume that $(i,j,k)=(0,0,0)$. Then $H\cong{\mathds{k}}[C_2]\otimes U^L({\mathcal{P}}(H))$, where $U^L({\mathcal{P}}(H))$ is a restricted universal enveloping algebra of ${\mathcal{P}}(H)$. Then by [@NWW1 Theorem 1.4], we obtain fourteen classes of $H$ described in $(109)$–$(122)$. Assume that $(i,j,k)=(1,1,1)$. Then by Lemma \[lem:p4-x1y1z1\], the defining relations of $H$ are $$\begin{gathered} g^2=1,\quad gx-xg=\lambda_1g(1-g),\quad gy-yg=\lambda_2g(1-g),\quad gz-zg=\lambda_3g(1-g),\\ x^2-\lambda_1x=0,\quad y^2-\lambda_2y=0,\quad z^2-\lambda_3z=0,\quad xy-yx-\lambda_2x+\lambda_1y=0,\\ xz-zx-\lambda_3x+\lambda_1z=0,\quad yz-zy-\lambda_3y+\lambda_2z=0.\end{gathered}$$ for $\lambda_1,\lambda_2,\lambda_3\in{\mathbb{I}}_{0,2}$. 
Let $H(\lambda_1,\lambda_2,\lambda_3):=H$ for convenience. We claim that $H(1,0,0)\cong H(1,1,0)$. Indeed, consider the algebra map $\phi:H(1,0,0)\rightarrow H(1,1,0), g\rightarrow g,x\rightarrow x,y\rightarrow x+y, z\rightarrow z$. Then it is easy to see $\phi$ is a Hopf algebra epimorphism and $\phi|_{(H(1,0,0))_1}$ is injective, which implies that the claim follows. Similarly, $H(1,1,1)\cong H(1,1,0)\cong H(1,0,0)$. Observe that $H(0,0,0)$ is commutative and $H(1,0,0)$ is not commutative. Hence $H\cong H(0,0,0)$ or $H(1,0,0)$ described in $(123)$ or $(124)$. Assume that $(i,j,k)=(1,1,0)$. Then by Lemma \[lem:p4-x1y1z0\], the defining relations of $H$ are $$\begin{gathered} g^2=1,\quad gx-xg=\lambda_1g(1-g),\quad gy-yg=\lambda_2g(1-g),\quad gz-zg=0,\\ x^2-\lambda_1x=\lambda_3z,\quad y^2-\lambda_2y=\lambda_4z,\quad z^2=\lambda_5z,\\ xz-zx=\gamma_1x+\gamma_2y+\gamma_3(1-g),\quad yz-zy=\gamma_4x+\gamma_5y+\gamma_6(1-g),\\ xy-yx-\lambda_2x+\lambda_1y=\lambda_6z.\end{gathered}$$ for $\lambda_1,\lambda_2,\lambda_5\in{\mathbb{I}}_{0,1}$ and $\lambda_3,\lambda_4,\lambda_6,\gamma_1,\cdots,\gamma_6\in{\mathds{k}}$ with the ambiguity conditions given by –. Suppose that $\lambda_1=0=\lambda_2$. Then by rescaling $x,y$, we can take $\lambda_3,\lambda_4\in{\mathbb{I}}_{0,1}$. If $\lambda_3=0=\lambda_4$, then $\lambda_6\gamma_i=0$ for all $i\in{\mathbb{I}}_{0,1}$ and by rescaling $x$, we can take $\lambda_6\in{\mathbb{I}}_{0,1}$. If $\lambda_6=1$, then $\gamma_i=0$ for all $i\in{\mathbb{I}}_{1,6}$, that is, $[x,z]=0=[y,z]$ in $H$. Then $H$ depends on $\lambda_6\in{\mathbb{I}}_{0,1}$, that is, $H$ is isomorphic to one of the Hopf algebras described in $(125)$–$(126)$. If $\lambda_6=0=\lambda_5$, then $\gamma_1^2=\gamma_2\gamma_4=\gamma_5^2$, $\gamma_5\gamma_6=\gamma_3\gamma_4$, $\gamma_1\gamma_3=\gamma_2\gamma_6$, $(\gamma_1-\gamma_5)\gamma_2=0=(\gamma_1-\gamma_5)\gamma_4$ and by rescaling $x,y$, we can take $\gamma_2,\gamma_4\in{\mathbb{I}}_{0,1}$. If $\gamma_2=0=\gamma_4$, then $\gamma_1=0=\gamma_5$ and we can take $\gamma_3,\gamma_6\in{\mathbb{I}}_{0,1}$. Let $H(\gamma_3,\gamma_6):=H$ for convenience. It is easy to see that $H(0,1)\cong H(1,0)$ by swapping $x$ and $y$ and $H(1,1)\cong H(1,0)$ via the linear translation $y:=y+x$. Observe that $H(0,0)$ is commutative while $H(1,0)$ is not commutative. Therefore, $H$ is isomorphic to one of the Hopf algebras described in $(127)$–$(128)$. If $\gamma_2-1=0=\gamma_4$, then $\gamma_1=\gamma_5=\gamma_6=0$ and hence we can take $\gamma_3=0$ via the linear translation $y:=y+\gamma_3(1+g)$, which gives one class of $H$ described in $(129)$. If $\gamma_2=0=\gamma_4-1$, then $H$ is isomorphic to the Hopf algebra described in $(129)$ by swapping $x$ and $y$. If $\gamma_2=1=\gamma_4$, then $H$ is isomorphic to the Hopf algebra described in $(129)$ via the linear translation $y:=y+x$. If $\lambda_6=0=\lambda_5-1$, then $(1-\gamma_1)\gamma_1=\gamma_2\gamma_4=(1-\gamma_5)\gamma_5$, $(1+\gamma_1+\gamma_5)\gamma_2=0=(1+\gamma_1+\gamma_5)\gamma_4$, $(1-\gamma_1)\gamma_3=\gamma_2\gamma_6$, $(1-\gamma_5)\gamma_6=\gamma_3\gamma_4$. If $\gamma_2=0=\gamma_4$, then $\gamma_1,\gamma_5\in{\mathbb{I}}_{0,1}$, $(1-\gamma_1)\gamma_3=0=(1-\gamma_5)\gamma_6$. Moreover, we can take $\gamma_3=0=\gamma_6$. Indeed, if $\gamma_1=0$ or $\gamma_5=0$, then $\gamma_3=0$ or $\gamma_6=0$; if $\gamma_1=1$ or $\gamma_5=1$, then we can take $\gamma_3=0$ or $\gamma_6=0$ via the linear translation $x:=x+\gamma_3(1-g)$ or $y:=y+\gamma_6(1-g)$. 
Observe that the Hopf algebras with $\gamma_1-1=0=\gamma_5$ and $\gamma_1=0=\gamma_5-1$ are isomorphic by swapping $x$ and $y$. Then $H$ is isomorphic to one of the Hopf algebras described in $(130)$–$(132)$. If $\gamma_2-1=0=\gamma_4$, then $\gamma_1,\gamma_5\in{\mathbb{I}}_{0,1}$, $\gamma_1+\gamma_5=1$, $(1-\gamma_1)\gamma_3=\gamma_6$, $(1-\gamma_5)\gamma_6=0$. If $\gamma_1=1$, then $\gamma_5=0=\gamma_6$ and hence $H$ is isomorphic to the Hopf algebra described in $(131)$ via the linear translation $x:=x+y+\gamma_3(1-g)$. If $\gamma_1=0$, then $\gamma_5=1$, $\gamma_3=\gamma_6$ and hence $H$ is isomorphic to the Hopf algebra described in $(131)$ via the linear translation $x:=y+\gamma_3(1-g),y:=x+y+\gamma_3(1-g)$. Similarly, if $\gamma_2=\gamma_4-1$ or $\gamma_2=1=\gamma_4$, $H$ is isomorphic to the Hopf algebra described in $(131)$. If $\lambda_3-1=0=\lambda_4$, then $\gamma_i=0$ for all $i\in{\mathbb{I}}_{1,6}$ and hence $H$ is isomorphic to one of the Hopf algebras described in $(133)$–$(136)$. If $\lambda_3=0=\lambda_4-1$ or $\lambda_3=1=\lambda_4$, then similar to the last case, $H$ is isomorphic to one of the Hopf algebra described in $(133)$–$(136)$. Suppose that $\lambda_1-1=0=\lambda_2$. Then $\gamma_1=0=\gamma_4$ and by rescaling $y$, we can take $\lambda_4\in{\mathbb{I}}_{0,1}$. If $\lambda_4=0$, then $\lambda_6\gamma_i=0$ for all $i\in{\mathbb{I}}_{1,6}-\{3\}$ and by rescaling $y$, we can take $\lambda_6\in{\mathbb{I}}_{0,1}$. Observe that $\gamma_3=\lambda_3\gamma_6$ and $\lambda_3\gamma_3=0$. If $\lambda_6=1$, then $\gamma_i=0$ for all $i\in{\mathbb{I}}_{1,6}$ and we can take $\lambda_3=0$ via the linear translation $x:=x-\lambda_3 y$. Therefore, we obatin two classes of $H$ described in $(137)$–$(138)$. If $\lambda_6=0=\lambda_5$, then $\lambda_3\gamma_i=0$ for all $i\in{\mathbb{I}}_{1,6}$. If $\lambda_3=0$, then $\gamma_5=0$, $\gamma_2\gamma_6=0$ and by rescaling $y,z$, we can take $\gamma_2,\gamma_6\in{\mathbb{I}}_{0,1}$. If $\gamma_2=0$, then we can take $\gamma_3\in{\mathbb{I}}_{0,1}$. Let $H(\gamma_3,\gamma_6):=H$ for convenience. Then it is easy to see that $H(1,1)\cong H(0,1)$ via the linear translation $x:=x+y$. Therefore, $H$ is isomorphic to one of the Hopf algebras described in $(139)$–$(141)$. If $\gamma_2=1$, then $\gamma_6=0$ and hence we can take $\gamma_3=0$ via the linear translation $y:=y+\gamma_3(1-g)$, which gives one class of $H$ described in $(142)$. If $\lambda_3\neq 0$, then $\gamma_i=0$ for all $i\in{\mathbb{I}}_{1,6}$ and we can take $\lambda_3=1$ by rescaling $z$, which gives two classes of $H$ described in $(143)$. If $\lambda_6=0=\lambda_5-1$, then $\lambda_3\gamma_5=0$, $(1-\gamma_5)\gamma_5=0$, $(1+\gamma_5)\gamma_2=0$, $\gamma_3=\gamma_2\gamma_6$, $(1-\gamma_5)\gamma_6=0$. If $\gamma_5=1$, then $\lambda_3=0$, $\gamma_3=\gamma_2\gamma_6$ and we can take $\gamma_6=0=\gamma_3$ via the linear translation $y:=y+\gamma_6(1-g)$. Indeed, if $\gamma_2=0$, then $\gamma_3=0$; if $\gamma_2\neq 0$, then $\gamma_3=\gamma_2\gamma_6$ and hence the translation is well-defined. Then $H$ is isomorphic to the Hopf algebra described as follows: - ${\mathds{k}}\langle g,x,y,z\rangle/(g^2-1,[g,x]-g(1-g),[g,y],[g,z],[x,y]-y,[x,z]-\gamma_2y,[y,z]-y,x^2-x,y^2,z^2-z)$. We can take $\gamma_2=0$ via the linear translation $x:=x+\gamma_2y$. Indeed, it follows by a direct computation that the translation is a well-defined Hopf algebra isomorphism. Therefore, $H$ is isomorphic to the Hopf algebra described in $(144)$. 
If $\gamma_5=0$, then $\gamma_2=0=\gamma_3=\gamma_6$, and hence $H\cong{\widetilde{H}}_{14}(\lambda_3)$ described in $(145)$. If $\lambda_4=1$, then $\gamma_i=0$ for $i\in{\mathbb{I}}_{1,6}$. If $\lambda_5=0=\lambda_6$, then we can take $\lambda_3=0$ via the linear translation $x:=x+\alpha y$ satisfying $\alpha^2=\lambda_3$, which gives one class of $H$ described in $(146)$. If $\lambda_5=0$ and $\lambda_6\neq 0$, then by rescaling $y,z$, we can take $\lambda_6=1$. Moreover, we can take $\lambda_3=0$ via the linear translation $x:=x+\alpha y$ satisfying $\alpha^2+\alpha=\lambda_3$, which gives one class of $H$ described in $(147)$. If $\lambda_5=1$, then we can take $\lambda_3=0$ via the linear translation $x:=x+\alpha y$ satisfying $\alpha^2+\lambda_6\alpha=\lambda_3$ and hence $H\cong{\widetilde{H}}_{15}(\lambda_6)$ described in $(148)$. Suppose that $\lambda_1=0=\lambda_2-1$ or $\lambda_1=1=\lambda_2$. Then it can be reduced to the case $\lambda_1-1=0=\lambda_2$ by swapping $x$ and $y$ or via the linear translation $y:=x+y$, respectively. **Claim:** ${\widetilde{H}}_{14}(\lambda)\cong{\widetilde{H}}_{14}(\gamma)$ or ${\widetilde{H}}_{15}(\lambda)\cong{\widetilde{H}}_{15}(\gamma)$, if and only if, $\lambda=\gamma$. Suppose that $\phi:{\widetilde{H}}_{15}(\lambda)\rightarrow {\widetilde{H}}_{15}(\gamma)$ for $\lambda,\gamma\in{\mathds{k}}$ is a Hopf algebra isomorphism. Then $\phi|_{C_2}:C_2 \rightarrow C_2$ is an automorphism. Write $g^{\prime},x^{\prime},y^{\prime},z^{\prime}$ to distinguish the generators of ${\widetilde{H}}_{15}(\gamma)$. Since spaces of skew-primitive elements of ${\widetilde{H}}_{15}(\gamma)$ are trivial except ${\mathcal{P}}_{1, g^{\prime} }({\widetilde{H}}_8(\gamma))={\mathds{k}}\{x^{\prime},y^{\prime}\}\oplus {\mathds{k}}\{1- g^{\prime} \}$ and ${\mathcal{P}}({\widetilde{H}}_{15}(\gamma))={\mathds{k}}\{z^{\prime}\}$, it follows that $$\phi(g)=g^{\prime}, \quad \phi(x)=\alpha_1x^{\prime}+\alpha_2y^{\prime}+\alpha_3(1-g^{\prime}),\quad\phi(y)=\beta_1x^{\prime}+\beta_2y^{\prime}+\beta_3(1-g^{\prime}),\quad \phi(z)=kz^{\prime}$$ for some $\alpha_i,\beta_i, k\in{\mathds{k}}$ and $i\in{\mathbb{I}}_{1,3}$. Then applying $\phi$ to the relations $gx-xg=g(1-g)$, $z^2=z$, $x^2=x$ and $[g,y]=0$, we have $$\alpha_1=0,\quad k=1,\quad \alpha_2^2+\alpha_2\gamma=0,\quad \beta_1=0.$$ Then applying $\phi$ to the relation $[x,y]-y-\lambda z$, we have $$\begin{aligned} \lambda=\gamma.\end{aligned}$$ Conversely, it is easy to see that ${\widetilde{H}}_{15}(\lambda)\cong{\widetilde{H}}_{15}(\gamma)$ if $\lambda=\gamma$. Similarly, ${\widetilde{H}}_{14}(\lambda)\cong{\widetilde{H}}_{14}(\gamma)$ if and only if $\lambda=\gamma$. Assume that $(i,j,k)=(1,0,0)$. Then by Theorem \[thm:p4-x1y0z0\], $H$ is isomorphic to one of the Hopf algebras described in $(149)$–$(183)$. Assume that $V\cong M_{0,1}\oplus M_{0,2}$. Then ${\operatorname{gr}}H={\mathds{k}}\langle g,x,y,z\rangle$, subject to the relations $$\begin{gathered} g^2=x^2=y^2=z^2=1,\quad [g,x]=[g,y]=[x,y]=[x,z]=[y,z]=0,\quad gz-(z+y)g=0,\end{gathered}$$ with $g\in{\mathbf{G}}({\operatorname{gr}}H),x,y,z\in{\mathcal{P}}({\operatorname{gr}}H)$. It follows by a direct computation that $$\begin{gathered} gx-xg=0,\quad gy-yg=0,\quad gz-(z+y)g=0;\\ x^2,y^2,z^2,[x,y],[x,z],[y,z]\in{\mathcal{P}}(H).\end{gathered}$$ Then $H\cong{\mathds{k}}[C_2]\sharp U^L({\mathcal{P}}(H))$, where $U^L({\mathcal{P}}(H))$ is a restricted universal enveloping algebra of ${\mathcal{P}}(H)$. 
Then by [@NWW1 Theorem 1.4], we obtain fourteen classes of $H$ described in $(184)$–$(197)$. **ACKNOWLEDGMENT** The essential part of this article was written during the visit of the author to University of Padova supported by China Scholarship Council (Grant No. 201706140160) and the NSFC (Grant No. 11771142). The author would like to thank his supervisors Profs. G. Carnovale, N. Hu and Prof. G. A. Garcia so much for the help and encouragement. [50]{} N. Andruskiewitsch, I. Angiono and I. Heckenberger, *Examples of finite-dimensional pointed Hopf algebras in positive characteristic*. Preprint: arXiv:1905.03074. N. Andruskiewitsch and H. J. Schneider, *Lifting of quantum linear spaces and pointed Hopf algebras of order $p^3$*, J. Algebra **209** (1998), 658–691. —, *Finite quantum groups and Cartan matrices*, Adv. Math. **154** (2000), 1–45. —, Lifting of Nichols algebras of type $A_2$ and pointed Hopf algebras of order $p^4$. In: Caenepeel, S., van Oystaeyen, F., eds. Hopf Algebras and Quantum Groups: Proceedings of the Brussels Conference, Lecture Notes in Pure and Appl. Math. New York-Basel: Marcel Dekker, 209:1–14. —, *Pointed Hopf algebras*, New directions in Hopf algebras, 1–68, Math. Sci. Res. Inst. Publ., 43, Cambridge Univ. Press, Cambridge, 2002. V. A. Bashev, Representations of the group $Z_2\times Z_2$ in a field of characteristic 2, Dokl. Akad. Nauk SSSR **141** (5) (1961), 1015–1018. G. Bergman, *The diamond lemma for ring theory*, Adv. Math. **29** (1978), 178–218. M. Beattie, S. Dăscălescu and L. Grunenfelder, *Constructing pointed Hopf algebras via Ore extensions*, J. Algebra, **225** (2000), 743–770. S. Caenepeel, S. Dăscălescu and S. Raianu, *Classifying pointed Hopf algebras of dimension $16$*, Comm. Algebra **28** (2) (2000), 541–568. C. Cibils, A. Lauve and S. Witherspoon, *Hopf quivers and Nichols algebras in positive characteristic*, Proc. Amer. Math. Soc. **137** (2009), 4029–4041. J. Dong and H. Chen, *The representations of Quantum double of Dihedral groups*, Algebra Colloquium **20** (2013), 95–108. M. Graña, *Freeness theorem for Nichols algebras*, J. Algebra **231** (1) (2000), 235–257. G. Henderson, *Low dimensional cocommutative connected Hopf algebras*, J. Pure Appl. Algebra **102** (1995), 173–193. N. Hu and X. Wang, *Quantizations of generalized-Witt algebra and of Jacobson-Witt algebra in the modular case*, J. Algebra **312** (2) (2007), 902–929. N. Hu and X. Wang, *Twists and quantizations of Cartan type S Lie algebras*, J. Pure Appl. Algebra **215** (6) (2011), 1205–1222. N. Jacobson, *Lie Algebras*, Dover Publications Inc., New York, 1979. G. Mason, *The quantum double of a finite group and its role in conformal field theory*, in “Proceedings, London Mathematical Society Conference, Galway, 1993,” Groups 93, London Mathematical Society Lecture Note Series 212, pp. 405–417, Cambridge Univ. Press, Cambridge, 1995. W. D. Nichols, *Bialgebras of type one*, Comm. Algebra **6** (15) (1978), 1521–1552. S. H. Ng and X. Wang, *Hopf algebras of prime dimension in positive characteristic*. Preprint: arXiv:1810.00476. V. C. Nguyen, L. Wang and X. Wang, *Classification of connected Hopf algebras of dimension $p^3$ I*, J. Algebra **424** (2015), 473–505. V. C. Nguyen, L. Wang and X. Wang, *Primitive deformations of quantum p-groups*, Algebr. Represent. Theor. (2018). https://doi.org/10.1007/s10468-018-9800-x. V. C. Nguyen and X. Wang, *Pointed $p^3$-dimensional Hopf algebras in positive characteristic*, Algebra Colloquium **25** (3) (2018), 399–436. W. D. 
Nichols and M. B. Zoeller, *A Hopf algebra freeness theorem*, Amer. J. of Math. **111** (2) (1989), 381–385. D. E. Radford, *The structure of Hopf algebras with a projection*, J. Algebra **92** (2) (1985), 322–347. —, *Hopf algebras*, Series on Knots and Everything, **49**, World Scientific Publishing Co. Pte. Ltd., Singapore, 2012. S. Scherotzke, *Classification of pointed rank one Hopf algebras*, J. Algebra **319** (2008), 2889–2912. D. Ştefan and F. van Oystaeyen, *Hochschild cohomology and the coradical filtration of pointed coalgebras: applications*, J. Algebra **210** (1998), 535–556. M. Takeuchi, *Survey of braided Hopf algebras*, Contemp. Math. **267** (2000), 301–323. Z. Tong and N. Hu, *Modular quantizations of Lie algebras of Cartan type K via Drinfeld twists of Jordanian type*, J. Algebra **450** (2016), 102–151. Z. Tong, N. Hu and X. Wang, *Modular quantizations of Lie algebras of Cartan type H via Drinfel’d twists*, Lie algebras and related topics, 173–206, Contemp. Math. 652, Amer. Math. Soc. Providence, RI, 2015. J. Wang and I. Heckenberger, *Rank 2 Nichols algebras of diagonal type over fields of positive characteristic*, SIGMA Symmetry Integrability Geom. Methods Appl. **011** (2015), 24 pages. J. Wang, *Rank three Nichols algebras of diagonal type over arbitrary fields*, Isr. J. Math. **218** (1) (2017), 1–26. L. Wang and X. Wang, *Classification of pointed Hopf algebras of dimension $p^2$ over any algebraically closed field*, Algebr. Represent. Theory **17** (2014), 1267–1276. X. Wang, *Connected Hopf algebras of dimension $p^2$*, J. Algebra **391** (2013), 93–113. X. Wang, *Isomorphism classes of finite-dimensional connected Hopf algebras in positive characteristic*, Adv. Math. **281** (2015), 594–623. D. G. Wang, J. J. Zhang and G. Zhuang, *Primitive cohomology of Hopf algebras*, J. Algebra **464** (2016), 36–96. S. J. Witherspoon, *The representation ring of the Quantum Double of a finite group*, J. Algebra **179** (1996), 305–329. R. Xiong, Pointed $p^2q$-dimensional Hopf algebras in positive characteristic. Preprint: arXiv:1705.00339.
--- abstract: 'In this article we construct a function with infinitely many vanishing (generalized) moments. This is motivated by an application to the Taylorlet transform which is based on the continuous shearlet transform. It can detect curvature and other higher order geometric information of singularities in addition to their position and the direction. For a robust detection of these features a function with higher order vanishing moments, $\int_{\ensuremath{\mathbb{R}}}g(x^k)x^mdx=0$, is needed. We show that the presented construction produces an explicit formula of a function with $\infty$ many vanishing moments of arbitrary order and thus allows for a robust detection of certain geometric features. The construction has an inherent connection to q-calculus, the Euler function and the partition function.' address: 'Universität Passau, Universidade de Aveiro' author: - 'T. Fink' - 'U. Kähler' bibliography: - 'euler.bib' title: 'A space-based method for the generation of a Schwartz function with infinitely many generalized vanishing moments with applications in image processing' --- Introduction ============ Vanishing moment conditions give orthogonality with respect to subspaces of polynomials and therefore play a vital role in many areas of analysis. Especially in wavelet and shearlet theory they are of pivotal importance. A wavelet needs vanishing moments to enable a detection of the regularity of the analyzed signal [@MaHw92 Thm 2]. Similarly, for the resolution of the wavefront set by the continuous shearlet transform, a shearlet has to incorporate so called vanishing directional moments [@gr11 Thm 6.1 $\&$ 6.4]. For a shearlet $\psi\in L^2({\ensuremath{\mathbb{R}}}^2)$, they are of the form $\int_{\ensuremath{\mathbb{R}}}\psi(x_1,x_2)x_1^m dx_1=0$ for all $x_2\in{\ensuremath{\mathbb{R}}}$. While the continuous shearlet transform allows for a detection of the position and direction of a singularity, the recently created Taylorlet transform additionally allows for a detection of the curvature and other higher order geometric information of singularities. The Taylorlets inherit the properties of the shearlets and extend them by utilizing shears of higher order, ie $$S_s(x):=\begin{pmatrix} x_1+\sum_{k=0}^n \frac{s_k}{k!}\cdot x_2^k \\ x_2 \end{pmatrix}\quad\text{for } x\in{\ensuremath{\mathbb{R}}}^2\text{ and }s=(s_0,\ldots,s_n)\in{\ensuremath{\mathbb{R}}}^{n+1}.$$ The Taylorlet transform of a function $f\in L^2({\ensuremath{\mathbb{R}}}^2)$ is defined as $L^2$-scalar product of $f$ and a dilated, translated and sheared version of a Taylorlet. This transform allows for a detection of certain geometric features of the singularities of the analyzed function by observing the transform’s decay rate for decreasing scales. The decay rate depends on the choice of the translation and shear parameters and on the so called vanishing directional moments of higher order of a Taylorlet $\tau\in{\mathcal{S}}({\ensuremath{\mathbb{R}}}^2)$, ie conditions of the form $\int_{\ensuremath{\mathbb{R}}}\tau(\pm x_1^k,x_2)x_1^mdx_1=0$ for all $x_2\in{\ensuremath{\mathbb{R}}}$, where $k,m\in{\ensuremath{\mathbb{N}}}$ [@F17]. Thus, a function with infinitely many vanishing moments of higher order is both of great theoretical interest and very useful for a robust detection of geometric information. For the construction of functions with vanishing moments there exist two classical approaches. 
The first method uses an arbitrary Schwartz function $f\in{\mathcal{S}}({\ensuremath{\mathbb{R}}})$, whose $n^{\text{th}}$ derivative has $n$ vanishing moments. The most famous example is probably the Mexican hat wavelet which is the second derivative of the Gaussian and thus exhibits 2 vanishing moments. The drawback of this approach is its limited use, if one is interested in a function with infinitely many vanishing moments. To this end, a Fourier ansatz is more convenient. Since the number of vanishing moments of a function coincides with the order of the Fourier transform’s root in the origin, it suffices to construct a function in the Fourier domain which vanishes with a proper order in the origin. A well known example of this method is the Meyer wavelet, whose Fourier support is $\left[-\frac{8\pi}3,-\frac{2\pi}3\right]\cup\left[\frac{2\pi}3,\frac{8\pi}3\right]$. Hence, the Meyer wavelet exhibits infinitely many vanishing moments. Yet, under certain circumstances an explicit formula for such a function in space domain is preferable over a Fourier construction. For instance, the higher order vanishing moment conditions $\int_{\ensuremath{\mathbb{R}}}g(x^k)x^mdx=0$ do not interact well with the Fourier transform due to their non-linear nature and hence the construction and explicit expression of a function incorporating these conditions is easier to achieve and to apply in space domain. Hence, we consider a construction yielding an explicit formula in space domain for a function with infinitely many vanishing moments. The generation of vanishing moments is achieved by considering linear combinations of dilations of a function. This process yields a structure which can be studied by applying a q-calculus of operator-valued functions. This calculus is a variation of the classical analysis and resembles the finite difference calculus, but uses a multiplicative notation instead. Eg, for $q>0$, the q-derivative of a function $f\in C({\ensuremath{\mathbb{R}}})$ in $x\in{\ensuremath{\mathbb{R}}}$ is defined as $$d_q f(x) = \frac{f(qx)-f(x)}{qx-x}.$$ This calculus recently gained interest due to its applications in quantum mechanics. The construction we present in this article has an inherent connection to the q-Pochhammer symbol and the Euler function. The latter itself incorporates a deep link to the theory of partitions. This article is structured as follows. In the second section, we define the Taylorlet transform and show its most important properties. Section 3 contains a small introduction into q-calculus. The fourth section is dedicated to the construction of Schwartz functions with infinitely many vanishing moments and highlights its connection to q-calculus. In section 5, we give a numerical analysis of the evaluation of the function constructed in the previous section and show numerical examples of its application to the Taylorlet transform. Finally, the last section incorporates a conclusion of the article and an outline of a possible generalization of the wavefront set. Taylorlets and higher order vanishing moments ============================================= A classical result in the theory of shearlets which proves its value is the resolution of the wavefront set by the continuous shearlet transform which was shown by Kutyniok and Labate [@KuLa09]. 
Suppose the function $f\in{\mathcal{S}}'\left({\ensuremath{\mathbb{R}}}^2\right)$ analyzed with the continuous shearlet transform has a singular support that can be represented as graph of a function $q\in C^\infty({\ensuremath{\mathbb{R}}})$, eg $$f(x)={\mathbbm{1}}_{{\ensuremath{\mathbb{R}}}_+}\left(x_1-q(x_2)\right)\quad\text{for all }x\in{\ensuremath{\mathbb{R}}}^2.$$ We will call $q$ the singularity function. Under these circumstances, the wavefront set of $f$ can be interpreted as a linear approximation of $q$. In this scenario, the shearlet transform $\mathcal{SH}_\psi f(a,s,t)$ does not decay faster than any polynomial for $a\to 0$, if and only if $q(t_2)=t_1$ and $q'(t_2)=s$. In other words, the continuous shearlet transform provides the first two Taylor coefficients of $q$ at the point $t_2$. The Taylorlet transform expands this idea by supplying means to detect arbitrary Taylor coefficients of the singularity function $q$. We say that a function $f:{\ensuremath{\mathbb{R}}}\to{\ensuremath{\mathbb{R}}}$ has $M$ vanishing moments of order $n$ if $$\int_{\ensuremath{\mathbb{R}}}f\big(\pm t^k\big)t^m dt = 0$$ for all $m\in\{0, \ldots, kM-1\}$ and for all $k\in\{1,\ldots,n\}.$ Let ${\mathcal{S}}({\ensuremath{\mathbb{R}}}^{d})$ denote the Schwartz space on ${\ensuremath{\mathbb{R}}}^{d}$ and let $g,h\in{\mathcal{S}}({\ensuremath{\mathbb{R}}})$ such that $g$ has $M$ vanishing moments of order $n$. The space of Schwartz functions with infinitely many vanishing moments of order $n$ will be denoted as ${\mathcal{S}}^*_n({\ensuremath{\mathbb{R}}}).$ We call the function $$\tau=g\otimes h$$ an analyzing Taylorlet of order $n$ with $M$ vanishing moments in $x_1$-direction. We say $\tau$ is restrictive, if additionally 1. $g(0)\ne 0$ and $\int_0^\infty g(t)t^jdt\ne 0$ for all $j\in\{0,\ldots,M-1\}$ and 2. $\int_{\ensuremath{\mathbb{R}}}h(t)dt\ne 0$.

[Figure: plot of the function $x\mapsto\frac 1{630}\,e^{-x^2}(1+x)\left(315 - 51660x^2 + 286020x^4 - 349440x^6 + 142464x^8 - 21504x^{10} + 1024x^{12}\right)$.]

The Taylorlet transform is defined as follows. Let $n\in{\ensuremath{\mathbb{N}}}$ and let $\tau\in{\mathcal{S}}({\ensuremath{\mathbb{R}}}^2)$ be an analyzing Taylorlet of order $n$. Let $\alpha>0$, $t\in{\ensuremath{\mathbb{R}}}$, $a>0$ and $s\in{\ensuremath{\mathbb{R}}}^{n+1}$.
We define $$\tau^{(n,\alpha)}_{a,s,t}(x):= \tau{\begin{pmatrix}\left[x_1-\sum_{k=0}^n \frac {s_k}{k!}\cdot (x_2-t)^k\right]/a \\ (x_2-t)/a^\alpha\end{pmatrix}}\quad\text{for all }x=(x_1,x_{2})\in{\ensuremath{\mathbb{R}}}^{2}.$$ The Taylorlet transform wrt $\tau$ of a tempered distribution $f\in {\mathcal{S}}'({\ensuremath{\mathbb{R}}}^2)$ is defined as $$\begin{aligned} \mathcal{T}^{(n,\alpha)}:{\mathcal{S}}'\left({\ensuremath{\mathbb{R}}}^2\right)\to C^\infty\left({\ensuremath{\mathbb{R}}}_+\times{\ensuremath{\mathbb{R}}}^{n+1}\times{\ensuremath{\mathbb{R}}}\right),\quad \mathcal{T}^{(n,\alpha)}f(a,s,t) = {\left\langle}f, \tau^{(n,\alpha)}_{a,s,t}{\right\rangle}.\end{aligned}$$

| Analyzing function | Moment condition | Detected geometric features |
|---|---|---|
| Shearlet $\psi\in{\mathcal{S}}\left({\ensuremath{\mathbb{R}}}^2\right)$ | $\int_{\ensuremath{\mathbb{R}}}\psi(x_1,x_2)x_1^mdx_1=0$ for all $m\in{\ensuremath{\mathbb{N}}}$ | Position and direction of singularities |
| Taylorlet $\tau=g\otimes h\in{\mathcal{S}}\left({\ensuremath{\mathbb{R}}}^2\right)$ | $\int_{\ensuremath{\mathbb{R}}}g(t)t^mdt=0=\int_{\ensuremath{\mathbb{R}}}g(\pm t^2)t^m dt$ for all $m\in{\ensuremath{\mathbb{N}}}$ | Position, direction and curvature of singularities |
| Taylorlet $\tau=g\otimes h\in{\mathcal{S}}\left({\ensuremath{\mathbb{R}}}^2\right)$ | $\int_{\ensuremath{\mathbb{R}}}g(\pm t^k)t^mdt=0$ for all $k\in\{1,\ldots,n\}$, $m\in{\ensuremath{\mathbb{N}}}$ | First $n+1$ Taylor coefficients of the singularity function |

: Moment conditions and detection results

The Taylorlet transform allows for the following detection result. \[detect\] Let $M,n\in{\ensuremath{\mathbb{N}}}$ and let $\tau$ be an analyzing Taylorlet of order $n$ with $M$ vanishing moments in $x_1$-direction. Let furthermore $0\le j< M-1$, $t\in{\ensuremath{\mathbb{R}}}$ and let $q\in C^\infty({\ensuremath{\mathbb{R}}})$ be the singularity function of $$f(x)=\left[x_1-q(x_2)\right]^{j}\cdot {\mathbbm{1}}_{{\ensuremath{\mathbb{R}}}_\pm}\left(x_1-q(x_2)\right).$$ 1. Let $\alpha>0$. If $s_0\ne q(t)$, the Taylorlet transform has a decay of $$\mathcal{T}^{(n,\alpha)}f(a,s,t)={\mathcal{O}}\big(a^N\big)\quad\text{for }a\to 0$$ for all $N>0$. 2. Let $\alpha<\frac 1n$, $k<n$ and let $s_\ell=q^{(\ell)}(t)$ for all $\ell\in\{0,\ldots,k-1\}$. Then the Taylorlet transform has the decay property $$\mathcal{T}^{(n,\alpha)}f(a,s,t)={\mathcal{O}}\left(a^{j-1+(M-j-1)[1-(k+1)\alpha]}\right)\quad\text{for }a\to 0.$$ 3. Let $\alpha>\frac 1{n+1}$ and let $\tau$ be restrictive. If $s_\ell=q^{(\ell)}(t)$ for all $\ell\in\{0,\ldots,n\}$, then the Taylorlet transform has the decay property $$\mathcal{T}^{(n,\alpha)}f(a,s,t)\sim a^{j}\quad\text{for }a\to 0.$$ Due to the detection result, the construction of a function $g\in {\mathcal{S}}_n^*({\ensuremath{\mathbb{R}}})$ is highly desirable, as the corresponding Taylorlet $\tau= g\otimes h$ allows for a very robust detection of the Taylor coefficients of the singularity function. Furthermore, such a Taylorlet simplifies said detection, as shown in the following corollary. \[cor\] Let $n\in{\ensuremath{\mathbb{N}}}$ and let $\tau$ be a restrictive analyzing Taylorlet of order $n$ with infinitely many vanishing moments in $x_1$-direction.
Let furthermore $\alpha\in\left(\frac 1{n+1},\frac 1n\right)$, $j\ge 0$ and let $q\in C^\infty({\ensuremath{\mathbb{R}}})$ be the singularity function of $$f(x)=\left[x_1-q(x_2)\right]^{j}\cdot {\mathbbm{1}}_{{\ensuremath{\mathbb{R}}}_\pm}\left(x_1-q(x_2)\right).$$ Then $$\mathcal{T}^{(n,\alpha)}f(a,s,t)={\mathcal{O}}\big(a^N\big)\quad\text{for }a\to 0$$ for all $N>0$, if and only if there exists a $k\in\{0,\ldots,n\}$ such that $s_k\ne q^{(k)}(t)$.

q-Calculus
==========

Despite its modern applications in quantum mechanics, the first accounts of q-calculus actually date back to the days of Euler. When he developed the theory of partitions, he introduced the partition function $p:{\ensuremath{\mathbb{N}}}\to{\ensuremath{\mathbb{N}}}$ with $p(n)$ being the number of distinct ways of representing $n$ as a sum of natural numbers. He found out that the infinite product $$\prod_{k=1}^\infty \frac 1{1-q^k}=\sum_{n=0}^\infty p(n) q^n$$ is the generating function for the partition function [@Er00]. Its reciprocal is also known as Euler's function. Let $q\in{\ensuremath{\mathbb{C}}}$ with $|q|<1$. Then, $${\ensuremath{\varphi}}(q)=\prod_{k=1}^\infty(1-q^k)$$ is Euler's function. This function is not to be confused with Euler's totient function, which is also denoted by ${\ensuremath{\varphi}}$, but where ${\ensuremath{\varphi}}(n)$ denotes the number of integers up to $n$ which are relatively prime to $n$. The concept of q-calculus is similar to that of the finite difference calculus, but the q-derivative of a function $f:{\ensuremath{\mathbb{R}}}\to{\ensuremath{\mathbb{R}}}$ is defined as $$d_qf(x)=\frac{f(qx)-f(x)}{qx-x}$$ rather than $D_hf(x)=\frac{f(x+h)-f(x)}h.$ It is of particular importance in structures based on q-commutator relations such as the Manin plane. Such structures appear not only in relation to quantum groups, but also in terms of interpolation between the bosonic and the fermionic case [@BS91]. For a more general overview on q-calculus see [@Kac]. Similar to the finite differences, the infinitesimal theory can be obtained by q-calculus via the limit process $q\to 1$. As an example, the $q$-derivative of a monomial can be found to be $$d_q(x^n)=\frac{q^n-1}{q-1}\cdot x^{n-1}.$$ The occurring factor is also known as the q-bracket $[n]_q=\frac{q^n-1}{q-1}$. This also yields a generalization of the classical binomial coefficient $$\binom nk_q=\prod_{\ell=1}^{k}\frac{[n+1-\ell]_q}{[\ell]_q}.$$

| Infinitesimal calculus | q-calculus |
|---|---|
| $f'(x)=\lim_{q\to 1}\frac {f(qx)-f(x)}{qx-x}$ | $d_qf(x)=\frac {f(qx)-f(x)}{qx-x}$ |
| $\frac d{dx}\ x^n= n\cdot x^{n-1}$ | $d_q (x^n) =[n]_q\cdot x^{n-1}=\frac{q^n-1}{q-1}\cdot x^{n-1}$ |
| $n!$ | $[n]_q!=\prod_{k=1}^n [k]_q$ |
| $\binom nk$ | $\binom nk_q=\frac{[n]_q!}{[k]_q!\cdot[n-k]_q!}$ |
| $(a)_n$ | $(a;q)_n=\prod_{k=0}^{n-1}(1-aq^k)$ |

: Overview of important q-analogs

One of the most central concepts in q-calculus is the analog of the classical Pochhammer symbol. It is defined as $$(a;q)_n:=\prod_{k=0}^{n-1}\left(1-aq^{k}\right).$$ It can be represented in terms of the q-binomial as shown in the following lemma. \[qbin0\] [@Ex83 (4.2.3)] Let $x\in{\ensuremath{\mathbb{C}}}$, $q>0$ and $n\in{\ensuremath{\mathbb{N}}}$. Then $$(x;q)_n=\sum_{k=0}^n \binom nk_{q}q^{\binom k2}(-1)^k\cdot x^k.$$ The statement of this lemma can be extended from $x\in{\ensuremath{\mathbb{C}}}$ to automorphisms on ${\ensuremath{\mathbb{C}}}$-vector spaces as shown in the next corollary.
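Before stating the operator-valued extension, the scalar identity of Lemma \[qbin0\] and the q-objects from the table above can be checked numerically. The following minimal Python sketch is our own illustration; all function names are ours and not part of any library.

```python
# Minimal sketch of the q-bracket, q-factorial, q-binomial and q-Pochhammer symbol.

def q_bracket(n, q):
    """[n]_q = (q^n - 1) / (q - 1)."""
    return (q**n - 1) / (q - 1)

def q_factorial(n, q):
    """[n]_q! = [1]_q * [2]_q * ... * [n]_q."""
    result = 1.0
    for k in range(1, n + 1):
        result *= q_bracket(k, q)
    return result

def q_binomial(n, k, q):
    """Gaussian binomial coefficient binom(n,k)_q = [n]_q! / ([k]_q! * [n-k]_q!)."""
    return q_factorial(n, q) / (q_factorial(k, q) * q_factorial(n - k, q))

def q_pochhammer(a, q, n):
    """(a;q)_n = prod_{k=0}^{n-1} (1 - a * q^k)."""
    result = 1.0
    for k in range(n):
        result *= 1.0 - a * q**k
    return result

# Check of Lemma [qbin0]: (x;q)_n = sum_k binom(n,k)_q * q^(k(k-1)/2) * (-x)^k.
x, q, n = 0.7, 1.3, 6
lhs = q_pochhammer(x, q, n)
rhs = sum(q_binomial(n, k, q) * q**(k * (k - 1) // 2) * (-x)**k for k in range(n + 1))
print(lhs, rhs)   # both values agree up to floating point rounding
```

The same helpers reappear below, eg the coefficients $\frac 1{(q;q)_\ell}$ in the series representation of the limit function $\psi$ can be evaluated as `1 / q_pochhammer(q, q, ell)`.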
This will be important later in this article. \[qbin\] Let $q>0$, $n\in{\ensuremath{\mathbb{N}}}$ and let $(V,\|\cdot\|)$ be a normed vector space over ${\ensuremath{\mathbb{C}}}$. Furthermore, let $A\in\mathcal{L}(V,V)$, ie, $A:V\to V$ is linear and continuous with respect to $\|\cdot\|$. Then $$(A;q)_n := \prod_{k=0}^{n-1}\left(\mathrm{Id}-q^{k}\cdot A\right)\in\mathcal{L}(V,V)$$ and $$(A;q)_n = \sum_{k=0}^n \binom nk_{q}q^{\binom k2}(-1)^k\cdot A^k.$$ Since $A\in\mathcal{L}(V,V)$, we have $\mathrm{Id}-q^{k}\cdot A\in\mathcal{L}(V,V)$ for all $q\in{\ensuremath{\mathbb{C}}}$ and all $k\in{\ensuremath{\mathbb{N}}}$ and so $$\prod_{k=0}^{n-1}\left(\mathrm{Id}-q^{k}\cdot A\right) \in \mathcal{L}(V,V)\quad\text{for all }q\in{\ensuremath{\mathbb{C}}},n\in{\ensuremath{\mathbb{N}}}.$$ Since $\{\mathrm{Id}-q^{k}\cdot A\}_{k\in{\ensuremath{\mathbb{N}}}}\cup\{A\}$ commute, the proof for the identity $$(A;q)_n = \sum_{k=0}^n \binom nk_{q}q^{\binom k2}(-1)^k\cdot A^k$$ is analogous to the case of Lemma \[qbin0\]. The q-Pochhammer symbol also is the initial point for a multitude of important functions in q-calculus. Among them, the Euler function can be represented as $${\ensuremath{\varphi}}(q)=(q;q)_\infty.$$ As we will see in in the next section, the Euler function can also be expanded as a series of q-Pochhammer symbols. Construction ============ In order to obtain a function $g\in {\mathcal{S}}_n^*({\ensuremath{\mathbb{R}}})$, it is sufficient to construct a function $\psi$ such that 1. $\psi$ is even, 2. $\psi^{(k)}(0)=c\cdot\delta_{0k}$ for some $c\ne 0$, 3. $\psi\in{\mathcal{S}}_1^\ast({\ensuremath{\mathbb{R}}})$. As the following proposition shows, with such a function $\psi$, we can construct a function with infinitely many vanishing moments of arbitrary order $n$. \[sqrt\] Let the function $\psi\in{\mathcal{S}}({\ensuremath{\mathbb{R}}})$ fulfill the conditions 1. and 2. and let $v_{n}:={\mathrm{lcm}}\{1,\ldots,n\}$ be the least common multiple of the numbers $1,\ldots,n$. 1. If $M\in{\ensuremath{\mathbb{N}}}$ and $\psi$ has $Mv_n$ vanishing moments, the function $$g:=\psi\circ \sqrt[v_{n}]{|\cdot|}\in{\mathcal{S}}({\ensuremath{\mathbb{R}}})$$ and has $M$ vanishing moments of order $n$. 2. If $\psi$ fulfills condition 3., the function $$g:=\psi\circ \sqrt[v_{n}]{|\cdot|}\in{\mathcal{S}}_n^*({\ensuremath{\mathbb{R}}}).$$ <!-- --> 1. The decay conditions of $\psi$ are not changed by the concatenation with $\sqrt[v_{n}]{|\cdot|}$ and its smoothness is preserved as well due to condition 2. Hence, $g\in{\mathcal{S}}({\ensuremath{\mathbb{R}}})$. Hence, it only remains to prove, that $g$ has $M$ vanishing moments of order $n$. $$\begin{aligned} \int_{{\ensuremath{\mathbb{R}}}} g\big(\pm t^k\big) t^m dt &= \int_{{\ensuremath{\mathbb{R}}}} \psi\big(|t|^{k/v_n}\big) t^m dt \\ &= \int_{{\ensuremath{\mathbb{R}}}} \psi(|u|) u^{m v_n/k}\cdot\frac{v_n}{k} u^{v_n/k-1}du \\ &= \frac{v_n}k\cdot \int_{{\ensuremath{\mathbb{R}}}} \psi(u) u^{(m+1) v_n/k-1} du=0\end{aligned}$$ for all $m\in\{0,\ldots,kM-1\}$, since $k$ divides $v_n$ for all $k\in\{1,\ldots,n\}$. 2. This follows immediately from a). For the construction of a function $\psi$ with properties 1. - 3. we start with an even bump function $\phi\in C_{c}^{\infty}({\ensuremath{\mathbb{R}}})$ and a number ${\ensuremath{\varepsilon}}>0$ such that $\phi\big|_{\left[-{\ensuremath{\varepsilon}},{\ensuremath{\varepsilon}}\right]}\equiv 1.$ Hence, properties 1. and 2. 
are already fulfilled, $\phi$ is a Schwartz function and we only need to gather vanishing moments. This can be achieved for $q>1$ by the following function sequence: $$\phi_{m+1}=\left({\mathrm{Id}}-q^{-(m+1)}D_{\frac 1q}\right)\phi_{m},$$ where for $a>0$, $D_a$ is the dilation operator with $D_a:L^\infty({\ensuremath{\mathbb{R}}})\to L^\infty({\ensuremath{\mathbb{R}}})$, $D_af(x)=f(ax)$. The innate connection between the function sequence $\phi_m$ and the q-derivative can be seen in the following proposition. \[Fourier\] Let $n\in{\ensuremath{\mathbb{N}}}$, $\phi_0\in L^1({\ensuremath{\mathbb{R}}},x^ndx)$ and $$\phi_{m+1}=\left({\mathrm{Id}}-q^{-(m+1)}\cdot D_\frac 1q\right)\phi_m$$ for all $m\in{\ensuremath{\mathbb{N}}}$. Then $$\widehat{\phi_n}(\omega)=(1-q)^n\cdot\omega^n\cdot d_q^n\widehat{\phi_0}(\omega).$$ We first look for an appropriate representation of $\phi_n$. To this end, we will utilize Corollary \[qbin\]. Since the dilation operator is linear, we can write $\phi_n$ as an operator-valued q-Pochhammer symbol $$\phi_n=\prod_{m=0}^{n-1} \left({\mathrm{Id}}-q^{-(m+1)}D_{\frac 1q}\right)\phi_0 = \left(q^{-1} D_{\frac 1q}; q^{-1}\right)_n\phi_0$$ and obtain that $$\label{phi} \phi_n=\sum_{k=0}^n \binom nk_{\frac 1q}q^{-\binom k2}(-1)^k\cdot q^{-k}D_{q^{-k}}\phi_0.$$ Due to [@Er12 (6.98)], the $n^{\mathrm{th}}$ q-derivative of a function can be represented as $$d_q^nf(x)=(q-1)^{-n}q^{-\binom n2}x^{-n}\sum_{k=0}^n\binom nk_q q^{\binom k2}(-1)^k f\left(q^{n-k}x\right).$$ Hence, we have to show that $$\label{goal} \widehat{\phi_n}(\omega) = q^{-\binom n2}\sum_{k=0}^n\binom nk_q q^{\binom k2}(-1)^{n-k} \widehat{\phi_0}\left(q^{n-k}\omega\right).$$ To this end, we first represent $\binom nk_{\frac 1q}$ in terms of $\binom nk_q$. We can write $$\begin{aligned} \binom nk_{\frac 1q} &= \prod_{\ell=1}^k \frac{1-q^{\ell-n-1}}{1-q^{-\ell}} \\ &= \prod_{\ell=1}^k \frac{q^{\ell-n-1}}{q^{-\ell}}\cdot \frac{q^{n+1-\ell}-1}{q^{\ell}-1} \\ &= q^{2\binom{k+1}2-k(n+1)}\cdot \prod_{\ell=1}^k \frac{q^{n+1-\ell}-1}{q^{\ell}-1} \\ &= q^{-k(n-k)}\cdot \binom nk_q.\end{aligned}$$ By applying the Fourier transform to equation (\[phi\]) and inserting the upper equality, we get $$\begin{aligned} \widehat{\phi_n}(\omega) &= \sum_{k=0}^n \binom nk_{\frac 1q}q^{-\binom k2}(-1)^k\widehat{\phi_0}\left(q^k \omega \right) \\ &= \sum_{k=0}^n \binom nk_{\frac 1q}q^{-\binom {n-k}2}(-1)^{n-k}\widehat{\phi_0}\left(q^{n-k}\omega\right) \\ &= \sum_{k=0}^n \binom nk_q q^{-k(n-k)} q^{-\binom {n-k}2}(-1)^{n-k}\widehat{\phi_0}\left(q^{n-k}\omega\right) \\ &= q^{-\binom n2}\cdot\sum_{k=0}^n \binom nk_q q^{\binom k2} (-1)^{n-k}\widehat{\phi_0}\left(\textcolor{black}{q^{n-k}\omega}\right),\end{aligned}$$ where the last equality results from $\binom {a+b}2=\binom a2+ab+\binom b2$. As a consequence of this proposition, each step of the recursion $$\phi_{m+1}=({\mathrm{Id}}-q^{-(m+1)}D_{1/q})\phi_{m}$$ generates one further vanishing moment. As the next lemma shows, not only $\phi_m$, but also its restriction to ${\ensuremath{\mathbb{R}}}_+$ or ${\ensuremath{\mathbb{R}}}_-$ features $m$ vanishing moments. \[vanmom\] Let $\phi_0\in{\mathcal{S}}({\ensuremath{\mathbb{R}}})$ be an even function and let $$\phi_{m+1}=\left({\mathrm{Id}}-q^{-(m+1)}D_{\frac 1q}\right)\phi_{m}$$ for all $m\in{\ensuremath{\mathbb{N}}}$. Then $\phi_m\in{\mathcal{S}}({\ensuremath{\mathbb{R}}})$ and $$\int_{{\ensuremath{\mathbb{R}}}_\pm}\phi_m(x) x^\ell dx=0$$ for all $\ell\in\{0,\ldots,m-1\}$ and for all $m\in{\ensuremath{\mathbb{N}}}$. 
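Before turning to the proof, the statement can be illustrated numerically. The following Python sketch is our own: it chooses the Gaussian $\phi_0(x)=e^{-x^2}$ as the even Schwartz function and $q=2$, represents $\phi_m$ as a linear combination of dilates of $\phi_0$ (which is all the recursion ever produces), and approximates the one-sided moments by quadrature on a truncated grid.

```python
import numpy as np

# phi_m is stored as a list of (coefficient, scale) pairs, meaning
#   phi_m(x) = sum_j c_j * phi0(a_j * x),
# since phi_{m+1} = (Id - q^{-(m+1)} D_{1/q}) phi_m only produces
# linear combinations of dilates of phi0.

q = 2.0
phi0 = lambda x: np.exp(-x**2)        # an even Schwartz function (our choice)

def apply_step(terms, m, q):
    """One recursion step: phi_{m+1}(x) = phi_m(x) - q^{-(m+1)} * phi_m(x/q)."""
    return terms + [(-q**(-(m + 1)) * c, a / q) for (c, a) in terms]

terms = [(1.0, 1.0)]                  # representation of phi_0
for m in range(4):                    # build phi_4
    terms = apply_step(terms, m, q)

x = np.linspace(0.0, 200.0, 400001)   # truncated positive half-line
phi4 = sum(c * phi0(a * x) for (c, a) in terms)

for ell in range(4):
    moment = np.trapz(phi4 * x**ell, x)
    print(ell, moment)                # tiny compared with the moments of the single dilates
```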
The statement can be shown inductively. Obviously, the statement is true for $\phi_0$. Now we assume, that $\phi_m\in{\mathcal{S}}({\ensuremath{\mathbb{R}}})$ and $\int_{{\ensuremath{\mathbb{R}}}_\pm}\phi_m(x) x^\ell dx=0$ for all $\ell\in\{0,\ldots,m-1\}$. It can be easily confirmed that $\phi_{m+1}\in{\mathcal{S}}({\ensuremath{\mathbb{R}}})$. Furthermore, for all $\ell\in\{0,\ldots,m-1\}$, we have $$\int_{{\ensuremath{\mathbb{R}}}_\pm}\phi_{m+1}(x)x^\ell dx=\underbrace{\int_{{\ensuremath{\mathbb{R}}}_\pm}\phi_{m}(x)x^\ell dx}_{=0} - q^{-(m+1)}\cdot\underbrace{\int_{{\ensuremath{\mathbb{R}}}_\pm}\phi_m\left(\frac xq\right) x^\ell dx}_{=0}.$$ Furthermore, $$\begin{aligned} \int_{{\ensuremath{\mathbb{R}}}_\pm}\phi_{m+1}(x)x^mdx &= \int_{{\ensuremath{\mathbb{R}}}_\pm} \left[\phi_m(x)-q^{-m-1}\phi_m\left(\frac xq\right)\right]x^m dx \\ &= \int_{{\ensuremath{\mathbb{R}}}_\pm} \phi_m(x)x^m dx - \int_{{\ensuremath{\mathbb{R}}}_\pm}\phi_m\left(\frac xq\right) \cdot\left(\frac xq\right)^m \frac{dx}q \\ &= \int_{{\ensuremath{\mathbb{R}}}_\pm} \phi_m(x)x^m dx - \int_{{\ensuremath{\mathbb{R}}}_\pm} \phi_m(x)x^m dx \\ &= 0.\end{aligned}$$ Since $\int_0^\infty \phi_m(x)dx =0$ for all $m\ge 1$ as a consequence of the previous lemma, we cannot immediately use the functions $\phi_m$ to produce restrictive Taylorlets. We will present a method to achieve this in Lemma \[rest\]. The next lemma shows that the sequence $\phi_m$ converges to a function satisfying the properties 1. - 3. \[existence\] Let $\phi_{0}\in C_{c}^{\infty}({\ensuremath{\mathbb{R}}})$ and let $q>1$ and ${\ensuremath{\varepsilon}}>0$ such that $\phi_{0}\big|_{\left[-{\ensuremath{\varepsilon}},{\ensuremath{\varepsilon}}\right]}\equiv 1.$ and let $$\phi_{m+1}=\left(\mathrm{Id}-q^{-(m+1)}D_{\frac 1q}\right)\phi_{m}$$ for all $m\in{\ensuremath{\mathbb{N}}}$. Then the function sequence $\phi_m$ converges uniformly to $\psi$ for $m\to\infty$ and $\psi$ fulfills conditions 1. - 3. We first show that $\phi_m$ is a Cauchy sequence wrt to the $L^\infty$-norm and hence its uniform convergence. To this end, let $\ell,m\in{\ensuremath{\mathbb{N}}}$. Then $$\begin{aligned} \|\phi_{m+\ell}-\phi_m\|_\infty &=\left\|\left[\prod\left({\mathrm{Id}}-q^{-(m+1)}D_\frac 1q\right)-{\mathrm{Id}}\right]\phi_m\right\|_\infty \\ &=\left\| \left[\left(q^{-(m+1)}D_\frac 1q;q^{-1}\right)_\ell-{\mathrm{Id}}\right]\phi_m \right\|_\infty.\end{aligned}$$ Utilizing Corollary \[qbin\], we obtain $$\begin{aligned} \|\phi_{m+\ell}-\phi_m\|_\infty &=\left\| \left[\sum_{k=0}^\ell(-1)^kq^{-\binom k2}\cdot\binom\ell k_\frac 1q\cdot q^{-k(m+1)}D_{q^{-k}}-{\mathrm{Id}}\right]\phi_m \right\|_\infty \\ &\le \left[\sum_{k=0}^\ell q^{-\left[\binom k2+k(m+1)\right]}\cdot\binom\ell k_\frac 1q-1\right]\cdot\|\phi_m\|_\infty \\ &=\|\phi_m\|_\infty\cdot\sum_{k=1}^\ell q^{-\left[\binom k2+k(m+1)\right]}\cdot\prod_{\nu=0}^{k-1}\frac{1-q^{\ell-\nu}}{1-q^{-(\nu+1)}} \\ &\le\|\phi_m\|_\infty\cdot\sum_{k=1}^\ell q^{-\left[\binom k2+k(m+1)\right]}\cdot\prod_{\nu=0}^{\infty}\frac1{1-q^{-(\nu+1)}} \\ &=\|\phi_m\|_\infty\cdot\sum_{k=1}^\ell q^{-\left[\binom k2+k(m+1)\right]}\cdot\frac 1{{\ensuremath{\varphi}}\left(\frac 1q\right)}.\end{aligned}$$ Since $\sum_{k=1}^\ell q^{-\left[\binom k2+k(m+1)\right]}\sim q^{-(m+1)}$ for $k\to\infty$ and ${\ensuremath{\varphi}}(a)\ne 0$ for all $a\in(0,1)$, it only remains to show that $\phi_m$ has a uniform upper bound. 
To this end, we observe that for $m\in{\ensuremath{\mathbb{N}}}$ we have $$\begin{aligned} \left\|\phi_{m+1}\right\|_{\infty} &= \left\|\left(\mathrm{Id}-q^{-(m+1)}D_{\frac 1q}\right)\phi_{m}\right\|_{\infty} \\ &\le \left\|\phi_{m}\right\|_{\infty}+q^{-(m+1)}\left\|D_\frac 1q\phi_m\right\|_{\infty} \\ &\le \left(1+q^{-(m+1)}\right)\cdot \left\|\phi_{m}\right\|_{\infty}.\end{aligned}$$ Hence, we can inductively show that $$\begin{aligned} \left\|\phi_{m+1}\right\|_{\infty} &\le \prod_{k=0}^{m}\left(1+q^{-(k+1)}\right)\cdot \left\|\phi_{0}\right\|_{\infty} \\ &\le \left\|\phi_{0}\right\|_{\infty}\cdot \exp\left(\sum_{k=0}^{m}\log(1+q^{-(k+1)})\right)\end{aligned}$$ We hence only need to prove that the series $\sum_{k=0}^\infty\log\left(1+q^{-(k+1)}\right)$ converges. This can be achieved by utilizing the estimate $$0<\log(1+x)<x\quad\text{for all }x>0.$$ We obtain $$\sum_{k=0}^\infty\log\left(1+q^{-(k+1)}\right)<\sum_{k=0}^\infty q^{-(k+1)}=\frac q{1-\frac 1q}<\infty.$$ We now proceed by proving the properties 1. - 3. for $\psi$. 1. Since $\phi_{0}$ is even and the constructive function sequence consists of linear combinations of dilates of $\phi_{0}$, $\psi$ is even, as well. 2. As $\phi_{0}\big|_{\left[-{\ensuremath{\varepsilon}},{\ensuremath{\varepsilon}}\right]}\equiv 1$, we only have to show that $\psi(0)\ne 0$. We can represent the limit function as $$\psi=\prod_{k=0}^{\infty}\left({\mathrm{Id}}-q^{-(k+1)}D_{\frac 1q}\right)\phi_{0}.$$ Due to $\phi_{0}(0)=1$, we obtain that $$\psi(0)=\prod_{k=0}^{\infty}\left(1-q^{-(k+1)}\right)=\exp\left(\sum_{k=0}^{\infty}\log\left(1-q^{-(k+1)}\right)\right).$$ Due to the concavity of the logarithm, we obtain that $$\log (1-x)\ge q\log\left(1-\frac 1q\right)\cdot x\quad\text{for all }x\in\left[0,\frac 1q\right].$$ Hence, we obtain that $$\sum_{k=0}^{\infty}\log\left(1-q^{-(k+1)}\right)\ge q\log\left(1-\frac 1q\right)\cdot\sum_{k=0}^{\infty}q^{-(k+1)}=\frac {\log\left(1-\frac 1q\right)}{1-\frac 1q}.$$ Due to the monotonicity of the exponential function, we then obtain $$\psi(0)=\exp\left(\sum_{k=0}^{\infty}\log\left(1-q^{-(k+1)}\right)\right)\ge \exp\left(\frac {\log\left(1-\frac 1q\right)}{1-\frac 1q}\right)=\left(\frac {q-1}q\right)^{\frac q{q-1}}>0,$$ since $q>1.$ 3. As shown in Lemma \[vanmom\], $\phi_{m}$ has $m$ vanishing moments for all $m\in{\ensuremath{\mathbb{N}}}$. Hence, $\psi$ has infinitely many vanishing moments. So it remains to prove that $\psi\in{\mathcal{S}}({\ensuremath{\mathbb{R}}})$. To this end, we define $$c_{k,\ell,m}:=\|x^{k}\phi_{m}^{(\ell)}(x)\|_{\infty}.$$ In order to prove that $\psi\in{\mathcal{S}}({\ensuremath{\mathbb{R}}})$, we will show that uniform upper bounds in $m$ exist for the $c_{k,\ell,m}$. Ie, for all $k,\ell\in{\ensuremath{\mathbb{N}}}$ we determine a $c_{k,\ell}>0$ such that $$c_{k,\ell,m}\le c_{k,\ell}\text{ for all }m\in{\ensuremath{\mathbb{N}}}.$$ For this purpose we estimate $c_{k,\ell,m+1}$ in terms of $c_{k,\ell,m}$. 
$$x^{k}\cdot\phi_{m+1}^{(\ell)}(x)=x^{k}\cdot\partial_{x}^{\ell}\left(\mathrm{Id}-q^{-(m+1)}D_{\frac 1q}\right)\phi_{m}(x)=x^{k}\cdot\left(\mathrm{Id}-q^{-(m+\ell+1)}D_{\frac 1q}\right)\phi_{m}^{(\ell)}(x).$$ Hence, we can estimate $$\begin{aligned} c_{k,\ell,m+1} & = \left\|x^{k}\phi_{m+1}^{(\ell)}(x)\right\|_{\infty} \\ &= \left\|x^{k}\cdot\left(\mathrm{Id}-q^{-(m+\ell+1)}D_{\frac 1q}\right)\phi_{m}^{(\ell)}(x)\right\|_{\infty} \\ &\le \left\|x^{k}\phi_{m}^{(\ell)}(x)\right\|_{\infty}+q^{-(m+\ell-k+1)}\left\|\left(\frac xq\right)^{k}\phi_{m}^{(\ell)}\left(\frac xq\right)\right\|_{\infty} \\ &\le \left(1+q^{-(m+\ell-k+1)}\right)\cdot c_{k,\ell,m} \\ &\le \prod_{\nu=0}^{m}\left(1+q^{-(\nu+\ell-k+1)}\right)\cdot c_{k,\ell,0} \\ &\le c_{k,\ell,0}\cdot \prod_{\nu=0}^{m}(1+q^{k-\ell-1-\nu}) \\ &\le c_{k,\ell,0}\cdot \exp\left(\sum_{\nu=0}^{m}\log(1+q^{k-\ell-1-\nu})\right).\end{aligned}$$ Hence, it is sufficient to prove that the series $\sum_{\nu=0}^{\infty}\log(1+q^{k-\ell-1-\nu})$ converges. To this end, we exploit the following estimate of the logarithm: $$0<\log(1+x) \le x \quad \text{for all }x>0.$$ With this we obtain $$\sum_{\nu=0}^{\infty}\log\left(1+q^{k-\ell-1-\nu}\right)\le\sum_{\nu=0}^{\infty}q^{k-\ell-1-\nu}=\frac {q^{k-\ell}}{q-1}.$$ Thus, $c_{k,\ell,m}\le \mathrm{e}^{\frac {q^{k-\ell}}{q-1}}\cdot c_{k,\ell,0}=:c_{k,\ell}$ for all $m\in{\ensuremath{\mathbb{N}}}.$ Since $\phi_{0}\in{\mathcal{S}}({\ensuremath{\mathbb{R}}})$ by prerequisite, $c_{k,\ell,0}$ is finite for all $k,\ell\in{\ensuremath{\mathbb{N}}}$. The next lemma gives an explicit representation of $\psi$ as a series of dilates of $\phi_{0}$ by using the $q$-Pochhammer symbol. \[sum\] Let $\phi_{0}\in{\mathcal{S}}({\ensuremath{\mathbb{R}}})$ and let $q>1$. Then $$\psi=\lim_{m\to\infty}\phi_{m}=\sum_{\ell=0}^{\infty}\frac 1{(q;q)_\ell}\cdot D_{q^{-\ell}}\phi_{0}.$$ We can rewrite the function $\psi$ as $$\psi=\prod_{m=0}^\infty\left({\mathrm{Id}}-q^{-(m+1)}\cdot D_\frac 1q\right)\phi_0=\left(\frac 1q\cdot D_{\frac 1q};\frac 1q\right)_\infty\phi_0.$$ Due to a result of [@Eu48 Chapter 16], the following series expansion holds: $$(z;p)_\infty=\sum_{n=0}^\infty\frac{(-1)^n\cdot p^{n(n-1)/2}}{(p;p)_n}\cdot z^n$$ for $|p|<1$ and for all $z\in{\ensuremath{\mathbb{C}}}.$ Since the dilation operator commutes with the multiplication with constants, we can rewrite the limit function $\psi$ as $$\begin{aligned} \psi &= \left(\frac 1q\cdot D_{\frac 1q};\frac 1q\right)_\infty\hskip-2mm\phi_0 \\ & = \sum_{\ell=0}^\infty\frac{(-1)^\ell\cdot q^{-\ell(\ell-1)/2}}{(q^{-1};q^{-1})_\ell}\cdot q^{-\ell}\cdot D_{q^{-\ell}}\phi_0 \\ & = \sum_{\ell=0}^\infty\frac{(-1)^\ell}{q^{(\ell+1)\ell/2}\cdot \prod_{k=1}^\ell (1-q^{-k})}\cdot D_{q^{-\ell}}\phi_0 \\ & = \sum_{\ell=0}^\infty\frac{1}{(q;q)_\ell}\cdot D_{q^{-\ell}}\phi_0\end{aligned}$$ The next theorem utilizes all of the previous lemmata to show an explicit formula for the limit function $\psi\in{\mathcal{S}}^*({\ensuremath{\mathbb{R}}}).$ \[construction\] Let $q>1$, ${\ensuremath{\varepsilon}}>0$ and let $\phi_0\in C_c^\infty({\ensuremath{\mathbb{R}}})$ be of the form $$\phi_0(x)=\begin{cases} 1 & \text{for }|x|\le {\ensuremath{\varepsilon}}, \\ \eta(|x|) & \text{for }|x|\in\big({\ensuremath{\varepsilon}},{\ensuremath{\varepsilon}}q\big] \\ 0 & \text{for } |x|>{\ensuremath{\varepsilon}}q. \end{cases},$$ where $\eta\in C^\infty\left(\left[{\ensuremath{\varepsilon}},q{\ensuremath{\varepsilon}}\right]\right)$ is chosen so that $\phi_0\in C^\infty({\ensuremath{\mathbb{R}}})$. 
Then, $\psi=\prod_{m=0}^\infty \left({\mathrm{Id}}-q^{-(m+1)}D_{\frac 1q}\right)\phi_0$ has the explicit representation $$\psi(x)={\ensuremath{\varphi}}\left(\frac 1q\right)+\sum_{\ell=0}^\infty \left[\frac 1{(q;q)_\ell}\cdot\eta\left(\frac{|x|}{q^\ell}\right)-\sum_{k=0}^\ell \frac 1{(q;q)_k}\right]\cdot{\mathbbm{1}}_{\left({\ensuremath{\varepsilon}}q^{\ell};{\ensuremath{\varepsilon}}q^{\ell+1}\right]}(|x|),$$ where ${\ensuremath{\varphi}}$ is the Euler function. Due to the form of $\phi_0$ and Lemma \[sum\] we obtain that $$\begin{aligned} \psi(x) &= \sum_{\ell=0}^\infty \frac 1{(q;q)_\ell}\cdot D_{q^{-\ell}}\phi_0(x) \\ &= \sum_{\ell=0}^\infty \frac 1{(q;q)_\ell}\cdot \left[{\mathbbm{1}}_{\left[-{\ensuremath{\varepsilon}}q^{\ell},{\ensuremath{\varepsilon}}q^{\ell}\right]}(x)+\left(D_{q^{-\ell}}\eta\right)(|x|)\cdot{\mathbbm{1}}_{\left({\ensuremath{\varepsilon}}q^{\ell},{\ensuremath{\varepsilon}}q^{\ell+1}\right]}(|x|)\right] \\ &= \sum_{\ell=0}^\infty \frac 1{(q;q)_\ell}\cdot \left(\sum_{k=0}^{\ell-1}{\mathbbm{1}}_{\left({\ensuremath{\varepsilon}}q^{k},{\ensuremath{\varepsilon}}q^{k+1}\right]}(|x|)+{\mathbbm{1}}_{\left[-{\ensuremath{\varepsilon}},{\ensuremath{\varepsilon}}\right]}(x)+\eta\left(q^{-\ell}\cdot |x|\right)\cdot{\mathbbm{1}}_{\left({\ensuremath{\varepsilon}}q^{\ell},{\ensuremath{\varepsilon}}q^{\ell+1}\right]}(|x|)\right) \\ &= \sum_{\ell=0}^\infty \frac 1{(q;q)_\ell}\cdot {\mathbbm{1}}_{\left[-{\ensuremath{\varepsilon}},{\ensuremath{\varepsilon}}\right]}(x) + \sum_{k=0}^\infty{\mathbbm{1}}_{\left({\ensuremath{\varepsilon}}q^{k},{\ensuremath{\varepsilon}}q^{k+1}\right]}(|x|)\cdot\sum_{\ell=k+1}^\infty \frac 1{(q;q)_\ell} \\ &\quad +\sum_{\ell=0}^\infty\frac 1{(q;q)_\ell}\cdot\eta\left(q^{-\ell}\cdot |x|\right)\cdot{\mathbbm{1}}_{\left({\ensuremath{\varepsilon}}q^{\ell},{\ensuremath{\varepsilon}}q^{\ell+1}\right]}(|x|).\end{aligned}$$ By inserting $x=0$ into the upper equation, we obtain that $$\psi(0)=\sum_{\ell=0}^\infty \frac 1{(q;q)_\ell}.$$ By repeating this for the equation $$\psi(x)=\left[\prod_{m=0}^\infty\left({\mathrm{Id}}-q^{-(m+1)}\cdot D_{\frac 1q}\right)\phi_0\right](x),$$ we can conclude with $\phi_0(0)=1$ and $(D_af)(0)=({\mathrm{Id}}f)(0)$ for all $a>0$ and all $f\in C({\ensuremath{\mathbb{R}}})$ that $$\begin{aligned} \sum_{\ell=0}^\infty \frac 1{(q;q)_\ell} & =\psi(0)=\left[\prod_{m=0}^\infty\left({\mathrm{Id}}-q^{-(m+1)}\cdot {\mathrm{Id}}\right)\phi_0\right](0)=\prod_{m=0}^\infty\left(1-q^{-(m+1)}\right) \\ & =\prod_{m=1}^\infty\left(1-q^{-m}\right)={\ensuremath{\varphi}}\left(\frac1q\right).\end{aligned}$$ This equation together with $\sum_{k=\ell+1}^\infty\frac 1{(q;q)_k}={\ensuremath{\varphi}}\left(\frac 1q\right)-\sum_{k=0}^\ell\frac 1{(q;q)_k}$ delivers $$\psi(x)={\ensuremath{\varphi}}\left(\frac 1q\right)+\sum_{\ell=0}^\infty \left[\frac 1{(q;q)_\ell}\cdot\eta\left(\frac{|x|}{q^\ell}\right)-\sum_{k=0}^\ell \frac 1{(q;q)_k}\right]\cdot{\mathbbm{1}}_{\left({\ensuremath{\varepsilon}}q^{\ell};{\ensuremath{\varepsilon}}q^{\ell+1}\right]}(|x|).$$ We can also utilize that ${\ensuremath{\varphi}}(x)$ is known for special values of $x$. For instance, we know that $${\ensuremath{\varphi}}\left(e^{-\pi}\right)=\frac{e^{\frac\pi{24}}\cdot \Gamma\left(\frac 14\right)}{2^{\frac78}\cdot\pi^{\frac 34}}$$ due to [@Be05 p. 326]. 
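This special value can be checked numerically; the following short Python snippet is our own sketch and compares a truncated product for ${\ensuremath{\varphi}}(e^{-\pi})$ with the closed-form expression.

```python
import math

# Check of varphi(e^{-pi}) = e^{pi/24} * Gamma(1/4) / (2^{7/8} * pi^{3/4}).
q = math.exp(-math.pi)

truncated_product = 1.0
for k in range(1, 60):        # the factors 1 - q^k converge to 1 extremely fast
    truncated_product *= 1.0 - q**k

closed_form = math.exp(math.pi / 24) * math.gamma(0.25) / (2**0.875 * math.pi**0.75)
print(truncated_product, closed_form)   # both are approximately 0.9549
```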
With the choice $q=e^\pi$, we hence obtain the following formula for a function in ${\mathcal{S}}_1^*({\ensuremath{\mathbb{R}}})$: $$\psi(x)=\frac{e^{\frac\pi{24}}\cdot \Gamma\left(\frac 14\right)}{2^{\frac78}\cdot\pi^{\frac 34}}+\sum_{\ell=0}^\infty \left[\frac 1{(e^\pi;e^\pi)_\ell}\cdot \eta(e^{-\ell\pi}|x|)-\sum_{k=0}^\ell \frac 1{(e^{\pi};e^{\pi})_k}\right]\cdot{\mathbbm{1}}_{\left({\ensuremath{\varepsilon}}e^{\ell\pi};{\ensuremath{\varepsilon}}e^{(\ell+1)\pi}\right]}(|x|).$$ Numerical treatment =================== In this section we give a numerical analysis of the computations involved in the explicit formula presented in the previous section. For a numerical computation of the Euler function, we can utilize Euler’s pentagonal number theorem [@Er12 equation (3.1)], which states that $$\phi(q) = 1+\sum_{m=1}^\infty (-1)^m\left(q^{m(3m-1)/2}+q^{m(3m+1)/2}\right)\quad\text{for }0<|q|<1.$$ This series expansion provides an excellent error estimate, since $$\left|\phi(q)-1-\sum_{m=1}^n (-1)^m\left(q^{m(3m-1)/2}+q^{m(3m+1)/2}\right)\right| \le 2q^{(n+1)(3n+2)/2}\quad\text{for }0<|q|<1,$$ as we can easily prove. Now we will address the evaluation of the function $\psi$ constructed in Theorem \[construction\]. As we are mainly interested in a function with vanishing moments, we will approximate $\psi$ by a function $$\phi_n=\prod_{m=0}^{n-1}\left({\mathrm{Id}}-q^{-(m+1)}D_{\frac 1q}\right)\phi_0,$$ which has $n$ vanishing moments itself. In the next lemma we prove an approximation error for $\psi$. \[error\] Let $q\ge 2$, let $\phi_0$ be given as in Theorem \[construction\] and let $$\phi_n=\prod_{m=0}^{n-1}\left({\mathrm{Id}}-q^{-(m+1)}D_{\frac 1q}\right)\phi_0.$$ Then we have $$\|\psi-\phi_n\|_\infty\le 5q^{-(n+1)}.$$ We first look for an appropriate representation of $\phi_n$. To this end, we will use Corollary \[qbin\]. We consider $$\begin{aligned} \phi_n &= \prod_{m=0}^{n-1}\left({\mathrm{Id}}-q^{-(m+1)}D_{\frac 1q}\right)\phi_0 \\ &= (q^{-1}\cdot D_{\frac 1q};q^{-1})_n\phi_0 \\ &= \sum_{k=0}^n (-1)^kq^{-\binom{k+1}{2}}\cdot\binom nk_{\frac 1q}\cdot D_{q^{-k}}\phi_0\qquad\qquad\qquad (\text{using Corollary }\ref{qbin}) \\ &= \sum_{k=0}^n (-1)^kq^{-\binom{k+1}{2}}\cdot\prod_{\ell=0}^{k-1}\frac{1-q^{\ell-n}}{1-q^{-(\ell+1)}}\cdot D_{q^{-k}}\phi_0 \\ &= \sum_{k=0}^n \frac {1}{(q;q)_k}\cdot\prod_{\ell=0}^{k-1}(1-q^{\ell-n})\cdot D_{q^{-k}}\phi_0 \\ &= \sum_{k=0}^n \frac {1}{(q;q)_k}\cdot(q^{-n};q)_k\cdot D_{q^{-k}}\phi_0.\end{aligned}$$ Hence, we can write the $L^\infty$-error of the approximation as $$\begin{aligned} \|\psi-\phi_n\|_\infty &= \left\| \sum_{k=0}^{n} \frac {1-(q^{-n};q)_k}{(q;q)_k} D_{q^{-k}}\phi_0 + \sum_{k=n+1}^\infty \frac {1}{(q;q)_k} D_{q^{-k}}\phi_0\right\|_\infty \\ &\le \underbrace{\sum_{k=1}^{n} \left| \frac {1-(q^{-n};q)_k}{(q;q)_k}\right|}_{=:\text{(I)}} + \underbrace{\sum_{k=n+1}^\infty \left|\frac 1{(q;q)_k}\right|}_{=:\text{(II)}}.\end{aligned}$$ For the estimation of (I), we utilize Lemma \[qbin0\] to obtain $$\label{cell} 1-(q^{-n};q)_k = -\sum_{\ell=1}^k (-1)^{\ell} q^{-n\ell}\cdot q^{\binom\ell 2}\cdot\binom k\ell_q=-\sum_{\ell=1}^k(-1)^\ell c_\ell.$$ We will now show that under the given conditions, $\{c_\ell\}_\ell$ is a monotonically decreasing sequence.
We observe that $$\begin{aligned} c_\ell-c_{\ell+1} &= q^{-n\ell}\cdot q^{\binom\ell 2}\cdot\binom k\ell_q - q^{-n(\ell+1)}\cdot q^{\binom{\ell+1} 2}\cdot\binom k{\ell+1}_q \\ &= q^{-n(\ell+1)}\cdot q^{\binom{\ell} 2}\cdot \binom k\ell_q\cdot \left(q^n -q^\ell\cdot \frac{q^{k-\ell}-1}{q^{\ell+1}-1}\right) \\ &= \frac{q^{-n(\ell+1)}}{q^{\ell+1}-1}\cdot q^{\binom{\ell} 2}\cdot \binom k\ell_q\cdot \left(q^n\cdot (q^{\ell+1}-1) -q^\ell \cdot(q^{k-\ell}-1)\right) \\ &= \frac{q^{-n(\ell+1)}}{q^{\ell+1}-1}\cdot q^{\binom{\ell+1} 2}\cdot \binom k\ell_q\cdot \left(q^{n+1}-q^{n-\ell}-q^{k-\ell}+1\right).\end{aligned}$$ The last factor is positive if $q\ge2$, $\ell\ge0$ and $k\le n$. The latter is the case, since (I) covers only the cases where $1\le k\le n$. Since $c_\ell$ is a positive, monotonically decreasing sequence, the sum in \[cell\] is an alternating sum over a decreasing sequence. Hence, we obtain the estimate $$|1-(q^{-n};q)_k| \le c_1 = q^{-n}\cdot\binom k1_q=q^{-n}\cdot \frac{q^k-1}{q-1}.$$ This delivers the following estimate for (I): $$\begin{aligned} (\text{I}) &= \sum_{k=1}^n\left|\frac {1-(q^{-n};q)_k}{(q;q)_k}\right| \\ &\le \frac 1{q^{n}(q-1)}\cdot\sum_{k=1}^n (q^k-1)\cdot\frac 1{|(q;q)_k|} \\ &= \frac 1{q^{n}(q-1)}\cdot\sum_{k=1}^n\frac 1{|(q;q)_{k-1}|}.\end{aligned}$$ By adding up (II) and the estimate for (I), we obtain $$\begin{aligned} \|\psi-\phi_n\|_\infty &\le \text{(I)+(II)} \\ &\le \frac 1{q^{n}(q-1)}\cdot\sum_{k=1}^n\frac 1{|(q;q)_{k-1}|} + \sum_{k=n+1}^\infty\frac 1{|(q;q)_{k}|} \\ &\le \frac 1{q^{n}(q-1)}\cdot \sum_{k=0}^{n-1}\frac 1{|(q;q)_{k}|} + \frac 1{q^{n+1}-1}\cdot\sum_{k=n}^{\infty} \frac 1{|(q;q)_{k}|} \\ &\le \frac 1{q^{n}(q-1)}\cdot \sum_{k=0}^{\infty}\frac 1{|(q;q)_{k}|} \\ &\le 5\cdot q^{-(n+1)}.\end{aligned}$$ The last estimate can be obtained via the condition $q\ge 2$ by the considerations $$\frac 1{q-1}\le 2q^{-1}\quad\text{and}\quad\sum_{k=0}^{\infty}\frac 1{|(q;q)_{k}|} = \sum_{k=0}^\infty\prod_{\ell=1}^k \frac1{q^\ell-1} \le 1+\underbrace{\frac 1{q-1}}_{\le 1}\cdot\sum_{m=0}^\infty\underbrace{\frac 1{(q^2-1)^m}}_{\le 3^{-m}}\le\frac 52.$$ With this result we have an error estimate for the numerical application of the approximation $\phi_n$ to the Taylorlet transform. In order to see the effect of the approximation on the decay rate, we briefly revisit the two decay statements of the previous sections, in particular Corollary \[cor\]. In both statements we observe the decay rate of the Taylorlet transform $\mathcal{T}f(a,s,t)$ of a feasible function $f(x)=c\cdot[x_1-q(x_2)]^j\cdot{\mathbbm{1}}_{{\ensuremath{\mathbb{R}}}_\pm}(x_1-q(x_2))$ with a singularity function $q\in C^\infty({\ensuremath{\mathbb{R}}})$ for $a\to 0$. Additionally, both revolve around the cases $$\tag{$A_k$}\label{A_k} s_i = q^{(i)}(t)\quad\text{for all }i\in\{0,\ldots,k\}.$$ When using a restrictive analyzing Taylorlet as constructed in the previous sections, Corollary \[cor\] states that $\mathcal{T} f(a,s,t)$ decays superpolynomially fast for $a\to 0$ if and only if $(A_n)$ does not hold. Thus it provides a sharp distinction between $(A_n)$ and $\neg(A_n)$. For a numerical treatment of the Taylorlet transform we are restricted to Taylorlets with finitely many vanishing moments, and thus only the corresponding finite-order decay estimate applies. Despite not providing a superpolynomial-versus-polynomial decay rate scenario, it still guarantees a significantly lower decay rate of the Taylorlet transform if $(A_n)$ holds. For an illustration of the functioning of the Taylorlets, we will show images of the Taylorlet transform of different two-dimensional functions.
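To make the error estimate concrete, the following Python sketch (an illustration, not part of the original work) evaluates the truncated pentagonal series for the Euler function with its stated remainder bound, and compares $\phi_n$ — written as the finite sum of dilates derived in the proof of Lemma \[error\] — with a long truncation of the series for $\psi$. The bump $\phi_0$, the sample grid and all truncation orders are assumed test choices, and the reported maximum is taken only over that grid.

```python
import numpy as np

def euler_phi_pentagonal(x, n=10):
    """Truncated pentagonal-number series for phi(x), 0 < x < 1, with the
    remainder bound 2*x**((n+1)*(3*n+2)/2) quoted above."""
    s = 1.0
    for m in range(1, n + 1):
        s += (-1)**m * (x**(m*(3*m - 1)//2) + x**(m*(3*m + 1)//2))
    return s, 2.0 * x**((n + 1)*(3*n + 2)/2)

def qpoch(z, q, k):
    """(z; q)_k = prod_{l=0}^{k-1} (1 - z*q^l)."""
    return np.prod([1.0 - z * q**l for l in range(k)]) if k else 1.0

def phi_n(x, phi0, q, n):
    """phi_n as the finite sum of dilates from the proof of Lemma [error];
    dilation convention D_a f(x) = f(a*x), so D_{q^-k} phi0(x) = phi0(x/q^k)."""
    return sum(qpoch(q**-n, q, k) / qpoch(q, q, k) * phi0(x / q**k)
               for k in range(n + 1))

def psi_trunc(x, phi0, q, kmax=30):
    """Long truncation of psi = sum_k 1/(q;q)_k * D_{q^-k} phi0."""
    return sum(phi0(x / q**k) / qpoch(q, q, k) for k in range(kmax + 1))

if __name__ == "__main__":
    q, eps = 2.0, 1.0
    # illustrative bump: plateau on [-eps, eps], cosine taper, support [-q*eps, q*eps]
    phi0 = lambda x: np.where(np.abs(x) <= eps, 1.0,
             np.where(np.abs(x) >= q*eps, 0.0,
                      0.5*(1.0 + np.cos(np.pi*(np.abs(x) - eps)/(eps*(q - 1.0))))))
    xs = np.linspace(-300.0, 300.0, 24001)
    for n in (2, 4, 6):
        err = np.max(np.abs(psi_trunc(xs, phi0, q) - phi_n(xs, phi0, q, n)))
        print(f"n = {n}: grid max |psi - phi_n| = {err:.4f}  (bound 5*q^-(n+1) = {5*q**-(n+1):.4f})")
    val, bnd = euler_phi_pentagonal(np.exp(-np.pi))
    print(f"phi(e^-pi) ~ {val:.6f} (truncation error below {bnd:.1e})")
```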
The Taylorlets we use for the images have 5 vanishing moments of second order in $x_1$-direction and are of the following form: $$\begin{aligned} \tau(x)&=g(x_1)\cdot h(x_2), \\ h(x_2)&=e^{-x_2^2}, \\ g(x_1)&=\phi_{10}\left(\sqrt{|x_1-\tfrac 18|}\right), \\ \phi_{10}(t)&=\prod_{m=0}^9 ({\mathrm{Id}}-2^{-(m+1)}D_{\frac 12})\phi_0(t), \\ \phi_0(t)&=\begin{cases} 1, & |t|\le \frac 14, \\ (128t^3-144t^2+48t-4), & |t|\in\left(\frac 14,\frac 12\right), \\ 0, & |t|\ge \frac 12. \end{cases}\end{aligned}$$ As discussed earlier, the restrictiveness is important for the lower bound of the Taylorlet transform and hence for detecting the Taylor coefficients of the singularity function. In the above definition of $\tau$ the shift in the argument of $g(x_1)=\phi_{10}\left(\sqrt{|x_1-\tfrac 18|}\right)$ provides the restrictiveness of the Taylorlet while preserving the vanishing moments, as the following lemma shows. It must be mentioned, though, that the Taylorlet above is only in $C^1({\ensuremath{\mathbb{R}}}^2)$, but not $C^\infty({\ensuremath{\mathbb{R}}}^2)$, because $\phi_0$ is not $C^2$ in the points $\pm\tfrac 14$ and $\pm\tfrac 12$. \[rest\] Let $q>1$, let $\phi_0$ be of the form given in Theorem \[construction\] and let $$\phi_{m+1}=\left({\mathrm{Id}}-q^{-(m+1)}D_{\frac 1q}\right)\phi_m$$ for all $m\in{\ensuremath{\mathbb{N}}}$. If 1. $t_0\in\left(-{\ensuremath{\varepsilon}}^{v_n},{\ensuremath{\varepsilon}}^{v_n}\right)$, $M\in{\ensuremath{\mathbb{N}}}$ and $g(t):=\phi_{Mv_n}\left(\sqrt[v_n]{|t-t_0|}\right)$ for all $t\in{\ensuremath{\mathbb{R}}}$, 2. $h\in{\mathcal{S}}({\ensuremath{\mathbb{R}}})$ with $\int_{\ensuremath{\mathbb{R}}}h(t)dt\ne 0$, then $\tau=g\otimes h$ has $M$ vanishing moments of order $n$ in $x_1$-direction and is restrictive. Let $\tilde\phi(t):=\phi_{Mv_n}\left(\sqrt[v_n]{|t|}\right)$. By construction, there exists a $c>0$ such that $$\label{c} \tilde\phi\big|_{\left[-{\ensuremath{\varepsilon}}^{v_n},{\ensuremath{\varepsilon}}^{v_n}\right]} \equiv c.$$ Hence, with $g=\tilde\phi(\ \cdot-t_0)$ and $t_0\in\left(-{\ensuremath{\varepsilon}}^{v_n},{\ensuremath{\varepsilon}}^{v_n}\right)$ we obtain that $$g\big|_{\left[t_0-{\ensuremath{\varepsilon}}^{v_n},t_0+{\ensuremath{\varepsilon}}^{v_n}\right]} \equiv c,$$ and $g$ is still a Schwartz function. Furthermore, Proposition \[sqrt\] and Lemma \[vanmom\] deliver that $g$ has $M$ vanishing moments of order $n$. For the restrictiveness of $\tau$, we have to show that (i) $g(0)\ne 0$ and $\int_0^\infty g(t)t^jdt\ne 0$ for all $j\in\{0,\ldots,r-1\}$, and (ii) $\int_{\ensuremath{\mathbb{R}}}h(t)dt\ne 0$. Property (ii) is already given by condition 2 of the lemma. Hence, it remains to show (i). Lemma \[vanmom\] in addition states that $\int_{{\ensuremath{\mathbb{R}}}_\pm}\phi_{Mv_n}(t)t^mdt=0$ for all $m\in\{0,\ldots,Mv_n-1\}$. We will now show that a similar property is true for $\tilde\phi$. The variable substitution $t= u^{v_n}$ delivers $$\begin{aligned} \int_0^\infty\tilde\phi(t)t^mdt &= \int_0^\infty\phi_{Mv_n}\left(\sqrt[v_n]{|t|}\right)t^mdt \\ &= \int_0^\infty \phi_{Mv_n}(u) u^{mv_n}\cdot v_n\cdot u^{v_n-1} du \\ &=v_n\cdot\int_0^\infty \phi_{Mv_n}(u)u^{(m+1)v_n-1} du=0\end{aligned}$$ for all $m\in\{0,\ldots,M-1\}$ due to Lemma \[vanmom\]. With this result, we can show property (i).
For $m\in\{0,\ldots,M-1\}$, we have $$\begin{aligned} \int_0^\infty g(t)t^mdt&=\int_0^\infty \tilde\phi(t-t_0)t^mdt \\ &=\int_{-t_0}^\infty \tilde\phi(u)(u+t_0)^m du \\ &= \sum_{k=0}^m \binom mk t_0^{m-k}\cdot\int_{-t_0}^\infty \tilde\phi(u)u^k du \\ &= \sum_{k=0}^m \binom mk t_0^{m-k}\cdot \Bigg(\int_{-t_0}^0\underbrace{\tilde\phi(u)}_{\equiv c\text{ for }|u|\le t_0\quad (\ref{c})}u^kdu+\underbrace{\int_0^\infty \tilde\phi(u)u^k du}_{=0} \Bigg) \\ &= c\cdot\sum_{k=0}^m \binom mk t_0^{m-k}\int_{-t_0}^0u^k du \\ &= c\cdot\sum_{k=0}^m \binom mk t_0^{m-k}\cdot\frac 1{k+1}\cdot[-(-t_0)^{k+1}] \\ &= -\frac {c\cdot t_0^{m+1}}{m+1}\cdot \sum_{k=0}^m \binom {m+1}{k+1} (-1)^{k+1} \\ &= -\frac {c\cdot t_0^{m+1}}{m+1}\cdot\Bigg( \underbrace{\sum_{k=0}^{m+1} \binom {m+1}{k} (-1)^{k}}_{=0} - \binom{m+1}0 (-1)^0\Bigg) \\ &= \frac{c\cdot t_0^{m+1}}{m+1}\ne 0.\end{aligned}$$ The functions we will analyze with the Taylorlet transform are of the form $$f(x)={\mathbbm{1}}_{{\ensuremath{\mathbb{R}}}_+}\left(x_1-q(x_2)\right),$$ where $q\in C^\infty({\ensuremath{\mathbb{R}}})$ is the singularity function of $f$. In order to efficiently compute the Taylorlet transform of such functions, we utilize the following lemma allowing a one-dimensional and thus faster integration. \[1d\] Let $\tau$ be of the form given in Lemma \[rest\], let $f(x)={\mathbbm{1}}_{{\ensuremath{\mathbb{R}}}_+}\left(x_1-q(x_2)\right)$ with $q\in C^\infty({\ensuremath{\mathbb{R}}})$ and $n\in{\ensuremath{\mathbb{N}}}$, $n\ge 2$. Furthermore, let $$G(w):=av_n\cdot\int_{\sqrt[v_n]{|w-t_0|}}^\infty\phi_{Mv_n}(u)|u|^{v_n-1}du$$ for $w\in{\ensuremath{\mathbb{R}}}$. Moreover, let $T:=G\otimes h$ and let $T_{ast}(x):=T\left(A_\frac 1a S_{-s}(x-t e_2)\right)$ for $x\in{\ensuremath{\mathbb{R}}}^2$. Then $${\mathcal{T}}_{\tau}f(a,s,t)=\int_{\ensuremath{\mathbb{R}}}T_{ast}{\begin{pmatrix}q(u) \\ u\end{pmatrix}} du$$ for all $a>0$, $s\in{\ensuremath{\mathbb{R}}}^{n+1}$, $t\in{\ensuremath{\mathbb{R}}}$. Without loss of generality, let $t=0$.
By rewriting the Taylorlet transform we obtain $$\begin{aligned} \mathcal{T}_\tau f(a,s,0) &= \int_{{\ensuremath{\mathbb{R}}}^2} \tau{\begin{pmatrix}\frac 1a\cdot \left[x_1-\sum_{k=0}^n \frac{s_kx_2^k}{k!}\right] \\ \frac{x_2}{a^\alpha}\end{pmatrix}} {\mathbbm{1}}_{{\ensuremath{\mathbb{R}}}_+}\left(x_1-q(x_2)\right)dx \\ &= \int_{\ensuremath{\mathbb{R}}}\int_{q(x_2)}^{\infty} g\left(\frac 1a\cdot \left[x_1-\sum_{k=0}^n \frac{s_kx_2^k}{k!}\right]\right)dx_1 h\left(\frac{x_2}{a^\alpha}\right)dx_2 \\ &= \int_{\ensuremath{\mathbb{R}}}\int_{q(x_2)}^{\infty} \phi_{Mv_n}\left(\sqrt[v_n]{\left|\frac 1a\cdot \left[x_1-\sum_{k=0}^n \frac{s_kx_2^k}{k!}\right]-t_0\right|}\right) dx_1 h\left(\frac{x_2}{a^\alpha}\right)dx_2.\end{aligned}$$ Performing the variable substitution $$x={\begin{pmatrix}{\mathrm{sgn}}(y_1)\cdot\left[a(y_1^{v_n}+t_0)+\sum_{k=0}^n\frac{s_kx_2^k}{k!}\right] \\ x_2\end{pmatrix}},\quad \frac{dx_1}{dy_1}=av_n|y_1|^{v_n-1},$$ delivers $$\begin{aligned} \mathcal{T}_\tau f(a,s,0) &= av_n\cdot\int_{\ensuremath{\mathbb{R}}}\int_{\sqrt[v_n]{\left|\frac 1a\cdot \left[q(y_2)-\sum_{k=0}^n \frac{s_ky_2^k}{k!}\right]-t_0\right|}}^\infty \phi_{Mv_n}(y_1)|y_1|^{v_n-1} dy_1 h\left(\frac{y_2}{a^\alpha}\right)dy_2 \\ &= \int_{\ensuremath{\mathbb{R}}}G\left(\frac 1a\cdot \left[q(y_2)-\sum_{k=0}^n \frac{s_ky_2^k}{k!}\right]\right) h\left(\frac{y_2}{a^\alpha}\right)dy_2 \\ &= \int_{\ensuremath{\mathbb{R}}}T_{as0}{\begin{pmatrix}q(y_2) \\ y_2\end{pmatrix}} dy_2.\end{aligned}$$ For the implementation of the Taylorlet transform in Matlab, we utilized Lemma \[1d\] and employed the one-dimensional adaptive Gauss–Kronrod quadrature routine `quadgk` for the evaluation of the integrals. Table 4: Plots of the Taylorlet transform $\mathcal{T}f(a,s,0)$ for $f_1(x)={\mathbbm{1}}_{{\ensuremath{\mathbb{R}}}_+}(x_1- \sin x_2)$ (upper row), $f_2(x)={\mathbbm{1}}_{{\ensuremath{\mathbb{R}}}_+}\left(x_1-e^{x_2}\right)$ (middle row) and $f_3(x)={\mathbbm{1}}_{B_1}(x)$ (bottom row). The vertical axis shows the dilation parameter in a logarithmic scale $-\log_2a$. The horizontal axis shows the location $s_0$ (left), the slope $s_1$ (center) and the parabolic shear $s_2$ (right). The respective true value is indicated by the vertical red line. The values of $\alpha$ change with $s_{i}$: for $s_{0}$ we use $\alpha=1.01$, for $s_{1}$ we have $\alpha=0.51$ and during the search for $s_{2}$ we set $\alpha=0.34$. The Taylorlet transform was computed for points $(a,s_i)$ on a $300\times 300$ grid. We can observe the paths of the local maxima with respect to the respective shearing variable as they converge to the correct related geometric value through the scales. Due to the vanishing moment conditions of higher order, the local maxima display a fast convergence to the correct value. The bottom left image shows that the Taylorlet transform of $f_3$ exhibits the two singularities $s_0=\pm1$, since this function is not of the form ${\mathbbm{1}}_{{\ensuremath{\mathbb{R}}}_+}(x_1-q(x_2))$. Table 4 shows plots of the Taylorlet transform $\mathcal{T}f(a,s,0)$ of the three functions $f_1(x)={\mathbbm{1}}_{{\ensuremath{\mathbb{R}}}_+}(x_1- \sin x_2)$, $f_2(x)={\mathbbm{1}}_{{\ensuremath{\mathbb{R}}}_+}\left(x_1-e^{x_2}\right)$ and $f_3(x)={\mathbbm{1}}_{B_1}(x)$. The vertical axis describes the dilation parameter in a binary logarithmic scale while the horizontal axis shows the location, the slope and the parabolic shear, respectively.
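The one-dimensional evaluation of Lemma \[1d\] can also be mirrored outside Matlab. The following Python sketch (not part of the original implementation) uses the concrete order-2 Taylorlet above ($\phi_{10}$ built from the cubic bump, $v_n=2$, $M=5$, $t_0=\tfrac 18$, dilation base $q=2$, $h(x)=e^{-x^2}$) and replaces the adaptive Gauss–Kronrod rule `quadgk` by a plain trapezoidal rule; the grid sizes, the integration window for the Gaussian window and the demo parameters are illustrative assumptions.

```python
import numpy as np
from math import factorial

Q, VN, M, T0 = 2.0, 2, 5, 0.125     # dilation base, v_n, vanishing moments, shift t0

def phi0(t):
    """Cubic bump from the example: 1 on [-1/4,1/4], cubic taper, 0 beyond 1/2."""
    t = np.abs(t)
    return np.where(t <= 0.25, 1.0,
           np.where(t >= 0.5, 0.0, 128*t**3 - 144*t**2 + 48*t - 4))

def qpoch(z, q, k):
    """(z; q)_k = prod_{l=0}^{k-1} (1 - z*q^l)."""
    return np.prod([1.0 - z*q**l for l in range(k)]) if k else 1.0

def phi_m(u, m=M*VN, q=Q):
    """phi_m as a finite sum of dilates (dilation convention D_a f(x) = f(a*x))."""
    return sum(qpoch(q**-m, q, k) / qpoch(q, q, k) * phi0(u / q**k)
               for k in range(m + 1))

def taylorlet_transform(a, s, qfun, alpha):
    """T_tau f(a, s, 0) for f = indicator{x_1 > qfun(x_2)}, per Lemma [1d]."""
    # profile G(w) = a*v_n * int_{|w-t0|^(1/v_n)}^inf phi_m(u)|u|^(v_n-1) du,
    # tabulated once; phi_m is supported in |u| <= 0.5*q^(M*v_n)
    umax = 0.5 * Q**(M*VN)
    u = np.linspace(0.0, umax, 100001)
    fu = phi_m(u) * np.abs(u)**(VN - 1)
    cum = np.concatenate(([0.0], np.cumsum(0.5*(fu[1:] + fu[:-1])*np.diff(u))))
    tail = cum[-1] - cum
    G = lambda w: a*VN*np.interp(np.abs(w - T0)**(1.0/VN), u, tail, right=0.0)
    # outer integral over y against the window h(y/a^alpha) = exp(-(y/a^alpha)^2)
    y = np.linspace(-6*a**alpha, 6*a**alpha, 4001)
    poly = sum(s[k]*y**k/factorial(k) for k in range(len(s)))
    vals = G((qfun(y) - poly)/a) * np.exp(-(y/a**alpha)**2)
    return np.sum(0.5*(vals[1:] + vals[:-1])*np.diff(y))

if __name__ == "__main__":
    # decay for f_1 with the correct Taylor data s = (0, 1, 0) at t = 0
    for a in (0.5, 0.1, 0.02):
        print(a, taylorlet_transform(a, [0.0, 1.0, 0.0], np.sin, alpha=0.51))
```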
These plots represent a search for successive Taylor coefficients of the respective singularity functions around the origin. This is possible by exploiting the decay results recalled above, which state that the decay rate of the Taylorlet transform $\mathcal{T} f(a,s,t)$ for $a\to 0$ depends on $\alpha$ and on the highest $k\in\{0,\ldots,n\}$ for which the condition $(A_k)$ does not hold. In order to find $q(t)$, we hence compute the Taylorlet transform with $\alpha>1$ first. The choice of $\alpha$ and the restrictiveness of the Taylorlet, as guaranteed by Lemma \[rest\], ensure a decay rate of $$\mathcal{T}f(a,s_{0},t)\sim 1\quad\text{for } a\to 0$$ for $s_{0}=q(t)$. The following procedure is an adaptation of the wavelet modulus maxima method [@MaHw92]. We observe the paths of the local maxima with respect to $s_0$ for decreasing $a$. As in the modulus maxima method, the local extrema of the Taylorlet transform converge to $s_0=q(t)$ for decreasing $a$. After finding $q(t)$, we fix $s_0=q(t)$, choose an $\alpha\in\left(\frac 12,1\right)$ and repeat the procedure to find $q'(t)$ and to search for $q''(t)$ with an $\alpha\in\left(\frac 13,\frac12\right)$ in the final step. For a better visibility of the local maxima, we normalized the absolute value of the Taylorlet transform such that the maximal value in each scale is 1. Conclusion and Discussion ========================= In this article we utilized methods from q-calculus in the construction of a function $g\in{\mathcal{S}}_1^*({\ensuremath{\mathbb{R}}})$ which is additionally constant around the origin (Theorem \[construction\]). This allows for the creation of an analyzing Taylorlet of arbitrary order with infinitely many vanishing moments (Proposition \[sqrt\]). In the explicit formula of the constructed function, the Euler function appears and exhibits an inherent connection between the presented method and the field of combinatorics. In Section 5 we presented a numerical analysis of the evaluation of the constructed function by an approximation which preserves several important properties, such as smoothness, decay rate and some, although not all, of the vanishing moments. In numerical experiments we illustrated that our mathematical results translate into practice. In the following consideration let $\tau$ be a restrictive analyzing Taylorlet of order $n$ with infinitely many vanishing moments in $x_1$-direction. As Corollary \[cor\] states, such a Taylorlet $\tau$ allows for a precise determination of the first $n+1$ Taylor coefficients of the singularity function $q$ by means of the decay rate of the Taylorlet transform. Interestingly, the first two Taylor coefficients are deeply linked to the concept of the wavefront set, which includes localization and directionality of singularities. Furthermore, as was shown by Grohs, the shearlet transform allows for a resolution of the wavefront set if the shearlet is a Schwartz function and exhibits infinitely many vanishing moments in $x_1$-direction [@gr11 Thm 6.4]. Hence, the Taylorlet $\tau$, similarly, is a good starting point for a generalization of the wavefront set which additionally includes local curvature and higher-order geometric information of the distributed singularity.
In a similar fashion to the characterization of the classical wavefront set by the continuous shearlet transform, we can define a generalized wavefront set of a tempered distribution $f\in{\mathcal{S}}'({\ensuremath{\mathbb{R}}}^2)$ by using a Taylorlet $\tau$ as described above and $\alpha\in\left(\frac 1{n+1},\frac 1n\right)$ via $$\begin{aligned} \mathcal{WF}_n(f)^c=\big\{&(t,s_0,\ldots,s_n)\in{\ensuremath{\mathbb{R}}}^{n+2}:\,\exists\text{ open neighborhood }U\text{ of }(t,s_0,\ldots,s_n):\\ &\mathcal{T}_\tau f(a,\cdot,\cdot)\text{ decays superpolynomially fast for }a\to 0\text{ in }U \text{ globally}\big\}.\end{aligned}$$ This concept would enable a more precise description and analysis of singularities than the classical definition of the wavefront set allows. To the knowledge of the authors, this idea has not been pursued to date; a thorough investigation is still needed, but it is beyond the scope of this paper. Acknowledgements {#acknowledgements .unnumbered} ================ The first author expresses his gratitude for the support by the DFG project FO 792/2-1 “Splines of complex order, fractional operators and applications in signal and image processing”, awarded to Brigitte Forster. The work of the second author was supported by Portuguese funds through the CIDMA - Center for Research and Development in Mathematics and Applications, and the Portuguese Foundation for Science and Technology (“FCT–Fundação para a Ciência e a Tecnologia”), within project UID/MAT/0416/2013 and the sabbatical grant SFRH/BSAB/135157/2017.
{ "pile_set_name": "ArXiv" }
--- abstract: 'Iterative linear-quadratic (ILQ) methods are widely used in the nonlinear optimal control community. Recent work has applied similar methodology in the setting of multi-player general-sum differential games. Here, ILQ methods are capable of finding local Nash equilibria in interactive motion planning problems in real-time. As in most iterative procedures, however, this approach can be sensitive to initial conditions and hyperparameter choices, which can result in poor computational performance or even unsafe trajectories. In this paper, we focus our attention on a broad class of dynamical systems which are feedback linearizable, and exploit this structure to improve both algorithmic reliability and runtime. We showcase our new algorithm in three distinct traffic scenarios, and observe that in practice our method converges significantly more often and more quickly than was possible without exploiting the feedback linearizable structure.' author: - 'David Fridovich-Keil\*, Vicenç Rubies-Royo\*, and Claire J. Tomlin [^1][^2]' bibliography: - 'references.bib' title: | **An Iterative Quadratic Method for General-Sum Differential Games\ with Feedback Linearizable Dynamics** --- [^1]: Department of EECS, UC Berkeley, [](mailto:[email protected]). $^*$ indicates equal contribution. [^2]: This research is supported by an NSF CAREER award, the Air Force Office of Scientific Research (AFOSR), NSF’s CPS FORCES and VeHICaL projects, the UC-Philippine-California Advanced Research Institute, the ONR BRC grant for Multibody Systems Analysis, a DARPA Assured Autonomy grant, and the SRC CONIX Center. D. Fridovich-Keil is also supported by an NSF Graduate Research Fellowship.
{ "pile_set_name": "ArXiv" }
--- abstract: 'In this work, we combine magnetization, pressure dependent electrical resistivity, heat-capacity, $^{63}$Cu Nuclear Magnetic Resonance (NMR) and X-ray resonant magnetic scattering experiments to investigate the physical properties of the intermetallic CeCuBi$_{2}$ compound. Our single crystals show an antiferromagnetic ordering at $T_{\rm N}\simeq$ 16 K and the magnetic properties indicate that this compound is an Ising antiferromagnet. In particular, the low temperature magnetization data revealed a spin-flop transition at $T$ = 5 K when magnetic fields of about $5.5$ T are applied along the $c$-axis. Moreover, the X-ray magnetic diffraction data below $T_{\rm N}$ revealed a commensurate antiferromagnetic structure with propagation wavevector $(0~0~\frac{1}{2}$) with the Ce$^{3+}$ moments oriented along the $c$-axis. Furthermore, our heat capacity, pressure dependent resistivity, and temperature dependent $^{63}$Cu NMR data suggest that CeCuBi$_{2}$ exhibits a weak heavy fermion behavior with strongly localized Ce$^{3+}$ 4$f$ electrons. We thus discuss a scenario in which both the anisotropic magnetic interactions between the Ce$^{3+}$ ions and the tetragonal crystalline electric field effects are taking into account in CeCuBi$_{2}$.' author: - 'C. Adriano$^{1}$' - 'P. F. S. Rosa$^{1,2}$' - 'C. B. R. Jesus$^{1}$' - 'J. R. L. Mardegan$^{1}$' - 'T. M. Garitezi$^{1}$' - 'T. Grant$^{2}$' - 'Z. Fisk$^{2}$' - 'D. J. Garcia$^{3}$' - 'A. P. Reyes$^{4}$' - 'P. L. Kuhns$^{4}$' - 'R. R. Urbano$^{1}$' - 'C. Giles$^{1}$' - 'P. G. Pagliuso$^{1}$' bibliography: - 'basename of .bib' title: 'Physical properties and magnetic structure of the intermetallic CeCuBi$_{2}$ compound' --- INTRODUCTION ============ A series of rare-earth based intermetallic compounds is usually of great interest to explore the interplay between Ruderman-Kittel-Kasuya-Yoshida (RKKY) magnetic interaction, crystalline electrical field (CEF) and the Fermi surface effects frequently present in these materials. The Ce-based materials can have especially interesting physical properties that arise from the combination of these effects with a strong hybridization between the Ce$^{3+}$ 4$f$ and the conduction electrons. Therefore, these materials may present a variety of non-trivial ground states, including unconventional superconductivity (SC) and non-Fermi-liquid behavior frequently exhibited in the vicinity of a magnetically ordered state [@Review; @Piers_JPCM_2001]. Interestingly, some of these properties, such as the concomitant observation of unconventional SC and heavy fermion (HF) behavior, seem to be favored in systems with tetragonal structure. Well known examples are the Ce-based heavy fermions superconductors Ce$M$In$_5$ ($M$ = Co, Rh, Ir), Ce$_2M$In$_8$ ($M$ = Co, Rh, Pd), CePt$_2$In$_7$ ($M$ = Co, Rh, Ir, Pd) [@Thompson_Fisk_review115; @NiniNP; @pagliuso1; @eric; @curro] and CeCu$_2$Si$_2$ [@Steglich_CeCu2Si2; @Stockert]. Recent attention has been given to the Ce$T$X$_2$ family ($T$ = transition metal, $X$ = pnictogen) and in particular to the Ce$T$Sb$_2$ compounds [@CeTSb2], which host ferromagnetic members with complex magnetic behavior, such as Ce(Ni,Ag)Sb$_2$. Their physical properties have motivated the investigation of the parent Ce$T$Bi$_2$ compounds [@CeTBi2; @CeCuBi2_Acta; @CeNiBi2_Takabatake], although studies on the latter are rather rare. 
Thamizhavel *et al.* [@CeTBi2] have shown that CeCuBi$_2$ orders antiferromagnetically with a Néel temperature of $T_{\rm N}$ = 11.3 K and an easy axis along the $c$-direction. Nevertheless, no detailed microscopic investigation regarding the relevant magnetic interactions have been presented so far. It is also intriguing that no HF superconductors have ever been discovered within the Ce$T$X$_2$ family. Another remarkable result is the breakdown of the De Gennes scaling revealed by non-Kondo members of the $RE$AgBi$_2$ [@ReAgBi2_Petrovic] and $RE$CuBi$_2$ [@ReCuBi2_Camilo] ($RE$ = rare earth) families. This usually indicates a complex and non-trivial competition between RKKY interactions and tetragonal CEF [@pagliuso2; @Pagliuso_JAP2006; @Serrano2]. In this work we report the physical properties and magnetic structure of CeCuBi$_2$ single crystals. CeCuBi$_2$ is an intermetallic compound that crystallizes in the tetragonal ZrCuSi$_2$-type structure (P4/nmm [@CeCuBi2_Acta] space group and lattice parameters $a$ = 4.555(4) $\rm{\AA}$ and $c$ = 9.777(8) $\rm{\AA}$) with a stacking arrangement of CeBi-Cu-CeBi-Bi layers. Our results revealed an antiferromagnetic ordering at $T_{\rm N}$ = 16 K, a higher value than reported previously [@CeTBi2; @CeCuBi2_Acta] suggesting our crystals are of higher quality. In fact, we also found that the Néel temperature is suppressed in Cu-deficient crystals. For example, the compound CeCu$_{0.6}$Bi$_2$ orders antiferromagnetically at $T_{\rm N}$ = 12 K. The magnetic structure determination of CeCuBi$_2$ revealed a propagation vector (0 0 $\frac{1}{2}$) with the magnetic moments aligned along the $c$-axis. A systematic analysis of the magnetization and specific heat data within the framework of a mean field theory with influence of anisotropic first-neighbors interaction and tetragonal CEF [@Pagliuso_JAP2006] allowed us to extract the CEF scheme for CeCuBi$_2$. It also led us to estimate the values of the anisotropic RKKY exchange parameters between the Ce$^{3+}$ ions. In addition, the analyses of electrical resistivity under hydrostatic pressure and $^{63}$Cu Nuclear Magnetic Resonance (NMR) data suggest a scenario where CeCuBi$_2$ might display a weak heavy fermion behavior with rather strong localized Ce$^{3+}$ $4f$ electrons. EXPERIMENTAL DETAILS ==================== Single crystals of CeCuBi$_{2}$ and LaCuBi$_2$ (a non-magnetic reference) were grown from Bi-flux, as reported previously [@ReCuBi2_Camilo]. The crystallographic structure was verified by X-ray powder diffraction and the crystal orientation was determined by the usual Laue method. The system was submitted to elemental analysis using a commercial Energy Dispersive Spectroscopy (EDS) microprobe and a commercial Wavelength Dispersive Spectroscopy (WDS). For oxygen free surface samples the stoichiometry is 1:1:2 with an error of 5%. Magnetization measurements were performed using a commercial superconducting quantum interference device (SQUID). The specific heat was measured using a commercial small mass calorimeter that employs a quasi-adiabatic thermal relaxation technique. The in-plane electrical resistivity was obtained using a low-frequency ac resistance bridge and a four-contact configuration. Electrical-resistivity measurements under hydrostatic pressure were carried out in a clamp-type cell using Fluorinert as a pressure transmitting medium. Pressure was determined by measuring the superconducting critical temperature of a Pb sample. 
X-ray resonant magnetic scattering (XRMS) measurements of CeCuBi$_2$ were carried out at the 4-ID-D beamline at the Advanced Photon Source of the Argonne National Laboratory-IL. The sample was mounted on a cryostat installed in a four-circle diffractometer with the *a*-axis parallel to the beam direction. This configuration allowed $\sigma$-polarized incident photons in the sample. The measurements were performed using polarization analysis, with a LiF(220) crystal analyzer, appropriate for the energy of Ce-L$_2$ absorption edge (6164 eV). NMR experiments were performed at the National High Magnetic Field Laboratory (NHMFL) in Tallahassee-FL. A CeCuBi$_2$ single crystal was mounted on a low temperature NMR probe equipped with a goniometer, which allowed a fine alignment of the crystallographic axes with the external magnetic field. A silver wire NMR coil was used in this experiment. The field-swept $^{63}$Cu NMR spectra ($I$ = 3/2, $\gamma_N$/2$\pi$ = 11.285 MHz/T) were obtained by stepwise summing the Fourier transform of the spin-echo signal. RESULTS AND DISCUSSIONS ======================= Figures \[fig:Fig1\]a and 1b show the temperature dependence of the magnetic susceptibility $\chi(T)$ when the magnetic field ($H$ = 1 kOe) is applied parallel $\chi_{\parallel}$ (panel 1a) and perpendicular $\chi_{\bot}$ (panel 1b) to the crystallographic $c$-axis. These data show an antiferromagnetic (AFM) order at $T_{\rm N}\simeq$ 16 K and a low temperature magnetic anisotropy consistent with an easy axis along the $c$-direction. The ratio $\chi_{\parallel}/\chi_{\perp} \approx$ 4.5 at $T_{\rm N}$ is mainly determined by the tetragonal CEF splitting and reflects the low-$T$ Ce$^{3+}$ single ion anisotropy. The inverse of the polycrystalline 1/$\chi_{poly}(T)$ is presented in Fig. \[fig:Fig1\]c. A Curie-Weiss fit to this averaged data for $T>$ 150 K (dashed line) yields an effective magnetic moment $\mu_{eff}$ = 2.5(1) $\mu_{B}$ (in agreement with the theoretical value of $\mu_{eff}$ = 2.54 $\mu_{B}$ for Ce$^{3+}$) and a paramagnetic Curie-Weiss temperature $\theta_{p}$ = -23(1) K. Figure \[fig:Fig1\]d displays the low temperature magnetization as a function of the applied magnetic field $M(H)$. The large magnetic anisotropy of CeCuBi$_2$ is also evident in these data. We found an abrupt spin-flop transition from an antiferromagnetic to a ferromagnetic (FM) phase at $H\approx$ 55 kOe when the magnetic field is applied parallel to the $c$-axis (open circles) whilst a linear behavior is observed when the field is applied perpendicular to the $c$-axis (open triangles) for fields up to $H$ = 70 kOe. Interestingly, the $M(H)$ data show a small hysteresis around $H\sim$ 50 kOe, suggesting a first order character for this field induced phase transition. The solid lines through the data points in Figs. 1a, 1b and 1d represent the best fits using a CEF mean field model discussed in detail ahead. ![Temperature dependence of the magnetic susceptibility measured with $H$ = 1 kOe applied (a) parallel $\chi_{\parallel}$, and (b) perpendicular $\chi_{\bot}$ to the $c$-axis. (c) Inverse of the polycrystalline average 1/$\chi_{poly}(T)$. The green-dashed line represents a Curie-Weiss fit for $T >$ 150 K. (d) Magnetization as a function of the applied magnetic field parallel (open circles) and perpendicular (open triangles) to the $c$-axis at $T$ = 5 K. The solid lines through the experimental points in Figs. 
1a, 1b and 1d are best fits of the data using the CEF mean field model discussed in the text.[]{data-label="fig:Fig1"}](Fig1.eps){width="1.0\columnwidth"} ![(a) $C(T)/T$ of CeCuBi$_2$ (open squares) and LaCuBi$_2$ (solid line) as a function of temperature. The inset shows the SC transition at $T\sim$ 1.3 K observed in the low-$T$ specific heat and electrical resistivity data of LaCuBi$_2$. (b) $C_{mag}(T)/T$ as a function of temperature. The solid line represents a Schottky-type anomaly resulting from the tetragonal CEF scheme.[]{data-label="fig:Fig2"}](Fig2.eps){width="0.95\columnwidth"} The total specific heat divided by the temperature $C(T)/T$ as a function of temperature for CeCuBi$_2$ (open squares) is shown in Fig. 2a. The peak of $C(T)/T$ defines $T_{\rm N}$ = 16 K, consistent with the AFM transition temperature observed in the magnetization measurements. Fig. 2b presents the magnetic specific heat $C_{mag}(T)/T$ of CeCuBi$_2$ (solid squares) after subtracting the lattice contribution from the non-magnetic reference LaCuBi$_2$ compound (solid line in Fig. \[fig:Fig2\]a). The magnetic entropy recovered at $T_{\rm N}$, obtained by integrating $C_{mag}(T)/T$ in this temperature range (not shown), was found to be about $80\%$ of R$ln$2 (R $\sim$ 8.3 J/mol K). This suggests that the magnetic moments of the Ce$^{3+}$ CEF ground state are slightly compensated due to the Kondo effect. However, the presence of magnetic frustration and short-range order may also explain the magnetic entropy above $T_{\rm N}$. Yet from the $C_{mag}(T)/T$ data above $T_{\rm N}$, it is possible to estimate the Sommerfeld coefficient $\gamma$ by performing a simple entropy-balance construction \[S($T_{\rm N}$ - $\epsilon$) = S($T_{\rm N}$ + $\epsilon$)\] [@Hegger_PRL2000]. Thus, one obtains $\gamma\sim$ 50-150 mJ/mol K$^2$, very consistent with the partly compensated magnetic moment of the CEF doublet at the transition. The inset of Fig. 2a highlights the superconducting transition found for LaCuBi$_2$ at $T\sim$ 1.3 K. In fact, conventional superconductivity at similar temperatures has been previously reported for isostructural compounds of the La$M$Sb$_2$ family ($M$ = Ni, Cu, Pd and Ag) [@Muro]. The solid line in Fig. 2b represents a Schottky-type anomaly resulting from the tetragonal CEF scheme obtained from our analysis, as discussed in the following. In order to establish a plausible scenario for the magnetic properties of CeCuBi$_2$, we have analyzed the data presented in Figs. 1 and 2 using a mean field model including the anisotropic interactions between nearest neighbors as well as the tetragonal CEF Hamiltonian. For the complete description of the theoretical model, see reference [@Pagliuso_JAP2006]. This model was used to simultaneously fit $\chi(T)$, $M(H)$ and $C_{mag}(T)/T$ data for $T> 20$ K as a constraint. The best fits yield the CEF parameters B$^{0}_{2}$ = -7.67 K, B$^{0}_{4}$ = 0.18 K and B$^{4}_{4}$ = 0.11 K, and two RKKY exchange parameters, $z_{AFM}*J_{AFM}$ = 1.12 K and $z_{FM}*J_{FM}$ = -1.18 K, where $z_{AFM}$ = 2 ($z_{FM}$ = 4) is the number of Ce$^{3+}$ nearest neighbors with an AFM (FM) coupling, in this case along the $c$-axis (in the $ab$-plane). It is worth emphasizing that the fits converged only when two distinct $J_{RKKY}$ exchange parameters were considered.
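As an illustration of how the quoted CEF parameters translate into the level scheme and the Schottky anomaly, the following Python sketch diagonalizes only the single-ion tetragonal CEF Hamiltonian for $J=5/2$ built from the standard Stevens operators; the anisotropic mean-field exchange and Zeeman terms used in the actual fits are omitted, so this reproduces just the zero-field level scheme and the corresponding Schottky contribution. The grid and printed quantities are illustrative.

```python
import numpy as np

# Single-ion tetragonal CEF Hamiltonian for Ce3+ (J = 5/2) with the Stevens
# parameters quoted above (B20 = -7.67 K, B40 = 0.18 K, B44 = 0.11 K).
J = 2.5
m = np.arange(J, -J - 1.0, -1.0)                         # 5/2, 3/2, ..., -5/2
Jz = np.diag(m)
Jp = np.diag(np.sqrt(J*(J + 1) - m[1:]*(m[1:] + 1)), 1)  # raising operator J+
Jm = Jp.T
Id = np.eye(int(2*J + 1))

O20 = 3*Jz@Jz - J*(J + 1)*Id
O40 = 35*np.linalg.matrix_power(Jz, 4) - (30*J*(J + 1) - 25)*Jz@Jz \
      + (3*J**2*(J + 1)**2 - 6*J*(J + 1))*Id
O44 = 0.5*(np.linalg.matrix_power(Jp, 4) + np.linalg.matrix_power(Jm, 4))

B20, B40, B44 = -7.67, 0.18, 0.11                        # in kelvin
H = B20*O20 + B40*O40 + B44*O44

E = np.linalg.eigvalsh(H)
E -= E.min()
print("CEF levels (K):", np.round(E, 1))                 # doublets at 0, ~50 K, ~149 K

def schottky(T):
    """CEF (Schottky) specific heat in units of the gas constant R."""
    w = np.exp(-E[None, :] / T[:, None])
    Z = w.sum(axis=1)
    e1 = (w * E).sum(axis=1) / Z
    e2 = (w * E**2).sum(axis=1) / Z
    return (e2 - e1**2) / T**2

T = np.linspace(2.0, 300.0, 600)
C = 8.314 * schottky(T)                                  # J/(mol K)
print(f"Schottky maximum: {C.max():.2f} J/(mol K) near T = {T[C.argmax()]:.0f} K")
```

The resulting splittings of roughly 50 K and 149 K match the level scheme quoted below, which is a useful consistency check on the fitted parameters.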
Although CeCuBi$_2$ has an AFM ground state at zero field, the presence of FM fluctuations is evidenced by the spin-flop transition in the $M(H)$ data. The extracted parameters resulted in a CEF scheme with a $\Gamma^{(1)}_{7}$ ground state doublet $(0.99|\pm5/2\rangle - 0.06|\mp3/2\rangle)$, a first excited state $\Gamma^{(2)}_{7}$ $(0.06|\pm5/2\rangle + 0.99|\mp3/2\rangle)$ at 50 K and a second excited doublet $\Gamma_{6}$ $(|\pm 1/2\rangle)$ at 149 K. The obtained CEF scheme and exchange constants account for the main features of the data shown in Figures 1 and 2, meaning that the magnetic anisotropy, the spin-flop transition and the Schottky anomaly in $C_{mag}(T)/T$ are all well explained by this model. However, it is important to notice that the CEF parameters obtained from fits to macroscopic measurements may not be precise or unique. An accurate determination of the CEF scheme and its parameters does require a direct measurement by inelastic neutron scattering [@Christianson_CEF115], while the mixed parameters of the wave functions may be compared with an X-ray absorption study [@Severing_Ce115]. Nonetheless, apart from a more precise determination of the CEF parameters, the analysis presented here suggests that the Ce$^{3+}$ 4$f$ electrons behave as localized magnetic moments. The only indication of a possible Kondo compensation is given by the partially recovered magnetic entropy at $T_{\rm N}$ ($\sim 80\%$ of R$ln$2) and by the rough estimate of $\gamma$. Hence, in order to further investigate the presence of Kondo lattice behavior in CeCuBi$_2$, we have also performed pressure-dependent electrical resistivity measurements. Applied pressure is well known to favor the Kondo effect with respect to the RKKY interaction in Ce-based HF compounds [@Review; @Piers_JPCM_2001; @Thompson_Fisk_review115]. ![Temperature dependence of the electrical resistivity for different values of applied hydrostatic pressure up to 18 kbar. The inset shows the variation of $T_{\rm N}$ as a function of pressure.[]{data-label="fig:Fig3"}](Fig3.eps){width="0.95\columnwidth"} The in-plane electrical resistivity $\rho(T,P)$ of CeCuBi$_2$ as a function of temperature for several pressures is summarized in Fig. 3. The electrical resistivity at ambient pressure first decreases with decreasing temperature, but it increases again for temperatures below $\sim$ 150 K. Then, $\rho(T,P=0)$ reaches a maximum at about 50 K and drops abruptly once the magnetic scattering becomes coherent, as typically found for Ce-based HF compounds [@Review; @Piers_JPCM_2001; @Thompson_Fisk_review115]. At lower temperatures, a small kink is observed at $T_{\rm N}$ = 16 K. As pressure is increased, a small increase of the room-$T$ resistivity value is observed together with the decrease of $T_{\rm N}$. This effect can be seen in the inset of Fig. 3. Such suppression of $T_{\rm N}$ as a function of pressure is consistent with the increase of the Kondo effect on the Ce$^{3+}$ $f$-electrons. However, the slope d$T_{\rm N}$/d$P$ is relatively small and might be an indication that the Ce$^{3+}$ $f$-electrons remain rather localized in the studied pressure range. To gain a more microscopic insight into the magnetic interactions present in CeCuBi$_2$, its magnetic structure was investigated by the XRMS technique at the Ce-L$_2$ absorption edge in order to enhance the magnetic signal from the Ce$^{3+}$ ions below $T_{\rm N}$.
Magnetic peaks were observed under the dipolar resonant condition at temperatures below $\sim$ 16 K at reciprocal lattice points forbidden for charge scattering and consistent with a commensurate antiferromagnetic structure with propagation vector $(0~0~\frac{1}{2})$. To determine the possible irreducible magnetic representations $\Gamma^{\rm XRMS}$ associated with the space group $P4/nmm$, the propagation vector $(0~0~\frac{1}{2})$ and a magnetic moment at the Ce sites, we used the program SARA[*h*]{} [@wills2000new]. The magnetic representation can be decomposed in terms of four non-zero irreducible representations (IRs - $\Gamma^{\rm XRMS}_{2}$, $\Gamma^{\rm XRMS}_{3}$, $\Gamma^{\rm XRMS}_{9}$ and $\Gamma^{\rm XRMS}_{10}$) written in Kovalev’s notation [@Kovalev_1993]. Among the possible IRs, $\Gamma^{\rm XRMS}_{2}$ and $\Gamma^{\rm XRMS}_{3}$ correspond to a magnetic structure with the Ce magnetic moments pointing along the $c$-direction, and $\Gamma^{\rm XRMS}_{9}$ and $\Gamma^{\rm XRMS}_{10}$ correspond to Ce magnetic moments lying in the $ab$-plane. Also, $\Gamma^{\rm XRMS}_{2}$ and $\Gamma^{\rm XRMS}_{10}$ correspond to a FM coupling of the Ce ions within the unit cell forming a (+ + - -) sequence (model I), and $\Gamma^{\rm XRMS}_{3}$ and $\Gamma^{\rm XRMS}_{9}$ correspond to an AFM coupling of the Ce ions within the unit cell forming a (+ - - +) sequence (model II), both along the $c$-direction. Figure \[fig:Fig4\] shows typical results for one selected magnetic peak (0 0 5.5). Fig. \[fig:Fig4\]a presents the resonant energy line shape showing a single peak 3 eV below the edge and compatible with a pure dipolar resonance. Fig. \[fig:Fig4\]b shows the intensity as a function of the angle $\theta$, where a pseudo-Voigt fit yields a full width at half maximum of 0.023$^o$. Fig. \[fig:Fig4\]c presents the temperature dependence of the square root of the integrated intensity, which is proportional to the magnetization of the Ce$^{3+}$ ions. A pseudo-Voigt peak shape was used to fit longitudinal $\theta$-2$\theta$ scans in order to obtain the integrated intensities, and no hysteresis was observed by cycling the temperature. The results presented in Figure \[fig:Fig4\] are consistent with a dipolar resonant magnetic scattering peak in which the magnetic intensity is found only in the $\sigma$-$\pi$’ channel and disappears above $T_{\rm N}$. This confirms the magnetic origin of the (0 0 5.5) reflection due to the existence of an AFM structure that doubles the chemical unit cell in the $c$-direction. For collinear magnetic structures, the intensity of the X-ray resonant magnetic scattering assumes a simple form for dipolar resonances [@Hill_Acta1996]: ![(a) Energy dependence of the XRMS signal of the Ce$^{3+}$ magnetic moment of CeCuBi$_2$. (b) Intensity as a function of the angle $\theta$ of the crystal through the magnetic peak $(0~0~5.5)$ for the $\sigma$-$\pi$’ polarization channel at the Ce $L_2$ absorption edge.
(c) Square root of the intensity as a function of temperature measured with longitudinal ($\theta$-2$\theta$) from 8 to 16.5 K.[]{data-label="fig:Fig4"}](Fig4.eps){width="0.9\columnwidth"} $$\begin{aligned} \label{eq:equation1} I\propto\frac{1}{\mu^{*}sin(2\theta)}\left|\sum_{n}\textit{f}_{n}^{E1}(\vec{k},\hat{\epsilon},\vec{k'},\hat{\epsilon'},\hat{z}_{n})e^{i\vec{Q} \cdot \vec{R}_n}\right|^{2},\end{aligned}$$ where $\textit{f}_{n}^{E1}$ is the dipolar resonant magnetic form factor, $\mu^{*}$ is the absorption correction for asymmetric reflections, 2$\theta$ is the scattering angle, $\vec{Q}=\vec{k'}-\vec{k}$ is the wave-vector transfer, $\vec{k}$ and $\vec{k'}$ ($\hat{\epsilon}$ and $\hat{\epsilon'}$) are the incident and scattered wave (polarization) vectors, respectively. $\vec{R}_{n}$ is the position of the Ce *n*th atom in the lattice, and $\hat{z}_{n}$ is the moment direction at the *n*th site. The sum is over the *n* resonant ions in the magnetic unit cell. The intensity variation of a magnetic peak as a function of the azimuthal angle can be used to determine the direction of the magnetic moment. In the case of a propagation vector (0 0 $\frac{1}{2}$) the azimuthal dependence of specular magnetic peaks will be constant if the moment is parallel to the c-axis and will show a sinusoidal dependence if the moment is perpendicular to the c-axis. Fig. \[fig:Fig5\]a shows the azimuthal dependence of the integrated intensity ($\sigma - \pi$’ polarization channel) for two magnetic reflections (0 0 3.5) and (0 0 5.5). At each $\psi$ position a $\theta$ scan was measured and fitted using a Pseudo-Voigt function from which we extracted the integrated intensity value plotted in Fig. \[fig:Fig5\]a. As we can see from data in Fig. \[fig:Fig5\]a, the azimuthal dependence of the integrated intensity of the magnetic peaks have a small variation (within error bars) and presents no sinusoidal periodicity. This result clearly indicates that the moment direction is parallel to the $c$-axis and is in good agreement with the susceptibility measurements of Fig. \[fig:Fig1\]a and b that also point for the $c$-axis as the easy magnetization axis. The magnetic coupling of the Ce atoms within the unit cell can be determined by comparing the experimental integrated intensity of several magnetic peaks with the calculated model (Eq. \[eq:equation1\]) [@Serrano_PRB74_2006; @Cris_PRB2007; @Cris_PRB2010]. Simplifying the absolute square in Eq. \[eq:equation1\] for the reflections of the type $(0~0~\frac{l}{2})$, the magnetic intensity is proportional to $\sin^2(\theta +\alpha)* \cos^2$($2\pi l z) $ for the model (+ + - -) \[or $\Gamma^{\rm XRMS}_{2}$\] or $\cos^2(\theta +\alpha)*\cos^2$($2\pi l z)$ for the model (+ - - +) \[or $\Gamma^{\rm XRMS}_{3}$\], where $z$ is the position of Ce ions within the unit cell, $\theta$ is the Bragg angle and $\alpha$ is the angle between the vector $Q$ and the $c$-direction. Both magnetic representations consider the magnetic moments aligned parallel to $c$-direction. Six magnetic peaks of the family $(0~0~\frac{l}{2})$ were measured at $T$ = 8 K and compared with the theoretical normalized intensities calculated using the model described by Eq. 1 (Fig.  \[fig:Fig5\]b). It is clear that the experimental data follows the (+ + - -) coupling. We can conclude that the correct magnetic structure corresponds to the $\Gamma^{\rm XRMS}_{2}$ irreducible representation, *i.e.*, the magnetic moments are aligned parallel to $c$-axis with the (+ + - -) coupling. 
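The model comparison of Fig. \[fig:Fig5\]b can be sketched numerically. The Python snippet below evaluates the two simplified intensity expressions for the six $(0~0~\frac{l}{2})$ reflections; it is only an illustration: the wavelength is derived from the Ce $L_2$ energy quoted above, the geometry is assumed specular ($\alpha=0$), the absorption/Lorentz prefactor $1/(\mu^{*}\sin 2\theta)$ is omitted, and the Ce fractional coordinate `z_ce` is a placeholder that must be replaced by the refined structural value.

```python
import numpy as np

lam = 12.3984 / 6.164          # photon wavelength (Angstrom) at the Ce L2 edge
c = 9.777                      # lattice parameter (Angstrom)
z_ce = 0.25                    # illustrative Ce coordinate -- use the refined value
alpha = 0.0                    # Q parallel to c for specular (0 0 l) reflections

l_vals = np.arange(3.5, 9.0, 1.0)              # six half-integer reflections
theta = np.arcsin(lam * l_vals / (2.0 * c))    # Bragg angles
common = np.cos(2.0 * np.pi * l_vals * z_ce)**2

I_modelI = np.sin(theta + alpha)**2 * common   # (+ + - -) coupling
I_modelII = np.cos(theta + alpha)**2 * common  # (+ - - +) coupling

for l, a_, b_ in zip(l_vals, I_modelI / I_modelI.max(), I_modelII / I_modelII.max()):
    print(f"(0 0 {l}):  model I = {a_:.3f}   model II = {b_:.3f}")
```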
![(a) Normalized intensity as a function of the azimuthal angle for the (0 0 5.5) and (0 0 3.5) magnetic reflections. (b) Experimental normalized intensity (solid circles) as a function of *l* along the reciprocal space direction $(0,0,l)$ at 8 K for the $\sigma$-$\pi$’ polarization channel at the Ce $L_2$ absorption edge. The calculated intensities for two different magnetic couplings are presented as dashed and dotted lines.[]{data-label="fig:Fig5"}](Fig5.eps){width="1.0\columnwidth"} ![Schematic representation of the magnetic structure of CeCuBi$_2$. The dashed line defines two magnetic unit cells while the solid line bounds the chemical unit cells.[]{data-label="fig:Fig6"}](Fig6.eps){width="0.5\columnwidth"} ![(a) $^{63}$Cu NMR spectra ($I$ = 3/2) for various temperatures around $T_{\rm N}$. (b) The corresponding temperature dependence of the Knight shift (left hand side scale) compared with the magnetic susceptibility at $H$ = 7 T (right hand side scale). (The Knight shift was calculated as $^{63}K_{\bot} = \frac{(\nu/^{63}\gamma_N)- H_r}{H_r}$, with $\nu =$ 72 MHz and $H_r$ the peak position of the spectra at each temperature.)[]{data-label="fig:Fig7"}](Fig7.eps){width="1.0\columnwidth"} Figure \[fig:Fig6\] represents the magnetic structure of the CeCuBi$_2$ compound, where we show two magnetic unit cells (dashed line) for better visualization of the spin coupling along the three directions. One chemical unit cell is represented by the solid line. The heretofore determined magnetic structure of CeCuBi$_2$ sheds some light on the global magnetic properties of this compound. The ferromagnetic coupling between the Ce$^{3+}$ moments in the plane is consistent with the presence of FM fluctuations, which justifies the need to include two different exchange constants in our mean field model. Indeed, this proposed magnetic structure is compatible with the spin-flop transition to a ferromagnetic phase when a magnetic field is applied along the $c$-axis. Now, seeking further microscopic information regarding the coupling of the Ce$^{3+}$ 4$f$ electrons with the conduction electrons and/or neighboring atoms in CeCuBi$_2$, we have carried out temperature-dependent $^{63}$Cu NMR measurements. NMR probes local interactions because it is site-specific and sensitive to both electronic charge distribution and magnetic spin. Figure \[fig:Fig7\]a presents a few $^{63}$Cu NMR spectra ($I$ = 3/2) at temperatures around $T_{\rm N}$ with the magnetic field applied perpendicular to the $c$-axis. Above $T_{\rm N}$ = 16 K, the $^{63}$Cu NMR spectra show a sharp single Lorentzian peak at $H\sim$ 6.4 T. We also observed a small peak at around 6.5 T which we associate with one of the $^{209}$Bi Zeeman split transitions ($I$ = 9/2). Although beyond the scope of the current investigation, further ongoing experiments will elucidate the origin of this signal. Also, for the low temperature spectra, below $T_{\rm N}$, a weak and broad signal is observed at the $^{63}$Cu NMR line which might be associated with the local field distribution at the $^{63}$Cu sites. The Knight shift $^{63}K_{\bot}$ presented in Figure \[fig:Fig7\]b (left hand side axis) was obtained from Lorentzian fits of the main peaks shown in Figure \[fig:Fig7\]a. $^{63}K_{\bot}$ is compared with the magnetic susceptibility $\chi_{\bot}$ measured with an applied magnetic field of 7 T. These data indicate that the Knight shift tracks the magnetic susceptibility down to $T_{\rm N}\sim$ 16 K.
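The Knight-shift extraction described in the caption of Fig. \[fig:Fig7\] reduces to a one-line formula once the peak field is known. The short Python sketch below applies it with the quoted $\nu$ and $^{63}\gamma_N$; the peak positions listed are synthetic placeholder numbers for illustration only, not the measured data.

```python
# 63Cu Knight shift from the peak field of the field-swept spectra,
# K = (nu/gamma - H_r)/H_r with nu = 72 MHz and gamma_N/2pi = 11.285 MHz/T.
NU, GAMMA = 72.0, 11.285          # MHz, MHz/T
H0 = NU / GAMMA                   # unshifted resonance field, about 6.38 T

def knight_shift(H_r):
    return (H0 - H_r) / H_r

# illustrative (temperature, peak field) pairs -- placeholders, not measurements
for T, Hr in [(30, 6.374), (20, 6.371), (16, 6.367)]:
    print(f"T = {T:2d} K: 63K = {100 * knight_shift(Hr):+.3f} %")
```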
Below this AFM transition, $^{63}K_{\bot}$ is driven by the internal field (hyperfine field) created by the Ce$^{3+}$ 4$f$ moments at the Cu sites. The Ce moments, slightly canted by the external field applied perpendicular to the $c$-axis create a weak ferromagnetic component in the plane responsible for the shift of the resonant peak towards lower fields. Moreover, the relatively small $^{63}K_{\bot}$ found for $^{63}$Cu NMR spectra in CeCuBi$_2$ compared to what is generally found in HF materials [@Yang_Urbano_PRL2009; @Ohara_CeCu2Si2] is consistent with the weak hyperfine coupling constant estimated from the K-$\chi$ plot (not shown). This indicates that the Cu 3$d$ electrons are weakly hybridized with the Ce$^{3+}$ 4$f$ local moments [@DCox]. Within this scenario, the dipolar rather than the RKKY interaction seems to be the most relevant mechanism for the weak hyperfine coupling at the Cu sites. Additionally, the sign and strength of the coupling are not strongly influenced by the $c$-$f$ hybridization as expected in most heavy fermion materials. As such, one may speculate that the strong local moment character of the Ce$^{3+}$ 4$f$ magnetism in CeCuBi$_2$ is a dominant trend in Ce$T$X$_2$ family ($T$ = transition metal, $X$ = pnictogen) which makes these families less likely [@leticie] to host HF superconductivity, at least under ambient pressure. Nevertheless, in a recent work [@Mizoguchi_CeNiBi2], polycrystalline samples of CeNi$_{0.8}$Bi$_2$ have been reported as a heavy fermion superconductor with an AFM transition at $\sim$ 5 K and a SC transition at $\sim$ 4.2 K. The superconducting phase was claimed to be evoked by Ni deficiencies that would presumably create a different ground state than the one realized on single crystalline CeNiBi$_2$ [@CeTBi2]. The stoichiometric compound was earlier classified as a moderate HF antiferromagnet with $T_{\rm N}$ $\sim$ 5 K, and the presence of a zero resistance transition was associated to contamination of extrinsic Bi thin films. However, a more recent work has raised important questions about the intrinsic origin of the superconductivity in CeNi$_{0.8}$Bi$_2$. In the report, systematic studies on CeNi$_{1-x}$Bi$_2$ (with $1-x$ varying from 0.64 to 0.85) single crystals [@Lin_RNiBi2] revealed that the superconductivity in CeNi$_{0.8}$Bi$_2$ is more likely to be associated with the $T_c$ of the Bi thin films and/or secondary phases of the binaries NiBi and NiBi$_3$. All the above arguments corroborate to our belief that the Ce$TX_2$ compounds do present strong local moment magnetism, with a moderate Kondo compensation implying a weak hybridization between the Ce$^{3+}$ 4$f$ ions and the conduction electrons. In absolute terms, this scenario does not favor a superconducting state. CONCLUSIONS =========== In summary, we studied temperature dependent magnetic susceptibility, pressure dependent electrical resistivity, heat-capacity, $^{63}$Cu Nuclear Magnetic Resonance and X-ray magnetic scattering on CeCuBi$_{2}$ single crystals. Our data revealed that CeCuBi$_{2}$ orders antiferromagnetically at $T_{N}\simeq$ 16 K, a value higher than those previously reported for Cu-deficient samples. The detailed analysis of the macroscopic properties of CeCuBi$_{2}$ using a mean field model with a tetragonal CEF, enlightened by the microscopic experiments, allowed us to understand the magnetic anisotropy and the realization of a spin-flop first order-like transition in CeCuBi$_{2}$. 
These are very compatible with a magnetic field effect on the commensurate antiferromagnetic structure with propagation wavevector (0 0 $\frac{1}{2}$) and Ce moments oriented along the $c$-axis. The combined analyses in this detailed investigation suggest that CeCuBi$_{2}$ presents a weak heavy fermion behavior with strongly localized Ce$^{3+}$ 4$f$ electrons subjected to dominant CEF effects and anisotropic RKKY interactions. This work was supported by FAPESP (Grants No. 2009/09247-3, 2009/10264-0, 2011/01564-0, 2011/23650-5, 2011/19924-2, 2012/04870-7, 2012/05903-6 and 2013/20181-0), CNPq and CAPES-Brazil. The authors thank the 4-ID-D staff of the APS - ANL for the XRMS measurements. R.R.U. is grateful to Dr. Hironori Sakai for enlightening discussions. Work at NHMFL was performed under the auspices of the NSF through the Cooperative Agreement No. DMR-0654118 and the State of Florida. The authors acknowledge the Brazilian Nanotechnology National Laboratory LNNano for providing the equipment and technical support for the EDS experiments. [99]{} A. C. Hewson. *The Kondo Problem To Heavy Fermions.* Cambridge University Press, Cambrige, 1993. P. Coleman, C. Pépin, Q. Si, and R. Ramazashvili. *J. Phys. Condens. Matter* **R723-R738**, (2001). J. D. Thompson, and Z. Fisk. *J. Phys. Soc. Japan* **81** 011002 (2012). S. Seo, Xin Lu, J-X. Zhu, R. R. Urbano, N. Curro, E. D. Bauer, V. A. Siderov, L. D. Pham, Tuson Park, Z. Fisk, and J. D. Thompson *Nat. Phys.* **10**, 120 (2014). P. G. Pagliuso, C. Petrovic, R. Movshovich, D. Hall, M. F. Hundley, J. L. Sarrao, J. D. Thompson and Z. Fisk, Phys. Rev. B **64**, 100503(R)(2001). N. J. Curro, J. L. Sarrao, J. D. Thompson, P. G. Pagliuso, S. Kos, A. Abanov and D. Pines, *Phys. Rev. Lett.* **90**, 227202 (2003). E. D. Bauer, H. O. Lee, V. A. Sidorov, N. Kurita, K. Gofryk, J. X. Zhu, F. Ronning, R. Movshovich, J. D. Thompson and T. Park, *Phys. Rev. B* **81**, 180507 (2010). F. Steglich, J. Aarts, C. D. Bredl, W. Lieke, D. Meshede, W. Franz, H. Schafer *Phys. Rev. Lett.* **43** 1892 (1979). O. Stockert et al. Nature Phys. 7, 119-124 (2011). A. Thamizhavel, *et al* *Phys. Rev. B* **68**, 054427 (2003). J. Ye, Y. K. Huang, K. Kadowaki, T. Matsumoto, *Acta Cryst.* **C52**, 1323 (1996). A. Thamizhavel, *et al* *J. Phys. Soc. Japan* **72** 2632 (2003). M. H. Jung, A. H. Lacerda, and T. Takabatake, *Phys. Rev. B* **65**, 132405 (2002). C. Petrovic, S. L. BudÕko, J. D. Strand, P. C. Canfield. *J. Mag. and Mag. Mat.* **261** 210 (2003). C. B. R. Jesus, M. M. Piva, P. F. S. Rosa, C. Adriano, and P. G. Pagliuso. *J. Appl. Phys.* **115**, 17E115 (2014). P. G. Pagliuso, J. D. Thompson, M. F. Hundley, J. L. Sarrao, and Z. Fisk, Phys. Rev. B **63**, 054426 (2001). P. G. Pagliuso, D. J. Garcia, E. Miranda, E. Granado, R. Lora Serrano, C. Giles, J. G. S. Duque, R. R. Urbano, C. Rettori, J. D. Thompson, M. F. Hundley and J. L. Sarrao, J. Appl. Phys. **99**, 08P703 (2006). R. Lora-Serrano, D. J. Garcia, E. Miranda, C. Adriano, C. Giles, J. G. S. Duque and P. G. Pagliuso, Phys. Rev. B **79**, 024422 (2009) A. D. Christianson, E. D. Bauer, J. M. Lawrence, P. S. Riseborough, N. O. Moreno, P. G. Pagliuso, J. L. Sarrao, J. D. Thompson, E. A. Goremychkin, F. R. Trouw, M. P. Hehlen, and R. J. McQueeney. Phys. Rev. B **70**, 134505 (2004). T. Willers, Z. Hu, N. Hollmann, P. O. Körner, J. Gegner, T. Burnus, H. Fujiwara, A. Tanaka, D. Schmitz, H. H. Hsieh, H.-J. Lin, C. T. Chen, E. D. Bauer, J. L. Sarrao, E. Goremychkin, M. Koza, L. H. Tjeng, and A. Severing. Phys. Rev. 
B **81**, 195114 (2010). A. Wills, Physica B **276-278**, 680 (2000). O. V. Kovalev, [*Representations of the Crystallographic Space Groups*]{}, 2$^{nd}$ ed., edited by H. T. Stokes and D. M. Hatch (Gordon and Breach Science Publishers, Yverdon, Switzerland, 1993). J. P. Hill and D. F. McMorrow, Acta Crystallogr., Sect. A: Found. Crystallogr. **A52**, 236 (1996). H. Hegger, C. Petrovic, E. G. Moshopoulou, M. F. Hundley, J. L. Sarrao, Z. Fisk, and J. D. Thompson, Phys. Rev. Lett. **84** 4986 (2000). Y. Muro, N. Takeda, M. Ishikawa, J. Alloys Compd **257** 23-29 (1997). R. Lora-Serrano, C. Giles, E. Granado, D. J. Garcia, E. Miranda, O. Agüero, L. Mendonça-Ferreira, J. G. S. Duque, and P. G. Pagliuso, Phys. Rev. B **74**, 214404 (2006). C. Adriano, R. Lora-Serrano, C. Giles, F. de Bergevin, J. C. Lang, G. Srajer, C. Mazzoli, L. Paolasini, and P. G. Pagliuso, Phys. Rev. B **76**, 104515 (2007). C. Adriano, C. Giles, E. M. Bittar, L. N. Coelho, F. de Bergevin, C. Mazzoli, L. Paolasini, W. Ratcliff, R. Bindel, J. W. Lynn, Z. Fisk, and P. G. Pagliuso, Phys. Rev. B **81**, 245115 (2010). Yi-feng Yang, R. R. Urbano, N. J. Curro, D. Pines, E. D. Bauer, P. G. Pagliuso, *Phys. Rev. Lett.* **103**, 197004 (2009) and references therein. Tetsuo Ohama, Hiroshi Yasuoka, D. Mandrus, Z. Fisk and J. L. Smith. *J. Phys. Soc. Jpn* **64**, 2628 (1995). E. Kim, M. Makivic and D. L. Cox, Phys. Rev. Lett. **75**, 2015 (1995). L. Mendonça-Fereira, T. Park, V. Sidorov, M. Nicklas, E. M. Bittar, R. Lora-Serrano, E. N. Hering, S. M. Ramos, M. B. Fontes, E. Baggio-Saitovich, J. L. Sarrao, J. D. Thompson and P. G. Pagliuso, *Phys. Rev. Lett.* **101**, 017005 (2008). Hiroshi Mizoguchi, Satoru Matsuishi, Masahiro Hirano, Makoto Tachibana, Eiji Takayama-Muromachi, Hitoshi Kawaji, and Hideo Hosono, *Phys. Rev. Lett.* **106**, 057002 (2011). Xiao Lin, Warren E. Straszheim, Sergey L. BudÕko, and Paul C. CanÞeld, *J. Alloys Comp.* **554**, 304 (2012).
{ "pile_set_name": "ArXiv" }
--- author: - 'Gian Piero Deidda[^1]' - 'Caterina Fenu[^2]' - Giuseppe Rodriguez title: Regularized solution of a nonlinear problem in electromagnetic sounding --- Introduction {#sec:intro} ============ Electromagnetic induction measurements are often used for non-destructive investigation of certain soil properties, which are affected by the electromagnetic features of the subsurface layers, e.g., the electrical conductivity and the magnetic permeability. Knowing such parameters allows one to identify inhomogeneities in the ground, and to ascertain the presence and the spatial position of particular conductive substances, such as metals, liquid pollutants, or saline water. This leads to important applications in Geophysics [@callegary2007; @fraser2007; @martinelli2010; @vanderkruk2000], Hydrology, [@lesch1995; @paine2003], Agriculture [@corwin2005; @gebbers2007; @yao2010], etc. A ground conductivity meter (GCM) is a rather common device for electromagnetic sounding, initially introduced by the Geonics company. It is composed by two coils (a transmitter and a receiver) placed at the extrema of a bar. An alternating current in the transmitter coil produces a primary magnetic field $H_P$, which induces small currents in the ground. These currents produce a secondary magnetic field $H_S$, which is sensed by the receiver coil. A GCM has two operating positions, which produce different measures, corresponding to the orientation (either vertical or horizontal) of the electric dipole generated by the transmitter coil; see Figure \[fig:em38\]. The instrument is often coupled to a GPS, so that it is possible to associate to each measurement the geographical position where it was taken. Its success is due to ease of use and a relatively low price. ![Schematic representation of a ground conductivity meter (GCM).[]{data-label="fig:em38"}](em38fig) Let us assume that the instrument is placed at ground level in vertical orientation, the soil has uniform magnetic permeability $\mu_0=4\pi10^{-7}\,\textrm{H/m}$ (the permeability of free space) and uniform electrical conductivity $\sigma$. Moreover, let the *induction number* be small $$B = \frac{r}{\delta} = r \sqrt{\frac{\mu_0\omega\sigma}{2}} \ll 1, \label{indnum}$$ where $\delta$ is the *skin depth* (the depth at which the principal field $H_P$ has been attenuated by a factor ${{\mathrm{e}}}^{-1}$), $r$ is the inter-coil distance, $\omega=2\pi f$, and $f$ is operating frequency of the device. In the case of the Geonics EM38 device, $r=1\,\textrm{m}$, $f=14.6\,\textrm{kHz}$, and $\delta=10\sim 50\,\textrm{m}$. A GCM measures the apparent conductivity $$m = \frac{4}{\mu_0\omega r^2}\operatorname{Im}\left(\frac{(H_S)_d}{(H_P)_d}\right), \label{appcond}$$ which coincides with $\sigma$ under the above restrictive assumptions, where $(H_P)_d$ and $(H_S)_d$ are the components along the dipole axis of the primary and secondary magnetic field, respectively. In real applications the assumption of uniform soil conductivity is not realistic. On the contrary, it is particularly interesting to investigate non homogeneous soils, where the electrical conductivity $\sigma$ is not constant and the magnetic permeability $\mu$ may be very different from $\mu_0$ for the presence of ferromagnetic materials. Apparent conductivity gives no information on the depth localization of inhomogeneities. To recover the distribution of conductivity with respect to depth by data inversion, multiple measures are needed. 
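For illustration, the quantities above can be evaluated with a short Python sketch; the values of $r$, $f$ and $\sigma$ in the example are EM38-like assumptions, not specifications of a particular instrument.

```python
import numpy as np

MU0 = 4e-7 * np.pi          # magnetic permeability of free space [H/m]

def skin_depth(sigma, f):
    """delta = sqrt(2 / (mu_0 * omega * sigma))."""
    return np.sqrt(2.0 / (MU0 * 2.0 * np.pi * f * sigma))

def induction_number(sigma, f, r):
    """B = r / delta; the uniform-half-space interpretation assumes B << 1."""
    return r / skin_depth(sigma, f)

def apparent_conductivity(hs_over_hp, f, r):
    """m = 4 / (mu_0 * omega * r^2) * Im(H_S / H_P), fields taken along the dipole axis."""
    return 4.0 / (MU0 * 2.0 * np.pi * f * r**2) * np.imag(hs_over_hp)

if __name__ == "__main__":
    sigma, f, r = 0.05, 14.6e3, 1.0     # 50 mS/m and EM38-like f, r (assumed values)
    print("skin depth [m]   :", skin_depth(sigma, f))
    print("induction number :", induction_number(sigma, f, r))
```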
Different measures can be generated by varying some of the parameters which influence the response of the device. As suggested in [@borch97], we assume to place the instrument at different heights over the ground and to repeat the induction measurement with both the possible orientations. In 1980, McNeill [@mcneill80] described a linear model, based on the response curves in the vertical and horizontal positions of the device, which relates the apparent conductivity to the height over the ground. If $m^V(h)$ and $m^H(h)$ are the apparent conductivity measured by the GCM at height $h$, in the vertical and horizontal orientation, respectively, then $$\begin{aligned} m^V(h) &= \int_0^\infty \phi^V(h+z) \sigma(z) \,dz, \\ m^H(h) &= \int_0^\infty \phi^H(h+z) \sigma(z) \,dz, \end{aligned}$$ where $z$ is the ratio between the depth and the inter-coil distance $r$, $\sigma(z)$ is the conductivity at $z$, and $$\phi^V(z) = \frac{4z}{(4z^2+1)^{3/2}}, \qquad \phi^H(z) = 2 - \frac{4z}{(4z^2+1)^{1/2}}.$$ The linear model is valid for uniform magnetic permeability $\mu_0$, small *induction number* $B$, and moderate conductivity ($\sigma\lesssim 100\,\textrm{mS/m}$). This model is not accurate when the conductivity of some subsurface layers is large. In this case a nonlinear model is available [@hendr02; @wait82], which will be described in the next section. The two models are analyzed in [@borch97; @hendr02]. One of the conclusions is that, even if the nonlinear model produces better results when the electrical conductivity is large, “the linear model is preferred for all conductivities since it needs considerably less computer resources”. The same authors made available two Matlab packages for inversion, based on the linear and the nonlinear models, respectively; see [@borch97; @hendr02]. An algorithm for the solution of the linear model based on Tikhonov regularization has been analyzed in [@deidda03]. In this paper we propose a regularized inversion procedure for the nonlinear model, based on the coupling of the damped Gauss–Newton method with truncated singular value decomposition (TSVD). We give an explicit representation of the Jacobian of the nonlinear function defining the model, and show that the computational load required by the algorithm is not large, and allows real-time processing. For this reason we think that our approach is competitive with the existing ones, and can be effectively used in the presence of highly conductive materials. The plan of the paper is the following: in Section \[sec:nonlin\] we describe a nonlinear model which connects the real conductivity of the soil layers to the apparent conductivity, and in Section \[sec:jacob\] we compute the Jacobian matrix of the model. The inversion algorithm is introduced in Section \[sec:invalgo\], while Section \[sec:regul\] describes the regularization procedure adopted in the inversion algorithm. Finally, Section \[sec:numex\] reports the result of numerical experiments performed both on synthetic and real data. The nonlinear model {#sec:nonlin} =================== A nonlinear model which relates the electromagnetic features of the soil to the height of measurement is described in [@wait82], and it is further analyzed and adapted to the case of a GCM in [@hendr02]. The model is derived from Maxwell’s equations, keeping into account the cylindrical symmetry of the problem, due to the fact that the magnetic field sensed by the receiver coil is independent of the rotation of the instrument around the vertical axis. 
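The linear model lends itself to a simple numerical sketch. The following code assumes that a truncated trapezoidal rule is adequate for illustration, that both $z$ and the height $h$ are expressed in units of the inter-coil distance, and that the conductivity profile in the example is purely hypothetical.

```python
import numpy as np
from scipy.integrate import trapezoid

def phi_V(z):
    return 4.0 * z / (4.0 * z**2 + 1.0)**1.5

def phi_H(z):
    return 2.0 - 4.0 * z / np.sqrt(4.0 * z**2 + 1.0)

def linear_apparent_conductivity(sigma_of_z, h, z_max=20.0, n_quad=4000):
    """m^V(h), m^H(h) of the linear model for a profile sigma_of_z(z); z and h
    are both measured in units of the inter-coil distance r (an assumption)."""
    z = np.linspace(0.0, z_max, n_quad)
    s = sigma_of_z(z)
    mV = trapezoid(phi_V(h + z) * s, z)
    mH = trapezoid(phi_H(h + z) * s, z)
    return mV, mH

if __name__ == "__main__":
    profile = lambda z: 0.05 * np.exp(-(z - 1.0)**2)   # hypothetical profile [S/m]
    for h in (0.0, 0.5, 1.0):
        print(h, linear_apparent_conductivity(profile, h))
```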
In the following, $\lambda$ is a variable of integration which has no particular physical meaning. It can be interpreted as the ratio between a length and the *skin depth* $\delta$. Following [@wait82 Chapter III], we assume that the soil has a layered structure with $n$ layers, each of thickness $d_i$, $i=1,\dots,n$. The bottom layer $d_n$ is assumed to be of infinite width. Let $\sigma_k$ and $\mu_k$ be the electrical conductivity and the magnetic permeability of the $k$-th layer, respectively, and let $u_k(\lambda) = \sqrt{\lambda^2 + {{\mathrm{i}}}\sigma_k\mu_k\omega}$, where ${{\mathrm{i}}}=\sqrt{-1}$ is the imaginary unit. Then, the characteristic admittance of the $k$-th layer is given by $$\label{charadm} N_k(\lambda) = \frac{u_k(\lambda)}{{{\mathrm{i}}}\mu_k\omega}.$$ The surface admittance at the top of the $k$-th layer is denoted by $Y_k(\lambda)$ and verifies the following recursion $$\label{surfadm} Y_k(\lambda) = N_k(\lambda)\frac{Y_{k+1}(\lambda)+N_k(\lambda) \tanh(d_k u_k(\lambda))}{N_k(\lambda) + Y_{k+1}(\lambda) \tanh(d_k u_k(\lambda))}, \quad k=n-1,\ldots,1,$$ where $d_k$ is the width of the $k$th layer. The recursion is initialized setting $Y_n(\lambda)=N_n(\lambda)$ at the lowest layer. Numerically, this is equivalent to start the recursion at $k=n$ with $Y_{n+1}(\lambda)=0$. Now let, $$R_0(\lambda) = \frac{N_0(\lambda) - Y_1(\lambda)}{N_0(\lambda) + Y_1(\lambda)}, \label{reflfact}$$ and $$\begin{aligned} T_0(h) &= -\delta^3 \int_{0}^{\infty} \lambda^2 e^{-2h\lambda} R_0(\lambda) J_0(r\lambda) \,d\lambda, \\ T_2(h) &= -\delta^2 \int_{0}^{\infty} \lambda e^{-2h\lambda} R_0(\lambda) J_1(r\lambda) \,d\lambda, \\ \end{aligned} \label{T0T2}$$ where $J_0(\lambda)$ and $J_1(\lambda)$ are Bessel functions of the first kind of order 0 and 1, respectively, and $r$ is the inter-coil distance. We prefer to express the integrals in the variable $\lambda$, instead than $g=\delta\lambda$ as in [@wait82]. The results obtained by Wait in [@wait82 page 113], adapted to the geometry of a GCM, give the components of the magnetic field along the dipole axis $$\begin{aligned} (H_P)_z &=-\frac{C}{r^3}, &\qquad (H_S)_z &=-\frac{C}{\delta^3}T_0(h), &\qquad & \text{(vertical dipole)}, \\ (H_P)_y &=-\frac{C}{r^3}, &\qquad (H_S)_y &=-\frac{C}{r\delta^2}T_2(h), &\qquad & \text{(horizontal dipole)}, \end{aligned}$$ where $C$ is a constant; in the case of a horizontal dipole, we assume its axis to be $y$-directed. Substituting in , we obtain the predicted values of the apparent conductivity measurement $m^V(h)$ (vertical orientation of coils) and $m^H(h)$ (horizontal orientation of coils) at height $h$ above the ground $$\begin{aligned} m^V(h) &= \frac{4}{\mu_0\omega r^2} \operatorname{Im}(B^3T_0(h)), \\ m^H(h) &= \frac{4}{\mu_0\omega r^2} \operatorname{Im}(B^2T_2(h)), \end{aligned}$$ where $B$ is the induction number . Simplifying formulae, we find $$\begin{aligned} m^V(h) &= \frac{4r}{\mu_0\omega} \mathcal{H}_0\left[ -\lambda e^{-2h\lambda} \operatorname{Im}(R_0(\lambda)) \right](r) \\ m^H(h) &= \frac{4}{\mu_0\omega} \mathcal{H}_1\left[ -e^{-2h\lambda} \operatorname{Im}(R_0(\lambda)) \right](r). \end{aligned} \label{mvmh}$$ Here we denote by $$\mathcal{H}_\nu[f](r) = \int_{0}^{\infty} f(\lambda) J_\nu(r\lambda) \lambda \,d\lambda \label{hankel}$$ the Hankel transform of order $\nu$ of the function $f(\lambda)$. In our numerical experiments we approximate $\mathcal{H}_\nu[f](r)$ by the quadrature formula described in [@anderson1979], using the nodes and weights adopted in [@hendr02]. 
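For illustration, the forward model can be sketched as follows. The sketch is a crude stand-in for the digital-filter quadrature of [@anderson1979]: the Hankel transforms are truncated at a finite $\lambda$ and evaluated on a uniform grid, which is only reasonable for heights $h>0$, where the factor $e^{-2h\lambda}$ provides decay; the device parameters and the three-layer profile in the example are assumptions.

```python
import numpy as np
from scipy.special import j0, j1
from scipy.integrate import trapezoid

MU0 = 4e-7 * np.pi    # magnetic permeability of free space [H/m]

def reflection_factor(lam, sigma, d, omega):
    """R_0(lambda) for an n-layer half-space with uniform permeability mu_0.
    sigma: layer conductivities [S/m]; d: thicknesses of the first n-1 layers [m]."""
    lam = np.asarray(lam, dtype=complex)
    n = len(sigma)
    def u_N(k):
        u = np.sqrt(lam**2 + 1j * sigma[k] * MU0 * omega)
        return u, u / (1j * MU0 * omega)
    _, Y = u_N(n - 1)                       # Y_n = N_n (bottom layer)
    for k in range(n - 2, -1, -1):          # recursion for layers n-1, ..., 1
        u, N = u_N(k)
        t = np.tanh(d[k] * u)
        Y = N * (Y + N * t) / (N + Y * t)
    N0 = lam / (1j * MU0 * omega)           # admittance of the air above the ground
    return (N0 - Y) / (N0 + Y)

def apparent_conductivities(sigma, d, h, r=1.0, f=14.6e3, lam_max=20.0, n_quad=4000):
    """Predicted m^V(h), m^H(h); crude truncated quadrature of the Hankel transforms."""
    omega = 2.0 * np.pi * f
    lam = np.linspace(1e-6, lam_max, n_quad)
    imR0 = np.imag(reflection_factor(lam, sigma, d, omega))
    damp = np.exp(-2.0 * h * lam)
    mV = 4.0 * r / (MU0 * omega) * trapezoid(-lam * damp * imR0 * j0(r * lam) * lam, lam)
    mH = 4.0 / (MU0 * omega) * trapezoid(-damp * imR0 * j1(r * lam) * lam, lam)
    return mV, mH

if __name__ == "__main__":
    sigma = np.array([0.02, 0.10, 0.02])    # hypothetical three-layer profile [S/m]
    d = np.array([0.5, 0.5])                # thicknesses of the two finite layers [m]
    for h in (0.1, 0.5, 1.0):
        print(h, apparent_conductivities(sigma, d, h))
```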
The above relations show that the apparent conductivity predicted by the model is independent of the *skin depth* $\delta$ and the *induction number* $B$. To our knowledge, this is the first time that this is noted. The model just described depends upon a number of parameters which influence the value of the apparent conductivity. In particular, it is affected by the instrument orientation (horizontal/vertical), its height $h$ over the ground, the inter-coil distance $r$, and the angular frequency $\omega$. The problem of data inversion is very important in Geophysics, when one is interested in depth localization of inhomogeneities of the soil. To this purpose, multiple measures are needed to recover the distribution of conductivity with respect to depth. In order to obtain such measures, we use the two admissible orientations and assume to record apparent conductivity at height $h_i$, $i=1,\ldots,m$. This generates $2m$ data values. In our analysis, we let the magnetic permeability take the same value $\mu_0$ in the $n$ layers. This assumption is approximately met if the ground does not contain ferromagnetic materials. Then, we can consider the apparent conductivity as a function of the value of the conductivity $\sigma_k$ in each layer and of the height $h$, and we write $m^V({\boldsymbol\sigma},h)$ and $m^H({\boldsymbol\sigma},h)$, where ${\boldsymbol\sigma}=(\sigma_1,\ldots,\sigma_n)^T$, instead than $m^V(h)$ and $m^H(h)$. Now, let $b^V_i$ and $b^H_i$ be the data recorded by the GCM at height $h_i$ in the vertical and horizontal orientation, respectively, and let us denote by $r_i({\boldsymbol\sigma})$ the error in the model prediction for the $i$th observation $$r_i({\boldsymbol\sigma}) = \begin{cases} b^V_i - m^V({\boldsymbol\sigma},h_i), \qquad & i=1,\dots,m, \\ b^H_{m-i} - m^H({\boldsymbol\sigma},h_{m-i}), \qquad & i=m+1,\dots,2m. \end{cases} \label{risigma}$$ Setting ${\mathbf{b}}^V=(b^V_1,\ldots,b^V_m)^T$, ${\mathbf{m}}^V({\boldsymbol\sigma})=(m^V({\boldsymbol\sigma},h_1),\ldots,m^V({\boldsymbol\sigma},h_m))^T$, and defining ${\mathbf{b}}^H$ and ${\mathbf{m}}^H({\boldsymbol\sigma})$ similarly, we can write the measured data vector and the model predictions vector as $${\mathbf{b}} = \begin{bmatrix} {\mathbf{b}}^V \\ {\mathbf{b}}^H \end{bmatrix}, \quad {\mathbf{m}}({\boldsymbol\sigma}) = \begin{bmatrix} {\mathbf{m}}^V({\boldsymbol\sigma},{\mathbf{h}}) \\ {\mathbf{m}}^H({\boldsymbol\sigma},{\mathbf{h}}) \end{bmatrix}, \label{bmsigma}$$ and the residual vector as $${\mathbf{r}}({\boldsymbol\sigma}) = {\mathbf{b}} - {\mathbf{m}}({\boldsymbol\sigma}). \label{rsigma}$$ To estimate the computational complexity needed to evaluate ${\mathbf{r}}({\boldsymbol\sigma})$ we assume that the complex arithmetic operations are implemented according to the classical definitions, i.e., that 2 floating point operations (*flops*) are required for each complex sum, 6 for each product and 11 for each division. The count of other functions (exponential, square roots, etc.) is given separately because it is not clear how many *flops* they require. If $n$ is the number of layers, $2m$ the number of data values, and $q$ the nodes in the quadrature formula used to approximate , we obtain a complexity $O((45n+8m)q)$ *flops* plus $2nq$ evaluations of functions with a complex argument, and $mq$ with a real argument. 
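The assembly of the residual vector is then straightforward; the sketch below assumes a forward routine with the signature of `apparent_conductivities` above.

```python
import numpy as np

def model_prediction(sigma, d, heights, forward):
    """Stack [m^V(sigma, h_1..h_m); m^H(sigma, h_1..h_m)] as in the text."""
    pred = np.array([forward(sigma, d, h) for h in heights])   # shape (m, 2)
    return np.concatenate([pred[:, 0], pred[:, 1]])

def residual(sigma, d, heights, b, forward):
    """r(sigma) = b - m(sigma); b holds the 2m measurements, vertical block first."""
    return b - model_prediction(sigma, d, heights, forward)
```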
Computing the Jacobian matrix {#sec:jacob} ============================= As we will see in the next section, being able to compute or to approximate the Jacobian matrix $J({\boldsymbol\sigma})$ of the vector function is crucial for the implementation of an effective inversion algorithms and to have information about its speed of convergence and conditioning. The approach used in [@hendr02] is to resort to a finite difference approximation $$\frac{\partial r_i({\boldsymbol\sigma})}{\partial \sigma_j} = \frac{r_i({\boldsymbol\sigma}+{\boldsymbol\delta}_j)-r_i({\boldsymbol\sigma})}{\delta}, \quad i=1,\ldots,2m,\ j=1,\ldots,n, \label{findiff}$$ where ${\boldsymbol\delta}_j=\delta\,{\mathbf{e}}_j=(0,\ldots,0,\delta,0,\ldots,0)^T$ and $\delta$ is a fixed constant. In this section we describe the explicit expression of the Jacobian matrix. We will show that the complexity of this computation is smaller than that required by the finite difference approximation . In the following lemma we omit for clarity the variable $\lambda$. \[lem:derY\] The derivatives $Y'_{kj} = \frac{\partial Y_k}{\partial \sigma_j}$, $k,j=1,\ldots,n$, of the surface admittances can be obtained starting from $$Y'_{nn} = \frac{1}{2u_n}, \qquad Y'_{nj} = 0, \quad j=1,\ldots,n-1, \label{recinit}$$ and proceeding recursively for $k=n-1,n-2,\dots,1$ by $$\begin{aligned} Y'_{kj} &= N_k^2 b_k Y'_{k+1,j}, \qquad j=n,n-1,\ldots,k+1, \\ Y'_{kk} &= \frac{a_k}{2u_k} + \frac{b_k}{2} \left[ N_k^2 d_k - Y_{k+1}\left(d_k Y_{k+1} + \frac{1}{{{\mathrm{i}}}\mu_k\omega}\right)\right], \\ Y'_{kj} &= 0, \qquad j=k-1,k-2,\ldots,1, \\ \end{aligned} \label{yprime}$$ where $$a_k = \frac{Y_{k+1}+N_k \tanh(d_k u_k)}{N_k + Y_{k+1} \tanh(d_k u_k)}, \quad b_k = \frac{1}{[N_k + Y_{k+1} \tanh(d_k u_k)]^2 \cosh^2(d_k u_k)}. \label{akbk}$$ From we obtain $$\frac{\partial u_k}{\partial \sigma_j} = \frac{\partial}{\partial \sigma_j} \sqrt{\lambda^2 + {{\mathrm{i}}}\sigma_k\mu_k\omega} = \frac{1}{2N_k} \delta_{kj}, \qquad \frac{\partial N_k}{\partial \sigma_j} = \frac{\partial}{\partial \sigma_j} \frac{u_k}{{{\mathrm{i}}}\mu_k\omega} = \frac{1}{2u_k} \delta_{kj}, \label{duknk}$$ where $\delta_{kj}$ is the Kronecker delta, that is $1$ if $k=j$ and $0$ otherwise. The recursion initialization follows from $Y_n=N_n$; see Section \[sec:nonlin\]. We have $$\begin{gathered} Y'_{kj} = \frac{\partial N_k}{\partial \sigma_j} a_k + N_k \cdot \frac{\frac{\partial Y_{k+1}}{\partial \sigma_j} + \frac{\partial N_k}{\partial \sigma_j}\tanh(d_k u_k) + N_k \frac{\partial\tanh(d_k u_k)}{\partial \sigma_j}}{N_k + Y_{k+1} \tanh(d_k u_k)} \\ - N_k a_k \cdot \frac{\frac{\partial N_k}{\partial \sigma_j} + \frac{\partial Y_{k+1}}{\partial \sigma_j}\tanh(d_k u_k) + Y_{k+1} \frac{\partial\tanh(d_k u_k)}{\partial \sigma_j}}{N_k + Y_{k+1} \tanh(d_k u_k)},\end{gathered}$$ with $a_k$ defined as in . If $j\neq k$, then $\frac{\partial N_k}{\partial \sigma_j} = \frac{\partial u_k}{\partial \sigma_j} = 0$ and we obtain $$Y'_{kj} = N_k^2 \frac{\frac{\partial Y_{k+1}}{\partial \sigma_j} \left(1 - \tanh^2(d_k u_k)\right) }{[N_k + Y_{k+1} \tanh(d_k u_k)]^2} = N_k^2 b_k Y'_{k+1,j}.$$ The last formula, with $b_k$ given by , avoids the cancellation in $1 - \tanh^2(d_k u_k)$. If $j = k$, after some straightforward simplifications, we get $$\begin{gathered} Y'_{kk} = \frac{\partial N_k}{\partial \sigma_k}a_k + \frac{N_k}{N_k + Y_{k+1} \tanh(d_k u_k) } \biggl[ Y'_{k+1,k} (1 - a_k\tanh(d_k u_k)) \biggr. \\ \biggl. 
+ \frac{\partial N_k}{\partial \sigma_k} (\tanh(d_k u_k) - a_k) + \frac{d_k}{2} \left( 1-a_k\frac{Y_{k+1}}{N_k} \right) (1-\tanh^2(d_k u_k)) \biggr].\end{gathered}$$ This formula, using and , leads to $$Y'_{kk} = \frac{a_k}{2u_k} + N_k b_k \left[N_k\left(Y'_{k+1,k} + \frac{d_k}{2}\right) - \frac{1}{2}Y_{k+1}\left(\frac{d_k}{N_k}Y_{k+1} + \frac{1}{u_k}\right)\right].$$ The initialization implies that $Y'_{kj}=0$ for any $j<k$. In particular $Y'_{k+1,k}=0$, and since $N_k/u_k$ is constant one obtains the expression of $Y'_{kk}$ given in . This completes the proof. The quantity $a_k$ in appears in the right hand side of , and its denominator is present also in $b_k$. It is therefore possible to implement jointly the recursions and in order to reduce the number of floating point operations required by the computation of the Jacobian. We also note that since we only need the partial derivatives of $Y_1$ in the following Theorem \[theorem1\], we can overwrite the values of $Y'_{k+1,j}$ with $Y'_{kj}$ at each recursion step, so that only $n$ storage locations are needed for each $\lambda$ value, instead of $n^2$. \[theorem1\] The partial derivatives of the residual function are given by $$\frac{\partial r_i({\boldsymbol\sigma})}{\partial \sigma_j} = \begin{cases} \displaystyle \frac{4r}{\mu_0\omega} \mathcal{H}_0\left[ \lambda e^{-2h_i\lambda} \operatorname{Im}\left(\frac{\partial R_0(\lambda)}{\partial\sigma_j}\right) \right](r), \quad & i=1,\ldots,m, \\ \\ \displaystyle \frac{4}{\mu_0\omega} \mathcal{H}_1\left[ e^{-2h_{i-n}\lambda} \operatorname{Im}\left(\frac{\partial R_0(\lambda)}{\partial\sigma_j}\right) \right](r), \quad & i=m+1,\ldots,2m, \end{cases}$$ for $j=1,\ldots,n$. Here $\mathcal{H}_\nu$ ($\nu=0,1$) denotes the Hankel transform , $r$ is the inter-coil distance, $\frac{\partial R_0(\lambda)}{\partial \sigma_j}$ is the $j$th component of the gradient of the function $$\frac{\partial R_0(\lambda)}{\partial \sigma_j} = \frac{-2{{\mathrm{i}}}\mu_0\omega\lambda}{(\lambda + {{\mathrm{i}}}\mu_0\omega Y_1(\lambda))^2} \cdot \frac{\partial Y_1}{\partial \sigma_j},$$ and the partial derivatives $\frac{\partial Y_1}{\partial \sigma_j}$ are given by Lemma \[lem:derY\]. The proof follows easily from Lemma \[lem:derY\] and from equations , , and . The numerical implementation of the above formulae needs care. It has already been noted in the proof of Lemma \[lem:derY\] that equations – are written in order to avoid cancellations that may introduce huge errors in the computation. Moreover, to prevent overflow in the evaluation of the term $$\cosh^2(d_k u_k(\lambda)) = \cosh^2(d_k\sqrt{\lambda^2+{{\mathrm{i}}}\sigma_k\mu_k\omega})$$ in the denominator of $b_k$, we fix a value $\lambda_{\text{max}}$ and for $\operatorname{Re}(d_k u_k(\lambda))>\lambda_{\text{max}}$ we let $b_k=b_k(\lambda)=0$. In our numerical experiments we adopt the value $\lambda_{\text{max}}=300$. Under the same assumptions assumed at the end of Section \[sec:nonlin\], we obtain the complexity of the joint computation of the function ${\mathbf{r}}({\boldsymbol\sigma})$, defined in , and its Jacobian, given in Theorem \[theorem1\]. It amounts to $O((3n^2+8mn)q)$ *flops*, $3nq$ complex functions, and $mnq$ real functions. To approximate the Jacobian by finite differences, as in , one has to evaluate $n+1$ times ${\mathbf{r}}({\boldsymbol\sigma})$, corresponding to $O((45n^2+8mn)q)$ *flops*, $2n^2q$ complex functions, and $mnq$ real functions. 
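The recursion of Lemma \[lem:derY\] and the formulas of Theorem \[theorem1\] translate into code rather directly. The following sketch reuses the crude truncated quadrature of the forward-model sketch, so it is illustrative only; it stacks the rows with the vertical measurements first, assumes uniform permeability $\mu_0$, and omits for brevity the overflow guard $b_k=0$ for $\operatorname{Re}(d_k u_k(\lambda))>\lambda_{\text{max}}$ discussed above.

```python
import numpy as np
from scipy.special import j0, j1
from scipy.integrate import trapezoid

MU0 = 4e-7 * np.pi

def surface_admittance_and_derivatives(lam, sigma, d, omega):
    """Y_1(lambda) and dY_1/dsigma_j (j = 1..n), assuming uniform permeability mu_0."""
    lam = np.asarray(lam, dtype=complex)
    n = len(sigma)
    u = [np.sqrt(lam**2 + 1j * sigma[k] * MU0 * omega) for k in range(n)]
    N = [u[k] / (1j * MU0 * omega) for k in range(n)]
    Y = N[n - 1]                                   # Y_n = N_n
    dY = np.zeros((n,) + lam.shape, dtype=complex)
    dY[n - 1] = 1.0 / (2.0 * u[n - 1])             # Y'_{nn} = 1/(2 u_n)
    for k in range(n - 2, -1, -1):                 # 1-based layers n-1, ..., 1
        t = np.tanh(d[k] * u[k])
        den = N[k] + Y * t
        a = (Y + N[k] * t) / den                   # a_k
        b = 1.0 / (den**2 * np.cosh(d[k] * u[k])**2)   # b_k (overflow guard omitted)
        dY = (N[k]**2 * b) * dY                    # Y'_{kj} = N_k^2 b_k Y'_{k+1,j}, j > k
        dY[k] = a / (2.0 * u[k]) + 0.5 * b * (
            N[k]**2 * d[k] - Y * (d[k] * Y + 1.0 / (1j * MU0 * omega)))
        Y = N[k] * a                               # Y_k
    return Y, dY

def jacobian(sigma, d, heights, r=1.0, f=14.6e3, lam_max=20.0, n_quad=4000):
    """Rows 1..m: vertical orientation at heights h_i; rows m+1..2m: horizontal."""
    omega = 2.0 * np.pi * f
    lam = np.linspace(1e-6, lam_max, n_quad)
    Y1, dY1 = surface_admittance_and_derivatives(lam, sigma, d, omega)
    dR0 = -2j * MU0 * omega * lam / (lam + 1j * MU0 * omega * Y1)**2 * dY1
    m, n = len(heights), len(sigma)
    J = np.zeros((2 * m, n))
    for i, h in enumerate(heights):
        damp = np.exp(-2.0 * h * lam)
        for jc in range(n):
            im_d = np.imag(dR0[jc])
            J[i, jc] = 4.0 * r / (MU0 * omega) * trapezoid(
                lam * damp * im_d * j0(r * lam) * lam, lam)
            J[m + i, jc] = 4.0 / (MU0 * omega) * trapezoid(
                damp * im_d * j1(r * lam) * lam, lam)
    return J
```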
It is immediate to observe that the computation of the Jacobian is not more time consuming than its approximation by finite differences, and that for a moderately large $n$ it is much faster to directly compute it, instead than using an approximation. In order to further reduce the computational cost, it is possible to resort to the *Broyden update* of the Jacobian, which can be interpreted as a generalization of the secant method. Let us denote with $J_0=J({\boldsymbol\sigma}_0)$ the Jacobian of the function ${\mathbf{r}}({\boldsymbol\sigma})$ computed in the initial point ${\boldsymbol\sigma}_0$. Then, the Broyden update consists of applying the following recursion $$J_k = J_{k-1} + \frac{({\mathbf{y}}_k-J_{k-1}{\mathbf{s}}_k){\mathbf{s}}_k^T}{{\mathbf{s}}_k^T{\mathbf{s}}_k}, \label{broyden}$$ where ${\mathbf{s}}_{k}={\boldsymbol\sigma}_k-{\boldsymbol\sigma}_{k-1}$ and ${\mathbf{y}}_k = r({\boldsymbol\sigma}_{k})-r({\boldsymbol\sigma}_{k-1})$. This formula makes the linearization $$r_k({\boldsymbol\sigma}) = r({\boldsymbol\sigma}_{k}) + J_k ({\boldsymbol\sigma}-{\boldsymbol\sigma}_{k})$$ exact in ${\boldsymbol\sigma}_{k-1}$ and guarantees the least change in the Frobenius norm $\|J_k-J_{k-1}\|_F$. The usual approach is to apply recursion for $1,\ldots,k_B-1$, and to recompute the Jacobian after $k_B$ iterations, before reapplying the update, in order to improve accuracy. A single application of takes $10mn+2(m+n)$ *flops*, to be added to the cost of the evaluation of ${\mathbf{r}}({\boldsymbol\sigma})$. We will investigate the performance of this method in the numerical experiments. Inversion algorithm {#sec:invalgo} =================== Let the measured data vector ${\mathbf{b}}$, the model predictions vector ${\mathbf{m}}({\boldsymbol\sigma})$, and the residual vector ${\mathbf{r}}({\boldsymbol\sigma})$, be defined as in –. The problem of data inversion, which is crucial in order to recover the inhomogeneities of the soil, consists of computing the conductivity $\sigma_i$ of each layer ($i=1,\ldots,n$) which determine a given data set ${\mathbf{b}}\in{{\mathbb{R}}}^{2m}$. As it is customary, we use a least squares approach, by solving the nonlinear problem $$\label{least} \displaystyle \min_{{\boldsymbol\sigma}\in{{\mathbb{R}}}^n} f({\boldsymbol\sigma}), \qquad f({\boldsymbol\sigma}) = \frac{1}{2} \|{\mathbf{r}}({\boldsymbol\sigma})\|^2 = \frac{1}{2}\sum_{i=1}^{2m}r_i^2({\boldsymbol\sigma}), $$ where $\|\cdot\|$ denotes the Euclidean norm and $r_i({\boldsymbol\sigma})$ is defined in . The vector ${\boldsymbol\sigma}^*$ is a local minimizer of if and only if it is a stationary point, i.e., if ${\mathbf{f}}'({\boldsymbol\sigma}^*)=0$, where ${\mathbf{f}}'({\boldsymbol\sigma})$ is the gradient of the function $f$, defined by $$[{\mathbf{f}}'({\boldsymbol\sigma})]_{j} = \frac{\partial f({\boldsymbol\sigma})}{\partial \sigma_j} = \sum_{i=1}^{2m} r_i({\boldsymbol\sigma})\frac{\partial r_i({\boldsymbol\sigma})}{\partial \sigma_j}, \qquad j=1,\ldots,n; \label{gradf}$$ see, e.g., [@bjo96] for a complete treatment. 
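Two small helpers corresponding to the formulas of this section, the objective $f({\boldsymbol\sigma})=\frac{1}{2}\|{\mathbf{r}}({\boldsymbol\sigma})\|^2$ with its gradient $J({\boldsymbol\sigma})^T{\mathbf{r}}({\boldsymbol\sigma})$ and the Broyden rank-one update, can be sketched as follows; the residual and the Jacobian are assumed to be available as NumPy arrays.

```python
import numpy as np

def objective_and_gradient(r, J):
    """f(sigma) = 0.5 * ||r(sigma)||^2 and its gradient J(sigma)^T r(sigma)."""
    return 0.5 * np.dot(r, r), J.T @ r

def broyden_update(J_prev, sigma_prev, sigma_new, r_prev, r_new):
    """J_k = J_{k-1} + (y_k - J_{k-1} s_k) s_k^T / (s_k^T s_k)."""
    s = sigma_new - sigma_prev
    y = r_new - r_prev
    return J_prev + np.outer(y - J_prev @ s, s) / np.dot(s, s)
```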
We assume that $f$ is differentiable and smooth enough that the following Taylor expansion $${\mathbf{f}}'({\boldsymbol\sigma}+ {\mathbf{s}}) = {\mathbf{f}}'({\boldsymbol\sigma}) + {\mathbf{f}}''({\boldsymbol\sigma}){\mathbf{s}} + O(\|{\mathbf{s}}\|^2) \simeq {\mathbf{f}}'({\boldsymbol\sigma}) + {\mathbf{f}}''({\boldsymbol\sigma}){\mathbf{s}}$$ is valid for $\|{\mathbf{s}}\|$ sufficiently small, where $$[{\mathbf{f}}''({\boldsymbol\sigma})]_{jk} = \frac{\partial^2 f({\boldsymbol\sigma})}{\partial \sigma_j \partial \sigma_k} = \sum_{i=1}^{2m} \left(\frac{\partial r_i({\boldsymbol\sigma})}{\partial \sigma_j} \frac{\partial r_i({\boldsymbol\sigma})}{\partial \sigma_k} + r_i({\boldsymbol\sigma})\frac{\partial^2 r_i({\boldsymbol\sigma})}{\partial \sigma_j \partial \sigma_k}\right) \label{hessf}$$ is the Hessian of the function $f$. Newton’s method chooses the step ${\mathbf{s}}_\ell$ by imposing that ${\boldsymbol\sigma}^*$ is a stazionary point, i.e., as the solution to $${\mathbf{f}}''({\boldsymbol\sigma}_\ell){\mathbf{s}}_{\ell} = - {\mathbf{f}}'({\boldsymbol\sigma}_\ell).$$ The next iterate is then computed as ${\boldsymbol\sigma}_{\ell+1}={\boldsymbol\sigma}_\ell+{\mathbf{s}}_\ell$. The analytic expression of the Hessian ${\mathbf{f}}''({\boldsymbol\sigma})$ is not always available; whenever it is, its computation implies a large computational cost. To overcome this problem, one possibility is to resort to the Gauss–Newton method, which is based on the solution of a sequence of linear approximations of ${\mathbf{r}}({\boldsymbol\sigma})$, rather than of ${\mathbf{f}}'({\boldsymbol\sigma})$. Let ${\mathbf{r}}$ be Fréchet differentiable and ${\boldsymbol\sigma}_k$ denote the current approximation, then we can write $${\mathbf{r}}({\boldsymbol\sigma}_{k+1}) \simeq {\mathbf{r}}({\boldsymbol\sigma}_{k}) + J({\boldsymbol\sigma}_{k}){\mathbf{s}}_k,$$ where ${\boldsymbol\sigma}_{k+1} = {\boldsymbol\sigma}_k + {\mathbf{s}}_k$ and $J({\boldsymbol\sigma})$ is the Jacobian of ${\mathbf{r}}({\boldsymbol\sigma})$, defined by $$[J({\boldsymbol\sigma})]_{ij} = \frac{\partial r_i({\boldsymbol\sigma})}{\partial \sigma_j}, \qquad i=1,\ldots,2m, \ j=1,\ldots,n.$$ At each step $k$, ${\mathbf{s}}_k$ is the solution of the linear least squares problem $$\label{gaussnewt} \displaystyle \min_{{\mathbf{s}}\in{{\mathbb{R}}}^n} \|{\mathbf{r}}({\boldsymbol\sigma}_k) + J_k{\mathbf{s}}\|,$$ where $J_k=J({\boldsymbol\sigma}_k)$ or some approximation; see, e.g.,  and . Problem is equivalent to the normal equation $$J_k^T J_k {\mathbf{s}} = - J_k^T {\mathbf{r}}({\boldsymbol\sigma}_k), \label{normal}$$ from which we obtain the following iterative method $$\label{gaussnewt2} {\boldsymbol\sigma}_{k+1} = {\boldsymbol\sigma}_k + {\mathbf{s}}_k = {\boldsymbol\sigma}_k - J_k^{\dagger} \, {\mathbf{r}}({\boldsymbol\sigma}_k),$$ where $J_k^{\dagger}$ is the Moore–Penrose pseudoinverse of $J_k$ [@bjo96]; if $2m\geq n$ and $J_k$ has full rank, then $J_k^{\dagger}=(J_k^TJ_k)^{-1}J_k^T$. 
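A single (undamped) Gauss–Newton step can be sketched as follows; using an SVD-based least squares solver avoids forming $J_k^TJ_k$ explicitly, whose condition number is the square of that of $J_k$.

```python
import numpy as np

def gauss_newton_step(J, r):
    """One (undamped) step s_k = -J_k^+ r(sigma_k), via a least-squares solve."""
    s, *_ = np.linalg.lstsq(J, -r, rcond=None)
    return s
```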
Using this notation, the gradient and the Hessian of $f({\boldsymbol\sigma})$ can be written as $$\label{newt} \begin{aligned} {\mathbf{f}}'({\boldsymbol\sigma}) &= J({\boldsymbol\sigma})^T {\mathbf{r}}({\boldsymbol\sigma}), \\ {\mathbf{f}}''({\boldsymbol\sigma}) &= J({\boldsymbol\sigma})^TJ({\boldsymbol\sigma}) + \sum_{i=1}^{2m} r_i({\boldsymbol\sigma}) H_i({\boldsymbol\sigma}), \end{aligned}$$ where $$[H_i({\boldsymbol\sigma})]_{jk} = \frac{\partial^2 r_i({\boldsymbol\sigma})}{\partial \sigma_j \partial \sigma_k}$$ is the Hessian of the $i$th residual $r_i({\boldsymbol\sigma})$. Then, the Gauss–Newton method can be seen as a special case of Newton’s method, obtained by neglecting the term $\sum_{i=1}^{2m} r_i({\boldsymbol\sigma}) H_i({\boldsymbol\sigma})$ from . This term is small if either each $r_i({\boldsymbol\sigma})$ is mildly nonlinear at ${\boldsymbol\sigma}_k$, or the residuals $r_i({\boldsymbol\sigma}_k)$, $i = 1,...,2m$, are small. Since we are focused on the nonlinear case, we do not take into account the first assumption. We remark that in the case of a mildly nonlinear problem, a linear model is available [@borch97; @mcneill80]. When the residuals $r_i({\boldsymbol\sigma}_k)$ are small, or when the problem is consistent (${\mathbf{r}}({\boldsymbol\sigma}^*)=0$), the Gauss–Newton method can be expected to behave similarly to Newton’s method. In particular, the local convergence rate will be quadratic for both methods. If the above conditions are not satisfied, the Gauss–Newton method may not converge. We remark that, while the physical problem is obviously consistent, this is not necessarily true in our case, since we assume a layered soil, that is, we approximate the conductivity $\sigma(z)$ by a piecewise constant function. Furthermore, in the presence of noise in the data the problem will certainly be inconsistent. To ensure convergence, the damped Gauss–Newton method replaces the approximation by $${\boldsymbol\sigma}_{k+1} = {\boldsymbol\sigma}_k + \alpha_k{\mathbf{s}}_k, \label{dampedGN}$$ where $\alpha_k$ is a step length to be determined. To choose it, we used the Armijo–Goldstein principle [@ortega1970], which selects $\alpha_k$ as the largest number in the sequence $2^{-i}$, $i=0,1,\dots$, for which the following inequality holds $$\|{\mathbf{r}}({\boldsymbol\sigma}_k)\|^2 - \|{\mathbf{r}}({\boldsymbol\sigma}_k + \alpha_k{\mathbf{s}}_k)\|^2 \ge \frac{1}{2} \alpha_k\|J_k{\mathbf{s}}_k \|^2.$$ The damped method allows us to include an important physical constraint in the inversion algorithm, i.e., the positivity of the solution. In our implementation $\alpha_k$ is the largest step size which both satisfies the Armijo–Goldstein principle and ensures that all the solution components are positive. As we will show in the following section, the problem is severely ill-conditioned, so regularization is needed. Regularization methods {#sec:regul} ====================== To investigate the conditioning of problem , we studied the behaviour of the singular values of the Jacobian matrix $J=J({\boldsymbol\sigma})$ of the vector function ${\mathbf{r}}({\boldsymbol\sigma})$. Let $J=U\Gamma V^T$ be the singular value decomposition (SVD) [@bjo96] of the Jacobian, where $U$ and $V$ are orthogonal matrices of size $2m$ and $n$, respectively, $\Gamma=\diag(\gamma_1,\ldots,\gamma_p,0,\ldots,0)$ is the diagonal matrix of the singular values, and $p$ is the rank of $J$; its condition number is then given by $\gamma_1/\gamma_p$. 
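The conditioning study described below can be reproduced along the following lines; `jacobian` is assumed to be a routine like the one sketched in Section \[sec:jacob\], and the sampling interval for the entries of ${\boldsymbol\sigma}$ mirrors the experiment reported in the next paragraph.

```python
import numpy as np

def conditioning_study(jacobian, n, n_trials=1000, sigma_max=100.0, seed=0):
    """Average singular values and condition numbers of J(sigma) over random draws."""
    rng = np.random.default_rng(seed)
    all_sv = []
    for _ in range(n_trials):
        sigma = rng.uniform(0.0, sigma_max, size=n)
        all_sv.append(np.linalg.svd(jacobian(sigma), compute_uv=False))
    all_sv = np.array(all_sv)
    return all_sv.mean(axis=0), all_sv[:, 0] / all_sv[:, -1]
```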
![SVD of the Jacobian matrix: left, average singular values and errors ($n=20$); right, average singular values for $n=10,20,30,40$.[]{data-label="fig:jacksvd"}](jacksvd1 "fig:"){width=".48\textwidth"} ![SVD of the Jacobian matrix: left, average singular values and errors ($n=20$); right, average singular values for $n=10,20,30,40$.[]{data-label="fig:jacksvd"}](jacksvd2 "fig:"){width=".48\textwidth"} Having fixed $m=10$, we randomly generate $1000$ vectors ${\boldsymbol\sigma}\in{{\mathbb{R}}}^{20}$, having components in $[0,100]$. For each of them we evaluate the corresponding Jacobian $J({\boldsymbol\sigma})$ by the formulae proved in Theorem \[theorem1\] and compute its SVD. The left graph in Figure \[fig:jacksvd\] shows the average of the singular values obtained by the above procedure and, for each of them, its minimum and maximum value. It is clear that the deviation from the average is small, so that the condition number of the Jacobian matrix is of the same order of magnitude in all tests. Consequently, the linearized problem is severely ill-conditioned independently of the value of ${\boldsymbol\sigma}$, and we do not expect its condition number to change much during the iteration. The right graph in Figure \[fig:jacksvd\] reports the average singular values when $n=2m=10,20,30,40$. The figure shows that the condition number is about $10^{14}$ when $n=10$ and increases with dimension. The singular values appear to be exponentially decaying, so the problem is not strictly rank-deficient. The decay rate of the singular values appears to change below machine precision $2.2\cdot 10^{-16}$, which is represented in the graph by a horizontal line. The exact singular values are likely to decay at a faster rate, while the computed ones, reported in the graph, are probably strongly perturbed by error propagation. A problem of this kind is generally referred to as a *discrete ill-posed problem* [@han98], so regularization is needed. A typical approach for the solution of ill-posed problems is Tikhonov regularization. It has been applied by various authors to the inversion of geophysical data; see, e.g., [@borch97; @deidda03; @hendr02]. To apply Tikhonov's method to the nonlinear problem , one has to solve the minimization problem $$\min_{{\boldsymbol\sigma}\in{{\mathbb{R}}}^n} \{ \|{\mathbf{r}}({\boldsymbol\sigma})\|^2 + \mu^2\|L{\boldsymbol\sigma}\|^2 \} \label{tikhonov}$$ for a fixed value of the parameter $\mu$, where $L$ is a regularization matrix; $L$ is often chosen as the identity matrix, or a discrete approximation of the first or second derivative. When the variance of the noise in the data is known, the regularization parameter $\mu$ is usually chosen by the discrepancy principle, otherwise various heuristic methods are used; see [@han98]. The available methods to estimate the parameter require the computation of the regularized solution ${\boldsymbol\sigma}_\mu$ of  for many values of $\mu$. This can be done, for example, by the Gauss–Newton method, leading to a large computational effort. To reduce the complexity we consider an alternative regularization technique based on a low-rank approximation of the Jacobian matrix. The best rank $\ell$ approximation ($\ell\leq p$) to the Jacobian according to the Euclidean norm, i.e., the matrix $A_\ell$ which minimizes $\|J-A\|$ over all the matrices of rank $\ell$, can be easily obtained by the above SVD decomposition $J=U\Gamma V^T$.
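A minimal sketch of the rank-$\ell$ truncation obtained from the SVD:

```python
import numpy as np

def best_rank_ell(J, ell):
    """A_ell = sum of the ell largest singular triplets of J (Eckart-Young)."""
    U, g, Vt = np.linalg.svd(J, full_matrices=False)
    return (U[:, :ell] * g[:ell]) @ Vt[:ell]
```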
This procedure allows us to replace the ill-conditioned Jacobian matrix with a well-conditioned rank-deficient matrix $A_\ell$. The corresponding solution to is known as the truncated SVD (TSVD) solution [@han87] and can be expressed as $${\mathbf{s}}^{(\ell)} = -A_\ell^\dagger {\mathbf{r}} = -\sum_{i=1}^\ell \frac{{\mathbf{u}}_i^T{\mathbf{r}}}{\gamma_i} {\mathbf{v}}_i, \label{tsvdsol}$$ where $\ell=1,\ldots,p$ is the regularization parameter, $\gamma_i$ are the singular values, the singular vectors ${\mathbf{u}}_i$ and ${\mathbf{v}}_i$ are the orthogonal columns of $U$ and $V$, respectively, and ${\mathbf{r}}={\mathbf{r}}({\boldsymbol\sigma}_k)$. To introduce a regularization matrix $L\in{{\mathbb{R}}}^{t\times n}$ ($t\leq n$), problem is usually replaced by $$\min_{{\mathbf{s}}\in\mathcal{S}} \|L{\mathbf{s}}\|, \qquad \mathcal{S} = \{ {\mathbf{s}}\in{{\mathbb{R}}}^n ~:~ J^TJ{\mathbf{s}} = -J^T{\mathbf{r}} \}, \label{minlnorm}$$ under the assumption $\mathcal{N}(J) \cap \mathcal{N}(L) = \{0\}$. The generalized singular value decomposition (GSVD) [@paige81] of the matrix pair $(J,L)$ is the factorization $$J = U \Sigma_J Z^{-1}, \qquad L = V \Sigma_L Z^{-1},$$ where $U$ and $V$ are orthogonal matrices and $Z$ is nonsingular. The general form of the diagonal matrices $\Sigma_J$ and $\Sigma_L$, having the same size of $J$ and $L$, is more complicated than we need, so we analyze two cases we are interested in. In the case $2m\geq n=p$, the two diagonal matrices are given by $$\Sigma_J = \begin{bmatrix} 0 & 0 \\ C & 0 \\ 0 & I_{n-t} \end{bmatrix}, \qquad \Sigma_L = \begin{bmatrix} S & 0 \end{bmatrix},$$ where $I_{n-t}$ is the identity matrix of size $n-t$ and $$C = \diag(c_1,\ldots,c_t), \qquad S = \diag(s_1,\ldots,s_t),$$ with $c_i^2+s_i^2=1$. The diagonal elements are ordered such that the *generalized singular values* $\gamma_i=c_i/s_i$ are nondecresing with $i=1,\ldots,t$. When $p=2m<n$, we have $$\Sigma_J = \begin{bmatrix} 0 & C & 0 \\ 0 & 0 & I_{n-t} \end{bmatrix}, \qquad \Sigma_L = \begin{bmatrix} I_{n-2m} & 0 & 0 \\ 0 & S & 0 \end{bmatrix},$$ where $C$ and $S$ are diagonal matrices of size $2m-n+t$. The positivity of this number poses a constraint on the size of $L$. The truncated GSVD (TGSVD) solution ${\mathbf{s}}_{\ell}$ to is then defined as $${\mathbf{s}}^{(\ell)} = -\sum_{i=\overline{p}-\ell+1}^{\overline{p}} \frac{{\mathbf{u}}_{2m-p+i}^T{\mathbf{r}}}{c_i}\, {\mathbf{z}}_{n-p+i} - \sum_{i=\overline{p}+1}^p ({\mathbf{u}}_{2m-p+i}^T{\mathbf{r}})\, {\mathbf{z}}_{n-p+i}, \label{tgsvdsol}$$ where $\ell=0,1,\ldots,\overline{p}$ is the regularization parameter, $\overline{p}=t$ if $2m\geq n$ and $\overline{p}=2m-n+t$ if $2m<n$. Our approach to construct a regularized solution to consists of regularizing each step of the damped Gauss-Newton method by either TSVD or TGSVD. For a fixed value of the regularization parameter $\ell$, we substitute ${\mathbf{s}}$ in by ${\mathbf{s}}^{(\ell)}$ expressed by either or . We let the resulting method $${\boldsymbol\sigma}_{k+1}^{(\ell)} = {\boldsymbol\sigma}_k^{(\ell)} + \alpha_k{\mathbf{s}}_k^{(\ell)} \label{regdampedGN}$$ iterate until $$\|{\boldsymbol\sigma}_k^{(\ell)}-{\boldsymbol\sigma}_{k-1}^{(\ell)}\| < \tau \|{\boldsymbol\sigma}_k^{(\ell)}\| \quad \text{or} \quad k > 100 \quad \text{or} \quad \alpha_k < 10^{-5},$$ for a given tolerance $\tau$. The constraint on $\alpha_k$ is due to its role in ensuring the positivity of the solution. 
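A minimal sketch of the resulting regularized damped iteration follows, in its TSVD variant: the step is the truncated SVD solution for a fixed truncation index $\ell$, the step length is chosen by the Armijo–Goldstein rule with the positivity check, and the loop stops according to the criteria above. The routines `residual_fn` and `jac_fn` are assumed to behave like the sketches of the previous sections; the TGSVD variant would only change the computation of the step.

```python
import numpy as np

def tsvd_step(J, r, ell):
    """s^(ell) = -sum_{i<=ell} (u_i^T r / gamma_i) v_i."""
    U, g, Vt = np.linalg.svd(J, full_matrices=False)
    return Vt[:ell].T @ ((U[:, :ell].T @ (-r)) / g[:ell])

def armijo_goldstein(sigma, s, J, residual_fn, max_halvings=30):
    """Largest alpha = 2^{-i} satisfying the inequality and keeping sigma + alpha*s > 0."""
    r0sq = np.sum(residual_fn(sigma)**2)
    Js_sq = np.sum((J @ s)**2)
    alpha = 1.0
    for _ in range(max_halvings):
        trial = sigma + alpha * s
        if np.all(trial > 0) and \
           r0sq - np.sum(residual_fn(trial)**2) >= 0.5 * alpha * Js_sq:
            return trial, alpha
        alpha *= 0.5
    return sigma, 0.0        # no admissible step found

def regularized_gauss_newton(sigma0, residual_fn, jac_fn, ell,
                             tau=1e-3, max_iter=100, alpha_min=1e-5):
    sigma = np.asarray(sigma0, dtype=float)
    for _ in range(max_iter):
        J = jac_fn(sigma)
        r = residual_fn(sigma)
        s = tsvd_step(J, r, ell)
        new_sigma, alpha = armijo_goldstein(sigma, s, J, residual_fn)
        converged = np.linalg.norm(new_sigma - sigma) < tau * np.linalg.norm(new_sigma)
        sigma = new_sigma
        if converged or alpha < alpha_min:
            break
    return sigma
```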
Indeed, when the solution blows up because of ill-conditioning the damping parameter assumes very small values. We denote the solution at convergence by ${\boldsymbol\sigma}^{(\ell)}$. We will discuss the choice of $\ell$ in the next subsection. Choice of the regularization parameter -------------------------------------- In the previous Section we saw how to regularize the ill-conditioned problem with the aid of T(G)SVD. The choice of the regularization parameter is crucial in order to obtain a good approximation ${\boldsymbol\sigma}^{(\ell)}$ of ${\boldsymbol\sigma}$. In this work we make use of some well-known methods to choose a suitable index $\ell$. In real-world applications experimental data are always affected by noise. To model this situation, we assume that the data vector in the residual function , whose norm is minimized in problem , can be expressed as ${\mathbf{b}}=\widehat{{\mathbf{b}}}+{\mathbf{e}}$, where $\widehat{{\mathbf{b}}}$ contains the exact data and ${\mathbf{e}}$ is the noise vector. This vector is generally assumed to have normally distributed entries with mean zero and common variance. If an accurate estimate of the norm of the error $\mathbf{e}$ in $\mathbf{b}$ is known, the value of $\ell$ can often be determined with the aid of the discrepancy principle [@ehn96 Section 4.3]. It consists of determining the regularization parameter $\ell$ as the smallest index $\ell=\ell_{\text{discr}}$ such that $$\label{discrp} \|\mathbf{b}-{\mathbf{m}}({\boldsymbol\sigma}_{\ell_{\text{discr}}})\|\leq\kappa\|\mathbf{e}\|.$$ Here $\kappa>1$ is a user-supplied constant independent of $\|\mathbf{e}\|$. In our experiments we set $\kappa=1.5$, since it produced the best numerical results. The discrepancy principle typically yields a suitable truncation index when an accurate bound for $\|{\mathbf{e}}\|$ is available. We are also interested in the situation when an accurate bound for $\|{\mathbf{e}}\|$ is not available and, therefore, the discrepancy principle cannot be applied. A large number of methods for determining a regularization parameter in this situation have been introduced for linear inverse problems [@han98]. They are known as *heuristic* because it is not possible to prove convergence results for them, in the strict sense of the definition of a regularization method; see, e.g., [@ehn96 Chapter 4]. Nevertheless, it has been shown by numerical experiments, that some heuristic methods provide a good estimation of the optimal regularization parameter in many inverse problems of applicative interest. It is not possible, in general, to apply all the heuristic methods, which were developed in the linear case, to a nonlinear problem. In this paper we use the L-curve criterion [@hol93], which can be extended quite naturally to the nonlinear case. Let us consider the curve obtained by joining the points $$\left\{\log{\|{\mathbf{r}}({\boldsymbol\sigma}^{(\ell)})\|},\log{\|L {\boldsymbol\sigma}^{(\ell)} \|} \right\}, \quad \ell = 1,\dots,\overline{p},$$ where ${\mathbf{r}}({\boldsymbol\sigma}^{(\ell)}) = \mathbf{b}-{\mathbf{m}}({\boldsymbol\sigma}^{(\ell)})$ is the residual error associated to the approximate solution ${\boldsymbol\sigma}^{(\ell)}$ computed by the iterative method , using as a regularization method. If is used instead, it is sufficient to let $L=I$ and replace $\overline{p}$ by $p$. This curve exhibits a typical L-shape in many discrete ill-posed problems. 
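Both parameter-choice strategies reduce to simple loops over the truncation index, as sketched below; `solve(ell)` is assumed to return the converged regularized solution ${\boldsymbol\sigma}^{(\ell)}$, for instance a closure around the iteration sketched in the previous section.

```python
import numpy as np

def discrepancy_index(solve, residual_fn, noise_norm, ell_max, kappa=1.5):
    """Smallest ell with ||b - m(sigma^(ell))|| <= kappa * ||e||."""
    for ell in range(1, ell_max + 1):
        sigma_ell = solve(ell)
        if np.linalg.norm(residual_fn(sigma_ell)) <= kappa * noise_norm:
            return ell, sigma_ell
    return ell_max, sigma_ell          # fallback: least-regularized solution

def lcurve_points(solve, residual_fn, L, ell_max):
    """Points (log ||r(sigma^(ell))||, log ||L sigma^(ell)||) of the L-curve."""
    pts = []
    for ell in range(1, ell_max + 1):
        s_ell = solve(ell)
        pts.append((np.log(np.linalg.norm(residual_fn(s_ell))),
                    np.log(np.linalg.norm(L @ s_ell))))
    return np.array(pts)
```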
The L-curve criterion seeks to determine the regularization parameter by detecting the index $\ell$ of the point of the curve closer to the corner of the “L”. This choice produces a solution for which both the norm and the residual are fairly small. Various method has been proposed to determine the corner of the L-curve. In our numerical experiments we use two of them. The first one, which we denote as the *corner* method, considers a sequence of pruned L-curves, obtained by removing an increasing number of points, and constructs a list of candidate “vertices” produced by two different selection algorithms. The corner is selected from this list by a procedure which compares the norms and the residuals of the corresponding solutions [@hjr07]. It is currently implemented in [@han07]. The second procedure we use has been recently proposed in [@rr13], by extending a method by T. Regińska [@reg96], which detects the corner by solving an optimization problem. We will refer to this method as the *restricted Regińska* (ResReg) method. Numerical experiments {#sec:numex} ===================== To illustrate the performance of the inversion methods described in the previous sections we present here the results of a set of numerical experiments. Initially, we will apply our method to synthetic data sets, generated by choosing a conductivity distribution and adding random noise to data. Finally, we will analyze a real data set. ![Graphs of the conductivity distribution models $f_1$, $f_2$, and $f_3$. The horizontal axis reports the depth in meters, the vertical axis the electrical conductivity in Siemens/meter.[]{data-label="fig:funs"}](fun1 "fig:"){width=".32\textwidth"} ![Graphs of the conductivity distribution models $f_1$, $f_2$, and $f_3$. The horizontal axis reports the depth in meters, the vertical axis the electrical conductivity in Siemens/meter.[]{data-label="fig:funs"}](fun2 "fig:"){width=".32\textwidth"} ![Graphs of the conductivity distribution models $f_1$, $f_2$, and $f_3$. The horizontal axis reports the depth in meters, the vertical axis the electrical conductivity in Siemens/meter.[]{data-label="fig:funs"}](fun3 "fig:"){width=".32\textwidth"} Figure \[fig:funs\] reports the three functions $f_\ell(z)$, $\ell=1,2,3$, used in our experiments to model the distribution of conductivity, expressed in Siemens/meter, with respect to the depth $z$, measured in meters. The first one is differentiable ($f_1(z)={{\mathrm{e}}}^{-(z-1)^2}$), the second is piecewise linear, the third is a step function. All model functions assume the presence of a strongly conductive material at a given depth. For a chosen model function $f_k$ and a fixed number of layers $n$, we let the layers thickness assume the constant value $d_k=\bar{d}=2/(n-1)$, $k=1,\ldots,n-1$ (see Section \[sec:nonlin\]), so that $z_j=(j-1)\bar{d}$, $j=1,\ldots,n$. The choice of $\bar{d}$ is motivated by the common assumption that a GCM can give useful information about the conductivity of the ground up to a depth of 2 meters. This fact is confirmed by our experiments. We assign to each layer the conductivity $\sigma_j=f_k(z_j)$. Then, we apply the nonlinear model to compute the exact data vector $\widehat{\mathbf{b}}$, letting $$\widehat{b}_i = \begin{cases} \hat{b}^V_i = m^V({\boldsymbol\sigma},h_i), \quad & i=1,\dots,m, \\ \hat{b}^H_{m-i} = m^H({\boldsymbol\sigma},h_{m-i}), \quad & i=m+1,\dots,2m. 
\end{cases}$$ We assume that the measurements are taken with the EMS in both vertical and horizontal orientation, placed at the heights $h_i=(i-1)\bar{h}$ above the ground, $i=1,\ldots,m$, for a chosen height step $\bar{h}$; see . In our experiments $\bar{h}\geq 0.1\mathrm{m}$. To simulate experimental errors, we determine the perturbed data vector $\mathbf{b}$ by adding a noise vector to $\widehat{\mathbf{b}}$. Specifically, we let the vector $\mathbf{w}$ have normally distributed entries with mean zero and variance one, and compute $$\mathbf{b}=\widehat{\mathbf{b}}+\mathbf{w} \, \|\widehat{\mathbf{b}}\| \frac{\tau}{\sqrt{2m}}.$$ This implies that $\|\mathbf{b}-\widehat{\mathbf{b}}\|\approx\tau\|\widehat{\mathbf{b}}\|$. In the computed examples we use the noise levels $\tau=10^{-3},10^{-2},10^{-1}$. The value of $\tau$ is used in the discrepancy principle (\[discrp\]), where we substitute $\tau\|\widehat{\mathbf{b}}\|$ for $\|\mathbf{e}\|$. For each data set, we solve the least squares problem by the damped Gauss–Newton method . The damping parameter is determined by the Armijo–Goldstein principle, modified in order to ensure the positivity of the solution. Each step of the iterative method is regularized by either the TSVD approach , or by TGSVD , for a given regularization matrix $L$. In our experiments we use both $L=D_1$ and $L=D_2$, the discrete approximations of the first and second derivatives. This two choices pose a constraint on the magnitude of the slope and the curvature of the solution, respectively. To assess the accuracy of the computations we use the relative error $$e_\ell = \frac{\|{\boldsymbol\sigma}-{\boldsymbol\sigma}^{(\ell)}\|}{\|{\boldsymbol\sigma}\|}, \label{error}$$ where ${\boldsymbol\sigma}$ denotes the exact solution of the problem and ${\boldsymbol\sigma}^{(\ell)}$ its regularized solution with parameter $\ell$, obtained by . The experiments were performed using Matlab 8.1 (R2013a) on an Intel Core i7/860 computer with 8Gb RAM, running Linux. The software developed is available from the authors upon request. --------- ---- --------- --------- --------- --------- --------- --------- example $n=20$ $n=20$ $n=20$ 5 2.4e-01 2.4e-01 8.6e-02 8.0e-02 6.9e-02 7.0e-02 $f_1$ 10 2.2e-01 2.1e-01 5.2e-02 5.7e-02 5.2e-02 4.6e-02 20 2.2e-01 2.2e-01 3.9e-02 4.9e-02 3.1e-02 3.5e-02 5 3.1e-01 3.7e-01 7.2e-02 6.4e-02 9.7e-02 1.2e-01 $f_2$ 10 2.8e-01 3.5e-01 6.3e-02 6.2e-02 7.3e-02 8.2e-02 20 2.8e-01 3.9e-01 6.5e-02 5.9e-02 7.9e-02 7.2e-02 5 4.2e-01 4.6e-01 2.9e-01 2.9e-01 2.9e-01 3.0e-01 $f_3$ 10 3.5e-01 4.7e-01 2.7e-01 2.6e-01 2.7e-01 2.8e-01 20 3.3e-01 4.7e-01 2.6e-01 2.6e-01 2.7e-01 2.9e-01 --------- ---- --------- --------- --------- --------- --------- --------- : Optimal error $e_{\text{opt}}$ for $m=5,10,20$ and $n=20,40$, for the TSVD solution ($L=I$) and the TGSVD solution with $L=D_1$ and $L=D_2$. The Jacobian is computed as in Section \[sec:jacob\].[]{data-label="tab:example2"} Our first experiment tries to determine the optimal experimental setting, that is, the number of measurements to be taken and the number of underground layers to be considered. At the same time, we investigate the difference between the TSVD and the TGSVD approaches, and the effect on the solution of the regularization matrix $L$. For each of the three test conductivity models, we discretize the soil by 20 or 40 layers, up to the depth of 2m. We generate synthetic measures at 5, 10, and 20 equispaced heights up to 1.9m, and we solve the problem. 
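A minimal sketch of the synthetic-data setup just described, together with simple forward-difference forms for $L=D_1$ and $L=D_2$ (their exact scaling and boundary treatment are assumptions):

```python
import numpy as np

def add_noise(b_hat, tau, rng=None):
    """b = b_hat + w * ||b_hat|| * tau / sqrt(2m), with w standard Gaussian."""
    rng = np.random.default_rng() if rng is None else rng
    w = rng.standard_normal(b_hat.size)
    return b_hat + w * np.linalg.norm(b_hat) * tau / np.sqrt(b_hat.size)

def d1_matrix(n):
    """(n-1) x n forward-difference approximation of the first derivative."""
    return np.diff(np.eye(n), axis=0)

def d2_matrix(n):
    """(n-2) x n approximation of the second derivative."""
    return np.diff(np.eye(n), n=2, axis=0)
```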
This process is repeated for each regularization matrix. The (exact) Jacobian is computed as described in Section \[sec:jacob\]. Table \[tab:example2\] reports the values of the relative error $e_{\text{opt}}=\min_\ell e_\ell$, representing the best possible performance of the method. This value is the average over 20 realizations of the noise. ![Optimal reconstruction for the model functions $f_2$ and $f_3$. The number of underground layers is $n=40$, the noise level is $\tau=10^{-3}$. The solid line is the solution obtained with $m=5$, the dashed line corresponds to $m=10$, the line with bullets to $m=20$. The exact solution is represented by a dash-dotted line.[]{data-label="fig:best"}](best2 "fig:"){width=".49\textwidth"} ![Optimal reconstruction for the model functions $f_2$ and $f_3$. The number of underground layers is $n=40$, the noise level is $\tau=10^{-3}$. The solid line is the solution obtained with $m=5$, the dashed line corresponds to $m=10$, the line with bullets to $m=20$. The exact solution is represented by a dash-dotted line.[]{data-label="fig:best"}](best3 "fig:"){width=".49\textwidth"} It is clear that the TSVD approach is the least accurate. The TGSVD with $L=D_2$ gives the best results for $f_1$, that is when the solution is smooth. When the conductivity distribution is less regular, like $f_2$ and $f_3$, the first derivative $L=D_1$ produces the more accurate approximations. From the results, it seems convenient to use a large number of layers to discretize the soil, that is $n=40$. This choice does not increase significantly the computation time. It is obviously desirable to have at disposal a large number of measurements, however the results obtained with $m=5$ and $m=10$ are not much worse than those computed with $m=20$, and they might be sufficient to give a rough approximation of the depth localization of a conductive substance. This is an important remark, as it reduces the time needed for data acquisition. Figure \[fig:best\] gives an idea of the quality of the computed reconstructions for the model functions $f_2$ and $f_3$, with $n=40$ and noise level $\tau=10^{-3}$. The exact solution is compared to the approximations corresponing to $m=5,10,20$. The above remarks about the influence of the number of measurements $m$ is confirmed. It is also remarkable that the position of the maximum is very well localized. ------------- ---- --------- --------- --------- --------- --------- --------- orientation $n=20$ $n=20$ $n=20$ 5 6.9e-02 7.0e-02 7.2e-02 6.4e-02 2.9e-01 2.9e-01 both 10 5.2e-02 4.6e-02 6.3e-02 6.2e-02 2.7e-01 2.6e-01 20 3.1e-02 3.5e-02 6.5e-02 5.9e-02 2.6e-01 2.6e-01 5 1.4e-01 1.0e-01 1.8e-01 1.8e-01 3.7e-01 3.7e-01 vertical 10 7.0e-02 1.2e-01 1.4e-01 1.4e-01 3.8e-01 3.5e-01 20 7.5e-02 7.5e-02 1.2e-01 1.1e-01 3.3e-01 3.3e-01 5 1.3e-01 1.3e-01 2.7e-01 2.6e-01 4.4e-01 4.1e-01 horizontal 10 8.4e-02 6.1e-02 1.4e-01 1.2e-01 3.8e-01 4.0e-01 20 7.2e-02 6.7e-02 1.1e-01 8.6e-02 3.5e-01 3.4e-01 ------------- ---- --------- --------- --------- --------- --------- --------- : Optimal error $e_{\text{opt}}$ for $m=5,10,20$ and $n=20,40$, for $f_1$ $(L=D_2)$, $f_2$ $(L=D_1)$, and $f_3$ $(L=D_1)$. The results obtained from measurements collected with the instrument in both vertical and horizontal orientation are compared to those obtained with a single orientation.[]{data-label="tab:example5"} In the previous experiments we assumed that all the $2m$ entries of vector ${\mathbf{b}}$ in were available. 
In Table \[tab:example5\] we compare these results with those obtained by using only half of them, i.e., those corresponding to either the vertical or horizontal orientation of the instrument. The results with the label “both” in the first column are extracted from Table \[tab:example2\]. The results are slightly worse when the number of data is halved, especially for the smooth model function, while they are almost equivalent for the step function $f_3$. In Section \[sec:jacob\] we described the computation of the Jacobian matrix of , and compared it to the slower finite difference approximation and to the Broyden update . To investigate the execution time corresponding to each method, we let the method perform 100 iterations, with $L=D_2$, for a fixed regularization parameter ($\ell=4$). When the Jacobian is exactly computed, the execution time is 7.18s, while the finite difference approximation requires 18.96s. The speedup factor is 2.6, which is far less than the one theoretically expected. This is probably due to the implementation details, and to the fact that the Matlab programming language is interpreted. We performed the same experiment by applying the Broyden update and recomputing the Jacobian every $k_B$ iterations. For $k_B=5$ the execution time was 2.00s, for $k_B=10$, 1.32s. Despite this strong speedup, the accuracy is not substantially affected by this approach. Table \[tab:example2broyden\] reports the relative error $e_{\text{opt}}$ obtained by repeating the experiment of Table \[tab:example2\] using the Broyden method with $k_B=10$. We only report the values of $e_{\text{opt}}$ for the most interesting examples. The loss of accuracy is minimal. ---- --------- --------- --------- --------- --------- --------- $n=20$ $n=20$ $n=20$ 5 7.3e-02 7.6e-02 7.7e-02 7.6e-02 3.0e-01 2.9e-01 10 5.5e-02 4.8e-02 6.9e-02 7.4e-02 2.7e-01 2.8e-01 20 4.3e-02 4.0e-02 7.3e-02 6.9e-02 2.6e-01 2.7e-01 ---- --------- --------- --------- --------- --------- --------- : Optimal error $e_{\text{opt}}$ for $m=5,10,20$ and $n=20,40$, for $f_1$ $(L=D_2)$, $f_2$ $(L=D_1)$, and $f_3$ $(L=D_1)$. The Jacobian is computed every 10 iterations and then updated by the Broyden method.[]{data-label="tab:example2broyden"} ![Results for the reconstruction of test function $f_3$ with a variable step length $\xi$, which is reported on the horizontal axis. The left graph reports the average error $e_{\text{opt}}$, obtained with three regularization matrices $L=I,D_1,D_2$. Each test is repeated 20 times for each noise level $\tau=10^{-3},10^{-2},10^{-1}$. The right graph reports the corresponding standard deviations.[]{data-label="fig:meanstd"}](meanstd){width="\textwidth"} Another interesting issue is understanding which is the spatial resolutions of the inversion algorithm, that is, which is the performance of the method in the presence of a very thin conductive layer. To this end, we consider the test function $f_3$, and let the length $\xi$ of the step vary. Each problem is solved for three regolarization matrices, three noise levels, and each test is repeated 20 times for different noise realizations. The left graph of Figure \[fig:meanstd\] reports the average errors for each value of $\xi$, while the right graph displays the standard deviations. The choice $L=D_1$ appears to be the best. Indeed, not only the errors are better, but the smaller standard deviations ensure that the method is more reliable. 
Figure \[fig:gradino\] shows the reconstructions of $f_3$ with three different step lengths, with $\xi=1.5,1.0,0.7$, $L=D_1$, and $\tau=10^{-2}$. It is remarkable that the position of the maximum is well located by the algorithm even in the presence of a very thin step. ![Optimal reconstructions for the test function $f_3$, with step lengths 1.5, 1.0, and 0.7, obtained with $L=D_1$ and noise level $\tau=10^{-2}$.[]{data-label="fig:gradino"}](gradino){width="\textwidth"} [10]{} W. L. Anderson. Numerical integration of related [H]{}ankel transforms of orders 0 and 1 by adaptive digital filtering. , 44(7):1287–1305, 1979. Å. Bj[ö]{}rck. . SIAM, Philadelphia, 1996. B. Borchers, T. Uram, and J. M. H. Hendrickx. Tikhonov regularization of electrical conductivity depth profiles in field soils. , 61(4):1004–1009, 1997. Package LINEM38 available at <http://infohost.nmt.edu/~borchers/linem38.html>. J. B. Callegary, T. Ferr[é]{}, and R. W. Groom. Vertical spatial sensitivity and exploration depth of low-induction-number electromagnetic-induction instruments. , 6(1):158–167, 2007. D. L. Corwin and S. M. Lesch. Characterizing soil spatial variability with apparent soil electrical conductivity: I. survey protocols. , 46(1):103–133, 2005. G. P. Deidda, E. Bonomi, and C. Manzi. Inversion of electrical conductivity data with [T]{}ikhonov regularization approach: some considerations. , 46(3):549–558, 2003. H. W. Engl, M. Hanke, and A. Neubauer. . Kluwer, Dordrecht, 1996. D. C. Fraser and G. Hodges. Induction-response functions for frequency-domain electromagnetic mapping system for airborne and ground configurations. , 72(2):F35–F44, 2007. R. Gebbers, E. L[ü]{}ck, and K. Heil. Depth sounding with the [EM38]{}-detection of soil layering by inversion of apparent electrical conductivity measurements. , 7:95–102, 2007. P. C. Hansen. The truncated [SVD]{} as a method for regularization. , 27:543–553, 1987. P. C. Hansen. . SIAM, Philadelphia, PA, 1998. P. C. Hansen. egularization [T]{}ools: [V]{}ersion 4.0 for [M]{}atlab 7.3. , 46:189–194, 2007. P. C. Hansen, T. K. Jensen, and G. Rodriguez. An adaptive pruning algorithm for the discrete [L]{}-curve criterion. , 198(2):483–492, 2007. P. C. Hansen and D. P. O’Leary. The use of the l-curve in the regularization of discrete ill-posed problems. , 14:1487––1503, 1993. J. M. H. Hendrickx, B. Borchers, D. L. Corwin, S. M. Lesch, A. C. Hilgendorf, and J. Schlue. Inversion of soil conductivity profiles from electromagnetic induction measurements. , 66(3):673–685, 2002. Package NONLINEM38 available at <http://infohost.nmt.edu/~borchers/nonlinem38.html>. S. M. Lesch, D. J. Strauss, and J. D. Rhoades. Spatial prediction of soil salinity using electromagnetic induction techniques: 1. statistical prediction models: A comparison of multiple linear regression and cokriging. , 31(2):373–386, 1995. H. P. Martinelli and A. M. Osella. Small-loop electromagnetic induction for environmental studies at industrial plants. , 7(1):91, 2010. J. D. McNeill. Electromagnetic terrain conductivity measurement at low induction numbers. Technical Report TN-6, Geonics Limited, Mississauga, Ontario, Canada, 1980. J. M. Ortega and W. C. Rheinboldt. . Academic Press, 1970. C. C. Paige and M. A. Saunders. Towards a generalized singular value decomposition. , 18(3):398–405, 1981. J. G. Paine. Determining salinization extent, identifying salinity sources, and estimating chloride mass using surface, borehole, and airborne electromagnetic induction methods. , 39(3), 2003. T. Regińska. 
A regularization parameter in discrete ill-posed problems. , 17:740–749, 1996. L. Reichel and G. Rodriguez. Old and new parameter choice rules for discrete ill-posed problems. , 63(1):65–87, 2013. J. Van Der Kruk, J. A. C. Meekes, P. M. Van Den Berg, and J. T. Fokkema. An apparent-resistivity concept for low-frequency electromagnetic sounding techniques. , 48(6):1033–1052, 2000. J. R. Wait. . Academic Press, New York, 1982. R. Yao and J. Yang. Quantitative evaluation of soil salinity and its spatial distribution using electromagnetic induction method. , 97(12):1961–1970, 2010. [^1]: Dipartimento di Ingegneria Civile, Ambientale e Architettura, Università di Cagliari, Piazza d’Armi 1, 09123 Cagliari, Italy. E-mail: `[email protected]`. [^2]: Dipartimento di Matematica e Informatica, Università di Cagliari, viale Merello 92, 09123 Cagliari, Italy. E-mail: `[email protected]`, `[email protected]`.
--- abstract: | We investigate slow-light via stimulated Brillouin scattering in a room temperature optical fiber that is pumped by a spectrally broadened laser. Broadening the spectrum of the pump field increases the linewidth $\Delta\omega_p$ of the Stokes amplifying resonance, thereby increasing the slow-light bandwidth. One physical bandwidth limitation occurs when the linewidth becomes several times larger than the Brillouin frequency shift $\Omega_B$ so that the anti-Stokes absorbing resonance cancels out substantially the Stokes amplifying resonance and hence the slow-light effect. We find that partial overlap of the Stokes and anti-Stokes resonances can actually lead to an enhancement of the slow-light delay - bandwidth product when $\Delta\omega_p \simeq 1.3 \Omega_B$. Using this general approach, we increase the Brillouin slow-light bandwidth to over 12 GHz from its nominal linewidth of $\sim$30 MHz obtained for monochromatic pumping. We controllably delay 75-ps-long pulses by up to 47 ps and study the data pattern dependence of the broadband SBS slow-light system. author: - 'Zhaoming Zhu,  Andrew M. C. Dawes,  Daniel J. Gauthier,  Lin Zhang, , and Alan E. Willner, [^1] [^2][^3]' title: Broadband SBS Slow Light in an Optical Fiber --- Slow Light, Stimulated Brillouin Scattering, Optical Fiber, Pulse Propagation, Q penalty. Introduction ============ has been great interest in slowing the propagation speed of optical pulses (so-called slow light) using coherent optical methods [@Gauthier_Boyd]. Slow-light techniques have many applications for future optical communication systems, including optical buffering, data synchronization, optical memories, and signal processing [@Gauthier_PhysicsWorld_2005; @Gauthier_PhotonicsSpectra_2006]. It is usually achieved with resonant effects that cause large normal dispersion in a narrow spectral region (approximately equal to the resonance width), which increases the group index and thus reduces the group velocity of optical pulses. Optical resonances associated with stimulated Brillouin scattering (SBS) [@Okawachi_PRL_2005]–[@Zhu_JOSAB_2005], stimulated Raman scattering [@Sharping_OE_2005] and parametric amplification [@Dahan_OE_2005] in optical fibers have been used recently to achieve slow light. The width of the resonance enabling the slow-light effect limits the minimum duration of the optical pulse that can be effectively delayed without much distortion, and therefore limits the maximum data rate of the optical system [@Stenner_2005]. In this regard, fiber-based SBS slow light is limited to data rates less than a few tens of Mb/s due to the narrow Brillouin resonance width ($\sim$30 MHz in standard single-mode optical fibers). Recently, Herráez *et al*. [@Herraez_OE_2006] increased the SBS slow-light bandwidth to about 325 MHz by broadening the spectrum of the SBS pump field. Here, we investigate the fundamental limitations of this method and extend their work to achieve a SBS slow-light bandwidth as large as 12.6 GHz, thereby supporting data rates of over 10 Gb/s [@Zhu_OFC_2006]. With our setup, we delay 75-ps pulses by up to 47 ps and study the data pulse quality degradation in the broadband slow-light system. This paper is organized as follows. The next section describes the broadband-pump method for increasing the SBS slow-light bandwidth and discuss its limitations. Section \[sect3\] presents the experimental results of broadband SBS slow light, where we investigate the delay of single and multiple pulses passing through the system. 
From the multiple-pulse data, we estimate the degradation of the eye-diagram as a function of delay, a first step toward understanding performance penalties incurred by this slow-light method. Section \[sect4\] concludes the paper. SBS Slow Light ============== In a SBS slow-light system, a continuous-wave (CW) laser beam (angular frequency $\omega_p$) propagates through an optical fiber, which we take as the $-z$-direction, giving rise to amplifying and absorbing resonances due to the process of electrostriction. A counterpropagating beam (along the $+z$-direction) experiences amplification in the vicinity of the Stokes frequency $\omega_s=\omega_p-\Omega_B$, where $\Omega_B$ is the Brillouin frequency shift, and absorption in the vicinity of the anti-Stokes frequency $\omega_{as}=\omega_p+\Omega_B$. A pulse (denoted interchangeably by the “probe” or “data” pulse) launched along the $+z$-direction experiences slow (fast) light propagation when its carrier frequency $\omega$ is set to the amplifying (absorbing) resonance [@Okawachi_PRL_2005]–[@Zhu_JOSAB_2005]. In the small-signal regime, the output pulse spectrum is related to the input spectrum through the relation $E(z=L,\omega)=E(z=0,\omega)\exp[g(\omega)L/2]$, where $L$ is the fiber length and $g(\omega)$ is the complex SBS gain function. The complex gain function is the convolution of the intrinsic SBS gain spectrum $\tilde{g}_{0}(\omega)$ and the power spectrum of the pump field $I_p(\omega_p)$ and is given by $$\begin{aligned} \label{Eq:convolution} g(\omega) &= \tilde{g}_0(\omega) \otimes I_p(\omega_p)\\ \nonumber &=\int_{-\infty}^{\infty} \frac{g_0 I_p(\omega_p)}{1-i(\omega + \Omega_B -\omega_p)/(\Gamma_B/2)} d\omega_p,\end{aligned}$$ where $g_0$ is linecenter SBS gain coefficient for a monochromatic pump field, and $\Gamma_B$ is the intrinsic SBS resonance linewidth (FWHM in radians/s). The real (imaginary) part of $g(\omega)$ is related to the gain (refractive index) profile arising from the SBS resonance. In the case of a monochromatic pump field, $I_p(\omega_p)=I_0 \delta(\omega_p - \omega_{p0})$, and hence $g(\omega)=g_0 I_0 /[1-i(\omega+\Omega_B -\omega_{p0})/(\Gamma_B/2)]$; the gain profile is Lorentzian. For a data pulse whose duration is much longer than the Brillouin lifetime $1/\Gamma_B$ tuned to the Stokes resonance ($\omega=\omega_s$), the SBS slow-light delay is given by $T_{del}=G_0/\Gamma_B$ where $G_0=g_0 I_0 L$ is the gain parameter and $\exp(G_0)$ is the small-signal gain [@Okawachi_PRL_2005]–[@Zhu_JOSAB_2005]. The SBS slow-light bandwidth is given approximately by $\Gamma_B/2\pi$ (FWHM in cycles/s). Equation (\[Eq:convolution\]) shows that the width of the SBS amplifying resonance can be increased by using a broadband pump. Regardless of the shape of the pump power spectrum, the resultant SBS spectrum is approximately equal to the pump spectrum when the pump bandwidth is much larger than the intrinsic SBS linewidth. This increased bandwidth comes at some expense: the SBS gain coefficient scales inversely with the bandwidth, which must be compensated using a higher pump intensity or using a fiber with larger $g_0$. To develop a quantitative model of the broadband SBS slow-light, we consider a pump source with a Gaussian power spectrum, as realized in our experiment. To simplify the analysis, we first consider the case when the width of the pump-spectrum broadened Stokes and anti-Stokes resonances is small in comparison to $\Omega_B$, which is the condition of the experiment of Ref. [@Herraez_OE_2006]. 
Later, we will relax this assumption and consider the case when $\Delta\omega_p\sim\Omega_B$ where the two resonances begin to overlap, which is the case of our experiment. In our analysis, we take the pump power spectrum as $$I_p(\omega_p)=\frac{I_0}{\sqrt{\pi}\Delta\omega_p} \exp\left[-\left(\frac{\omega_p-\omega_{p0}}{\Delta \omega_p}\right)^2 \right ]. \label{Eq:pump-spec}$$ Inserting this expression into Eq. (\[Eq:convolution\]) and evaluating the integral results in a complex SBS gain function given by $$g(\omega)=g_0 I_0 \sqrt{\pi}\eta \text{w}(\xi+i\eta), \label{Eq:gain-profile}$$ where $\text{w}(\xi+i\eta)$ is the complex error function [@Handbook], $\xi = (\omega+\Omega_B -\omega_{p0})/\Delta\omega_p$, and $\eta =\Gamma_B/(2\Delta\omega_p)$. When $\eta \ll 1$ (the condition of our experiment), the gain function is given approximately by $$g(\omega)=g_0 I_0 \sqrt{\pi}\eta \exp(-\xi^2)\text{erfc}(-i\xi), \label{Eq:gain-profile-approx}$$ where erfc is the complementary error function. The width (FWHM, rad/s) of the gain profile is given by $\Gamma=2\sqrt{\text{ln}~2}\Delta\omega_p$, which should be compared to the unbroadened resonance width $\Gamma_B$. The line-center gain of the broadened resonance is given by $G=\sqrt{\pi}\eta G_0$. The SBS slow-light delay at line center for the broadened resonance is given by $$T_{del}=\frac{d {\rm Im}[g(\omega)L/2]}{d\omega}|_{\omega=\omega_{s}} = \frac{2\sqrt{\text{ln}~2}}{\sqrt{\pi}}\frac{G}{\Gamma} \approx 0.94\frac{G}{\Gamma}. \label{Eq:delay}$$ A Gaussian pulse of initial pulse width $T_0$ ($1/e$ intensity half-width) exits the medium with a broader pulse width $T_{out}$ determined through the relation $$T_{out}^2=T_0^2+\frac{G}{\Delta\omega_p^2}. \label{Eq:pulse-width}$$ Assuming that a slow-light application can tolerate no more than a factor of two increase in the input pulse width ($T_{out}=2T_0$), the maximum attainable delay is given by $$\left(\frac{T_{del}^{max}}{T_o}\right)=\frac{3}{\sqrt{\pi}}T_0 \Delta\omega_p, \label{Eq:max-delay}$$ which is somewhat greater than that found for a Lorentzian line [@Boyd_2005]. From Eq. (\[Eq:max-delay\]), it is seen that large absolute delays for fixed $\Delta\omega_p$ can be obtained by taking $T_0$ large. ![SBS gain profiles at different pump power spectrum bandwidth $\Delta \omega_p$: (a) real part and (b) imaginary part of $g(\omega)$ as a function of frequency detuning from the pump frequency. Solid curves: $\Delta \omega_p/\Omega_B=0.5$, dashed curves: $\Delta \omega_p/\Omega_B=1.3$, dashed-dotted curves: $\Delta \omega_p/\Omega_B=2.5$.[]{data-label="Fig:gain-profiles"}](fig1.eps){width="45.00000%"} ![Relative SBS delay as a function of the SBS resonance linewidth.[]{data-label="Fig:optimum-linewidth"}](fig2.eps){width="45.00000%"} We now turn to the case when the pump spectral bandwidth $\Delta \omega_p$ is comparable with the Brillouin shift $\Omega_B$. In this situation, the gain feature at the Stokes frequency $\omega_{p0}-\Omega_B$ overlaps with the absorption feature at the anti-Stokes frequency $\omega_{p0}+\Omega_B$. The combination of both features results in a complex gain function given by $$g(\omega)=\frac{G}{L} \left({\rm e}^{-\xi_+^2}\text{erfc}(-i\xi_+)- {\rm e}^{-\xi_-^2}\text{erfc}(-i\xi_-)\right),$$ where $\xi_{\pm}=(\omega \pm \Omega_B -\omega_{p0})/\Delta\omega_p$. As shown in Fig. 
\[Fig:gain-profiles\], the anti-Stokes absorption shifts the effective peak of the SBS gain to lower frequencies when $\Delta \omega_p$ is large, and reduces the slope of the linear phase-shift region and hence the slow-light delay. For intermediate values of $\Delta\omega_p$, slow-light delay arising from the wings of the anti-Stokes resonances enhances the delay at the center of the Stokes resonance. Therefore, there is an optimum value of the resonance linewidth that maximizes the delay. Figure \[Fig:optimum-linewidth\] shows the relative delay as a function of the resonance bandwidth, where it is seen that the optimum value occurs at $\Delta\omega_p \sim$ 1.3 $\Omega_B$ and that the delay falls off only slowly for large resonance bandwidths. This result demonstrates that it is possible to obtain practical slow-light bandwidths that can somewhat exceed a few times $\Omega_B$. Experiments and Results {#sect3} ======================= As discussed above, the SBS slow-light pulse delay $T_{del}$ is proportional to $G/\Gamma$. The decrease in $G$ that accompanies the increase in $\Delta\omega_p$ needs to be compensated by increasing the fiber length, pump power, and/or using highly nonlinear optical fibers (HNLF). In our experiment, we use a 2-km-long HNLF (OFS, Denmark) that has a smaller effective modal area and therefore a larger SBS gain coefficient $g_0$ when compared with a standard single-mode optical fiber. We also use a high-power Erbium-doped fiber amplifier (EDFA, IPG Model EAD-1K-C) to provide enough pump power to achieve appreciable gain. ![Experiment setup. EDFA: Erbium-doped fiber amplifier, MZM: Mach-Zehnder modulator, FPC: fiber polarization controller, HNLF: highly nonlinear fiber. []{data-label="Fig:setup"}](fig3.eps){width="48.00000%"} To achieve a broadband pump source, we directly modulate the injection current of a distributed feedback (DFB) single-mode semiconductor laser. The change in injection current changes the refractive index of the laser gain medium and thus the laser frequency, which is proportional to the current-modulation amplitude. We use an arbitrary waveform generator (TEK, AWG2040) to create a Gaussian noise source at a 400-MHz clock frequency, which is amplified and summed with the DC injection current of a 1550-nm DFB laser diode (Sumitomo Electric, STL4416) via a bias-T with an input impedance of 50 Ohms. The resultant laser power spectrum is approximately Gaussian. The pump power spectral bandwidth is adjusted by changing the peak-peak voltage of the noise source. The experiment setup is shown schematically in Fig. \[Fig:setup\]. Broadband laser light from the noise-current-modulated DFB laser diode is amplified by the EDFA and enters the HNLF via a circulator. The Brillouin frequency shift of the HNLF is measured to be $\Omega_B/2\pi$ = 9.6 GHz. CW light from another tunable laser is amplitude-modulated to form data pulses that counter-propagate in the HNLF with respect to the pump wave. Two fiber polarization controllers (FPC) are used to maximize the transmission through the intensity modulator and the SBS gain in the slow-light medium. The amplified and delayed data pulses are routed out of the system via a circulator and detected by a fast photoreceiver (12-GHz bandwidth, New Focus Model 1544B) and displayed on a 50-GHz-bandwidth sampling oscilloscope (Agilent 86100A). The pulse delay is determined from the waveform traces displayed on the oscilloscope. 
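The interplay between the overlapping Stokes and anti-Stokes resonances discussed above can be reproduced numerically. The sketch below is for illustration only (it is not the analysis code behind the figures): it evaluates the two-resonance gain function through the Faddeeva function $\text{w}(z)=e^{-z^2}\text{erfc}(-iz)$, available as `scipy.special.wofz`, obtains the group delay at the Stokes line center by numerical differentiation, and normalizes it by the single-resonance prediction $G/(\sqrt{\pi}\Delta\omega_p)$. This normalization of the “relative delay”, and the parameter values, are assumptions made for the example.

```python
# Sketch of the broadened SBS gain with overlapping Stokes/anti-Stokes
# resonances, using scipy.special.wofz(z) = exp(-z**2) * erfc(-1j*z).
import numpy as np
from scipy.special import wofz

Omega_B = 2 * np.pi * 9.6e9        # Brillouin shift of the HNLF (rad/s)
G = 1.0                            # gain parameter G = g0*I0*L (arbitrary units)

def g_times_L_over_2(om, dwp):
    # two-resonance gain; the pump center frequency omega_p0 is set to zero
    xi_plus = (om + Omega_B) / dwp
    xi_minus = (om - Omega_B) / dwp
    return 0.5 * G * (wofz(xi_plus) - wofz(xi_minus))

def delay_at_stokes(dwp):
    # group delay T_del = d Im[g(omega) L/2] / d omega at the Stokes center
    om = np.linspace(-Omega_B - 3 * dwp, -Omega_B + 3 * dwp, 2001)
    phase = np.imag(g_times_L_over_2(om, dwp))
    return np.gradient(phase, om)[np.argmin(np.abs(om + Omega_B))]

dwps = np.linspace(0.2, 4.0, 200) * Omega_B
rel = [delay_at_stokes(d) * np.sqrt(np.pi) * d / G for d in dwps]
print("relative delay peaks at dwp ~ %.2f * Omega_B" % (dwps[np.argmax(rel)] / Omega_B))
```

Running this scan locates the maximum of the relative delay near $\Delta\omega_p \approx 1.3\,\Omega_B$, consistent with Fig. \[Fig:optimum-linewidth\].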
To quantify the effect of the bandwidth-broadened pump laser on the SBS process, we measured the broadened SBS gain spectra by scanning the wavelength of a CW laser beam and measuring the resultant transmission. Figure \[Fig:delay-data\](a) shows an example of the spectra. It is seen that the features overlap and that Eq. (\[Eq:gain-profile-approx\]) does an excellent job in predicting our observations, where we adjusted $\Gamma$ to obtain the best fit. We find $\Gamma/2\pi$ = 12.6 GHz ($\Delta\omega_p/\Omega_B\sim 0.8$), which is somewhat smaller than the optimum value. We did not attempt to investigate higher bandwidths to avoid overdriving the laser with the broadband signal. This non-ideality could be avoided by using a laser with a greater tuning sensitivity. ![Observation of broadband slow-light delay. (a) Measured SBS gain spectrum with a dual Gaussian fit. The SBS gain bandwidth (FWHM) is found to be 12.6 GHz. Pulse delay (b) and pulse width (c) as a function of SBS gain. In (b), the solid line is the linear fit of the measured data (solid squares), and the dashed line is obtained with Eq. (\[Eq:delay\]). In (c), the dashed curve is obtained with Eq. (\[Eq:pulse-width\]). (d) Pulse waveforms at 0-dB and 14-dB SBS gain. The input data pulsewidth is $\sim$75 ps.[]{data-label="Fig:delay-data"}](fig4.eps "fig:"){width="48.00000%"} Based on the measured SBS bandwidth, we chose a pulsewidth (FWHM) of $\sim$75 ps ($T_0 \sim$ 45 ps) produced by a 14 Gb/s electrical pulse generator. Figures \[Fig:delay-data\](b)-(d) show the experimental results for such input pulses. Figure \[Fig:delay-data\](b) shows the pulse delay as a function of the gain experienced by the pulse, which is determined by measuring the change in the pulse height. A 47-ps SBS slow-light delay is achieved at a pump power of $\sim$580 mW coupled into the HNLF, which gives a gain of about 14 dB. It is seen that the pulse delay scales linearly with the gain, demonstrating the ability to control the slow-light delay all-optically. The dashed line in Fig. \[Fig:delay-data\](b) is obtained with Eq. (\[Eq:delay\]), which tends to underestimate the time delay because the delay is enhanced by the contribution from the anti-Stokes line (see Fig. \[Fig:optimum-linewidth\]). Figure \[Fig:delay-data\](c) shows the width of the delayed pulse as a function of gain. The data pulse is seen to be broadened as it is delayed, where it is broadened by about 40% at a delay of about 47 ps. The dashed curve in Fig. \[Fig:delay-data\](c) is obtained with Eq. (\[Eq:pulse-width\]). Figure \[Fig:delay-data\](d) shows the waveforms of the undelayed and delayed pulses at a gain of 14 dB. We also observe additional pulse delays caused by fiber lengthening under strong pumping, which is due to fiber heating. These thermally-induced delays are not included in Fig. \[Fig:delay-data\](b). ![Pattern dependence of SBS slow-light delay. (a) Data pulses of pattern ‘101.’ (b) Data pulses of pattern ‘1001.’ Note the change in the horizontal scale. (c) Data pulse of pattern ‘10000000000000001.’ In (a)-(c), the data bit-rate is 14 Gb/s and the input single pulsewidth is $\sim$75 ps. (d) Calculated Q penalty vs. normalized time delay for 13.3 Gb/s and 10 Gb/s bit-rate data. []{data-label="Fig:pattern"}](fig5.eps){width="48.00000%"} To investigate how the pulse broadening seen in Fig. \[Fig:delay-data\](c) might impact a communication system, we examine the pattern dependence of the pulse distortion.
For example, in NRZ data format, a single ‘1’ pulse has a different gain than consecutive ‘1’ pulses [@Zhang_OFC_06]. The pattern-dependent gain could induce a different ‘1’ level in the whole data stream, while pattern-dependent delay can lead to a large timing jitter. Figures \[Fig:pattern\](a)-(c) show the delayed pulse waveforms of three simple NRZ data patterns with a bit-rate of 14 Gb/s. It is clear that the pulses overlap when they are closer to each other, which degrades the system performance. To quantify the signal quality degradation, we use Q-factor (signal quality factor) of input and output pulses, which is defined as $(m_1-m_0)/(\sigma_1+\sigma_0)$, where $m_1$, $m_0$, $\sigma_1$, $\sigma_0$ are the mean and standard deviation of the signal samples when a ‘1’ or ‘0’ is received. We examine the Q-penalty (decrease in Q-factor) produced by the broadband SBS slow-light system by numerical simulations. Figure \[Fig:pattern\](d) shows the Q-penalty as a function of time delay for 10 Gb/s and 13.3 Gb/s bit-rate data streams, respectively. In the simulations, the ‘1’ pulse is assumed to be Gaussian-shaped with a pulsewidth (FWHM) of the bit time (100 ps for 10 Gb/s, 75 ps for 13.3 Gb/s). The slow-light delay is normalized by the bit time so that Q-penalties in different bit-rate systems can be compared. It is seen that the Q-penalty increases approximately linearly with the normalized delay, and that the 13.3 Gb/s data rate incurs a higher penalty than the 10 Gb/s data rate. The penalty is higher at the higher data rate because the higher-speed signal is more vulnerable to the pattern dependence, especially when the slow-light bandwidth is comparable to the signal bandwidth. Error-free transmission (BER $<10^{-9}$) is found at a normalized delay of 0.25 or less. In an optimized system, it is expected that the pattern dependence can be decreased using a spectrum-efficient signal modulation format or the signal carrier frequency detuning technique [@Zhang_OFC_06], for example. Conclusion {#sect4} ========== In summary, we have increased the bandwidth of SBS slow light in an optical fiber to over 12 GHz by spectrally broadening the pump laser, thus demonstrating that it can be integrated into existing data systems operating over 10 Gb/s. We observed a pattern dependence whose power penalty increases with increasing slow-light delay; research is underway to decrease this dependence and improve the performance of the high-bandwidth SBS slow-light system. Acknowledgment {#acknowledgment .unnumbered} ============== We gratefully acknowledge the loan of the fast pulse generator and sampling oscilloscope by Martin Brooke of the Duke Electrical and Computer Engineering Department. [99]{} R. W. Boyd and D. J. Gauthier, in [*Progress in Optics*]{}, E. Wolf, Ed. (Elsevier, Amsterdam, 2002), Vol. 43, Ch. 6, pp. 497–530. D. Gauthier, “Slow light brings faster communication,” [*Phys. World*]{}, vol. 18, no. 12, pp. 30–32, Dec. 2005. D. J. Gauthier, A. L. Gaeta, and R. W. Boyd, “Slow Light: From basics to future prospects,” [*Photonics Spectra*]{}, vol. 40, no. 3, pp. 44–50, Mar. 2006. R. W. Boyd, D. J. Gauthier, and A. L. Gaeta, “Applications of slow-light in telecommunications,” [*Optics & Photonics News*]{}, vol. 17, no. 4, pp. 19–23, Apr. 2006. Y. Okawachi, M. S. Bigelow, J. E. Sharping, Z. Zhu, A. Schweinsberg, D. J. Gauthier, R. W. Boyd, and A. L. Gaeta, “Tunable all-optical delays via Brillouin slow light in an optical fiber,” [*Phys. Rev. Lett.*]{}, vol. 94, pp. 153902-1–153902-4, Apr. 
2005. K. Y. Song, M. G. Herráez, and L. Thévenaz, “Observation of pulse delaying and advancement in optical fibers using stimulated Brillouin scattering,” [*Opt. Express*]{}, vol. 13, no. 1, pp. 82–88, Jan. 2005. K. Y. Song, M. G. Herráez, and L. Thévenaz, “Long optically controlled delays in optical fibers,” [*Opt. Lett.*]{}, vol. 30, no. 14, pp. 1782–1784, Jul. 2005. M. G. Herráez, K. Y. Song, and L. Thévenaz, “Optically controlled slow and fast light in optical fibers using stimulated Brillouin scattering,” [*Appl. Phys. Lett.*]{}, vol. 87, pp. 081113-1–081113-3, Aug. 2005. Z. Zhu, D. J. Gauthier, Y. Okawachi, J. E. Sharping, A. L. Gaeta, R. W. Boyd, and A. E. Willner, “Numerical study of all-optical slow-light delays via stimulated Brillouin scattering in an optical fiber,” [*J. Opt. Soc. Am. B*]{}, vol. 22, no. 11, pp. 2378–2384, Nov. 2005. J. E. Sharping, Y. Okawachi, and A. L. Gaeta, “Wide bandwidth slow light using a Raman fiber amplifier,” [*Opt. Express*]{}, vol. 13, no. 16, pp. 6092–6098, Aug. 2005. D. Dahan and G. Eisenstein, “Tunable all optical delay via slow and fast light propagation in a Raman assisted fiber optical parametric amplifier: a route to all optical buffering,” [*Opt. Express*]{}, vol. 13, no. 16, pp. 6234–6249, Aug. 2005. M. D. Stenner and M. A. Neifeld, Z. Zhu, A. M. C. Dawes, and D. J. Gauthier, “Distortion management in slow-light pulse delay,” [*Opt. Express*]{}, vol. 13, no. 25, pp. 9995–10002, Dec. 2005. M. G. Herráez, K. Y. Song, and L. Thévenaz, “Arbitrary-bandwidth Brillouin slow light in optical fibers,” [*Opt. Express*]{}, vol. 14, no. 4, pp. 1395–1400, Feb. 2006. Z. Zhu, A. M. C. Dawes, D. J. Gauthier, L. Zhang, and A. E. Willner, “12-GHz-bandwidth SBS slow light in optical fibers,” presented at the Optical Fiber Communications Conf., Anaheim, CA, 2006, Paper PDP1. M. Abramowitz and I. A. Stegun, eds., [*Handbook of Mathematical functions*]{} (Dover, New York, 1974), Ch. 7. R. W. Boyd, D. J. Gauthier, A. L. Gaeta, and A. E. Willner, “Maximum time delay achievable on propagation through a slow-light medium,” [*Phys. Rev. A*]{}, vol. 71, pp. 023801-1–023801-4, 2005. L. Zhang, T. Luo, W. Zhang, C. Yu, Y. Wang, and A. E. Willner, “Optimizing operating conditions to reduce data pattern dependence induced by slow light elements,” presented at the Optical Fiber Communications Conf., Anaheim, CA, 2006, Paper OFP7. [Zhaoming Zhu]{} received a Bachelor degree in Electronic Engineering and an M.S. degree in Applied Physics from Tsinghua University, Beijing, China, in 1995 and 1998, respectively, and a Ph.D. degree in Optics from the University of Rochester in 2004. His Ph.D. research on “Photonic crystal fibers: characterization and supercontinuum generation" was supervised by Prof. T. G. Brown. Currently, he is a postdoctoral research associate under the mentorship of Prof. D. J. Gauthier at Duke University studying optical-fiber-based slow light effects and applications. His research interests include nonlinear optics, guided-wave and fiber optics, and photonic crystals. Dr. Zhu is a member of the Optical Society of America and the American Physical Society. [Andrew M. C. Dawes]{} received the B.A. degree with honors in physics from Whitman College, Walla Walla, WA, and the M.A. degree in physics from Duke University, Durham, NC in 2002 and 2005 respectively. He is currently pursuing the Ph.D. degree in the Duke University Department of Physics. 
His research interests include slow-light in optical fiber, pattern formation in nonlinear optics, and all-optical switching and processing systems. Mr. Dawes is a student member of the Optical Society of America (OSA) and the American Physical Society (APS) and currently a Walter Gordy Graduate Fellow of the Duke University Department of Physics and a John T. Chambers Fellow of the Fitzpatrick Center for Photonics and Communications Systems. [Daniel J. Gauthier]{} received the B.S., M.S., and Ph.D. degrees from the University of Rochester, Rochester, NY, in 1982, 1983, and 1989, respectively. His Ph.D. research on “Instabilities and chaos of laser beams propagating through nonlinear optical media" was supervised by Prof. R. W. Boyd and supported in part through a University Research Initiative Fellowship. From 1989 to 1991, he developed the first CW two-photon optical laser as a Post-Doctoral Research Associate under the mentorship of Prof. T. W. Mossberg at the University of Oregon. In 1991, he joined the faculty of Duke University, Durham, NC, as an Assistant Professor of Physics and was named a Young Investigator of the U.S. Army Research Office in 1992 and the National Science Foundation in 1993. He is currently the Anne T. and Robert M. Bass Professor of Physics and Biomedical Engineering at Duke. His research interests include: applications of slow light in classical and quantum information processing and controlling and synchronizing the dynamics of complex electronic, optical, and biological systems. Prof. Gauthier is a Fellow of the Optical Society of America and the American Physical Society. [Lin Zhang]{} was born in Anshan, Liaoning, China, in 1978. He received the B.S. and M.S. degree from Tsinghua University, Beijing, China, in 2001 and 2004, respectively. His thesis was on birefringence and polarization dependent coupling in photonic crystal fibers. Now he is pursuing the Ph.D. degree in the Department of Electrical Engineering, the University of Southern California, Los Angeles. His current research interests include fiber-based slow light, photonic crystal fibers, nonlinear optics, and fiber optical communication systems. Lin Zhang is a student member of the Optical Society America (OSA) and IEEE Lasers and Electro-Optics Society (LEOS). He was awarded as one of top-ten outstanding graduate students of 2003 year at Tsinghua University. [Alan E. Willner]{} (S’87-M’88-SM’93-F’04) received the Ph.D. degree from Columbia University, New York. He has worked at AT&T Bell Laboratories and Bellcore. He is currently Professor of Electrical Engineering at the University of Southern California (USC), Los Angeles. He has 525 publications, including one book. Prof. Willner is a Fellow of the Optical Society of America (OSA) and was a Fellow of the Semiconductor Research Corporation. He has received the NSF Presidential Faculty Fellows Award from the White House, the Packard Foundation Fellowship, the NSF National Young Investigator Award, the Fulbright Foundation Senior Scholars Award, the IEEE Lasers & Electro-Optics Society (LEOS) Distinguished Traveling Lecturer Award, the USC University-Wide Award for Excellence in Teaching, the Eddy Award from Pennwell for the Best Contributed Technical Article, and the Armstrong Foundation Memorial Prize. 
His professional activities have included: President of IEEE LEOS, Editor-in-Chief of the IEEE/OSA JOURNAL OF LIGHTWAVE TECHNOLOGY, Editor-in-Chief of the IEEE JOURNAL OF SELECTED TOPICS IN QUANTUM ELECTRONICS, Co-Chair of the OSA Science and Engineering Council, General Co-Chair of the Conference on Lasers and Electro-Optics (CLEO), General Chair of the LEOS Annual Meeting Program, Program Co-Chair of the OSA Annual Meeting, and Steering and Program Committee Member of the Conference on Optical Fiber Communications (OFC). [^1]: This work was supported by DARPA DSO Slow-Light program. [^2]: Z. Zhu, A.M.C. Dawes and D.J. Gauthier are with the Department of Physics and the Fitzpatrick Center for Photonics and Communications Systems, Duke University, Durham, NC 27708, USA. [^3]: L. Zhang and A.E. Willner are with the Department of Electrical and Computer Engineering, University of Southern California, Los Angeles, CA 90089, USA.
--- abstract: 'We improve the results in a previous article of Dieulefait and Manoharmayum and we deduce some stronger modularity results.' author: - | Luis Dieulefait\ Dept. d’Álgebra i Geometria, Universitat de Barcelona;\ Gran Via de les Corts Catalanes 585; 08007 - Barcelona; Spain.\ e-mail: [email protected]\ title: '[**Improvements on Dieulefait-Manoharmayum and applications** ]{}' --- The results =========== First let us prove the following lemma for $\rho$ an odd, irreducible, $2$-dimensional Galois representation of the absolute Galois group of ${\mathbb{Q}}$, with values in a finite field of odd characteristic $p$:\ **[Lemma:]{}** Let $M$ be the quadratic number field ramifying at $p$ only. If $\rho$ is dihedral induced from $M$, the Serre’s weight of $\rho$ is $k < p$ and its Serre’s level is arbitrary, then $\rho$ restricted to $I_p$ (the inertia group at $p$) is reducible and the action of $I_p$ (after passing to tame inertia) is given by level $1$ fundamental characters.\ Proof: If $p$ is different from $2k-3$ this is proved in \[DM\] (Dieulefait, Manoharmayum: “Modularity of rigid Calabi-Yau threefolds over Q", in “Calabi-Yau Varieties and Mirror Symmetry", Fields Institute Communications Series, American Mathematical Society 38 (2003) 159-166) for $k=4$ but the proof works in general (see the “last comment" below). Also, this is why R. Taylor takes $p$ different from $2k-3$ in his modularity lifting result in the crystalline non-ordinary case (in his article “On the meromorphic continuation of degree two L-functions", preprint). So, for $p= 2k-3$ let us see that a similar proof applies. Since $k$ is even, this $p$ is congruent to $1$ mod $4$, thus $M$ is a real quadratic field. Since $\rho$ is an odd representation and $M$ is real, when we restrict to $M$ the image must be in a SPLIT Cartan subgroup of $GL_2(\mathbb{F})$, where $\mathbb{F}$ is the field of coefficients of $\rho$. If the inertia group $I_p$ acts through level $2$ fundamental characters, the cyclic group generated by the values of the corresponding character $\psi_2^{k-1} = \psi_2^{(p+1)/2}$, which, precisely because $p=2k-3$, has order equal to $2$ in the projective representation, is not contained in this Cartan (because in the opposite case $M/{\mathbb{Q}}$ would not ramify at $p$), and thus corresponds to the quadratic extension $M/{\mathbb{Q}}$. But level $2$ fundamental characters have nothing to do with $M/{\mathbb{Q}}$, because it is $\chi^{(p-1)/2}$, where $\chi$ is the cyclotomic character, the character that corresponds to $M/{\mathbb{Q}}$. Moreover, since in this case we have that after restricting $\rho$ to the absolute Galois group of $M$ the action of $I_p$ is given just by (the group of scalar matrices generated by) $\chi$, which is a character that extends to the absolute Galois group of ${\mathbb{Q}}$, it is clear that this contradicts the fact that the action of $\rho$ restricted to $I_p$ is given by the character $\psi_2^{(p+1)/2}$.\ We get a contradiction, so $\rho$ restricted to $I_p$ must be reducible and given by level $1$ fundamental characters.\ Of course, this Lemma has consequences: first those already noted in \[DM\], that when you exclude this degenerate case then a combination of modularity lifting results à la Diamond-Taylor-Wiles (more precisely, a result of Diamond-Flach-Guo or a similar result of R.
Taylor) with those of Skinner-Wiles are enough in the crystalline case (as proved in \[DM\]: recall that this also uses a result of Breuil to guarantee ordinarity in the case of level $1$ fundamental characters under the hypothesis of crystalline lifts and $k<p$), thus we have:\ **[Corollary 1]{}** (combining modularity lifting theorems (cf. \[DM\])): If $\rho$ is reducible or modular, of Serre’s weight $k<p$ and any level, any crystalline irreducible lift of Hodge-Tate weights $(0,k-1)$ is modular.\ See \[DM\] for a proof. The idea is just the following: in the case of level $1$ fundamental characters by a result of Breuil the deformation we are considering is ordinary, and then the results of Skinner-Wiles apply. In the case of level $2$ fundamental characters, the technical condition needed to apply results à la Diamond-Taylor-Wiles is satisfied, as proved in the lemma above.\ Another consequence is that the principle of “switching the residual characteristic" (used in the articles that prove some cases of Serre’s conjecture by Dieulefait and Khare-Wintenberger) holds for any level and weight:\ **[Corollary 2]{}** (combine corollary 1 with “existence of families" and “lowering the conductor" or “existence of minimal lifts"): If, for $k, N, p$ fixed with $k<p$ we know that: Serre’s conjecture is true in characteristic $p$, weight $k$ and ANY level $N'$ dividing $N$; then for any prime $q >k$ Serre’s conjecture in characteristic $q$, weight $k$ and level $N$ is true. (of course, we are taking $p$ and $q$ not dividing $N$).\ (For example, this principle is used in my preprint “The level 1 weight 2 case of Serre’s conjecture", to reduce the proof to the case of characteristic $p=3$, where the conjecture was proved by Serre for this weight and level in 1973. See also the preprint of Khare-Wintenberger “On Serre’s reciprocity conjecture for $2$-dimensional mod $p$ representations of ${{\rm Gal}}(\bar{{\mathbb{Q}}}/{\mathbb{Q}})$" for a similar strategy). The proof is the same as in the mentioned papers on Serre’s conjecture (“existence of minimal lifts" is proved in these preprints and “existence of compatible families" in my paper “Existence of compatible families and new cases of the Fontaine-Mazur conjecture", J. Reine Angew. Math. 577 (2004) 147-151): if you start in characteristic $q$, after taking a minimal lift and building a compatible family containing it, you look at a $p$-adic member of this family. It only remains to prove modularity of this $p$-adic member, but if you assume Serre’s conjecture in characteristic $p$ (the weight is fixed at $k$ and the level bounded by $N$ throughout the process; just observe that the level may descend after switching, because sometimes conductors descend when reducing mod $p$) the modularity of this $p$-adic representation follows from corollary 1.\ Finally, just observe that in the article \[DM\] to prove modularity of rigid Calabi-Yau threefolds we do not have a very good result at $p=5$ precisely because for $k=4$, $5= 2k-3$ (this is the reason why in \[DM\] we obtained a better result at $p=7$). Now, with the above lemma and corollary 1, we also get as in \[DM\] from the truth of Serre’s conjecture on the field of $5$ elements:\ **[Corollary 3:]{}** Any rigid Calabi-Yau threefold defined over ${\mathbb{Q}}$ with good reduction at $5$ is modular.\ Last comment: in \[DM\] the full proofs of the above lemma, for $p$ different from $2k-3$, and of the above corollary 1, are given.
They are given for $k=4$ and it is even explicitly said that the proofs work for general $p$ and $k$ under the condition $(p+1)/\gcd(k-1,p+1) > 2$ and $p > k$, which is the same as saying $p >k$ and different from $2k-3$.\ **[Epilogue]{}**: the criterion “Any rigid Calabi-Yau threefold over ${\mathbb{Q}}$ with good reduction at $3$ is modular" also follows, under the assumption that the modularity lifting result of Diamond-Flach-Guo (in their paper “Adjoint motives of modular forms and the Tamagawa number conjecture", Ann Sci ENS. 37 (2004), no. 5, 663-727) can be extended, with the rest of conditions unchanged, to the case of crystalline representations of weights $(0, p)$ where $p$ (an odd prime) is the residual characteristic (currently the proof given by Diamond-Flach-Guo assumes that the weights $(0,w)$ satisfy $w < p$).\ Remark: Recall that the $p$-adic Galois representation attached to a rigid Calabi-Yau threefold is crystalline of weights $(0,3)$ for any prime $p$ where the variety has good reduction.\ The proof is similar to the proof of corollary 3 above: as in Wiles' original paper, we start by observing that the mod $3$ representation is either modular or reducible. In the ordinary case the results of Skinner and Wiles suffice for a proof. In the non-ordinary case, results of Berger-Li-Zhu (in “Construction of some families of 2-dimensional crystalline representations", Math. Annalen 329 (2004), no. 2, 365-377) give a precise description of the action of $I_p$ on the residual mod $3$ representation $\rho$: it acts through level $2$ fundamental characters and with Serre’s weight equal to $2$. So again we can apply the lemma above, since $\rho$ is irreducible (fundamental characters of level $2$) and $k=2 < 3$, and conclude that $\rho$ restricted to the quadratic field ramifying only at $3$ is still irreducible. This is the technical condition required to apply the “stronger version" of the result of Diamond-Flach-Guo in the case of a crystalline deformation of weights $(0,p)$.\ (Hope: given the precise information about the action of $I_p$ on the residual representation in the case of weights $(0,p)$, non-ordinary, obtained by Berger-Li-Zhu, perhaps the extension of the result of Diamond-Flach-Guo to cover also this case is something that may be obtained in the near future).\ **[Corollary 4:]{}** Assume that the result of Diamond-Flach-Guo can be extended to the case of crystalline deformations of weights $(0,p)$. Then any rigid Calabi-Yau threefold defined over ${\mathbb{Q}}$ with good reduction at $3$ is modular.
--- abstract: 'A powerful approach for understanding neural population dynamics is to extract low-dimensional trajectories from population recordings using dimensionality reduction methods. Current approaches for dimensionality reduction on neural data are limited to single population recordings, and can not identify dynamics embedded across multiple measurements. We propose an approach for extracting low-dimensional dynamics from multiple, sequential recordings. Our algorithm scales to data comprising millions of observed dimensions, making it possible to access dynamics distributed across large populations or multiple brain areas. Building on subspace-identification approaches for dynamical systems, we perform parameter estimation by minimizing a moment-matching objective using a scalable stochastic gradient descent algorithm: The model is optimized to predict temporal covariations across neurons and across time. We show how this approach naturally handles missing data and multiple partial recordings, and can identify dynamics and predict correlations even in the presence of severe subsampling and small overlap between recordings. We demonstrate the effectiveness of the approach both on simulated data and a whole-brain larval zebrafish imaging dataset.' author: - | Marcel Nonnenmacher[^1^]{}, **Srinivas C. Turaga[^2^]{} and Jakob H. Macke[^1^]{}[^1]**\ [[^1^]{}research center caesar, an associate of the Max Planck Society, Bonn, Germany]{}\ [[^2^]{}HHMI Janelia Research Campus, Ashburn, VA]{}\ [`[email protected], [email protected]`]{}\ [`[email protected]`]{} title: | Extracting low-dimensional dynamics from\ multiple large-scale neural population recordings\ by learning to predict correlations --- Introduction ============ Dimensionality reduction methods based on state-space models [@cunningham_yu_14; @yu_sahani_09; @macke_buesing_12; @pfau_paninski_13; @gao_cunningham_15] are useful for uncovering low-dimensional dynamics hidden in high-dimensional data. These models exploit structured correlations in neural activity, both across neurons and over time [@ChurchlandCunningham_12]. This approach has been used to identify neural activity trajectories that are informative about stimuli and behaviour and yield insights into neural computations [@MazorLaurent_05; @BriggmanAbarbanel_05; @BuonomanoMaass_09; @ShenoySahani_13; @ManteSussillo_13; @GaoGanguli_15; @LiDaie_16]. However, these methods are designed for analyzing one population measurement at a time and are typically applied to population recordings of a few dozens of neurons, yielding a statistical description of the dynamics of a small sample of neurons within a brain area. How can we, from sparse recordings, gain insights into dynamics distributed across entire circuits or multiple brain areas? One promising approach to scaling up the empirical study of neural dynamics is to *sequentially* record from multiple neural populations, for instance by moving the field-of-view of a microscope [@sofroniew_svoboda_16]. Similarly, chronic multi-electrode recordings make it possible to record neural activity within a brain area over multiple days, but with neurons dropping in and out of the measurement over time [@dhawale_olveczky_15]. While different neurons will be recorded in different sessions, we expect the underlying dynamics to be preserved across measurements. The goal of this paper is to provide methods for extracting low-dimensional dynamics shared across multiple, potentially overlapping recordings of neural population activity. 
Inferring dynamics from such data can be interpreted as a missing-data problem in which data is missing in a structured manner (referred to as ‘serial subset observations’ [@huys_paninski_09], SSOs). Our methods allow us to capture the relevant subspace and predict instantaneous and time-lagged correlations between all neurons, even when substantial blocks of data are missing. Our methods are highly scalable, and applicable to data sets with millions of observed units. On both simulated and empirical data, we show that our methods extract low-dimensional dynamics and accurately predict temporal and cross-neuronal correlations. #### Statistical approach: The standard approach for dimensionality reduction of neural dynamics is based on searching for a maximum of the log-likelihood via expectation-maximization (EM) [@dempster_donald_77; @ghahramani_hinton_96]. EM can be extended to missing data in a straightforward fashion, and SSOs allow for efficient implementations, as we will show below. However, we will also show that subsampled data can lead to slow convergence and high sensitivity to initial conditions. An alternative approach is given by subspace identification (SSID) [@overschee_12; @katayama_06]. SSID algorithms are based on matching the moments of the model with those of the empirical data: The idea is to calculate the time-lagged covariances of the model as a function of the parameters. Then, spectral methods (e.g. singular value decompositions) are used to reconstruct parameters from empirically measured covariances. However, these methods scale poorly to high-dimensional datasets where it is impossible to even construct the time-lagged covariance matrix. Our approach is also based on moment-matching – rather than using spectral approaches, however, we use numerical optimization to directly minimize the squared error between empirical and reconstructed time-lagged covariances without ever explicitly constructing the full covariance matrix, yielding a subspace that captures both spatial and temporal correlations in activity. This approach readily generalizes to settings in which many data points are missing, as the corresponding entries of the covariance can simply be dropped from the cost function. In addition, it can also generalize to models in which the latent dynamics are nonlinear. Stochastic gradient methods make it possible to scale our approach to high-dimensional ($p=10^7$) and long ($T=10^5$) recordings. We will show that use of temporal information (through time-lagged covariances) allows this approach to work in scenarios (low overlap between recordings) in which alternative approaches based on instantaneous correlations are not applicable [@yu_sahani_09; @balzano_recht_10]. #### Related work: Several studies have addressed estimation of linear dynamical systems from subsampled data: Turaga et al. [@turaga_macke_14] used EM to learn high-dimensional linear dynamical models from multiple observations, an approach which they called ‘stitching’. However, their model assumed high-dimensional dynamics, and is therefore limited to small population sizes ($N\approx 100$). Bishop & Yu [@bishop_yu_14] studied the conditions under which a covariance matrix can be reconstructed from multiple partial measurements. However, their method and analysis were restricted to modelling time-instantaneous covariances, and did not include *temporal* activity correlations.
In addition, their approach is not based on learning parameters jointly, but estimates the covariance in each observation-subset separately, and then aligns these estimates *post-hoc*. Thus, while this approach can be very effective and is important for theoretical analysis, it can perform sub-optimally when data is noisy. In the context of SSID methods, Markovsky [@markovsky_16a; @markovsky_16b] derived conditions for the reconstruction of missing data from deterministic univariate linear time-invariant signals, and Liu et al. [@liu_vandenberghe_13] use a nuclear norm-regularized SSID to reconstruct partially missing data vectors. Balzano et al. [@balzano_recht_10; @he_lui_11] presented a scalable dimensionality reduction approach (GROUSE) for data with missing entries. This approach does not aim to capture temporal correlations, and is designed for data which is missing at random. Soudry et al. [@soudry_paninski_15] considered population subsampling from the perspective of inferring functional connectivity, but focused on observation schemes in which there are at least some simultaneous observations for each pair of variables. Methods ======= Low-dimensional state-space models with linear observations ----------------------------------------------------------- #### Model class: Our goal is to identify low-dimensional dynamics from multiple, partially overlapping recordings of a high-dimensional neural population, and to use them to predict neural correlations. We denote neural activity by $\mathcal{Y} = \{{\mathbf{y}}_t \}_{t=1}^T$, a length-$T$ discrete-time sequence of ${p}$-dimensional vectors. We assume that the underlying ${n}$-dimensional dynamics ${\mathbf{x}}$ linearly modulate ${\mathbf{y}}$, $$\begin{aligned} {\mathbf{y}}_t &= C {\mathbf{x}}_t + \varepsilon_t, &\varepsilon_t \sim \mathcal{N}(0,R) \label{eq:sLTI_y} \\ {\mathbf{x}}_{t+1} &= f({\mathbf{x}}_{t},{\mathbf{\eta}}_t) , &{\mathbf{\eta}}_t \sim p({\mathbf{\eta}}) \label{eq:sLTI_x}, \end{aligned}$$ with diagonal observation noise covariance matrix $R \in \mathbb{R}^{{p}\times {p}}$. Thus, each observed variable ${\mathbf{y}}^{(i)}_t$, $i = 1, \ldots, {p}$ is a noisy linear combination of the shared time-evolving latent modes ${\mathbf{x}}_t$. We consider stable latent zero-mean dynamics on ${\mathbf{x}}$ with time-lagged covariances $\Pi_s := \mbox{Cov}[{\mathbf{x}}_{t+s}, {\mathbf{x}}_{t}] \in \mathbb{R}^{{n}\times {n}}$ for time-lag $s \in \{0, \ldots, S\}$. Time-lagged observed covariances $\Lambda(s) \in \mathbb{R}^{{p}\times {p}}$ can be computed from $\Pi_s$ as $$\begin{aligned} \Lambda(s) := C \Pi_s C^\top + \delta_{s=0} R. \label{eq:predicted_covs}\end{aligned}$$ An important special case is the classical linear dynamical system (LDS) with $f({\mathbf{x}}_{t},{\mathbf{\eta}}_{t}) = A {\mathbf{x}}_t + {\mathbf{\eta}}_t$, with ${\mathbf{\eta}}_t \sim \mathcal{N}(0,Q)$ and $\Pi_s = A^s \Pi_0$. As we will see below, our SSID algorithm works directly on these time-lagged covariances, so it is also applicable to generative models with non-Markovian Gaussian latent dynamics, e.g. Gaussian Process Factor Analysis [@yu_sahani_09]. #### Partial observations and missing data: {#missing_data} We treat multiple partial recordings as a missing-data problem – we use ${\mathbf{y}}_t$ to model all activity measurements across multiple experiments, and assume that at any time $t$, only some of them will be observed.
As a consequence, the data-dimensionality ${p}$ could now easily comprise thousands of neurons, even if only small subsets are observed at any given time. We use index sets $\Omega_t \subseteq \{1,\ldots,{p}\}$, where $i \in \Omega_t$ indicates that variable $i$ is observed at time point $t$. We obtain empirical estimates of time-lagged pairwise covariances for each variable pair $(i,j)$ over all of those time points where the pair of variables is jointly observed with time-lag $s$. We define co-occurrence counts $T^{s}_{ij} = |\{t | i \in \Omega_{t+s} \wedge j \in \Omega_{t}\}|$. In total there could be up to $S{p}^2$ co-occurrence counts – however, for SSOs the number of unique counts is dramatically lower. To capitalize on this, we define co-occurrence groups $F \subseteq \{1,\ldots,{p}\}$, subsets of variables with identical observation patterns: $\forall i,j \in F \ \forall t\leq T: i \in \Omega_t \mbox{ iff } j \in \Omega_t$. All element pairs $(i,j) \in F^2$ share the same co-occurrence count $T^{s}_{ij}$ per time-lag $s$. Co-occurrence groups are non-overlapping and together cover the whole range $\{1,\ldots,{p}\}$. There might be pairs $(i,j)$ which are never observed, i.e. for which $T^{s}_{ij}=0$ for each $s$. We collect variable pairs co-observed at least twice at time-lag $s$, $\Omega^s = \{(i,j) | T^{s}_{ij} > 1\}$. For these pairs we can calculate an unbiased estimate of the $s$-lagged covariance, $$\begin{aligned} \mbox{Cov}[{\mathbf{y}}^{(i)}_{t+s}, {\mathbf{y}}^{(j)}_{t}] \approx \frac{1}{T^{s}_{ij} - 1} \sum_t {\mathbf{y}}^{(i)}_{t+s} {\mathbf{y}}^{(j)}_{t} := \tilde{\Lambda}(s)_{(ij)}. \label{eq:cov_ij}\end{aligned}$$ Expectation maximization for stitching linear dynamical systems --------------------------------------------------------------- EM can readily be extended to missing data by removing likelihood terms corresponding to missing data [@TuragaBuesing_13]. In the E-step of our stitching-version of EM ([sEM]{}), we use the default Kalman filter and smoother equations with subindexed $C_t = C_{(\Omega_t,:)}$ and $R_t = R_{(\Omega_t,\Omega_t)}$ parameters for each time point $t$. We speed up the E-step by tracking convergence of latent posterior covariances, and stop updating these when they have converged [@pnevmatikakis_paninski_14] – for long $T$, this can result in considerably faster smoothing. For the M-step, we adapt the maximum likelihood estimates of the parameters $\theta = \{A, Q, C, R\}$. Dynamics parameters ($A$, $Q$) are unaffected by SSOs. The update for $C$ is given by $$\begin{aligned} C_{(i,:)} &= \left( \sum {\mathbf{y}}_{t}^{(i)} \mbox{E}[{\mathbf{x}}_t]^T - \frac{1}{|O_i|} \left(\sum {\mathbf{y}}_{t}^{(i)} \right) \left(\sum \mbox{E}[{\mathbf{x}}_t]^T\right) \right) \label{eq:ML_stitching_C_inverse} \\ &\quad \times \left( \sum \mbox{E}[{\mathbf{x}}_t {\mathbf{x}}_t^T] - \frac{1}{|O_i|} \left( \sum \mbox{E}[{\mathbf{x}}_t] \right) \left( \sum\mbox{E}[{\mathbf{x}}_t]^T \right) \right)^{-1}, \nonumber \end{aligned}$$ where $O_i = \{t| i \in \Omega_t \}$ is the set of time points for which $y_i$ is observed, and all sums are over $t \in O_i$. For SSOs, we use temporal structure in the observation patterns $\Omega_t$ to avoid unnecessary calculations of the inverse in : all elements $i$ of a co-occurrence group share the same $O_i$.
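As an illustration of the quantities defined above, the following sketch computes the co-occurrence counts $T^s_{ij}$ and the unbiased estimates $\tilde{\Lambda}(s)$ of Eq. (\[eq:cov_ij\]) from a data matrix in which missing observations are coded as NaN. It is a toy implementation for small $p$ only; the algorithm described below deliberately avoids forming these $p \times p$ matrices.

```python
# Sketch of the masked covariance estimates: given data Y (T x p) with np.nan
# marking unobserved entries, compute co-occurrence counts T^s_ij and the
# unbiased time-lagged covariance estimates.  Illustration only, not the
# authors' implementation.
import numpy as np

def masked_lagged_covs(Y, lags):
    T, p = Y.shape
    O = ~np.isnan(Y)                       # observation mask, O[t, i] <=> i in Omega_t
    Y0 = np.nan_to_num(Y)                  # zeros where unobserved
    covs, counts = {}, {}
    for s in lags:
        C_s = Y0[s:].T @ Y0[:T - s]        # sum_t y_{t+s} y_t^T over observed pairs
        N_s = O[s:].astype(float).T @ O[:T - s].astype(float)   # T^s_ij
        with np.errstate(invalid="ignore", divide="ignore"):
            cov = C_s / (N_s - 1.0)
        cov[N_s <= 1] = np.nan             # keep only pairs co-observed at least twice
        covs[s], counts[s] = cov, N_s
    return covs, counts
```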
Scalable subspace-identification with missing data via moment-matching {#loss_function} ---------------------------------------------------------------------- #### Subspace identification: Our algorithm (Stitching-SSID, [S3ID]{}) is based on moment-matching approaches for linear systems [@aoki_90]. We will show that it provides robust initialisation for EM, and that it performs more robustly (in the sense of yielding samples which more closely capture empirically measured correlations, and predict missing ones) on non-Gaussian and nonlinear data. For fully observed linear dynamics, statistically consistent estimators for $\theta = \{C, A, \Pi_0, R\}$ can be obtained from $\{\tilde{\Lambda}(s)\}_s$ [@katayama_06] by applying an SVD to the ${p}K \times {p}L$ block Hankel matrix $H$ with blocks $H_{k,l} = \tilde{\Lambda}(k+l-1)$. For our situation with large ${p}$ and massively missing entries in $\tilde{\Lambda}(s)$, we define an explicit loss function which penalizes the squared difference between empirically observed covariances and those predicted by the parametrised model , $$\begin{aligned} \mathcal{L}(C, \{\Pi_s\}, R) = \frac{1}{2} \sum_{s} r_s || \Lambda(s) - \tilde{\Lambda}(s) ||^2_{\Omega^s} \label{eq:stitching_SSID_Hankel_L2_target_naive},\end{aligned}$$ where $||\cdot||_{\Omega}$ denotes the Frobenius norm applied to all elements in index set $\Omega$. For linear dynamics, we constrain $\Pi_s$ by setting $\Pi_s = A^s \Pi_0$ and optimize over $A$ instead of over $\Pi_s$. We refer to this algorithm as ‘linear [S3ID]{}’, and to the general one as ‘nonlinear [S3ID]{}’. However, we emphasize that only the latent dynamics are (potentially) nonlinear; dimensionality reduction is linear in both cases. #### Optimization via stochastic gradients: {#stochastic gradients} For large-scale applications, explicit computation and storage of the observed $\tilde{\Lambda}(s)$ is prohibitive since they can scale as $|\Omega^s| \sim {p}^2$, which renders computation of the full loss $\mathcal{L}$ impractical. We note, however, that the gradients of $\mathcal{L}$ are linear in $\tilde{\Lambda}(s)^{(i,j)} \propto \sum_t {\mathbf{y}}^{(i)}_{t+s} {\mathbf{y}}^{(j)}_{t}$. This allows us to obtain unbiased stochastic estimates of the gradients by uniformly subsampling time points $t$ and corresponding pairs of data vectors ${\mathbf{y}}_{t+s},{\mathbf{y}}_t$ with time-lag $s$, without explicit calculation of the loss $\mathcal{L}$. The batch-wise gradients are given by $$\begin{aligned} \frac{\partial{}\mathcal{L}_{t,s}}{\partial{}C_{(i,:)}} &= \left( \Lambda(s)_{(i,:)} - {\mathbf{y}}^{(i)}_{t+s} {\mathbf{y}}_{t}^\top \right) N_s^{i,t} C \Pi_s^\top + \left( [\Lambda(s)^\top]_{(i,:)} - {\mathbf{y}}^{(i)}_{t} {\mathbf{y}}_{t+s}^\top \right) N_s^{i,t+s} C \Pi_s \\ \frac{\partial{}\mathcal{L}_{t,s}}{\partial{}\Pi_s} &= \sum_{i \in \Omega_{t+s}} C_{(i,:)}^\top \left( \Lambda(s)_{(i,:)} - {\mathbf{y}}_{t+s}^{(i)} {\mathbf{y}}_t^\top \right) N_s^{i,t} C \\ \frac{\partial{}\mathcal{L}_{t,s}}{\partial{}R_{ii}} &= \frac{\delta_{s0}}{T^0_{ii}} \left( \Lambda(0)_{(i,i)} - \left( {\mathbf{y}}_t^{(i)} \right)^2 \right), \end{aligned}$$ where $N_s^{i,t}\in \mathbb{N}^{{p}\times {p}}$ is a diagonal matrix with $[N_s^{i,t}]_{jj} = \frac{1}{T^{s}_{ij}}$ if $ j\in \Omega_t$, and $0$ otherwise. Gradients scale linearly in $p$ both in memory and computation and allow us to minimize $\mathcal{L}$ without explicit computation of the empirical time-lagged covariances, or $\mathcal{L}$ itself.
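For small systems, the loss can also be evaluated directly. The sketch below is illustrative only: it assumes the linear parametrization $\Pi_s = A^s \Pi_0$, a diagonal $R$ stored as a vector of variances, and NaN-masked empirical covariances such as those produced by the previous sketch. It forms the model-predicted $\Lambda(s)$ of Eq. (\[eq:predicted_covs\]) and sums squared differences over the co-observed pairs $\Omega^s$ – in contrast to the stochastic-gradient scheme above, which never builds these matrices.

```python
# Dense evaluation of the moment-matching loss for the linear model,
# Lambda(s) = C A^s Pi0 C^T + delta_{s0} diag(R).  For checking on small p
# only; the scalable algorithm uses stochastic gradients instead.
import numpy as np

def s3id_loss(C, A, Pi0, R, emp_covs, weights=None):
    loss = 0.0
    A_s = np.eye(A.shape[0])                 # A^0
    for s in range(max(emp_covs.keys()) + 1):
        if s in emp_covs:
            Lam = C @ A_s @ Pi0 @ C.T
            if s == 0:
                Lam = Lam + np.diag(R)       # R as a vector of noise variances
            diff = Lam - emp_covs[s]
            mask = ~np.isnan(emp_covs[s])    # Omega^s: co-observed pairs only
            r_s = 1.0 if weights is None else weights[s]
            loss += 0.5 * r_s * np.sum(diff[mask] ** 2)
        A_s = A_s @ A                        # A^{s+1} for the next lag
    return loss
```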
To monitor performance and convergence for large systems, we compute the loss over a random subset of covariances. The computation of gradients for $C$ and $R$ can be fully vectorized over all elements $i$ of a co-occurrence group, as these share the same matrices $N^{i,t}_s$. We use ADAM [@kingma_ba_14] for stochastic gradient descent, which combines momentum over subsequent gradients with individual self-adjusting step sizes for each parameter. By using momentum on the stochastic gradients, we effectively obtain a gradient that aggregates information from empirical time-lagged covariances across multiple gradient steps. How temporal information helps for stitching {#conditions} -------------------------------------------- The key challenge in stitching is that the latent space inferred by an LDS is defined only up to a choice of coordinate system (i.e. a linear transformation of $C$). Thus, stitching is successful if one can align the $C$s corresponding to different subpopulations into a shared coordinate system for the latent space of all $p$ neurons [@bishop_yu_14] (Fig. \[fig1\]). In the noise-free regime and if one ignores temporal information, this can work only if the overlap between two sub-populations is at least as large as the latent dimensionality, as shown by [@bishop_yu_14]. However, dynamics (i.e. temporal correlations) provide additional constraints for the alignment which can allow stitching even without overlap: Assume two subpopulations $I_1, I_2$ with parameters $\theta^1, \theta^2$, latent spaces ${\mathbf{x}}^1,{\mathbf{x}}^2$ and with overlap set $J = I_1 \cap I_2$ and overlap $o=|J|$. The overlapping neurons ${\mathbf{y}}_t^{(J)}$ are represented by both the matrix rows $C^1_{J,:}$ and $C^2_{J,:}$, each in their respective latent coordinate systems. To stitch, one needs to identify the base change matrix $M$ aligning latent coordinate systems consistently across the two populations, i.e. such that $M {\mathbf{x}}^1 = {\mathbf{x}}^2$ satisfies the constraints $C^1_{(J,:)}=C^2_{(J,:)}M^{-1}$. When only considering time-instantaneous covariances, this yields $o$ linear constraints, and thus the necessary condition that $o\geq {n}$, i.e. the overlap has to be at least as large as the latent dimensionality [@bishop_yu_14]. Including temporal correlations yields additional constraints, as the time-lagged activities also have to be aligned, and these constraints can be combined in the *observability* matrix $J$: $$\begin{aligned} \mathcal{O}^1_{J} = \left(\begin{array}{l} C^1_{(J,:)}\\ C^1_{(J,:)} A^1 \\ \cdots \\ C^1_{(J,:)}{(A^1)}^{n-1} \end{array} \right) & = \left(\begin{array}{l} C^2_{(J,:)} \\ C^2_{(J,:)} A^2\\ \cdots \\ C^2_{(J,:)} {(A^2)}^{n-1} \end{array} \right) M^{-1} = \mathcal{O}^2_J M^{-1} \nonumber.\end{aligned}$$ If both observability matrices $\mathcal{O}^1_{J}$ and $\mathcal{O}^2_{J}$ have full rank (i.e. rank $n$), then $M$ is uniquely constrained, and this identifies the base change required to align the latent coordinate systems. To get consistent latent dynamics, the matrices $A^1$ and $A^2$ have to be similar, i.e. $M A^1 M^{-1} = A^2$, and correspondingly the time-lagged latent covariance matrices $\Pi^1_s$, $\Pi^2_s$ satisfy $\Pi^1_s = M \Pi^2_s M^\top$.
These dynamics might yield additional constraints: For example, if both $A^1$ and $A^2$ have unique (and the same) eigenvalues (and we know that we have identified all latent dimensions), then one could align the latent dimensions of ${\mathbf{x}}$ which share the same eigenvalues, even in the absence of overlap. Details of simulated and empirical data --------------------------------------- #### Linear dynamical system: We simulate LDSs to test algorithms [S3ID]{} and [sEM]{}. For dynamics matrices $A$, we generate eigenvalues with absolute values linearly spanning the interval $[0.9, 0.99]$ and complex angles independently von Mises-distributed with zero mean and concentration $\kappa = 1000$, resulting in smooth latent trajectories. To investigate stitching-performance on SSOs, we divided the entire population of size ${p}=1000$ into two subsets $I_1 = [1, \ldots {p}_1]$, $I_2 = [{p}_2 \ldots {p}]$, ${p}_2 \leq {p}_1$ with overlap $o = {p}_1 - {p}_2$. We simulate for $T_{m}= 50k$ time points, ${m}= 1,2$ for a total of $T=10^5$ time points. We set the $R_{ii}$ such that $50\%$ of the variance of each variable is private noise. Results are aggregated over $20$ data sets for each simulation. For the scaling analysis in section \[scaling\], we simulate population sizes $p = 10^3, 10^4, 10^5$, at overlap $o=10\%$, for $T_{m}= 15k$ and $10$ data sets (different random initialisation for LDS parameters and noise) for each population size. We compute subspace projection errors between $C$ and $\hat{C}$ as $e(C, \hat{C}) = || (I - \hat{C} \hat{C}^\top) C ||_F / || C ||_F$. #### Simulated neural networks: We simulate a recurrent network of $1250$ exponential integrate-and-fire neurons [@brette_gerstner_05] ($250$ inhibitory and $p=1000$ excitatory neurons) with clustered connectivity for $T=60k$ time points. The inhibitory neurons exhibit unspecific connectivity towards the excitatory units. Excitatory neurons are grouped into $10$ clusters with high connectivity ($30\%$) within cluster and low connectivity ($10\%$) between clusters, resulting in low-dimensional dynamics with smooth, oscillating modes corresponding to the $10$ clusters. #### Larval-zebrafish imaging: We applied [S3ID]{} to a dataset obtained by light-sheet fluorescence imaging of the whole brain of the larval zebrafish [@Ahrens:2013gga]. For this data, every data vector ${\mathbf{y}}_t$ represents a $2048\times{}1024\times{}41$ three-dimensional image stack of fluorescence activity recorded sequentially across $41$ z-planes, over a total of $T=1200$ time points of recording at $1.15$ Hz scanning speed across all z-planes. We separate foreground from background voxels by thresholding per-voxel fluorescence activity variance and select ${p}= 7,828,017$ voxels of interest ($\approx 9.55\%$ of total) across all z-planes, and z-scored variances. Results ======= Stitching on simulated data --------------------------- To test how well parameters of LDS models can be reconstructed from high-dimensional partial observations, we simulated an LDS and observed it through two overlapping subsets, parametrically varying the size of overlap between them from $o = 1\%$ to $o = 100\%$. As a simple baseline, we apply a ‘naive’ Factor Analysis, for which we impute missing data as $0$. GROUSE [@balzano_recht_10], an algorithm designed for randomly missing data, recovers a consistent subspace for overlap $o=30\%$ and greater, but fails for smaller overlaps.
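As a concrete illustration, the zero-imputation baseline and the subspace projection error defined above can be sketched as follows (a minimal sketch; scikit-learn's FactorAnalysis stands in for whichever FA implementation was actually used, and the orthonormalisation step is our own assumption):

```python
import numpy as np
from sklearn.decomposition import FactorAnalysis

def naive_fa(Y, observed, n_latents):
    """'Naive' FA baseline: impute unobserved entries with 0, then fit ordinary
    Factor Analysis.  Y: (T, p) data, observed: (T, p) boolean mask."""
    fa = FactorAnalysis(n_components=n_latents)
    fa.fit(np.where(observed, Y, 0.0))
    return fa.components_.T                      # (p, n) estimate of the loading matrix C

def subspace_error(C, C_hat):
    """e(C, C_hat) = ||(I - Q Q^T) C||_F / ||C||_F, with Q an orthonormal basis
    of the column space of C_hat."""
    Q, _ = np.linalg.qr(C_hat)
    return np.linalg.norm(C - Q @ (Q.T @ C)) / np.linalg.norm(C)
```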
As [sEM]{} (maximum number of $200$ iterations) is prone to get stuck in local optima, we randomly initialise it with 4 seeds per fit and report results with highest log-likelihood. [sEM]{} worked well even for small overlaps, but with increasingly variable results (see Fig. \[fig2\]c). Finally, we applied our SSID algorithm [S3ID]{} which exhibited good performance, even for small overlaps. ![ **Choice of latent dimensionality** Eigenvalue spectra of system matrices estimated from simulated LDS data with $o=5\%$ overlap and different latent dimensionalities $n$. [**a**]{}) Eigenvalues of instantaneous covariance matrix $\Pi_0$. [**b**]{}) Eigenvalues of linear dynamics matrix $A$. Both spectra indicate an elbow at real data dimensionality $n=10$ when [S3ID]{} is run with $n \geq 10$. \[figS1\] ](./figS1.pdf){width="99.00000%"} To quantify recovery of dynamics, we compare predictions for pairwise time-lagged covariances between variables not co-observed simultaneously (Fig. \[fig2\]b). Because GROUSE itself does not capture temporal correlations, we obtain estimated time-lagged correlations by projecting data ${\mathbf{y}}_t$ onto the obtained subspace and extract linear dynamics from estimated time-lagged latent covariances. [S3ID]{} is optimized to capture time-lagged covariances, and therefore outperforms alternative algorithms. ![ **Comparison with post-hoc alignment of subspaces** [**a**]{}) Multiple partial recordings with $20$ sequentially recorded subpopulations. [**b**]{}) We apply [S3ID]{} to the full population, as well as factor analysis to each of these subpopulations. The latter gives $20$ subspace estimates, which we sequentially align using subpopulation overlaps. \[fig6\] ](./fig6.pdf){width="90.00000%"} When we use a latent dimensionality ($n=20,50$) larger than the true one ($n=10$), we observe ‘elbows’ in the eigen-spectra of instantaneous covariance estimate $\Pi_0$ and dynamics matrix $A$ located at the true dimensionality (Fig. \[figS1\]). This observation suggests we can use standard techniques for choosing latent dimensionalities in applications where the real $n$ is unknown. Choosing $n$ too large or too small led to some decrease in prediction quality of unobserved (time-lagged) correlations. Importantly though, performance degraded gracefully when the dimensionality was chosen too big: For instance, at 5% overlap, correlation between predicted and ground-truth unobserved instantaneous covariances was 0.99 for true latent dimensionality $n=10$ (Fig. \[fig2\]b). At smaller $n=5$ and $n=8$, correlations were $0.69$ and $0.89$, respectively, and for larger $n=20$ and $n=50$, they were $0.97$ and $0.96$. In practice, we recommend using $n$ larger than the hypothesized latent dimensionality. [S3ID]{} and [sEM]{} jointly estimate the subspace $C$ across the entire population. An alternative approach would be to identify the subspaces for the different subpopulations via separate matrices $C_{(I,:)}$ and subsequently align these estimates via their pairwise overlap [@bishop_yu_14]. This works very well on this example (as for each subset there is sufficient data to estimate each $C_{I,:}$ individually). However, in Fig. \[fig6\] we show that this approach performs suboptimally in scenarios in which data is more noisy or comprised of many (here $20$) subpopulations. In summary, S3ID can reliably stitch simulated data across a range of overlaps, even for very small overlaps. 
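The evaluation used here — predicting covariances for pairs of variables that were never recorded together and correlating them against a reference — can be sketched as follows (function names are ours; in the simulations the reference covariances come from the ground-truth parameters):

```python
import numpy as np

def predicted_cov(C, Pi_s, R=None, s=0):
    """Model-predicted time-lagged covariance, Lambda(s) = C Pi_s C^T, with the
    private noise R added on the diagonal at lag 0."""
    Lam = C @ Pi_s @ C.T
    if s == 0 and R is not None:
        Lam = Lam + np.diag(R)
    return Lam

def heldout_cov_correlation(Lam_model, Lam_reference, never_coobserved):
    """Correlate model-predicted and reference covariances over the (i, j)
    pairs that were never co-observed under the partial observation scheme."""
    return np.corrcoef(Lam_model[never_coobserved],
                       Lam_reference[never_coobserved])[0, 1]
```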
Stitching for different population sizes: Combining S3ID with sEM works best {#scaling} ---------------------------------------------------------------------------- The above results were obtained for fixed population size ${p}=1000$. To investigate how performance and computation time scale with population size, we simulate data from an LDS with fixed overlap $o=10\%$ for different population sizes. We run [S3ID]{} with a single pass, and subsequently use its final parameter estimates to initialize [sEM]{}. We set the maximum number of iterations for [sEM]{} to $50$, corresponding to approximately $1.5$h of training time for $p=10^5$ observed variables. We quantify the subspace estimates by the largest principal angle between ground-truth and estimated subspaces. We find that the best performance is achieved by the combined algorithm ([S3ID]{} + [sEM]{}, Fig. \[fig3\]a,b). In particular, [S3ID]{} reliably and quickly leads to a reduction in error (Fig. \[fig3\]a), but (at least when capped at one pass over the data) further improvements can be achieved by letting [sEM]{} do further ‘fine-tuning’ of parameters from the initial estimate [@BuesingMacke_13]. When starting [sEM]{} from random initializations, we find that it often gets stuck in local minima (potentially, shallow regions of the log-likelihood). While convergence issues for EM have been reported before, we remark that these issues seem to be much more severe for stitching. We hypothesize that the presence of two potential solutions (one for each observation subset) makes parameter inference more difficult. Computation times for both stitching algorithms scale approximately linearly with observed population size $p$ (Fig. \[fig3\]c). When initializing [sEM]{} by [S3ID]{}, we found that the cost of [S3ID]{} is amortized by faster convergence of [sEM]{}. In summary, S3ID performs robustly across different population sizes, but can be further improved when used as an initializer for sEM. Spiking neural networks ----------------------- How well can our approach capture and predict correlations in spiking neural networks, from partial observations? To answer this question, we applied S3ID to a network simulation of inhibitory and excitatory neurons (Fig. \[fig5\]a), divided into $10$ clusters with strong intra-cluster connectivity. We apply [S3ID]{}-initialised [sEM]{} with $n=20$ latent dimensions to this data and find good recovery of time-instantaneous covariances (Fig. \[fig5\]b), but poor recovery of long-range temporal interactions. Since [sEM]{} assumes linear latent dynamics, we test whether this is due to a violation of the linearity assumption by applying [S3ID]{} with nonlinear latent dynamics, i.e. by learning the latent covariances $\Pi_s$, $s = 0, \ldots, 39$. This comes at the cost of learning $40$ rather than $2$ $n\times{}n$ matrices to characterise the latent space, but we note that this still amounts to only $76.2\%$ of the parameters learned for $C$ and $R$. We find that the nonlinear latent dynamics approach allows for markedly better predictions of time-lagged covariances (Fig. \[fig5\]b). We attempt to recover cluster membership for each of the neurons from the estimated emission matrices ${C}$ using K-means clustering on the rows of ${C}$. Because the $10$ clusters are distributed over both subpopulations, this will only be successful if the latent representations for the two subpopulations are sufficiently aligned.
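A minimal sketch of this cluster-recovery step (the helper name and the row normalisation are our own choices, not details taken from the paper):

```python
import numpy as np
from sklearn.cluster import KMeans

def clusters_from_loadings(C_hat, n_clusters=10, seed=0):
    """K-means on the rows of the estimated emission matrix.  Rows are
    normalised first (an assumption), so that assignment reflects loading
    direction rather than overall magnitude."""
    rows = C_hat / np.maximum(np.linalg.norm(C_hat, axis=1, keepdims=True), 1e-12)
    return KMeans(n_clusters=n_clusters, n_init=10, random_state=seed).fit_predict(rows)
```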
While we find that both approaches can assign most neurons correctly, only the nonlinear version of S3ID allows correct recovery for every neuron. Thus, the flexibility of S3ID allows more accurate reconstruction and prediction of correlations in data which violates the assumptions of linear Gaussian dynamics. We also applied dynamics-agnostic [S3ID]{} when undersampling two out of the ten clusters. Prediction of unobserved covariances for the undersampled clusters was robust down to sampling only 50% of neurons from those clusters. For 50/40/30% sampling, we obtained correlations of instantaneous covariances of 0.97/0.80/0.32 for neurons in the undersampled clusters. Correlation across all clusters remained above 0.97 throughout. K-means on the rows of the learned emission matrix $C$ still perfectly identified the ten clusters at 40% sampling, whereas below that it fused the undersampled clusters. Zebrafish imaging data ---------------------- Finally, we want to determine how well the approach works on real population imaging data, and test whether it can scale to millions of dimensions. To this end, we apply (both linear and nonlinear) [S3ID]{} to volume scans of larval zebrafish brain activity obtained with light-sheet fluorescence microscopy, comprising $p=7,828,017$ voxels. We assume an observation scheme in which the first 21 (out of 41) imaging planes are imaged in the first session, and the remaining 21 planes in the second, i.e. with only z-plane 21 ($234,572$ voxels) in overlap (Fig. \[fig4\]a,b). We evaluate the performance by predicting (time-lagged) pairwise covariances for voxel pairs not co-observed under the assumed multiple partial recording, using eq. \[eq:predicted\_covs\]. We find that nonlinear [S3ID]{} is able to reconstruct correlations with high accuracy (Fig. \[fig4\]c), and even outperforms linear [S3ID]{} applied to full observations. FA applied to each imaging session and aligned post-hoc (as by [@bishop_yu_14]) obtained a correlation of $0.71$ for instantaneous covariances, and applying GROUSE to the observation scheme gave correlation $0.72$. Discussion ========== In order to understand how neural dynamics and computations are distributed across large neural circuits, we need methods for interpreting neural population recordings with many neurons and in sufficiently rich, complex tasks [@GaoGanguli_15]. Here, we provide methods for dimensionality reduction which dramatically expand the range of possible analyses. This makes it possible to identify dynamics in data with millions of dimensions, even if many observations are missing in a highly structured manner, e.g. because measurements have been obtained in multiple overlapping recordings. Our approach identifies parameters by matching model-predicted covariances with empirical ones; thus, it yields models which are optimized to be realistic generative models of neural activity. While maximum-likelihood approaches (i.e. EM) are also popular for fitting dynamical system models to data, they are not guaranteed to provide realistic samples when used as generative models, and empirically often yield worse fits to measured correlations, or even diverging firing rates. Our approach readily permits several possible generalizations: First, using methods similar to [@BuesingMacke_13], it could be generalized to nonlinear observation models, e.g. generalized linear models with Poisson observations.
In this case, one could still use gradient descent to minimize the mismatch between model-predicted covariance and empirical covariances. Second, one could impose non-negativity constraints on the entries of $C$ to obtain more interpretable network models [@buesing_paninski_14]. Third, one could generalize the latent dynamics to nonlinear or non-Markovian parametric models, and optimize the parameters of these nonlinear dynamics using stochastic gradient descent. For example, one could optimize the kernel-function of GPFA directly by matching the GP-kernel to the latent covariances. #### Acknowledgements We thank M. Ahrens for the larval zebrafish data. Our work was supported by the caesar foundation. [10]{} J. P. Cunningham and M. Y. Byron, “Dimensionality reduction for large-scale neural recordings,” [*Nature neuroscience*]{}, vol. 17, no. 11, pp. 1500–1509, 2014. M. Y. Byron, J. P. Cunningham, G. Santhanam, S. I. Ryu, K. V. Shenoy, and M. Sahani, “Gaussian-process factor analysis for low-dimensional single-trial analysis of neural population activity,” in [*Advances in neural information processing systems*]{}, pp. 1881–1888, 2009. J. H. Macke, L. Buesing, J. P. Cunningham, B. M. Yu, K. V. Shenoy, and M. Sahani., “Empirical models of spiking in neural populations.,” in [ *Advances in Neural Information Processing Systems*]{}, vol. 24, 2012. D. Pfau, E. A. Pnevmatikakis, and L. Paninski, “Robust learning of low-dimensional dynamics from large neural ensembles,” in [*Advances in neural information processing systems*]{}, pp. 2391–2399, 2013. Y. Gao, L. Busing, K. V. Shenoy, and J. P. Cunningham, “High-dimensional neural spike train analysis with generalized count linear dynamical systems,” in [*Advances in Neural Information Processing Systems*]{}, pp. 2044–2052, 2015. M. M. Churchland, J. P. Cunningham, M. T. Kaufman, J. D. Foster, P. Nuyujukian, S. I. Ryu, and K. V. Shenoy, “Neural population dynamics during reaching,” [*Nature*]{}, vol. 487, pp. 51–6, Jul 2012. O. Mazor and G. Laurent, “Transient dynamics versus fixed points in odor representations by locust antennal lobe projection neurons,” [*Neuron*]{}, vol. 48, pp. 661–73, Nov 2005. K. L. Briggman, H. D. I. Abarbanel, and W. B. Kristan, Jr, “Optical imaging of neuronal populations during decision-making,” [*Science*]{}, vol. 307, pp. 896–901, Feb 2005. D. V. Buonomano and W. Maass, “State-dependent computations: spatiotemporal processing in cortical networks.,” [*Nat Rev Neurosci*]{}, vol. 10, no. 2, pp. 113–125, 2009. K. V. Shenoy, M. Sahani, and M. M. Churchland, “Cortical control of arm movements: a dynamical systems perspective,” [*Annu Rev Neurosci*]{}, vol. 36, pp. 337–59, 2013. V. Mante, D. Sussillo, K. V. Shenoy, and W. T. Newsome, “Context-dependent computation by recurrent dynamics in prefrontal cortex,” [*Nature*]{}, vol. 503, pp. 78–84, Nov 2013. P. Gao and S. Ganguli, “On simplicity and complexity in the brave new world of large-scale neuroscience,” [*Curr Opin Neurobiol*]{}, vol. 32, pp. 148–55, 2015. N. Li, K. Daie, K. Svoboda, and S. Druckmann, “Robust neuronal dynamics in premotor cortex during motor planning,” [*Nature*]{}, vol. 532, pp. 459–64, Apr 2016. N. J. Sofroniew, D. Flickinger, J. King, and K. Svoboda, “A large field of view two-photon mesoscope with subcellular resolution for in vivo imaging,” [*bioRxiv*]{}, p. 055947, 2016. A. K. Dhawale, R. Poddar, E. Kopelowitz, V. Normand, S. Wolff, and B. Olveczky, “Automated long-term recording and analysis of neural activity in behaving animals,” [*bioRxiv*]{}, p. 
033266, 2015. Q. J. Huys and L. Paninski, “Smoothing of, and parameter estimation from, noisy biophysical recordings,” [*PLoS Comput Biol*]{}, vol. 5, no. 5, p. e1000379, 2009. A. P. Dempster, N. M. Laird, and D. B. Rubin, “Maximum likelihood from incomplete data via the EM algorithm,” [*Journal of the royal statistical society. Series B (methodological)*]{}, pp. 1–38, 1977. Z. Ghahramani and G. E. Hinton, “Parameter estimation for linear dynamical systems,” tech. rep., Technical Report CRG-TR-96-2, University of Toronto, Dept. of Computer Science, 1996. P. Van Overschee and B. De Moor, [*Subspace identification for linear systems: Theory—Implementation—Applications*]{}. Springer Science & Business Media, 2012. T. Katayama, [*Subspace methods for system identification*]{}. Springer Science & Business Media, 2006. L. Balzano, R. Nowak, and B. Recht, “Online identification and tracking of subspaces from highly incomplete information,” in [*Communication, Control, and Computing (Allerton), 2010 48th Annual Allerton Conference on*]{}, pp. 704–711, IEEE, 2010. S. Turaga, L. Buesing, A. M. Packer, H. Dalgleish, N. Pettit, M. Hausser, and J. Macke, “Inferring neural population dynamics from multiple partial recordings of the same neural circuit,” in [*Advances in Neural Information Processing Systems*]{}, pp. 539–547, 2013. W. E. Bishop and B. M. Yu, “Deterministic symmetric positive semidefinite matrix completion,” in [*Advances in Neural Information Processing Systems*]{}, pp. 2762–2770, 2014. I. Markovsky, “The most powerful unfalsified model for data with missing values,” [*Systems & Control Letters*]{}, 2016. I. Markovsky, “A missing data approach to data-driven filtering and control,” [*IEEE Transactions on Automatic Control*]{}, 2016. Z. Liu, A. Hansson, and L. Vandenberghe, “Nuclear norm system identification with missing inputs and outputs,” [*Systems & Control Letters*]{}, vol. 62, no. 8, pp. 605–612, 2013. J. He, L. Balzano, and J. Lui, “Online robust subspace tracking from partial information,” [*arXiv preprint arXiv:1109.3827*]{}, 2011. D. Soudry, S. Keshri, P. Stinson, M.-h. Oh, G. Iyengar, and L. Paninski, “Efficient ‘shotgun’ inference of neural connectivity from highly sub-sampled activity data,” [*PLoS Comput Biol*]{}, vol. 11, no. 10, p. e1004464, 2015. S. C. Turaga, L. Buesing, A. Packer, H. Dalgleish, N. Pettit, M. Hausser, and J. H. Macke, “Inferring neural population dynamics from multiple partial recordings of the same neural circuit,” in [*Advances in Neural Information Processing Systems*]{}, vol. 26, 2014. E. A. Pnevmatikakis, K. R. Rad, J. Huggins, and L. Paninski, “Fast Kalman filtering and forward–backward smoothing via a low-rank perturbative approach,” [*Journal of Computational and Graphical Statistics*]{}, vol. 23, no. 2, pp. 316–339, 2014. M. Aoki, [*State space modeling of time series*]{}. Springer Science & Business Media, 1990. D. Kingma and J. Ba, “Adam: A method for stochastic optimization,” [*arXiv preprint arXiv:1412.6980*]{}, 2014. R. Brette and W. Gerstner, “Adaptive exponential integrate-and-fire model as an effective description of neuronal activity,” [*Journal of neurophysiology*]{}, vol. 94, no. 5, pp. 3637–3642, 2005. M. B. Ahrens, M. B. Orger, D. N. Robson, J. M. Li, and P. J. Keller, “[Whole-brain functional imaging at cellular resolution using light-sheet microscopy.]{},” [*Nature Methods*]{}, vol. 10, pp. 413–420, May 2013. L. Buesing, J. H. Macke, and M.
Sahani, “Spectral learning of linear dynamics from generalised-linear observations with application to neural population data,” in [*Advances in Neural Information Processing Systems*]{}, vol. 25, 2013. L. Buesing, T. A. Machado, J. P. Cunningham, and L. Paninski, “Clustered factor analysis of multineuronal spike data,” in [*Advances in Neural Information Processing Systems*]{}, pp. 3500–3508, 2014. [^1]: current primary affiliation: Centre for Cognitive Science, Technical University Darmstadt
--- abstract: 'High-throughput sequencing techniques such as metagenomic and metatranscriptomic technologies allow cataloging of functional characteristics of microbial community members as well as their taxonomic identity. Such studies have found that a community’s composition in terms of ecologically relevant functional traits or guilds can be conserved more strictly across varying settings than taxonomic composition is. I use a standard ecological resource-consumer model to examine the dynamics of traits relevant to resource consumption, and analyze determinants of functional composition. This model demonstrates that interaction with essential resources can regulate the community-wide abundance of ecologically relevant traits, keeping them at consistent levels despite large changes in the abundances of the species housing those traits in response to changes in the environment, and across variation between communities in species composition. Functional composition is shown to be able to track differences in environmental conditions faithfully across differences in community composition. Mathematical conditions on consumers’ vital rates and functional responses sufficient to produce conservation of functional community structure across taxonomic differences are presented.' author: - Lee Worden --- Introduction ============ Microbes play a key role in every ecological community on earth, and are crucial to the health of plants and animals both as mutualists and as pathogens. Understanding the ecological function and dynamics of microbes is important to human health and to the health of the planet. Because microbes exhibit short generation times, rapid evolution, horizontal transmission of genes, and great diversity, and can coexist in a massive number of partially isolated local communities, the study of their communities can bring different questions to the fore than are raised in the more common traditions of ecological theory focused on plants and animals. Newly available techniques of high-throughput genetic and transcriptomic sequencing are making microbial community structure visible in detail for the first time. One pattern appearing in microbial communities, in multiple very different settings, is that communities composed very differently in terms of species, genera, and even higher-level classifications of microbes can have much more similar structure when viewed in terms of the functional genes and genetic pathways present in the communities than when a catalog of taxa is constructed. Additionally, environmental changes can induce consistent changes in community-level abundance of relevant genes or pathways while leaving others unchanged, in situations where such a pattern is not readily visible in taxonomic data due to high taxonomic variability across communities. Metagenomic sequencing of samples collected from a variety of ocean settings around the world shows high taxonomic variability (even at the phylum level) with relatively stable distribution of categories of functional genes [@sunagawa_structure_2015], and that the environmental conditions predict the composition of the community in terms of functional groups better than in taxonomic structure, suggesting that functional and taxonomic structure may constitute roughly independent “axes of variation” in which functional structure captures most of the variation predicted by environmental conditions [@louca_high_2017]. 
The same pattern of conserved functional community structure across variation in taxonomic structure is seen in the human microbiome [@turnbaugh_core_2009; @consortium_structure_2012; @gosalbes_metagenomics_2012; @gosalbes_metatranscriptomic_2011], and in microbial communities assembled *in vitro* on a single nutrient resource [@goldford_emergent_2018]. Convergence of functional community structure with variation in species structure as a result of assembly history is also seen in plant communities [@fukami_species_2005], suggesting that explaining this pattern can have application beyond microbial ecology. A study of functional structure in *in vitro* community assembly [@goldford_emergent_2018] presents a mathematical model based on the MacArthur consumer-resource dynamics model, which numerically reproduces this pattern, but the model is not analyzed. Here I present a general class of consumer-resource models that describes the community-wide abundances of functional traits together with the abundances of species, and analyze these models to explain how regularity of functional structure can be an outcome despite variability in species composition, and when this outcome can occur in communities governed by resource-consumer interactions. I have used these models to construct a series of simulation experiments applying this result to functional community structure across variation in enviromental conditions and in taxonomic community structure. First I tested a scenario in which functional structure was preserved in a single community across changes in species abundances as its environment changes. Second I turned to the question of when multiple communities converge to a common functional structure despite differing taxonomic composition. I present mathematical analysis of when this result occurs, and then three model examples. In one example, functional structure coincided with high-level (genus or higher) taxonomic composition, and community structure at that level was conserved across multiple communities with different histories of assembly and different species composition. In the second, functional traits were shared across taxa and co-occurring in diverse compositions within organisms, so that functional structure was not reflected at a higher taxonomic level, and conservation of functional structure was achieved by a complex balance of functionally overlapping species. Third is a simulated controlled experiment in which selected traits were upregulated and downregulated by manipulation of the environment while other traits were unaffected, in a community model similar to the second, above. Trait abundances in consumer-resource models ============================================ A standard model framework for resource-consumer dynamics is widely used and well understood, particularly given a finite number of distinct species without spatial patchiness [@macarthur_competition_1964; @levin_community_1970; @tilman_resource_1982]. Resource abundances are increased by supply from outside the model community, and decreased by uptake by consumer species, and species abundances are increased by reproduction at a rate that depends on resource consumption, and decreased by fixed per-capita mortality. 
For example, one such model has this form: $$\begin{aligned} \frac{dX_i}{dt} &= \sum_j c_{ij} r_{ij} R_j X_i - m_i X_i \\ \frac{dR_j}{dt} &= s_j - \sum_i r_{ij} R_j X_i ,\end{aligned}$$ where $X_i$ is the abundance of consumer species $i$ and $R_j$ is the abundance of resource $j$, while $r_{ij}$ is the consumption rate of resource $j$ by consumer $i$, $c_{ij}$ is a conversion rate of resource $j$ into reproductive fitness of $i$, $m_i$ is the per-capita mortality rate of consumer $i$, and $s_j$ is the rate of supply of resource $j$. To analyze the behavior of functional traits and genes across the community, it is necessary to include a definition of trait abundance in the model. Let us assume that a species that consumes a given resource has a trait of consumption of that resource. Thus given $n_r$ resources I define the corresponding $n_r$ traits, one for each, which each consumer may possess or not: let $A_{ij}$ be one if consumer $i$ has trait $j$ and zero if not, and let $f_{ij}(\mathbf{R})$, the functional response, or uptake rate of resource $j$ by consumer $i$, be a continuous, nondecreasing, nonnegative function of the vector $\mathbf{R}$ of resource abundances. Trait assignments $A_{ij}$ that take on a greater range of nonnegative values may also be of interest, for future research. Including this description of trait possession, a consumer-resource dynamics model has the form $$\begin{aligned} \frac{dX_i}{dt} &= \sum_j A_{ij} c_{ij} f_{ij}(\mathbf{R}) X_i - m_i X_i \\ \frac{dR_j}{dt} &= s_j - \sum_i A_{ij} f_{ij}(\mathbf{R}) X_i .\end{aligned}$$ A type I functional response has the linear form $$\label{eqn:typei} f_{ij}(\mathbf{R}) = r_{ij} R_j ,$$ and a type II functional response (e.g. [@holling1959some]) can take at least two forms: $$\label{eqn:typeii} f_{ij}(\mathbf{R}) = \frac{r_{ij} R_j}{1 + h_{ij} r_{ij} R_j} ,$$ or $$\label{eqn:typeiicomplex} f_{ij}(\mathbf{R}) = \frac{r_{ij} R_j}{1 + \sum_k A_{ik} h_{ik} r_{ik} R_k} ,$$ depending on whether saturation occurs independently for each trait a consumer possesses, with $h_{ij}$ as a constant describing how quickly resource consumption saturates in response to its availability. The functional response may also be a type III response [@holling1959components], which can be described by a variety of mathematical forms. In the example model systems I present below, I use the type I and the first of the above two type II functional response forms. There are at least two measures of abundance that can be used, motivated by forms of next-generation sequencing in widespread use. Using a measure of *possession* of genes, as seen in metagenomic sequencing processes based on DNA sequences, the community-wide abundance of trait $j$ is defined as the total value of $A_{ij}$ over all consumers: $$T_j = \sum_i A_{ij} X_i .$$ A measure of *expression* of traits, more like the data reported by metatranscriptomic sequencing processes such as RNA-Seq, describes not the presence of genetic sequences but the rates at which their functions are actively used: $$E_j = \sum_i A_{ij} X_i f_{ij}(\mathbf{R}) .$$ This paper analyzes conditions under which trait abundances $T_j$ and $E_j$ remain unchanged or nearly so while species abundances $X_i$ vary, and when species composition, in the sense of the presence and absence of specific species in a community, varies across communities. I present conditions for conservation of both measures of traits across environmental conditions and community structures, and examples in which the abundances of genetic material $T_j$ are conserved, the more stringent case. Analysis of consumer-resource models ------------------------------------ Given the above form of model, the behavior of these models is well understood [@macarthur_competition_1964; @levin_community_1970; @tilman_resource_1982].
When the community consists of a single consumer species dependent on a single limiting resource, the population size grows until its increasing resource consumption lowers the resource abundance to a level at which the consumer’s reproduction and mortality rates balance. In this way, the resource abundance is regulated by the consumer: the abundance of the resource at equilibrium is a quantity determined by those organisms’ processes of reproduction and mortality. The population brings its limiting resource to the same equilibrium level, conventionally known as $R^*$ [@tilman_resource_1982], regardless of whether the flow of the resource into the community is small or large. If there is a large inflow, the population size grows until it is consuming the resource at an equally high rate, drawing the resource abundance down to the required level. If inflow is small, population size becomes as small as needed to balance the flows. In this way, the size of the population is determined by the resource supply rate, but the abundance of the resource is not. When there are multiple species and multiple resources, for each species there are certain combinations of resource abundances that balance its birth and death rates. With $n_s$ species and $n_r$ resources, these equilibrium conditions take the form of $n_s$ equations, one for each population $X_i$, each in $n_r$ unknowns $R_j$: $$\label{eq:rstar-equilibrium} \frac{dX_i}{dt} = \sum_j A_{ij} c_{ij} f_{ij}(\mathbf{R^*}) X^*_i - m_i X^*_i \equiv 0 ,$$ where $\mathbf{R^*}$ is the vector $(R^*_1,\ldots,R^*_{n_r})$ of equilibrium values of the resource abundances. Each of these equations, one for each $i$, can in principle be solved for the set of values of $R^*_1$ through $R^*_{n_r}$ that satisfy this condition. Note that these solutions are not affected by the population sizes $X^*_i$ as long as the population sizes are nonzero. The solution set of the $i$th equation describes the set of values of the $n_r$ resources at which net growth of species $i$ is zero. The solution of all these equations simultaneously is the set of resource abundances at which all species’ growth is at equilibrium. This is why $n_r$ resources can support at most $n_r$ coexisting populations in these models under most conditions: because outside of special cases, no more than $n_r$ equations can be solved for $n_r$ variables simultaneously [@macarthur_competition_1964; @levin_community_1970]. The equilibrium resource abundances $R^*_j$, taken together, are the solution of that system of equations. Thus the combination of all $n_r$ resource abundances at equilibrium is determined by the requirements of all the consumer populations combined. Note that they are independent of the resource supply rates as well as of the consumer population sizes. That balance of resources is enforced by the sizes of consumer populations: if resources increase above the levels that produce consumer equilibrium, consumer populations grow, drawing resources at increased rates, and the opposite if resource levels drop, until the resources are returned to the required levels and supply rates are matched by the rates of consumption. Equilibrium levels of consumers are determined not by the above equilibrium equations, but by the model’s other set of equilibrium conditions: $$\label{eq:xstar-equilibrium} \frac{dR_j}{dt} = s_j - \sum_i A_{ij} f_{ij}(\mathbf{R^*}) X_i \equiv 0 .$$
At equilibrium, the consumer population sizes must be whatever values $X^*_i$ are required to make the overall uptake rate of each resource $j$ described by this equation equal to the supply rate $s_j$, when the resources are at the levels $R^*$ implied by the earlier equilibrium conditions (\[eq:rstar-equilibrium\]). Thus the consumer population sizes, all taken together, are determined by the supply rates of all the relevant resources taken together, given the equilibrium resource levels, in such a way that resource inflow and outflow rates are balanced. When each resource is controlled by multiple consumers all of whom use multiple resources, each consumer abundance is determined by all the resource supplies in balance with the other consumers in ways that may be difficult to predict or explain. In summary, there is a duality of causal relationships between the two players in this system, resources and consumers, in which resource levels are determined by the consumers’ physiology (\[eq:rstar-equilibrium\]), and consumer levels are determined in a complex interlocking way by the resource supply rates given the above resource levels (\[eq:xstar-equilibrium\]). Conditions for conservation of trait abundances across differences in community composition ------------------------------------------------------------------------------------------- Given an arbitrary assemblage of resource consumer species, described by some unconstrained assignment of values to the functions $f_{ij}$, mortality rates $m_i$, and conversion factors $c_{ij}$, without knowledge of those values nothing can be concluded about the abundances of species, resources, and traits that will be observed in the long term. However, under certain constraints on the relationships between these values, it can be shown that trait abundances at equilibrium, given that enough consumer species coexist at equilibrium, are determined only by the resources’ supply rates without dependence on the consumers’ characteristics. I have derived conditions for simple dependence of trait abundances on their resources’ supply rates in the appendix (\[app:analysis\]), and I summarize them here. **Condition for conservation of rates of trait expression, $\mathbf{E}$.** The community-wide rate of expression of a trait, labeled $E_j$ above, is determined by the supply rate of its resource in all model communities of the above form, provided that the community is consuming all resources at equilibrium. This result is simply because $E_j$ is tied to the rate of resource uptake, which must match the supply rate of the resource at equilibrium. The community-wide abundance of possession of a trait ($T_j$) is not tied to supply rates in all cases, but conditions exist under which these abundances are directly predicted by supply rates independent of species abundances. 
**Simple condition for conservation of trait abundances, $\mathbf{T}$.** A condition for conservation of trait abundances $T_j$ is that there are constant numbers $k_j$, one for each resource $j$, for which $$\label{eqn:diagonalcondition} \sum_j c_{ij} A_{ij} / k_j = m_i .$$ If that condition is met, and the response functions $f_{ij}(\mathbf{R})$ are defined in such a way that there is a set of resource levels $R_j$ that can satisfy the constraint $ f_{ij}(\mathbf{R}) = 1/k_j $ for each $i$ and $j$ for which $A_{ij}>0$, at the same time, then those resource levels describe an equilibrium for each community structure, at which community-wide trait abundance $T_j$ will be held fixed at a level that depends only on the supply rate $s_j$, even though community structure and species abundances may vary. This condition can be explained by recognizing that the resource uptake rates $f$ are a scaling factor between the raw trait abundances $T$ and the trait expression rates $E$: since the expression rates are in a fixed relation to supply rates across communities, for the trait abundances to be fixed in that way as well, the ratio between the two, which is $f_{ij}$, must be fixed. **General condition for conservation of trait abundances, $\mathbf{T}$.** Condition (\[eqn:diagonalcondition\]), in which each trait abundance $T^*_j$ depends only on the one corresponding resource supply rate $s_j$, is a special case of a more general case in which the full vector of trait abundances is determined by the full vector of resource supply rates, the condition for which is the more abstract one that constants $K_{jk}$ exist for which $$\sum_k K_{jk} A_{ik} f_{ik}(\mathbf{R}) = A_{ij} \quad\text{and}\quad \sum_j c_{ij} A_{ij} f_{ij}(\mathbf{R}) = m_i$$ simultaneously, for at least one value of $\mathbf{R}$. **Approximate conservation of trait structure.** If either of these relations is not exactly but very nearly satisfied, then the model can almost exactly conserve the trait abundances. See (\[app:analysis\]) for more formal discussion of this point and derivation of the above conditions. **Simple construction of example models.** Examples in this paper, below, are constructed using the simple condition (\[eqn:diagonalcondition\]) for conservation of trait abundances, by assigning all mortality rates equal to a constant value $m$, conversion rates $c_{ij}$ equal to a constant $c$, with a fixed number $p$ of traits assigned to each consumer. This satisfies (\[eqn:diagonalcondition\]) with $k_j = m/cp$ for each $j$. Under these conditions, given that there is sufficient diversity within a community to fix resource abundances at the required equilibrium point, each trait $T_j$ will have equilibrium abundance $T^*_j=s_j/k_j$, independent of the trait assignments $A_{ij}$ and species abundances $X^*_i$. Species abundances are implied by the definition $T^*_j = \sum_i A_{ij} X^*_i$, and vary with the specifics of the community structure. Example: Complex regulation of functional structure within a community ====================================================================== The above analysis implies that the abundance of each of a palette of traits can be regulated by the availability of the one resource associated with that trait, even though every organism in the community possesses multiple such traits. I observed the regulation of community-wide trait abundances within a single community using a model of four resources and four consumers.
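Before turning to the details of this example, the following minimal sketch illustrates the simple construction above with type I responses (Python/scipy; the function name and all parameter values are hypothetical and not taken from the simulations reported below):

```python
import numpy as np
from scipy.integrate import solve_ivp

def simulate_community(A, s, r, c=1.0, m=1.0, t_end=200.0):
    """Sketch of the consumer-resource model with type I responses:
    dX_i/dt = sum_j A_ij c r_ij R_j X_i - m X_i,  dR_j/dt = s_j - sum_i A_ij r_ij R_j X_i.
    A: (n_s, n_r) 0/1 trait matrix, s: resource supply rates, r: uptake rate constants."""
    n_s, n_r = A.shape

    def rhs(t, z):
        X, R = z[:n_s], z[n_s:]
        uptake = A * r * R[None, :] * X[:, None]   # uptake[i, j] = A_ij r_ij R_j X_i
        dX = c * uptake.sum(axis=1) - m * X
        dR = s - uptake.sum(axis=0)
        return np.concatenate([dX, dR])

    sol = solve_ivp(rhs, (0.0, t_end), np.ones(n_s + n_r))
    X_end, R_end = sol.y[:n_s, -1], sol.y[n_s:, -1]
    return X_end, R_end, A.T @ X_end               # species, resources, trait abundances T_j

# With equal c, m and an equal number p of traits per consumer, the returned
# trait abundances should approach s_j * c * p / m, whichever species happen
# to carry the traits.
```

Under the stated conditions, integrating such a system to equilibrium and tabulating $T_j = \sum_i A_{ij} X_i$ should reproduce the dependence of trait abundances on supply rates alone.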
For each resource I defined a trait corresponding to consumption of that resource, which was shared by multiple consumer populations. The first three resources were supplied at a constant rate, while the fourth resource was supplied at a rate that changed at discrete times. Species and trait abundances shifted in response to the changing supply of the fourth resource. In this model, with type I functional responses as in (\[eqn:typei\]), the trait assignments $A_{ij}$ were constructed such that each species consumed a different three of the four resources (Fig. \[fig:fixed-traits-results\]A). The resource supply rate constants $s_j$ were held constant for resources 1, 2, and 3, while $s_4$ was piecewise constant, changing between three different values at discrete moments (Figure \[fig:fixed-traits-results\]B). The abundances of the four consumer species making up the model community came to equilibrium when their habitat was unchanging, but when the supply rate of resource 4 changed, they all shifted to different equilibrium levels (Figure \[fig:fixed-traits-results\]C). However, despite these complex shifts in all the consumer species’ abundances, the community-wide abundances of the traits of consumption of the first three resources were conserved at equilibrium across these changes in community structure, aside from brief transient adjustments (Figure \[fig:fixed-traits-results\]D). Of the four traits modeled, only the fourth changed in equilibrium abundance in response to the changing resource supply. The community was able to regulate the community-wide abundance of the trait involved in consumption of the fourth resource independent of the other three consumption traits, despite that fact that multiple of the four traits coexisted in every organism in the community. [ll]{} **A.** ![ \[fig:fixed-traits-results\] [**Regulation of functional structure within a community.**]{} A community model that satisfies the condition (\[eqn:diagonalcondition\]) maintains trait abundances fixed through changing environmental conditions by rebalancing all consumer population sizes. [**A. Assignment of traits to species**]{} (dark=present, white=absent); [**B. Supply rates of resources**]{}, with resources 1 through 3 supplied at constant rate, supply of resource 4 changing at discrete times. [**C. Species abundances**]{} all vary with changes in supply of resource 4, while [**D. Whole-community trait abundances**]{} 1 through 3 are constant apart from transient fluctuations with only trait 4 changing in response to changing supply of resource 4. ](fixed-4-1110-species-traits.png){width="90.00000%"} & **B.** ![ \[fig:fixed-traits-results\] [**Regulation of functional structure within a community.**]{} A community model that satisfies the condition (\[eqn:diagonalcondition\]) maintains trait abundances fixed through changing environmental conditions by rebalancing all consumer population sizes. [**A. Assignment of traits to species**]{} (dark=present, white=absent); [**B. Supply rates of resources**]{}, with resources 1 through 3 supplied at constant rate, supply of resource 4 changing at discrete times. [**C. Species abundances**]{} all vary with changes in supply of resource 4, while [**D. Whole-community trait abundances**]{} 1 through 3 are constant apart from transient fluctuations with only trait 4 changing in response to changing supply of resource 4. 
](fixed-4-1110-inflow.png){width="90.00000%"} \ **C.** ![ \[fig:fixed-traits-results\] [**Regulation of functional structure within a community.**]{} A community model that satisfies the condition (\[eqn:diagonalcondition\]) maintains trait abundances fixed through changing environmental conditions by rebalancing all consumer population sizes. [**A. Assignment of traits to species**]{} (dark=present, white=absent); [**B. Supply rates of resources**]{}, with resources 1 through 3 supplied at constant rate, supply of resource 4 changing at discrete times. [**C. Species abundances**]{} all vary with changes in supply of resource 4, while [**D. Whole-community trait abundances**]{} 1 through 3 are constant apart from transient fluctuations with only trait 4 changing in response to changing supply of resource 4. ](fixed-4-1110-XiK.png){width="90.00000%"} & **D.** ![ \[fig:fixed-traits-results\] [**Regulation of functional structure within a community.**]{} A community model that satisfies the condition (\[eqn:diagonalcondition\]) maintains trait abundances fixed through changing environmental conditions by rebalancing all consumer population sizes. [**A. Assignment of traits to species**]{} (dark=present, white=absent); [**B. Supply rates of resources**]{}, with resources 1 through 3 supplied at constant rate, supply of resource 4 changing at discrete times. [**C. Species abundances**]{} all vary with changes in supply of resource 4, while [**D. Whole-community trait abundances**]{} 1 through 3 are constant apart from transient fluctuations with only trait 4 changing in response to changing supply of resource 4. ](fixed-4-1110-T.png){width="84.00000%"} This model community achieved equilibrium by bringing trait abundances to the needed levels after each change in community structure, even though the sizes of the four populations embodying those trait abundances were all different after each change. The population sizes were all altered in just the way necessary to adjust the total abundance of the trait of consumption of resource 4 to match the changing supply rate of resource 4 and leave the other three unaltered. Example: Conservation of functional groups across differences in species composition ==================================================================================== One possible way in which communities may have a functional regularity that is not captured at the species level is that species may be interchangeable within guilds or groups of functionally equivalent species, where total counts in a group are conserved while species composition is not. The species in a group may be members of a family or phylum, or may be unrelated but perform similar functions. I constructed a model involving multiple guilds, in which members of each guild shared a functional trait of consumption of a guild-defining resource and varied in other traits. Communities were assembled by drawing species from a common pool of species. Consumer species were grouped into three guilds, each guild defined by consumption of a guild-specific resource, with each species belonging to exactly one guild. Each species also consumed three other resources, assigned randomly from a common pool of five resources without regard to guild membership. Consumers’ functional response to resources was type II (\[eqn:typeii\]). 
For parsimony, resource supply rates were set equal at a numerical value of $3/2$, and the saturation parameters $h_{ij}$, ideal uptake rates $r_{ij}$, conversion factors $c_{ij}$, and mortality rates $m_i$ were all set to 1. Thirty species were constructed, ten in each guild, by assigning non-guild traits at random conditional on the species-trait assignment matrix having the maximum possible rank[^1], and thirty communities were constructed by randomly assigning twenty-one species to each, conditional on each community’s ${A}$ matrix having maximal rank (Figure \[fig:guild\]A and B). The dynamics of these model communities was evaluated, starting from initial conditions at which all species and resource abundances were $1.0$ in their respective units, for 200 time steps. Total abundances in each guild at the end of that time were plotted for comparison across communities. At the end of that process, species composition varied across communities (Figure \[fig:guild\]C), but the overall abundance of each guild was uniform across the model communities (Figure \[fig:guild\]D). [ll]{} **A.** ![ \[fig:guild\] **Conservation of functional groups across differences in species composition.** Overall abundances of each of three “guilds” of consumers of different resources are held fixed across communities assembled randomly from varying species of each guild. (**A.**) **Assignment of traits to species** in guild model. Guild membership is defined by the first three traits, corresponding to consumption of “guild-defining resources,” while the other five traits are guild-independent traits that distinguish species from one another. (**B.**) **Assignment of species to communities**. Some species assigned to communities may not survive beyond an initial transient as community comes to equilibrium. (**C.**) **Species abundances** (color coded by guild membership) and (**D.**) **overall abundances of guilds** at equilibrium, by community, in guild model. ](bulk-guild-2-species-traits.png){height="60.00000%"} & **B.** ![ \[fig:guild\] **Conservation of functional groups across differences in species composition.** Overall abundances of each of three “guilds” of consumers of different resources are held fixed across communities assembled randomly from varying species of each guild. (**A.**) **Assignment of traits to species** in guild model. Guild membership is defined by the first three traits, corresponding to consumption of “guild-defining resources,” while the other five traits are guild-independent traits that distinguish species from one another. (**B.**) **Assignment of species to communities**. Some species assigned to communities may not survive beyond an initial transient as community comes to equilibrium. (**C.**) **Species abundances** (color coded by guild membership) and (**D.**) **overall abundances of guilds** at equilibrium, by community, in guild model. ](bulk-guild-2-species-communities.png){height="60.00000%"} \ **C.** ![ \[fig:guild\] **Conservation of functional groups across differences in species composition.** Overall abundances of each of three “guilds” of consumers of different resources are held fixed across communities assembled randomly from varying species of each guild. (**A.**) **Assignment of traits to species** in guild model. Guild membership is defined by the first three traits, corresponding to consumption of “guild-defining resources,” while the other five traits are guild-independent traits that distinguish species from one another. 
(**B.**) **Assignment of species to communities**. Some species assigned to communities may not survive beyond an initial transient as community comes to equilibrium. (**C.**) **Species abundances** (color coded by guild membership) and (**D.**) **overall abundances of guilds** at equilibrium, by community, in guild model. ](bulk-guild-2-equilibrium-0.ribbon-plot.X.png){height="60.00000%"} & **D.** ![ \[fig:guild\] **Conservation of functional groups across differences in species composition.** Overall abundances of each of three “guilds” of consumers of different resources are held fixed across communities assembled randomly from varying species of each guild. (**A.**) **Assignment of traits to species** in guild model. Guild membership is defined by the first three traits, corresponding to consumption of “guild-defining resources,” while the other five traits are guild-independent traits that distinguish species from one another. (**B.**) **Assignment of species to communities**. Some species assigned to communities may not survive beyond an initial transient as community comes to equilibrium. (**C.**) **Species abundances** (color coded by guild membership) and (**D.**) **overall abundances of guilds** at equilibrium, by community, in guild model. ](bulk-guild-2-equilibrium-0.ribbon-plot.T.png){height="60.00000%"} Example: Conservation of overlapping functional traits across differences in species composition {#sec:overlapping} ================================================================================================ While in the above model, each species belonged to a single functional guild, I constructed a second model in which each consumer possessed multiple functional traits that were shared by other consumer species in varying combinations, so that regulation of trait abundances required a complex balancing of all the overlapping species. The model was the same as above with the difference that rather than assigning species to guilds characterized by special traits, each species was assigned two of the ten resource consumption traits at random (Figure \[fig:rc\]A). As above, I constructed 30 communities by assigning 24 species to each, chosen at random from a common pool of 30 candidate species, conditional on full rank (Figure \[fig:rc\]B). Numerical parameters and functional responses were as in the above guild model, except that here $m_i=6.4$, $c_{ij}=4$, and $s_i=2$ for all $i$ and $j$. I recorded functional and taxonomic abundances after 200 time steps from initial conditions of uniform resource and species abundances of $1.0$. At the end of that time I found that species abundances varied widely from community to community, but trait abundances were uniform across communities (Figure \[fig:rc\]). [ll]{} **A.** ![ \[fig:rc\] **Conservation of overlapping functional traits across differences in species composition.** When all consumption traits are randomly assorted across consumers, overall trait abundances are equal, as predicted by equal resource supply rates, independent of consumer species presence/absence or abundances. (**A.**) **Assignment of traits to species**, (**B.**) **assignment of species to communities**, (**C.**) **species abundances** at equilibrium, by community, and (**D.**) **trait abundances** at equilibrium, by community, in overlapping-traits model. Some species assigned to communities may not survive beyond an initial transient as community comes to equilibrium. 
](bulk-consumer-resource-species-traits.png){height="60.00000%"} & **B.** ![ \[fig:rc\] **Conservation of overlapping functional traits across differences in species composition.** When all consumption traits are randomly assorted across consumers, overall trait abundances are equal, as predicted by equal resource supply rates, independent of consumer species presence/absence or abundances. (**A.**) **Assignment of traits to species**, (**B.**) **assignment of species to communities**, (**C.**) **species abundances** at equilibrium, by community, and (**D.**) **trait abundances** at equilibrium, by community, in overlapping-traits model. Some species assigned to communities may not survive beyond an initial transient as community comes to equilibrium. ](bulk-consumer-resource-species-communities.png){height="60.00000%"} \ **C.** ![ \[fig:rc\] **Conservation of overlapping functional traits across differences in species composition.** When all consumption traits are randomly assorted across consumers, overall trait abundances are equal, as predicted by equal resource supply rates, independent of consumer species presence/absence or abundances. (**A.**) **Assignment of traits to species**, (**B.**) **assignment of species to communities**, (**C.**) **species abundances** at equilibrium, by community, and (**D.**) **trait abundances** at equilibrium, by community, in overlapping-traits model. Some species assigned to communities may not survive beyond an initial transient as community comes to equilibrium. ](bulk-cr-equilibrium-0.ribbon-plot.X.png){height="60.00000%"} & **D.** ![ \[fig:rc\] **Conservation of overlapping functional traits across differences in species composition.** When all consumption traits are randomly assorted across consumers, overall trait abundances are equal, as predicted by equal resource supply rates, independent of consumer species presence/absence or abundances. (**A.**) **Assignment of traits to species**, (**B.**) **assignment of species to communities**, (**C.**) **species abundances** at equilibrium, by community, and (**D.**) **trait abundances** at equilibrium, by community, in overlapping-traits model. Some species assigned to communities may not survive beyond an initial transient as community comes to equilibrium. ](bulk-cr-equilibrium-0.ribbon-plot.T.png){height="60.00000%"} Example: Coexistence of conserved and variable traits ===================================================== Where the above model results explored cases in which functional community structure was the same across communities due to an underlying equality in conditions, here I look at how differences in conditions can be reflected by predictable differences in functional structure. The model I present here was constructed in the same way as in Section \[sec:overlapping\], with the difference that 15 communities were constructed and then evaluated subject to two different environments, labeled control and treatment. All parameters were set as above in the control arm of the experiment, while in the treatment arm the resources were partitioned into three classes, one whose supply rates were unchanged at 2.0, one in which supply rates were elevated to 2.8, and one which which they were reduced to 1.2. Trait abundances at equilibrium (Figure \[fig:difference\]D) clearly distinguished treated from control communities, and treated from unmodified resources. 
[ll]{} **A.** ![ \[fig:difference\] **Coexistence of conserved and variable traits in simulated experimental conditions.** Randomly assembled model communities are evaluated in “control” conditions of equal resource supply rates, and “treatment” conditions with altered supply rates. Trait abundances track supply rates across differences in community composition and across arms of the experiment. (**A.**) **Assignment of traits to species**, (**B.**) **assignment of species to communities**, (**C.**) **Species abundances** at equilibrium, by community and treatment arm, and (**D.**) **trait abundances** at equilibrium, by community and treatment arm, in simulated experiment model. Each community is simulated under both treatment and control conditions. Some species assigned to communities may not survive beyond an initial transient as community comes to equilibrium. ](bulk-difference-species-traits.png){height="60.00000%"} & **B.** ![ \[fig:difference\] Panel B: assignment of species to communities (full caption as in panel A). ](bulk-difference-species-communities.png){height="60.00000%"}\
**C.** ![ \[fig:difference\] Panel C: species abundances at equilibrium, by community and treatment arm (full caption as in panel A). ](bulk-difference-equilibrium-0.ribbon-plot.X.png){height="60.00000%"} & **D.** ![ \[fig:difference\] Panel D: trait abundances at equilibrium, by community and treatment arm (full caption as in panel A). ](bulk-difference-equilibrium-0.ribbon-plot.T.png){height="60.00000%"}
Discussion
==========

The above analysis has demonstrated, in a broad class of widely used models of consumer-resource ecological community dynamics, conditions on consumer physiology under which the community-wide abundance of traits, pathways, or genes involved in resource usage can be predicted by resource availability, independent of the taxonomic makeup of the community and the abundances of the taxa it includes. In these model examples, and in general, the total rate of uptake of a given resource must balance the resource’s net rate of supply. Therefore, if the supply does not change and the community continues to consume that resource, the uptake rate must return to the same level after a perturbation in the community, following a possible transient fluctuation, and two communities comprised of different species but encountering the same rates of supply must necessarily manifest the same uptake rates. In the models analyzed here, this matching of outflow to inflow can cause the community-wide rate of expression of traits associated with resource consumption to be conserved. If expression of a given trait or pathway is directly related to the rate of consumption of a resource, it is natural that the rate of trait expression should be predicted by the rate of resource supply because of the relation between supply rate and overall uptake rate. Even though those traits may be distributed across multiple consumer species, each involved with multiple resources, a central result of these models is that species abundances are driven by resource availability in such a way as to regulate all the resources simultaneously [@levin_community_1970], even if that balance requires a complex adjustment of all the species in the community. In this way, community-wide rates of trait expression can be matched to resource supplies even though all species abundances may vary irregularly in whatever ways are needed to make uptake rates balance supply rates.

Such a concept of trait expression (the quantity $E_j$ defined above) is likely more appropriate to a metatranscriptomic description of a community, as implemented by high-throughput sequencing techniques such as RNA-Seq [@wang2009rna], than to a metagenomic description. A metagenomic description generated by measuring abundances of DNA sequences in cells may be better described as a measure of trait or gene prevalence, in the sense of abundance of organisms possessing the genes, such as the quantity $T_j$ used here. This quantity can also be tied to resource supply rates as trait expression can, but the relation is not as universal and requires more conditions. In summary, organisms that possess a genetic pathway may use it at varying rates depending on the availability of the resources involved, and on whether conditions are more favorable to the use of other pathways.
In terms of the model dynamics, the uptake rate depends both on the abundance of organisms possessing the relevant trait and on the availability of the various resources relevant to those organisms. The conditions for conservation of the overall prevalence of such genes across the community include regulation of resource uptake rates *per capita* to common levels across communities. In many if not all conditions, this likely requires control of resource abundances to common levels across communities (the vector $\mathbf{R}^*$). Under those conditions, a fixed relation between trait expression and trait prevalence is maintained, allowing both to be conserved across differences in consumer species. The examples in this article have demonstrated this more stringent condition of regulation of trait prevalences, for illustration purposes. They showed a series of different results involving regulation of community functional structure, as defined by trait prevalences, that can be manifest by this effect. The first example demonstrated that in a single model community, a temporal fluctuation in one resource supply rate can induce a coordinated shift in all of the species abundances, though its effect on the trait abundances is restricted to the one trait, leaving the community’s functional structure otherwise unchanged. Second, in a model of guilds of consumers specialized on different resources, that is, each characterized by a different resource-consumption trait, the overall size of each guild was shown to be predicted directly by resource supply, across multiple community structures, while the species composition of each guild varied widely across communities. This follows from the definition of guilds which makes their sizes effectively identical to trait prevalences. Next was a model in which consumption traits were not partitioned into disjoint guilds, but shared in overlapping ways by consumers of multiple resources. In multiple model communities differing widely in species composition, the community-wide abundance of consumption traits was nonetheless seen to be uniform across communities when resource supplies were uniform. Finally, differences in resource supply in a model controlled experiment were shown to produce regular, predictable differences in trait prevalences across model communities while core functional traits corresponding to unaltered resources were held fixed, at the same time that all species abundances varied across communities and treatment groups in apparently irregular ways. All the above examples demonstrated conditions in which functional structure of a community is exactly determined by its environment, independent of its taxonomic composition. The theory that predicts these outcomes also predicts that functional structure will be approximately conserved across community structures if the conditions are nearly enough met, an important consideration when attempting to apply such results to the imprecise world of biology outside of models. These results demonstrate a mechanism by which functional structure can be predicted directly from environmental conditions in a simple case, bypassing the complexities of taxonomic variation. It should be read not as a faithful model to be applied directly to communities in the lab or field, but as a step toward a fuller theory to describe them. 
This paper offers a proof of concept that conservation of trait abundances can be explained by known models of community dynamics, and that functional observations of communities can describe and predict their behavior more parsimoniously than taxonomic observations. Interestingly, these results turn out to be insensitive to the range of different types of functional response curves that can be manifested by resource consumers. Instead, they require a condition of uniformity of response across consumer taxa. The different consumers’ responses to resource availability must satisfy a somewhat opaque consistency condition, to allow the total presence of resource consumption traits to be held in a consistent relation to resource supply at the same time that the rates of trait expression are as well. In the example model results presented above, this is achieved by assuming the functional response to each resource is the same among all consumers that use it, which is an especially simple way to satisfy the condition, regardless of the type of functional response. Note that the result does not require that all species included in community assembly meet these conditions, but only that they be met by the consumers that are present in the community at equilibrium. This work has multiple limitations. It does not apply directly to communities whose dynamics are shaped by interactions other than resource competition, for example bacteria-phage interactions, direct competition or facilitation between microbes, or host-guest interactions such as host immunity. While similar results may hold in these cases, they require expanded models to investigate them. The assumptions made here about the close mapping between trait expression and uptake rates are likely not satisfied in many cases, and should be unpacked to allow a fuller treatment of the subject. Spatial heterogeneity alters the behavior of consumer-resource models and must be studied separately, and can open up additional interesting questions such as the response of communities to spatially variable resource supply. Endogenous taxonomic heterogeneity driven by local dispersal may not imply comparable functional heterogeneity if underlying abiotic conditions are homogeneous. The analysis of equilibrium community structures also likely does not apply to many communities, and it may be worthwhile to expand the analysis to describe slow dynamics of community structure in conditions of constant immigration, seasonal or other temporal variability in the environment, or evolutionary change in which equilibrium is not attained. It is not obvious whether the conditions presented here for compatibility of vital rates across taxa to make trait abundances behave regularly are realistic for microbes. Having established that these are the necessary conditions in these models, if they are not considered believable, then this work serves to illuminate the questions that must be answered about how microbial communities diverge from these models, and how else their observed functional regularities can be explained. One avenue might be to investigate whether $R^*$ competition under conditions of high diversity can reduce a community without the closely matched $R^*$ conditions described here to a subcommunity in which such a condition is roughly though not precisely met. 
These results suggest a number of further questions to be investigated, such as the impact of more complex mappings between genetic pathways and resource uptake dynamics, and the dynamics of functional community structure in the presence of mechanisms such as direct microbe-microbe interactions or host immunity. The dynamics of traits involved in functions other than resource consumption is left to be studied, such as for example drug resistance or dispersal ability, as is the impact of evolutionary dynamics, including horizontal transfer, on the dynamics of functional composition. It may be productive to investigate whether a community’s need to regulate its functional composition in certain ways can lead to selection for certain kinds of genetic robustness, dispersal, horizontal transfer, or other characteristics. It would be of interest to study whether conditions selecting for sharing of traits across taxa can be distinguished from those selecting for specialization by taxon. The present study is offered as an initial investigation, presenting an existence proof of the ability of a community to regulate its functional composition independent of its taxonomic makeup, in hope it will open doors to further work. Community ecology theory often focuses on questions of import primarily to communities of plants and animals, examining models of interactions among a relatively small number of species, whose traits are stably defined, to explain patterns of coexistence and diversity. In microbial ecology, where organisms of different taxa share and exchange genes, and communities can be very diverse and variable in composition over time and space, theoretical questions particular to microbial ecology may be posed, potentially driving ecological theory into new and productive arenas. Acknowledgements ================ This study was partially supported by a Models of Infectious Disease Agent Study (MIDAS) grant from the US NIH/NIGMS to the University of California, San Francisco (U01GM087728). LW is grateful to Peter Ralph for a comment that motivated this project, and to PR, Todd Parsons, Travis Porco, Sarah Ackley, Rae Wannier, and several anonymous reviewers for helpful conversation and feedback. [10]{} Sunagawa S, Coelho LP, Chaffron S, Kultima JR, Labadie K, Salazar G, et al. Structure and function of the global ocean microbiome. Science. 2015 May;348(6237):1261359. Available from: <http://science.sciencemag.org/content/348/6237/1261359>. Louca S, Jacques SMS, Pires APF, Leal JS, Srivastava DS, Parfrey LW, et al. High taxonomic variability despite stable functional structure across microbial communities. Nature Ecology & Evolution. 2017 Jan;1(1):0015. Available from: <https://www.nature.com/articles/s41559-016-0015>. Turnbaugh PJ, Hamady M, Yatsunenko T, Cantarel BL, Duncan A, Ley RE, et al. A core gut microbiome in obese and lean twins. Nature. 2009 Jan;457(7228):480–484. Available from: <http://www.nature.com/nature/journal/v457/n7228/full/nature07540.html>. . Structure, function and diversity of the healthy human microbiome. Nature. 2012 Jun;486(7402):207–214. Available from: <https://www.nature.com/nature/journal/v486/n7402/abs/nature11234.html>. Gosalbes MJ, Abellan JJ, Durbán A, Pérez-Cobas AE, Latorre A, Moya A. Metagenomics of human microbiome: beyond 16s [rDNA]{}. Clinical Microbiology and Infection. 2012 Jul;18:47–49. Available from: <http://onlinelibrary.wiley.com/doi/10.1111/j.1469-0691.2012.03865.x/abstract>. 
Gosalbes MJ, Durbán A, Pignatelli M, Abellan JJ, Jiménez-Hernández N, Pérez-Cobas AE, et al. Metatranscriptomic [Approach]{} to [Analyze]{} the [Functional]{} [Human]{} [Gut]{} [Microbiota]{}. PLOS ONE. 2011 Mar;6(3):e17447. Available from: <http://journals.plos.org/plosone/article?id=10.1371/journal.pone.0017447>. Goldford JE, Lu N, Bajić D, Estrela S, Tikhonov M, Sanchez-Gorostiaga A, et al. Emergent simplicity in microbial community assembly. Science. 2018 Aug;361(6401):469–474. Available from: <http://science.sciencemag.org/content/361/6401/469>. Fukami T, Martijn Bezemer T, Mortimer SR, van der Putten WH. Species divergence and trait convergence in experimental plant community assembly. Ecology Letters. 2005;8(12):1283–1290. Available from: <http://dx.doi.org/10.1111/j.1461-0248.2005.00829.x>. MacArthur R, Levins R. Competition, [Habitat]{} [Selection]{}, and [Character]{} [Displacement]{} in a [Patchy]{} [Environment]{}. Proceedings of the National Academy of Sciences of the United States of America. 1964;51(6):1207–1210. Available from: <http://www.jstor.org/stable/72141>. Levin SA. Community [Equilibria]{} and [Stability]{}, and an [Extension]{} of the [Competitive]{} [Exclusion]{} [Principle]{}. The American Naturalist. 1970 Sep;104(939):413–423. Available from: <http://www.journals.uchicago.edu/doi/10.1086/282676>. Tilman D. Resource [Competition]{} and [Community]{} [Structure]{}. Princeton University Press; 1982. Holling CS. Some characteristics of simple types of predation and parasitism. The Canadian Entomologist. 1959;91(7):385–398. Holling CS. The components of predation as revealed by a study of small-mammal predation of the European pine sawfly. The Canadian Entomologist. 1959;91(5):293–320. Wang Z, Gerstein M, Snyder M. RNA-Seq: a revolutionary tool for transcriptomics. Nature reviews genetics. 2009;10(1):57. Appendix ======== Analysis of conditions for conservation of trait structure across composition {#app:analysis} ----------------------------------------------------------------------------- This section presents a mathematical analysis of the conditions under which the model presented above conserves community functional structure across variation in the selection of species composing the community. Let us assume $n_r$ resources, and a community defined by $n_s$ species coexisting at equilibrium, parametrized by constants $A_{ij}$, $c_{ij}$, and $m_i$, and the functions $f_{ij}(\mathbf{R})$. We wish to analyze conditions for the vectors of trait abundances $\mathbf{T}$ and $\mathbf{E}$ at equilibrium to be independent of the community composition. That is, imagine a large pool of species described by ${A}$, ${c}$, and $m$ values and response functions ${f}$: how must those values be constrained such that when a community is assembled from a subset of those species, the resulting trait abundances at equilibrium are the same regardless of which assemblage of those species was chosen. This section will analyze the model equations in matrix form. A model community is parametrized by constant matrices $\mathbf{A}$ and $\mathbf{c}$ and vectors $\mathbf{m}$ and $\mathbf{s}$, and a matrix $\mathbf{f}$ whose entries are functions $f_{ij}$ of the resource abundances $R_j$. The state of the model system is vectors $\mathbf{X}$ and $\mathbf{R}$, and derived vectors $\mathbf{T}$ and $\mathbf{E}$. Many of these objects have equilibrium values $\mathbf{X}^*$, $\mathbf{R}^*$, etc. $\mathbf{1}$ is a vector of ones. The operator $\odot$ stands for elementwise multiplication. 
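As a concrete reading of this notation, the following short sketch (Python, illustrative only) sets the objects up as arrays. The dimensions and parameter values are borrowed from the example models above, and the assumption that every consumer shares the same response to a resource is the special case used in those examples, not part of the general notation.

```python
import numpy as np

n_s, n_r = 24, 10                 # species and resources, as in the example communities
A = np.zeros((n_s, n_r))          # A_ij = 1 if species i carries the trait for resource j
c = np.full((n_s, n_r), 4.0)      # conversion coefficients c_ij
m = np.full(n_s, 6.4)             # mortality rates m_i
s = np.full(n_r, 2.0)             # resource supply rates s_j

def f(R):
    return R / (1.0 + R)          # placeholder response, shared by all consumers

def equilibrium_residuals(X, R):
    """Left-hand minus right-hand sides of the equilibrium conditions given next."""
    F = A * f(R)                      # the matrix A ⊙ f(R)
    res_X = (c * F).sum(axis=1) - m   # one condition per species (eq. eqX below)
    res_R = s - F.T @ X               # one condition per resource (eq. eqR below)
    return res_X, res_R

def traits(X, R):
    """Trait expression E and trait prevalence T (eqs. Edef and Tdef below)."""
    F = A * f(R)
    return F.T @ X, A.T @ X
```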
The relevant relations are as follows. First there are the equilibrium conditions for the $X$ variables, \[eqX\] $$(\mathbf{c}\odot\mathbf{A}\odot\mathbf{f})\,\mathbf{1} = \mathbf{m},$$ and for the $R$ variables, \[eqR\] $$\mathbf{s} = (\mathbf{A}\odot\mathbf{f})^T\,\mathbf{X}.$$ The trait abundances are defined in terms of the $X$ variables: \[Tdef\] $$\mathbf{T} = \mathbf{A}^T\,\mathbf{X},$$ \[Edef\] $$\mathbf{E} = (\mathbf{A}\odot\mathbf{f})^T\,\mathbf{X}.$$ It follows immediately from the equilibrium condition (\[eqR\]) that the equilibrium vector $\mathbf{E}^*$ of trait expression rates is entirely determined by the vector of supply rates $\mathbf{s}$, regardless of the population abundances $\mathbf{X}$ and per-consumer uptake rates $\mathbf{f}$ that are the components of the expression rates. Because these expression rates are defined equal to the rate of uptake of the resources, and equilibrium requires the rate of uptake to equal the rate of supply, no specific conditions on the structure of the community are needed to guarantee this result. For this reason, trait expression rates in this model are conserved across community structures. However, conditions for conservation of the vector $\mathbf{T}^*$ of rates of presence of traits are more restrictive and require more analysis.

The first equation (\[eqX\]) is solved by finding values of $f_{ij}$ that bring the two sides to equality. As written this equation is underdetermined, as there are $n_s$ conditions for $n_sn_r$ values $f_{ij}$. However, the $f$ variables are also constrained by their dependence on the $n_r$-dimensional vector $\mathbf{R}$. Because of that condition, the matrix $\mathbf{f}$ does not range freely over $n_sn_r$ dimensions, but over an $n_r$-dimensional submanifold of that space defined by the parametrization $\mathbf{f}(\mathbf{R})$: \[fR\] $$\mathbf{f} = \mathbf{f}(\mathbf{R}).$$ An equilibrium matrix of resource uptake rates $\mathbf{f}^*$ is found by solving (\[eqX\]) and (\[fR\]) simultaneously. The functional forms of the response functions $f_{ij}(\mathbf{R})$ can be substituted into (\[eqX\]) to yield a system of $n_s$ equations in the $n_r$ variables $R_j$. In the generic case, when $n_s=n_r$, this determines a unique solution vector $\mathbf{R}^*$, which determines the values of all entries of the matrix $\mathbf{f}^*=\mathbf{f}(\mathbf{R}^*)$. In other cases, multiple solutions for $\mathbf{R}^*$ and $\mathbf{f}^*$ may be possible. Given $\mathbf{f}^*$, equilibrium population sizes are described by (\[eqR\]). If the matrix $(\mathbf{A}\odot\mathbf{f}^*)^T$ is square and nonsingular, then the vector $\mathbf{X}^*$ of equilibrium population sizes is the unique solution of (\[eqR\]). If the matrix is singular, then there can be a space of solutions for $\mathbf{X}^*$. The trait abundances $\mathbf{T}^*$ must satisfy (\[Tdef\]) given equilibrium values of $\mathbf{X}$.

Now let us imagine that the community’s equilibrium trait abundances can be predicted from the resource supply alone, without dependence on the parameters describing the species in the community. The above relations show that given a community structure, both $\mathbf{T}^*$ and $\mathbf{s}$ are linearly related to $\mathbf{X}^*$. For their relation to be independent of the community, let us assume $$\mathbf{T}^* = \mathbf{K}\,\mathbf{s}$$ for some constant matrix $\mathbf{K}$. This implies that $$\mathbf{T}^* = \mathbf{K}\,(\mathbf{A}\odot\mathbf{f}^*)^T\,\mathbf{X}^*.$$ Comparing this to (\[Tdef\]), it can be satisfied if $$\mathbf{K}\,(\mathbf{A}\odot\mathbf{f}^*)^T = \mathbf{A}^T,$$ that is, $$\sum_k {A}_{ik}\,{f}_{ik}(\mathbf{R}^*)\,{K}_{jk} = {A}_{ij}$$ for each $i$ and $j$.
Given a community parametrized by the constant matrix $\mathbf{A}$ and the functional forms $f_{ij}()$, the above equation describes a set of $n_sn_r$ constraints on the resource concentrations $R^*_j$ and trait assignments $A_{ij}$, which must be satisfied at equilibrium simultaneously with the previously discussed constraints.

The above solves a general case of the problem, in which the entire vector $\mathbf{T}^*$ of trait abundances is determined by the full vector $\mathbf{s}$ of supply rates. The more restrictive case that each trait abundance $T^*_j$ depends only on the supply of resource $j$, rather than on all the resources’ supply rates, requires the matrix $\mathbf{K}$ to be diagonal. In this case, the condition becomes $${A}_{ij}\,{f}_{ij}(\mathbf{R}^*)\,{k}_{j} = {A}_{ij}$$ for each $i$ and $j$, where $k_j$ is the $j$’th diagonal entry of $\mathbf{K}$, or \[T-diag\] $${f}_{ij}(\mathbf{R}^*) = 1/k_j$$ for all $i$ and $j$ for which $A_{ij}$ is nonzero. This condition implies that for each resource, the equilibrium uptake rates $f^*$ of that resource must be equal across its consumer species, and equal to a value that is uniform across different community structures. Given that, the resource concentrations are those implied by these values of $f$, and the species abundances are a solution of (\[eqR\]). Note that the species abundances can vary depending on $\mathbf{A}$. Also, I note that this condition can permit more than $n_r$ species to coexist on $n_r$ resources, as it makes them compatible in a non-generic way.

Note also, however, that because the operations of pointwise multiplication and division, matrix multiplication, and matrix inversion are continuous in the values of all matrix entries, the above results have the property that if the above two conditions are nearly met, that is, if the $f$ and $A$ entries are within a suitably small distance $\varepsilon$ of values that satisfy the conditions exactly, then the trait abundances will be close to values that are exactly conserved. In other words, the functional regularity in question is approximately achieved when the conditions are nearly enough met. In this approximate but not exact case, the model does behave generically and its diversity can be expected to be limited by the number of resources.

If resources have multiple consumers apiece, the result that for each resource $j$, $f_{ij}(\mathbf{R}^*)=1/k_j$ across all consumers $i$ of resource $j$ does not require that the response function have a uniform form across consumers, $f_{ij}(\mathbf{R})\equiv f_j(R_j)$, but that is certainly one way it can be achieved. The example models in this paper are a special case of this condition, constructed by assigning some fixed number $p$ of traits to each species, and setting all consumers’ functional response curves for each resource equal, with $m_i \equiv m$ and $c_{ij}\equiv c$. In this case, (\[eqX\]) is satisfied by $f^*_{ij}=m/pc$ for all $i$ and $j$, which also satisfies condition (\[T-diag\]) with $k_j=pc/m$. Equilibrium resource abundances are $R^*_j = f_{ij}^{-1}(m/pc)$ for each $j$, for any consumer $i$, which is well-defined given that the functions $f_{ij}()$ are assumed independent of $i$. Under these assumptions, the above results imply that $T^*_j = p\hspace{.1em}c s_j/m$ for each $j$.

[^1]: One fewer than the number of traits, or seven, is the highest rank this matrix can attain, given the requirement that guild traits sum to one and all traits sum to four for every species.
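A quick numerical check of this special case is sketched below, assuming the matrix relations reconstructed above and the uniform parameters of the overlapping-traits example; the random trait matrix and seed are illustrative, and a feasible community would additionally require the solved abundances to be nonnegative.

```python
import numpy as np

rng = np.random.default_rng(0)
n_r, p = 10, 2                 # resources and traits per species
m, c = 6.4, 4.0                # uniform mortality and conversion coefficients
s = np.full(n_r, 2.0)          # uniform supply rates

# Square trait matrix with p traits per species, redrawn until it has full rank
# (community assembly in the examples is conditional on full rank).
while True:
    A = np.zeros((n_r, n_r))
    for i in range(n_r):
        A[i, rng.choice(n_r, size=p, replace=False)] = 1.0
    if np.linalg.matrix_rank(A) == n_r:
        break

f_star = m / (p * c)               # predicted uniform uptake rate, f*_ij = m/(pc)
F = A * f_star                     # the matrix A ⊙ f* appearing in eq. (eqR)

X_star = np.linalg.solve(F.T, s)   # abundances solving s = (A ⊙ f*)^T X

E_star = F.T @ X_star              # trait expression rates, eq. (Edef)
T_star = A.T @ X_star              # trait prevalences, eq. (Tdef)

print(np.allclose(E_star, s))              # expression rates match supply rates
print(np.allclose(T_star, p * c * s / m))  # T*_j = p c s_j / m, as derived above
```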
--- author: - 'J. Weratschnig' - 'M. Gitti' - 'S. Schindler' - 'K. Dolag' title: | The complex galaxy cluster Abell 514:\ New results obtained with the XMM - Newton satellite [^1] --- Introduction ============ It is now well accepted that the intra-cluster medium (ICM)in clusters of galaxies is magnetized. The magnetic fields can be traced by diffuse cluster wide synchrotron radio emission (Giovannini et al. 1991, 1993, Feretti 1999 and Feretti & Giovannini 2007) or Inverse Compton hard X-ray radiation caused by relativistic electrons. Additionally, an indirect measure of the strength of magnetic fields is the rotation measure (RM), in which radiation from background radio sources is studied: according to the strength of the magnetic field inside the cluster, the polarization angle of the radio emission is rotated. The different observations lead to the conclusion that magnetic fields in clusters of galaxies have strengths of a few $\mu$G (Carilli & Taylor 2002).\ Dolag et al. (2001) showed that a relation exists between the X-ray [**surface brightness**]{} and the root mean square scatter ($\sigma_{\rm RM}$) of the Faraday Rotation Measures ($S_{\rm X}$ - $\sigma_{\rm RM}$ relation) that are used to evaluate the strength of the magnetic field. This relation is an important tool to study the connection between the magnetic field and the intra-cluster gas density and temperature (Dolag et al. 2001). In particular clusters with polarized extended radio sources are of interest, because it is possible to evaluate the RM scatter well. More sources in one cluster give the possibility to get values for the magnetic field strength in different parts of the cluster, and are therefore very important observational objects to understand the relation between the magnetic field and the X-ray properties. In order to compare the magnetic field and other cluster properties at the position of each radio source an X-ray image is required. The surface brightness $S_{\rm X}$ and the RMS can be determined at the position of each radio source.\ Since Abell 514 has several radio sources that offer the possibility to study the $S_{\rm X}$ - $\sigma_{\rm RM}$ relation, it was chosen for our study. In this paper, we present results from three *XMM–Newton* observations of this cluster.\ Throughout the paper, a $\Lambda$CDM ($\Omega_{\Lambda}$ = 0.7 and $\Omega_{\rm m}$ = 0.3) cosmology with a Hubble constant of 70 km s$^{-1}$ Mpc$^{-1}$ was assumed. Connection of the magnetic field and the ICM density ---------------------------------------------------- The two observables $S_{\rm X}$ and the RMS scatter ($\sigma_{\rm RM}$) compare the two line of sight integrals: $$\label{integrals} S_{\rm X} \propto \int{n^{2}_{\rm e}\sqrt{T}dx} \leftrightarrow \sigma_{\rm RM} \propto \int{n_{\rm e} B_{\|}dx}$$ where n$_{\rm e}$ is the electron density and $B_{\|}$ the magnetic field component parallel to the line of sight. (Dolag et al. 2001; Clarke et al. 2001) [![The scatter of the root mean square of the Faraday Rotation measure (RMS) against the X-ray flux of a sample of clusters, for which both measurements are available.[]{data-label="rmvsflux"}](rmvsflux.eps "fig:"){width="\columnwidth"}]{} When $\sigma_{\rm RM}$ is plotted versus the X-ray flux a clear relation can be seen. This relation can be fitted by: $$\label{sigma_rm_relation} \sigma_{\rm RM} = A\Big(\frac{S_{\rm X}}{10^{-5} \mbox{erg}/\mbox{cm}^2/\mbox{s}} \Big)^{\alpha}$$ A simple interpretation of this relation (e.g. 
assuming the temperature within the ICM and the scale-length of the magnetic field to be fixed) is that the slope $\alpha$ reflects the scaling of the magnetic field strength ($B$) with the electron density ($n_e$). An exact relation between these two scalings, $B$-$n_{\rm e}$ and $\sigma_{\rm RM}$-$S_{\rm X}$, is derived in Dolag et al. (2001) assuming a simplified model for galaxy clusters. Note that the uncertainties in the 3D position of the individual sources (which are not known) lead to significant uncertainties in the derived $\sigma_{\rm RM}$ and therefore imprints a substantial scatter in the scaling relation. In fact this is the largest contribution to the the error bars we calculate for $\sigma_{\rm RM}$ (see Dolag et al. 2001 for details). Additionally, it seems that there is a suspected dependence on the cluster temperature: clusters with a high overall temperature also seem to have high $\sigma_{\rm RM}$ values (see Fig. \[rmvsflux\]). To study such matter in detail, clusters that contain radio sources have to be investigated very accurately in radio and X-rays. Abell 514 ========= The cluster of galaxies Abell 514 is of Rood-Sastry type F, richness class 1, and lies between type II and III in the Bautz-Morgan classification. The cluster was first identified by George Abell 1958 using the National Geographic Society Palomar Observatory Sky Survey (Abell 1958). In 1966 it was observed by Fomalont & Rogstad (1966) during a radio survey at the 21 cm line. Waldthausen et al. (1979) mapped this cluster using the wavelength $\lambda$ = 11.1 cm. The optical centre is indicated by Abell et al. (1989) at RA(J2000) 04:47:40 and DEC(J2000) -20:25.7. Earlier X-ray observations were performed with ROSAT and Einstein and revealed a highly interesting X-ray morphology (e.g. Govoni et al. 2001).\ This cluster is very special in several ways. A very prominent characteristic is the rich morphology that can be seen in ROSAT images. In contrast to a spherical, relaxed cluster Abell 514 seems to be in a phase of ongoing merging, making it an example for the study of dynamical events connected with cluster formation. Another important point is the fact that six extended radio sources lie inside the cluster. These radio sources were studied in detail by Govoni et al. (2001), who derived information on the strength and structure of the cluster magnetic field by starting from Faraday Rotation measurements. Three of these sources are within the central field of view of the [*XMM–Newton*]{} observations which we present in this paper.\ Govoni et al. (2001) found observational evidence for the existence of a strong magnetic field. The strength of the magnetic field was estimated to be 4-7 $\mu$G in the centre with a coherence length of 9 kpc. They also give the $\sigma_{RM}$ of the radio sources that can be seen in the cluster region. Three of them - B2, D North and D South - (Marked as B2, D north and D south in Fig. \[radio\_ps\]) are inside the field of view of the *XMM* observations and will be presented in this paper. The radio source B1 was found only marginally polarized by Govoni et al. (2001) and is not used as a data point for the $S_{\rm X}$ - $\sigma_{\rm RM}$ relation.\ [![The location of three of the radio sources within the cluster Abell 514 (D$_{\rm north}$ and D$_{\rm south}$ are measurements from the same source, but in two slightly offset positions). 
The other three radio sources lie outside the field of view of this X-ray observation.[]{data-label="radio_ps"}](Smo_IMA_B2DnDs_invert.eps "fig:"){width="\columnwidth"}]{}

Observations and data reduction
===============================

The data we analyse in this paper result from two different *XMM–Newton* pointings, split into three distinct observations. The first observation took place on 2003 February 7, the second on 2003 March 16. On 2005 August 15 the cluster was observed for a third time. All observations were performed with the European Photon Imaging Camera (EPIC) using the medium filter in full frame mode. Table \[expTime\] displays the exposure times for the individual observations.\
For the third observation, CCD number six of the MOS 1 camera was switched off because of an incident that occurred during revolution number 961 (the camera was hit by a micrometeoroid). Therefore, this camera is only used for our analysis when the studied area does not lie inside the affected region.

  \[expTime\]

  Camera        Obs. 1 (s)   Obs. 2 (s)   Obs. 3 (s)
  ----------- ------------ ------------ ------------
  MOS1 tot.          14963        14959        15571
  MOS1 eff.           9388         5269         5026
  MOS2 tot.          14963        14954        15580
  MOS2 eff.           9355         5585         5486
  PN tot.            13388        13337        14148
  PN eff.             5007         3506         3922

  : Total and effective exposure times

The data were reduced using SAS version 6.5. All three observations are heavily polluted by solar flares. The times with high count rates are therefore rejected. The rejection of times with high count rates is done by creating good time interval tables, defining an upper threshold for the count rates for each camera and observation. The times with count rates above the threshold are rejected and new data sets containing only the flare-free times are produced. This threshold was defined using the count rates in the high energy bands (10 - 12 keV for MOS1 and MOS2, 12 - 14 keV for the PN camera). Times where the count rate was high and also changing with time were cut out. We also examined how the exposure time changes with the threshold: this curve is at first steep for very low thresholds (which cut away most of the observation time) and becomes shallow at high thresholds (which cut away no observation time). A good criterion for choosing the threshold is to take the point where the slope starts to change. The original and resulting exposure times are listed in Table \[expTime\].\
To study the diffuse emission of the ICM, point sources are also removed. This is done by a combination of a source list provided by the Science Operations Centre (SOC) of XMM data processing and visual inspection. For each camera and observation, region files that are to be excluded from the further analysis are created. We also check if point sources are coincident with the radio sources. However, this is only the case for B2. For the flux calculation, the reduced area is taken into account.\
Also, the images are corrected for the vignetting effect. To achieve this, we use two different methods. For the image preparation - especially to get exposure-corrected mosaic images - we produce an exposure map and divide the images by this. Additionally, the method proposed by Arnaud et al. (2001) is used to correct for vignetting. Here, every photon is multiplied by a weight factor according to its position on the detector.\
Since the PN camera images have many bright columns, they are not used for the production of a mosaicked and smoothed image. However, for the spectral analysis, we use the data from this camera as well.
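The count-rate screening described above can be illustrated schematically. The actual good-time-interval tables were produced within SAS; the snippet below only sketches, on a binned high-energy light curve, one possible way to formalise the choice of the point where the exposure-versus-threshold curve flattens (the "knee" heuristic and the 10% figure are assumptions of this sketch, not the procedure used in the reduction).

```python
import numpy as np

def flare_threshold(time, rate, n_grid=200):
    """Schematic count-rate threshold from a binned high-energy light curve
    (e.g. 10-12 keV for MOS, 12-14 keV for PN)."""
    dt = np.median(np.diff(time))
    thresholds = np.linspace(rate.min(), rate.max(), n_grid)
    # Exposure time that would survive each candidate threshold.
    exposure = np.array([(rate <= th).sum() * dt for th in thresholds])
    slope = np.gradient(exposure, thresholds)
    i_steep = np.argmax(slope)                     # steepest rise: the quiescent rate level
    flat = slope[i_steep:] < 0.1 * slope[i_steep]  # where a higher threshold buys little exposure
    i_knee = i_steep + (np.argmax(flat) if flat.any() else len(flat) - 1)
    return thresholds[i_knee]

# usage sketch: `t`, `r` = bin centres and count rates of the high-energy light curve
# threshold = flare_threshold(t, r)
# keep = r <= threshold          # time bins retained in the flare-free event lists
```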
The areas that show bright pixels or columns are removed by using a mask.\ Another important reduction step is the correct background subtraction. The *XMM* background consists of three parts (a cosmic X-ray background (CXB), the background produced by soft proton flares and a non X-ray cosmic background (NXB) induced by high energy protons). The soft proton flares are already removed from the data files in the first reduction step, when the flare free event files are produced. To get rid of the CXB and the NXB we use the double-subtraction method proposed by Arnaud et al. (2002) throughout the spectral analysis. Results ======= Morphological Analysis {#morph_analysis} ---------------------- To study the structure of the cluster in detail, we produce a mosaic image of the MOS cameras of all three observations using the energy band between 0.3 and 10 keV (see fig. \[Mosaic\]). This image is smoothed using an adaptive smoothstyle and a signal to noise ratio (SNR) of 40 (see fig. \[smoIma\]). The adaptive smoothstye is especially created for poissonian images like X-ray images. Here, every pixel is assigned with a desired SNR and is then smoothed towards this SNR by a weighted cyclic convolution. We tried smoothing the image with different SNR and settled for a SNR of 40, because with this value the structure of the image is kept and the borders are not smoothed or enhanced in brightness too much. [![A mosaic image of the MOS cameras of the three different observations. The image is exposure corrected. The field of view of the observation is about 37$\times$28 arcmin.[]{data-label="Mosaic"}](mosaic_image_inv.eps "fig:"){width="\columnwidth"}]{} The size of the whole field of view of the observation has a length of 37 and a width of 28 arcmin. This corresponds to a size of 3.0 Mpc x 2.3 Mpc. The ICM emission seems to be elongated along a filament/main axis over the length of 1.6 Mpc. In the direction perpendicular to this axis, the cluster emission can be detected out to 0.8 Mpc.\ The X-ray centre lies at RA 04:48:04 (J2000) and DEC -20:26:42 (J2000). The area with the brightest X-ray emission is not a clear point-like feature. This might be the main reason that this value differs from the result of earlier observations (Govoni et al. 2001), which give the X-ray centre at RA 04:48:13 (J2000) and DEC -20:27:18 (J2000). It also depends on the used smoothing method. The most important point to mention here is however the differently sized point spread function (PSF) of ROSAT and *XMM–Newton*: ROSAT’s PSF is considerably larger (about 1 arcmin vs. 5-6 arcsec). This together with the different smoothing methods applied can explain the offset between the two positions for the X-ray centre. Especially with a cluster as inhomogeneous as Abell 514 the exact positioning of a centre is very dependant on smoothing techniques and detector sensibility.\ In Fig. \[opticalXrayCon\] we show the X-ray contours superposed on an optical image of the cluster (image taken from Aladin Previewer, Space Telescope Science Institute). The two subclumps that can be seen in the X-ray image correspond to the galaxy distribution of the optical image. The X-ray centre is offset with respect to the optical centre, which is at RA 04:47:40 (J2000) and DEC -20:25.70 (J2000) (Abell et al. 1989). This offset can be explained by the fact that Abell 514 is a merger cluster. 
If we assume that the Northwest peak has undergone a merger in recent times (more evidence for this scenario is also discussed in section \[spec\_analysis\] and \[discussion\].) the fact that the galaxy and gas distributions are offset is not surprising. [![The optical image overlayed with the X-ray contours. The two main X-ray clumps correspond well with the distribution of the galaxies, especially the area around the most X-ray bright emission shows the highest density in galaxies.[]{data-label="opticalXrayCon"}](optical_XrayCon_Centres_blob_invert.eps "fig:"){width="\columnwidth"}]{} The rich substructure that hints at a merger cluster can be seen clearly in Fig. \[smoIma\]. To the Northwest of the main cluster a small blob-like feature is also visible. In the optical image there are galaxies with cluster redshift seen in the area of this blob. Therefore we conclude that this is most likely another subpart of the cluster, which is infalling along the main axis and will merge with the cluster. It is about 500 kpc away from the closest part of the rest of the cluster and no connection can be seen towards the cluster. The brightest peak of the main cluster shows a steeper decline in surface brightness in the outwards direction than in the direction towards the second X-ray peak. This feature will be addressed later (see Sect. 5.1). [![A smoothed image of the whole cluster. This image is corrected for vignetting and smoothed with an adaptive smoothstyle with a signal to noise ratio of 40. It shows two subclumps and the overall elongated shape of the cluster. The size of the box is about 25 arcmin ($\sim$ 2.05 Mpc). The elliptical region indicates the area where we extracted a spectrum for the whole cluster.[]{data-label="smoIma"}](cluster_grey.eps "fig:"){width="\columnwidth"}]{} Around both main peaks visible in the image, the X-ray brightest one to the Northwest (NW) and the second brightest one to the Southeast (SE) of the cluster, we extract a surface brightness profile (see Fig. \[where\_SB\]). In both cases we chose regions that seem to be mostly unaffected by the merger between those two subparts. To do this, we selected the areas where no obvious substructures can be seen in the image (see Fig.\[where\_SB\]). In particular, we adopted wide-angular regions pointing outwards from the area connecting the two peaks, where instead substructures can be seen both in the image and in the temperature map (see Fig.\[Tmap\]). To correct for vignetting, a weight factor is applied to the data. The background is again subtracted using the double background subtraction method.\ The profile for the NW peak is shown in Fig. \[NW\_peak\]. Apart from one bump around $\sim$1.5 arcmin from the centre, the profile around the NW peak does not show any irregularities like bumps or similar structures. It is noticeable, that the decline between roughly 1.0 and 2.5 arcmin from the centre is steep compared with a relaxed cluster. For a relaxed cluster, the surface brightness profile can be fitted very well with a single $\beta$ profile: $$\label{betaprofile} S_{\rm X}(r)=S_{\rm X,0} \Big[1+ \Big( \frac{r}{r_{\rm c}} \Big)^{2}\Big]^{(0.5 - 3\beta)}$$ Here, $S_{0}$ is the central surface brightness, $r_{\rm c}$ the core radius and $\beta$ the slope parameter. In the case of a relaxed cluster, $\beta$ has a value of roughly 0.6. If we try to fit the profile of Abell 514 with a single $\beta$ profile, we get a value of 1.98 for $\beta$. 
This again shows that it is not a relaxed cluster part, although no substructure is seen. The steep decline will be discussed later. [![The image shows the regions considered for deriving the surface brightness profiles around the NW and SE peaks. Each area was divided in different annuli and the gaps in the detector and point sources were masked before extracting the surface brightness profiles.[]{data-label="where_SB"}](Smo_IMA_sb_prof_regions_invert.eps "fig:"){width="\columnwidth"}]{} We attempted a similar analysis around the SE peak. We choose five annuli around the center (see Fig. \[where\_SB\]) in a direction away from the connection towards the other peak. However, this analysis was complicated by the low count rates in this region. We got indications that the surface brightness profile around the SE peak is shallower than the NW one. [![The surface brightness profile around the NW peak (the X-ray brightest). The profile declines rapidly outside $\sim$ 2 arcmin, a feature most likely caused by a shock due to a merger (see Sect. 5.1).[]{data-label="NW_peak"}](NW_peakProfile.eps "fig:"){width="\columnwidth"}]{} Spectral Analysis {#spec_analysis} ----------------- As a first step we obtain the temperature and metallicity for the whole cluster. To get this information, we extract a spectrum in the elliptical region shown in Fig.\[smoIma\]. This is done separately for each camera and observation to maximize the signal to noise ratio. The background is subtracted using the double subtraction method proposed by Arnaud et al. (2002). The spectra are then loaded into Xspec and fitted with a redshifted [MeKaL]{} model. To include the Galactic absorption, the Tuebinger Absorption model ([tbabs]{}) was used.\ The energy range for the spectra was between 0.5 and 8.0 keV. This energy range was chosen because the distinct cameras have the best agreement in the results in this range. The redistribution matrix files (RMF) we use are calculated for the MOS cameras using the SAS task “rmfgen”. For the PN camera we adopted the canned matrix [epn[\_]{}ff20[\_]{}sY9[\_]{}v6.8.rmf]{}. The cluster temperature is 3.8 $\pm$ 0.2 keV, which is consistent with the value of $\sim$ 3.6 keV estimated from the L-T relation (Govoni et al. 2001). The overall cluster metallicity is 0.22 $\pm$ 0.07 in solar units. [^2]\ To study the temperature and metallicity distribution in detail, we divide the cluster into four regions and extract a spectrum in each one. This is done for all three observations for all cameras. Again, the resulting spectra are fitted in Xspec with a [MeKaL]{} model. Fig. \[RegionsTmap\] shows the regions where the spectra were extracted. The regions are chosen to contain a comparable photon signal and also give comparable statistics. The region numbers are defined in the following way: region 1 = outer region, region 2 = box around NW peak, region 3 = area between the two peaks, region 4 = box around SE peak. The final values for temperature and abundance do not change if those areas are moved around, as long as they cover the area around the NW peak, the region between the two peaks, the SE peak and the outskirts of the cluster. With the regions we give here, we are able to collect most photons per area and get better statistics then e.g. choosing circles as regions.\ [![The four regions where temperature and abundance were estimated. 
The numbers correspond to the region numbers given in the temperature and metallicity diagrams.[]{data-label="RegionsTmap"}](where_Tmap.eps "fig:"){width="\columnwidth"}]{} By comparing the temperature and metallicity distribution we are able to study the dynamical state of the cluster. In Fig. \[Tmap\] the temperature map which is calculated using spectra in different regions of the cluster is shown. Three regions with different temperature along the axis of the cluster can be seen, as well as a cooler outer region. [![The temperature map of the cluster. Overlayed are the contours of the X-ray surface map. The circle represents the region where a subpart of the cluster is visible in the raw and smoothed images (see Figs. 3 and 5). This area was excluded from the spectral analysis.[]{data-label="Tmap"}](Tmap+contours_final_invert.eps "fig:"){width="\columnwidth"}]{} The hottest region is the box number four which is located around the SE peak. It is also the one with the highest metallicity, as can be seen in the second diagram in Fig. \[TdisAbunDis\]. The right panel in Fig. \[TdisAbunDis\] shows the metallicity distribution in the cluster. We see that the SE peak has a higher metallicity than the rest of the cluster.\ Inside the error bars the temperatures derived for the NW region and the middle region can be seen as having the same temperature as the outside region. There is a trend in the cluster to have higher temperatures in the SE. The region around the SE peak is clearly the hottest of the whole cluster. [![The temperature and the metallicity distribution in the cluster. The region numbers are as defined in Fig. \[RegionsTmap\].[]{data-label="TdisAbunDis"}](TemDistr.eps "fig:"){width="4.45cm"}]{} ![The temperature and the metallicity distribution in the cluster. The region numbers are as defined in Fig. \[RegionsTmap\].[]{data-label="TdisAbunDis"}](AbunDistr.eps "fig:"){width="4.45cm"} The difference in metallicity between regions two and three compared to region four, can be seen as a sign that those parts of the clusters have not yet had the possibility to merge and are still infalling towards a common centre. As has been shown by Kapferer et al. (2006), a cluster has steeper gradients in metallicity before the merger process. When the subclusters have finally merged, their metallicity is smoothly distributed.\ Another way to study the temperature distribution is via hardness-ratio maps. Such a map is also produced for this cluster from four different energy bands (0.3-1, 1-2, 2-4.5, 4.5-8 keV). Only the MOS1 camera of the first observation could be used for this due to technical reasons. Therefore, the count rates are very low compared to the other method and only relative differences in temperature but no absolute values can be shown. The temperature map is presented to show that the temperature distribution is very inhomogeneous. This hints at a merger cluster which is not yet relaxed but in the first stages of merging (Fig. \[HRtmap\]).\ [![The temperature map created using the hardness ratio of images in four different energy bands. See text for details.[]{data-label="HRtmap"}](HR-tmap3.eps "fig:"){width="\columnwidth"}]{} The region of the brightest X-ray peak is cool, which is in good agreement with the spectral result that also gives a low temperature for this part of the cluster. The second brightest X-ray peak has a higher temperature, again corresponding to the spectral result that gives a higher temperature for the area around this peak. 
The region between the two peaks seems to be a mix of high and low temperatures, corresponding to the mean temperature of the spectral result. The separate areas with different temperatures between the two X-ray clumps cannot be seen using the spectral method, since we do not have enough photons to produce a spectrum that can be fitted reliably. We therefore see the mixing of the different temperatures. The regions in the outer parts of the cluster have too low count rates to give reliable results.

Mass determination
------------------

When we assume hydrostatic equilibrium and spherical symmetry, it is possible to calculate the mass of a galaxy cluster using the temperature and the density profiles. Although Abell 514 is a very active merger cluster and is neither in hydrostatic equilibrium nor spherically symmetric, we try to use these assumptions to calculate the mass of two subparts of the cluster. These two parts are the regions around the two X-ray brightest peaks. They show a separated emission and can be approximated as spherically symmetric in a first, rough step.\
The total mass is given by the equation: $$\label{hydroEquation} M_{\rm tot}(r) = -\frac{kT}{G\mu m_{\rm p}}r\Big[\frac{d \ln n_{\rm e}}{d \ln r} + \frac{d \ln T}{d \ln r}\Big]$$ where $k$ is the Boltzmann constant, $T$ the gas temperature, $G$ the gravitational constant, $\mu$ the mean molecular weight of the gas ($\mu$ $\approx$0.6), $m_{\rm p}$ the proton mass and $n_{\rm e}$ the electron density.\
If the ICM follows a $\beta$-model, the electron density can be written as: $$\label{electronDensity} n_{\rm e}(r) = n_{\rm e0}\Big[1+\Big(\frac{r}{r_{\rm c}}\Big)^2\Big]^{-\frac{3}{2}\beta}$$ The values for $\beta$ and $r_{\rm c}$ are the values obtained by fitting a $\beta$ profile to the surface brightness of the cluster.\
Inserting equation \[electronDensity\] into equation \[hydroEquation\] yields: $$\label{masseNonIsothermal} M_{\rm tot}(r) =-\frac{kr^2}{G\mu m_{\rm p}}\Big[\frac{dT}{dr} - 3\beta T\frac{r}{r^2+r_{\rm c}^2}\Big]$$ Assuming that the cluster is isothermal inside a certain radius, $\frac{dT}{dr}$ is zero. The final equation to calculate the mass inside a certain radius is therefore: $$\label{masseIsothermal} M_{\rm tot}(r) = \frac{3k\beta}{G\mu m_{\rm p}} T\frac{r^3}{r^2+r_{\rm c}^2}$$ With the values we obtain by fitting a single $\beta$ model to the surface brightness profiles of the two brightest peaks, we are able to give at least a very rough first estimate of the masses. Since we can extract the profile of the second brightest peak only out to 5.7 arcmin ($\sim$ 490 kpc), we use this radius to calculate the mass for both regions. Using equation \[masseIsothermal\] and the results from the spectral analysis for the temperature in the different parts of the cluster (regions 2 and 4, see above), the mass of the X-ray brightest part inside a radius of $\sim$490 kpc is about 3.0 $\times$ 10$^{14}$ M$_{\odot}$, while the second clump has a mass of about 6.5 $\times$ 10$^{13}$ M$_{\odot}$. This can only be seen as a crude first guess of the masses. The X-ray brightest part also seems to be the most massive one. This result can be expected from the L$_X$ - Mass relation.
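Equation \[masseIsothermal\] is straightforward to evaluate numerically. In the sketch below the constants are in SI units; the $\beta$ and core-radius values in the example call are placeholders, since not all fitted values are quoted explicitly in the text.

```python
import numpy as np

k_B   = 1.380649e-23      # J / K
G     = 6.674e-11         # m^3 / (kg s^2)
m_p   = 1.6726e-27        # kg
mu    = 0.6               # mean molecular weight of the gas
kpc   = 3.0857e19         # m
M_sun = 1.989e30          # kg
keV_to_K = 1.1605e7       # 1 keV in Kelvin

def isothermal_mass(r_kpc, T_keV, beta, r_c_kpc):
    """Total mass within r from the isothermal beta-model expression above, in M_sun."""
    r, r_c = r_kpc * kpc, r_c_kpc * kpc
    T = T_keV * keV_to_K
    M = 3.0 * k_B * beta * T * r**3 / (G * mu * m_p * (r**2 + r_c**2))
    return M / M_sun

# Illustrative call only; beta and r_c below are placeholders, T is the region-2 value.
print("%.1e M_sun" % isothermal_mass(r_kpc=490, T_keV=3.2, beta=1.0, r_c_kpc=100))
```

With the fitted $\beta$ and core radius of each peak and the corresponding region temperatures, this expression yields masses of the order quoted above.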
Discussion
==========

Candidate for a cold front or a shock?
--------------------------------------

A prominent morphological structure of Abell 514 is a steep decline in X-ray surface brightness towards the Northwest region. This can be seen as a sharp edge in the image (see Fig. \[smoIma\]), as well as a quick drop of the surface brightness profile outside $\sim$ 2 arcmin (see Fig. \[NW\_peak\]). Possible explanations for such a feature can be either a cold front or a shock caused by the merger process. Similar features were found by Markevitch et al. (2000) and Vikhlinin et al. (2002) in the clusters Abell 2142 and Abell 3667. Another example for a similar structure was also found in Abell 2256 by Sun et al. (2002). During a cluster merger, a cool core of a subpart of a cluster can survive the merging process. This is characterised by the fact that the temperature inside a brightness edge is lower than in the surrounding region. The other explanation for a feature like the one seen in Abell 514 would be a shock where the material is compressed.\
To test if the edge in Abell 514 is caused by a cold front or a shock, we study two regions, one inside and one outside the edge visible in the smoothed X-ray image (Fig. \[smoIma\]), with respect to their density and temperature. The regions used for this analysis are shown in Fig. \[where\_depr\].

[![Regions considered for the deprojection analysis in order to investigate the nature of the drop in surface brightness.[]{data-label="where_depr"}](where_depr.eps "fig:"){width="\columnwidth"}]{}

Region 1 is the region inside the “edge”, while region 2 is the area in the outer part. We apply the deprojection method by using the Xspec model [projct]{} to calculate densities inside and outside of this border. Also, the temperatures in both regions were calculated and compared with each other. The results are shown in Table \[ColdFront\_Shock\].

  \[ColdFront\_Shock\]

                                     Region 1          Region 2
  -------------------------------- ----------------- -----------------
  Density \[10$^{-3}$cm$^{-3}$\]   0.91 $\pm$ 0.11   0.51 $\pm$ 0.06
  Temperature \[keV\]              4.5 $\pm$ 0.8     3.6 $\pm$ 0.5

  : The density and temperature inside (Region 1) and outside (Region 2) the brightness edge.

The temperature inside the border is slightly higher than outside, but no jump in temperature can be deduced from our data, especially not a jump from a cool core to a warmer surrounding. Within the error bars, both temperatures can be seen as the same. Therefore the discontinuity in surface brightness cannot be caused by a cold front. The density, however, shows a clear discontinuity. It is therefore possible that the brightness jump is due to a shock. Such a shock can be the result of an earlier merger, with the different structures not distinguishable by eye any more.\
The visible interaction between the SE peak and the NW one is most likely not responsible for this feature. We see that the metallicities between the two peaks are very different. It is therefore plausible that they have not merged yet and cannot cause the feature in the surface brightness seen in the NW peak.\
Another possibility could be an interaction of the main X-ray peak with the small blob from the south east part. But since this structure is still 500 kpc away from the main cluster and no connection between the two parts can be seen, we do not expect to see any interaction effects yet between those parts.

The $S_{\rm X}$ - $\sigma_{\rm RM}$ relation and the magnetic field
-------------------------------------------------------------------

According to theory (Tribble 1993, Dolag et al. 1999), the magnetic field is amplified in a hot merger cluster. The $S_{\rm X}$ - $\sigma_{\rm RM}$ relation is clearly dependent on the temperature of the cluster (see Sect. 1.1).
For Abell 514, this general trend can be studied. Although Abell 514 is a merger cluster, its magnetic field is still quite low. This is in good agreement with the low overall temperature of the cluster. Still, compared to other cool clusters, Abell 514 shows a slightly higher $\sigma_{\rm RM}$, which is most likely due to the ongoing merger that already enhanced the magnetic field.\ One main aim of the *XMM–Newton* observations was to get new values for the X-ray flux in the regions where the radio sources are located. It has to be mentioned that the true location of these radio sources inside the cluster is not known. This fact is taken into account in the errors given for the $\sigma_{\rm RM}$ value. The error bars cover the range of values between a source located in the cluster center and one behind the cluster. [![The X-ray flux - $\sigma_{\rm RM}$ relation with the new data points for Abell 514. The new values are in good agreement with the general slope of the relation[]{data-label="RMS_neu"}](RMS_XrayFlux_neu.eps "fig:"){width="\columnwidth"}]{} The new values are then compared to the results from measurements of the magnetic field (via the $S_{\rm X}$ - $\sigma_{\rm RM}$ relation). They fit well with other measurements from Coma, A119 etc. (see Fig. \[RMS\_neu\]). Fig. \[RMS\_neu\] shows the results from the new measurements, with the data points for the other clusters (Coma, A119, etc.) converted to the same energy band. In general, the data points obtained for A514 are in good agreement with the relation found from the rest of the clusters. The lines in Fig. \[RMS\_neu\] represent the correlations for the distinct clusters. We see that the data points of Abell 514 lie above the correlations of all the other clusters. This can be seen as an indication for an amplification of the magnetic field due to the ongoing merger in Abell 514. Overall, the points from Abell 514 make the whole correlation (if we use the observational data) less steep. Without the data points from Abell 514, the slope parameter is 1.19, while it is 0.98 with them. Within an error of 10%, both values agree. To avoid any instrumental bias in this study in the future, we plan to obtain [*XMM–Newton*]{} data for the other clusters in this sample as well.\ Additionally, with the creation of a temperature map, it is possible to compare the strength of the RMS scatter $\sigma_{\rm RM}$ from the rotation measures with the temperature of the ICM in the area of the radio source. Table \[RMT2\] shows the results.

  Region Nr.                                                        T (keV)         $\sigma_{\rm RM}$ (rad/m$^{2}$)
  ----------------------------------------------------------------- --------------- -----------------------------------------------------
  Region 2 (includes Radio source B2)                                3.2 $\pm$ 0.2   63 $^{+16}_{-41}$
  Region 4 (includes Radio sources D$_{north}$ and D$_{south}$)      4.9 $\pm$ 0.4   54 $^{+12}_{-21}$ (north), 38 $^{+10}_{-23}$ (south)

Here, $\sigma_{\rm RM}$ is lower in the hottest region and higher in the cool, X-ray brightest part, which seems to be the most relaxed part of the cluster. However, this is not in contradiction with the above relation. Inside the cluster, more complicated effects take place in addition to the overall properties, which cannot yet be resolved with the current observations. Also, the RMS measurements are taken from a smaller area than the spectra we use to deduce the temperature. Small-scale fluctuations inside these regions are therefore possible and not taken into account in Table \[RMT2\]. Summary ======= We performed a detailed study of the X-ray emission of the merger cluster Abell 514.
Three pointings by the *XMM–Newton* telescope were analysed to study the properties of this cluster, especially the dynamical state and the relation between the X-ray flux and the RMS of the rotation measure produced by the magnetic field inside the cluster.\ The image of Abell 514 shows the rich substructure of the cluster, a clear sign for an ongoing merger. Two main X-ray bright peaks can be seen with a connection between them. The brightest peak also shows signs for a shock, most likely caused by a recent merger.\ We found the overall cluster temperature to be 3.8 $\pm$ 0.2 keV. This value is in good agreement with the one from the L-T relation (3.6 keV). The cluster metallicity is 0.22 $\pm$ 0.07 solar units.\ Additionally to the calculation of overall values for the temperature and the metallicity we are able to produce rough temperature and metallicity maps. To achieve this, we divide the cluster in four different regions and extracted spectra therein. With the help of these maps, we can study the dynamical state of the cluster in more detail. It appears that the two main visible subclumps have not had time to merge yet. Their temperatures and metallicities have significantly different values. The brightest part in the Northeast shows a steep decline that could be caused by a shock due to an earlier merger. We divide this area into two regions to calculate the density and temperature inside and outside the visible edge. The obtained values indicate that the brightness edge is indeed caused by a shock.\ The X-ray flux is determined in the regions where extended radio sources are. These radio sources enable the measurement of the scatter of the Faraday Rotation measures which is due to the strength of the magnetic field. They are related with the X-ray flux. With the *XMM–Newton* observations we are able to add new points to this $S_{\rm X}$ - $\sigma_{\rm RM}$ relation. The new data points fit well in the model predicted by Dolag et al. (2001).\ The low overall temperature also confirms the relation between the ICM temperature and the magnetic field strength (lower temperature clusters have generally smaller magnetic fields). This can also be seen as a sign that the cluster is still in an early stage of the merger and has not been heated up yet, nor has the magnetic field been enhanced by the merger. Acknowledgments {#acknowledgments .unnumbered} =============== We wish to thank the referee for helpful comments. We thank E. Pointecouteau for the help with the spectral temperature map in Fig. \[Tmap\] and S. Ettori for providing the software required to produce the hardness ratio map in Fig. \[HRtmap\]. We also thank C. Sarazin for fruitful discussions and help with the topic. M. Gitti acknowledges support by grant ASI-INAF I/088/06/0. J. Weratschnig thanks the European Science Foundation (ESF). S. Schindler acknowledges the Austrian Science Foundation FWF grants P19300-N16 and P18523-N16. Abell, G. O. 1958, , 3, 211 Abell, G. O., Corwin, H. G., Jr., & Olowin, R. P. 1989, , 70, 1 Arnaud, M., Neumann, D. M., Aghanim, N., Gastaud, R., Majerowicz, S., & Hughes, J. P.2001, , 365, L80 Carilli, C. L., & Taylor, G. B. 2002, , 40, 319 Clarke, T. E., Kronberg, P. P., & Böhringer, H. 2001, , 547, L111 Dolag, K., Bartelmann, M., & Lesch, H. 1999, , 348, 351 Dolag, K., Schindler, S., Govoni, F., & Feretti, L. 2001, , 378, 777 Feretti, L., Dallacasa, D., Govoni, F., Giovannini, G., Taylor, G. B., & Klein, U. 1999, , 344, 472 Fomalont, E. B., & Rogstad, D. H. 
1966, , 146, 528 Giovannini, G., Feretti, L., & Stanghellini, C. 1991, , 252, 528 Giovannini, G., Feretti, L., Venturi, T., Kim, K.-T., & Kronberg, P. P. 1993, , 406, 399 Govoni, F., Taylor, G. B., Dallacasa, D., Feretti, L., & Giovannini, G. 2001, , 379, 807 Feretti, L., & Giovannini, G. 2007, ArXiv Astrophysics e-prints, arXiv:astro-ph/0703494 Kapferer, W., Ferrari, C., Domainko, W., et al. 2006, , 447, 827 Markevitch, M., Gonzalez, A. H., David, L., Vikhlinin, A., et al. 2000, , 541, 542 Sun, M., Murray, S. S., Markevitch, M., & Vikhlinin, A. 2002, , 565, 867 Tribble, P. C. 1993, , 263, 31 Vikhlinin, A. A., & Markevitch, M. L. 2002, Astronomy Letters, 28, 495 Waldthausen, H., Haslam, C. G. T., Wielebinski, R., & Kronberg, P. P. 1979, , 36, 237 [^1]: Based on observations obtained with *XMM–Newton*, an ESA science mission with instruments and contributions directly funded by ESA member states and NASA. [^2]: The [MeKaL]{} fit gives a reduced $\chi^2 \sim 1.8$.
--- abstract: 'Relying only on the standard model of elementary particles and gravity, we study the details of a new source of gravitational waves whose origin is in quantum physics. Namely, it is well known that massless fields in curved backgrounds suffer from the so-called “trace anomaly". This anomaly can be cast in terms of new scalar degrees of freedom which take account of macroscopic effects of quantum matter in gravitational fields. The linearized effective action for these fields describes scalar (as opposed to transverse) gravitational waves, which are absent in Einstein’s theory. Since these new degrees of freedom couple directly to the gauge field scalars in QCD, the epoch of the QCD phase transition in early universe is a possible source of primordial cosmological gravitational radiation. While the anomaly is most likely fully unsuppressed at the QCD densities (temperature is much higher than the u and d quark masses), just to be careful we introduced the window function which cuts-off very low frequencies where the anomaly effect might be suppressed. We then calculated the characteristic strain of the properly adjusted gravitational waves signal today. The region of the parameter space with no window function gives a stronger signal, and both the strain and the frequencies fall within the sensitivity of the near future gravitational wave experiments (e.g. LISA and The Big Bang Observer). The possibility that one can study quantum physics with gravitational wave astronomy even in principle is exciting, and will be of value for future endeavors in this field.' author: - 'De-Chang Dai$^{1,2}$[^1], Dejan Stojkovic$^3$' title: Primordial scalar gravitational waves produced at the QCD phase transition due to the trace anomaly --- Introduction ============ Recent detection of gravitational waves opened a new window for exploration of our universe [@Abbott:2016blz]. For the first time we can directly study violent events like black holes (and other compact objects) mergers [@Mandic:2016lcn; @Bhagwat:2016ntk; @Clesse:2016ajp], or collapse of massive stars [@Crocker:2017agi]. What is perhaps even more important, primordial gravitational waves can give us information about the early universe that is impossible to obtain from photons even in theory. For example, gravitons emitted during Hawking evaporation of primordial black holes should be observed as (appropriately redshifted) gravitational waves today [@Dong:2015yjs; @Anantua:2008am]. This is perhaps our best bet to ever observe effects of Hawking radiation from astrophysical black holes. We can also learn about the high energy fundamental physics above the electroweak phase transition. Namely, if the dimensionality of the space-time changes at high temperatures, then the physics of the propagation of gravitational waves might change. In the context of the so-called “vanishing dimensions" models, the solution to the standard model hierarchy problem requires the reduction of number of dimensions just above the electroweak scale[@Anchordoqui:2010er]. Since there are no propagating degrees of freedom in Einstein’s gravity in less than three spatial dimensions, that would imply a cut-off at some frequency in the spectrum of primordial gravitational waves [@Mureika:2011bv; @Stojkovic:2014lha]. Alternative theories of gravity have been analyzed in [@Yunes:2013dva]. For other applications, see review in [@Yunes:2016jcc]. 
The goal of this paper is to study some unique predictions of the standard model of elementary particles coupled to gravity. It is well known that massless fields in curved backgrounds suffer from the so-called “trace anomaly". This anomaly induces the non-local effective action which, however, can be cast into local form with the help of some additional scalar degrees of freedom [@Mottola:2006ew; @Giannotti:2008cv]. These fields take account of macroscopic effects of quantum matter in gravitational fields, which are not contained in the local metric description of Einstein’s theory. Despite the fact that the existence of these fields follows directly from the standard model and general relativity (with no exotic physics), their consequences and phenomenology have not been extensively studied so far. The linearized effective action for these fields describes scalar gravitational waves, which are absent in Einstein’s theory. Since they couple directly to the gauge field scalars, such as $G_{\mu \nu}^a G^{a \mu \nu}$ in quantum chromodynamics (QCD), mergers of dense sources like neutron stars can give rise to these scalar gravitational waves. Some rough estimates for dense sources were given in [@Mottola:2016mpl]. In this paper we study an alternative source of the scalar gravitational waves. Namely, in the early universe at temperatures higher than $150$ MeV, the QCD anomaly becomes unsuppressed, at least in some frequency range. This epoch of the QCD phase transition is a possible source of primordial cosmological (scalar) gravitational radiation, in addition to the standard tensor gravitational waves [@Witten:1984rs]. Homogeneous QCD phase transition ================================ We first give a brief overview of the QCD phase transition with the relevant numbers, which will be needed for calculating the characteristics of the gravitational wave signal. At temperatures above the QCD phase transition temperature ($T_c \approx 150 MeV$), the universe is full of free quarks, gluons and photons. At these temperatures the first generation of quarks (u and d) is highly relativistic, and can be treated as massless since the temperature of the environment is much higher than their masses. The Hubble time at the QCD phase transition ($t_{QCD}\approx 10^{-5}s$) is much longer than the relaxation time scale for particle interactions, so these particles are in thermal and chemical equilibrium. As the temperature of the universe decreases, some quarks and gluons condense to create hadronic matter. It takes about $0.1 \, t_{QCD} \approx 10^{-6}s$ for this phase transition to be completed. If the QCD phase transition is a first order transition, it proceeds via bubble nucleation [@Hogan:1984hx; @DeGrand:1984uq; @Boyanovsky:2006bf]. If there is no impure matter in the universe to create an early nucleation core, the QCD phase transition will not happen immediately when the temperature drops to $T=T_{QCD}$. Instead, the hadronic bubbles nucleate after a short period of supercooling, $t_{sc}\approx 10^{-3}t_{QCD}$. Once small hadronic bubbles are formed, their bubble walls expand by weak deflagration [@DeGrand:1984uq; @Ignatius:1993qn; @Ignatius:1994fr; @KurkiSuonio:1995pp; @Kajantie:1986hq]. The deflagration fronts move at the speed $v_{def}$. The bubble volume grows very quickly with time $$\label{Vb} V_{bubble}=\frac{4\pi}{3} \Big( v_{def}\Delta t\Big)^3 ,$$ where $\Delta t$ is the time elapsed since bubble formation.
The period of bubble deflagrating growth is finished after $\Delta t_{nuc}\approx 10^{-6}t_{QCD}$. The phase transition releases latent heat and reheats the nearby region. The heat is transferred with the speed $v_{heat}$. The latent heat prevents any additional nucleation in these regions. Therefore, the average distance between the bubbles is $d_{nuc}\approx 2v_{heat}\Delta t_{nuc}\approx 1\text{cm}$ (this period is labeled as $t_2$ in Fig. \[phase\]). However, $d_{nuc}$ is about $1$m in [@Kajantie:1986hq], so we will use both values to explore the whole parameter space. The bubble radius is about $R_{bubble}\approx v_{def} \Delta t_{nuc}$. The supercooled regions cover about $1\%$ of the volume of the universe, so their volume is about $10^{-2}\frac{4\pi}{3}(\frac{d_{nuc}}{2})^3$. The bubble growth rate after deflagration slows down and is dominated by the universe expansion until the bubble grows to the size of $d_{nuc}/2$ (this period is labeled as $t_3$ in Fig. \[phase\]). At time $t_4$, the bubbles merge and leave very few free quark-gluon drops. After the deflagration phase, the hadron bubble grows because the universe is cooling down. If the supercooling is neglected, the volume fraction of matter in the hadron phase can be written as [@Kajantie:1986hq] $$f(t)=1-\frac{1}{4(r-1)}\Bigg(\tan^2\Big(\arctan\sqrt{4r-1}+\frac{3\chi(t_i-t)}{2\sqrt{r-1}}\Big)-3\Bigg) ,$$ based on the bag model. $t_i$ is the initial time when the QCD phase transition started, $\chi=\sqrt{8\pi G B}=\frac{1}{36\mu \text{sec}}(\frac{T_c}{200MeV})^2$, and $r$ is set to be $3$ in [@Kajantie:1986hq]. Here, $B$ is the bag energy. In this period, the single bubble’s volume increases with time $$\label{Vb1} V_{bubble}=V_0 f(t)$$ where, $V_0\approx \frac{4\pi}{3}(\frac{d_{nuc}}{2})^3$. In this formula, we neglect the contribution to the volume from the deflagration period because it is much smaller. This is a basic picture of the QCD phase transition. One may also consider temperature fluctuations which can cause inhomogeneous nucleation [@Ignatius:2000gv]. However, this will not change the formation process of the hadronic bubbles. ![ Early universe is dominated by radiation. Before the QCD phase transition, the universe is full of free quark-gluon matter (labeled by $Q$ in the figure), while hadrons are absent. During the first order phase transition, at some early time, $t_2$, some hadronic bubbles (labeled by $H$ in the figure) appear after a brief period of supercooling. These bubbles appear suddenly and release their latent heat to reheat the space outside of the bubbles. These small bubbles cover about $1\%$ of volume of the universe and then quench. The average distance between the bubbles is $d_{nuc}$. After that, they grow following adiabatic expansion of the universe. At time $t_3$, the bubbles grow to a radius of about $d_{nuc}$. At that time, hadronic bubbles (H) occupy most of the space. At $t_4$, the hadronic bubbles merge, and only very few free quark droplets are found in the hot spots. []{data-label="phase"}](phase){width="8cm"} Scalar gravitational waves from the trace anomaly {#sgv} ================================================= Any cosmological first order phase transition can produce gravitational waves in three different ways — through the bubble collisions [@Kosowsky:1992rz; @Kosowsky:1992vn; @Kamionkowski:1993fg], production of sound waves [@Hindmarsh:2013xza], and magnetohydrodynamic turbulence [@Kosowsky:2001xp]. 
These production mechanisms have been previously studied in [@Caprini:2007xq; @Huber:2008hg; @Jinno:2015doa]. In some cases, gravitational waves are strong enough to be detected by future gravitational wave detectors [@Ahmadvand:2017xrw; @Aoki:2017aws]. In addition, the International Pulsar Timing Array can detect the gravitational waves generated by QCD bubble collisions [@Caprini:2010xv]. Apart from gravitational waves created by the isotropic mass distribution, the QCD phase transition can also change the primordial gravitational wave power spectrum [@Schwarz:1997gv; @Schettler2011]. If primordial gravitational waves are detected, they could provide evidence for inflation and/or phase transitions in the early universe. So far, tensor mode (or transverse) gravitational waves have been very well studied in the literature, unlike the scalar mode gravitational waves. One of the reasons is that it is not easy to generate scalar mode gravitational waves. Two possible sources are high energy/density QCD or QED states which, at the quantum level, suffer from the so-called “trace anomaly". These can be naturally achieved in the cores of dense (neutron) stars or in the very early universe. Here we study the possibility of the scalar mode gravitational wave production during the QCD phase transition in the early universe. It is well known that quantum massless fields propagating in classical curved backgrounds suffer from the “gravitational trace anomaly". Simply put, the trace of the stress-energy tensor for the massless field, which vanishes in Minkowski space, acquires additional terms due to the curvature of the background and no longer vanishes. The general form of this gravitational trace anomaly in four space-time dimensions is given by [@Mottola:2016mpl] $$\label{tmunu} T^\mu_\mu = bC^2 +b' (E-\frac{2}{3} \Box R)+b'' \Box R +\sum_i \beta_i L_i ,$$ where $$\begin{aligned} &E=R_{\alpha\beta\gamma\delta}R^{\alpha\beta\gamma\delta}-4R_{\alpha\beta}R^{\alpha\beta}+R^2\\ &C^2=R_{\alpha\beta\gamma\delta}R^{\alpha\beta\gamma\delta}-2R_{\alpha\beta}R^{\alpha\beta}+\frac{1}{3}R^2 .\end{aligned}$$ Here, $L_i$ is the Lagrangian of a massless gauge field, while $E$ and $C$ are given in terms of curvature invariants. In the context of the standard model that we are concerned about here, $L_i$ is either the quantum electrodynamics (QED) or quantum chromodynamics (QCD) Lagrangian. Parameters $b$, $b'$, $b''$ and $\beta_i$ are some dimensionless constants. In particular, $$\begin{aligned} &b=\frac{\hbar}{120(4\pi)^2}(N_s+6N_f+12N_v)\\ &b'=-\frac{\hbar}{360(4\pi)^2}(N_s+11N_f+62N_v) ,\end{aligned}$$ where $N_s$, $N_f$ and $N_v$ represent the number of free conformal scalars, four-component Dirac fermions, and vectors, respectively. The coefficients $b$ and $b'$ cannot be removed by any local counterterms and represent a true anomaly. The coefficient $b''$ can be adjusted or set to zero. The coefficients $\beta_i$ are the $\beta$-functions of the corresponding gauge couplings in the Lagrangians $L_i$. The anomalous terms on the right hand side of Eq. (\[tmunu\]) can be described by a non-local effective action. However, the non-local action can be cast into local form with the help of an additional scalar degree of freedom, $\phi$. This field takes account of macroscopic effects of quantum matter in gravitational fields, which are not contained in the local metric description of Einstein’s theory.
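The coefficients $b$ and $b'$ depend only on the number of massless fields. A minimal sketch (Python) evaluating the two expressions above is given here; the field counting used in the example ($N_s=0$, $N_f=6$, $N_v=9$, i.e. two light quark flavours times three colours, eight gluons and the photon) is an illustrative assumption of this sketch and is not taken from the text.

```python
import math

def anomaly_coefficients(N_s, N_f, N_v, hbar=1.0):
    """Trace-anomaly coefficients b and b' (in units of hbar) for
    N_s conformal scalars, N_f four-component Dirac fermions, N_v vectors."""
    pref = hbar / (4.0 * math.pi) ** 2
    b  =  pref / 120.0 * (N_s + 6 * N_f + 12 * N_v)
    bp = -pref / 360.0 * (N_s + 11 * N_f + 62 * N_v)
    return b, bp

# Illustrative counting around the QCD scale (an assumption, not from the text)
b, bp = anomaly_coefficients(N_s=0, N_f=6, N_v=9)
print(f"b  = {b:+.4e} hbar,  b' = {bp:+.4e} hbar")
```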
The complete local semi-classical effective action for gravity plus the anomaly is [@Mottola:2016mpl] $$S_{eff}=S_{EH}(g)+S_{anom}(g,\phi) ,$$ where $S_{EH}(g)$ is the Einstein-Hilbert term $$\label{eha} S_{EH}(g)=\frac{1}{16\pi G}\int d^4 x \sqrt{-g}(R-2\Lambda) .$$ Here, the speed of light is taken to be $c=1$. $S_{anom}(g,\phi)$ is a local effective action $$\begin{aligned} &S_{anom}(g,\phi)=-\frac{b'}{2}\int d^4 x \sqrt{-g}\Big[(\Box \phi)^2-2(R^{\mu\nu}-\frac{1}{3}Rg^{\mu\nu})\triangledown_\mu\phi\triangledown_\nu\phi\Big]\\ &+\frac{1}{2}\int d^4 x \sqrt{-g}\Big[b'(E-\frac{2}{3}\Box R)+bC^2+\sum_i\beta_i L_i\Big]\phi\end{aligned}$$ In general, one should add the contribution from the scalar field to the total energy density of the universe. The energy momentum tensor for the auxiliary scalar field in the early universe is (see e.g. [@Anderson:2009ci]) $$T^{anom}_{\alpha\beta}=6b' H^4 g_{\alpha\beta} ,$$ where $H$ is the early time Hubble parameter. This form is similar to the energy-momentum tensor in de Sitter spacetime. During the QCD phase transition period, with $H\approx 1/t_{QCD}$, the energy density is $$\rho^{anom}= -6b' H^4 \sim 10^{-68}(MeV)^4 .$$ This value is far below the energy density of ordinary radiation, $\sim T_c^4$, so we can safely neglect the scalar field’s thermal energy density. In addition, it was argued that coupling to the extra scalar field may cause infrared divergences due to state-dependent variations on the horizon scale [@Anderson:2009ci]. However, the effect that we study here is well inside the causal distance, so this divergence at the horizon scales may be neglected too. Finally, during the QCD phase transition, the universe is radiation dominated, so the cosmological constant (dark energy) effects can be neglected at that time. We therefore set $\Lambda =0$ in Eq. (\[eha\]). The exact form of the field $\phi$ depends on both the geometry and gauge fields that the scalar field couples to during the QCD phase transition. However, the process of bubble nucleation lasts for a very short period compared to the cosmological expansion rate. Thus, the geometric effect can be neglected, and we will focus on the effects of the QCD nucleation only. Since the QCD phase transition happens after inflation, the spacetime is approximately flat. Small perturbations around flat spacetime can be written as $$g_{\mu\nu}=\eta_{\mu\nu}+h_{\mu\nu} .$$ The perturbation, $h_{\mu\nu}$, can be written in the standard Hodge decomposition as $$\begin{aligned} &h_{tt}=-2{\cal A}\\ &h_{ti}=\mathfrak{B}_i^\perp+\triangledown_i {\cal B}\\ &h_{ij}=\mathcal{H}^\perp_{ij}+\triangledown_i\mathcal{E}_j^\perp+\triangledown_j\mathcal{E}_i^\perp+2\eta_{ij}{\cal C} +2(\triangledown_i\triangledown_j-\frac{1}{3}\triangledown^2){\cal D} .\end{aligned}$$ The gauge invariant components are [@Mottola:2016mpl] $$\begin{aligned} &&\Upsilon_{\cal A}={\cal A}+\dot{{\cal B}}-\ddot{{\cal D}}\\ &&\Upsilon_{\cal C}={\cal C}-\frac{1}{3}\triangledown^2 {\cal D}\\ &&\psi_i^\perp=\mathfrak{B}^\perp_i-\dot{\mathcal{E}}_i^\perp\\ &&H_{ij}^\perp \rightarrow H_{ij}^\perp .\end{aligned}$$ The first two scalar variables satisfy [@Mottola:2016mpl] $$\Box \Upsilon_{\cal A} =\Box \Upsilon_{\cal C} =\frac{8\pi G b'}{3}\Box^2 \phi =0 ,$$ which describe two kinds of scalar gravitational waves in flat space.
Around the flat space, the equation of motion of $\phi$ is [@Mottola:2016mpl] $$\begin{aligned} \Box^2 \phi=\frac{1}{2}\Big(E-\frac{2}{3}\Box R +\frac{b}{b'}C^2+\frac{1}{b'}\sum_i \beta_i L_i\Big)=8\pi J .\end{aligned}$$ Therefore, $\Upsilon_A$ and $\Upsilon_C$ are $$\Upsilon_{\cal A}=\Upsilon_{\cal C} =-\frac{16\pi Gb'}{3}\int d^3 \mathbf{x} \frac{1}{|\mathbf{r}-\mathbf{x}|}J(\tilde{t},\mathbf{x}) .$$ where ${\tilde t}$ accounts for the time delay in propagation of the signal. The far field approximation gives $$\label{wave} \Upsilon_{\cal A}=\Upsilon_{\cal C} \approx -\frac{G}{3r}\int d^3 \mathbf{x}A_{anom} .$$ In the effective QCD bag model with $\rho_{bag}=-p_{bag}=750MeV/fm^3$, $N_c=3$ and $N_f=2$, the value of the anomaly is [@Mottola:2016mpl] $$\label{anomaly} A_{anom} = (11 N_c - 2N_f ) \frac{\alpha_s}{24\pi}G_{\mu \nu}^a G^{a \mu \nu} = (11 N_c - 2N_f ) \frac{\alpha_s}{24\pi} (\rho_{bag}-3p_{bag}) \approx -4.8\times 10^{36} erg/cm^3 .$$ The cause of the anomaly is that both the vector and axial currents are classically conserved for massless fermions, but the axial is not conserved at the quantum level. If fermions are massive, then axial current is not conserved even at the classical level. For massive fermions, one loop calculations indicate that the anomaly is suppressed by the fermion mass squared. Since the quarks are not massless after the electroweak phase transition, we might have to take this suppression into account. Most likely, the anomaly is still fully unsuppressed at the QCD phase transition, since the relevant quark masses are much smaller than the temperature at the QCD phase transition. However, just to be on the safe side, we will introduce an optional cut-off in frequencies which preserves only the energy $\omega$ of the gravitational waves which is high enough so that the fermion mass can be neglected, i.e. $$\label{limit} m_{u,d}\ll \omega ,$$ where $m_{u,d}$ are $u$ and $d$ quarks masses (i.e. in the standard model they are $2$MeV and $5$MeV respectively). At gravitational waves frequencies lower than $m_{u,d}$, the effect of anomaly might be suppressed by a factor of $(\omega/2m_{u,d})^2$. In addition, we note that there are models in which quarks are still massless during the QCD phase transition [@Iso:2017uuu]. In that case the suppression given by Eq. (\[limit\]) will not be present. Scalar gravitational waves from the QCD phase transition ======================================================== We are finally ready to estimate the parameters for the scalar gravitational waves produced during the QCD phase transition. The QCD phase transition happens at the temperature $T_c\approx 150MeV$. This temperature is within the region of validity of the effective field theory that we used. The temperature today is $0.235meV$. Therefore the QCD phase transition happens at the redshift of $z\approx 6.3\times 10^{11}$. From Eq. (\[wave\]), the scalar gravitational wave amplitude from a single bubble is $$\label{wave1} \Upsilon_{\cal A}=\Upsilon_{\cal C} \approx \frac{G}{3rc^4}A_{anom}V_{bubble}=\frac{G}{3rc^4}A_{anom}V_0 f(t)$$ where the time parameter, $t$, starts at the moment $t=t_i$ when the temperature of the universe is equal to the QCD phase transition temperature $T=T_{QCD}$. $V_{bubble}$ is given by Eqs. (\[Vb\]) and (\[Vb1\]), depending on the period in question. 
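Two of the numbers quoted above can be checked with a few lines of arithmetic: the redshift of the transition follows from the ratio of $T_c$ to the present photon temperature, and the magnitude of $A_{anom}$ from converting $\rho_{bag}-3p_{bag}$ to cgs units. A minimal sketch (Python) is given below; identifying the quoted magnitude of $A_{anom}$ with $\rho_{bag}-3p_{bag}$, i.e. treating the dimensionless one-loop prefactor as absorbed by the bag-model relation, is my reading of Eq. (\[anomaly\]) and should be taken as an assumption.

```python
# Order-of-magnitude checks of the numbers quoted in the text.
MeV_to_erg = 1.602e-6      # 1 MeV in erg
fm3_to_cm3 = 1.0e-39       # 1 fm^3 in cm^3

# (1) Redshift of the QCD phase transition: 1 + z = T_c / T_today
T_c     = 150.0            # MeV
T_today = 0.235e-9         # MeV (0.235 meV)
z = T_c / T_today - 1.0
print(f"z_QCD ~ {z:.2e}")                    # ~6.4e11, matching the quoted 6.3e11

# (2) Magnitude of the anomaly source in the bag model
rho_bag = 750.0            # MeV/fm^3
p_bag   = -rho_bag
trace   = rho_bag - 3.0 * p_bag              # = 4 rho_bag = 3000 MeV/fm^3
A_anom  = trace * MeV_to_erg / fm3_to_cm3
print(f"|A_anom| ~ {A_anom:.2e} erg/cm^3")   # ~4.8e36 erg/cm^3, matching the quoted value
```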
The deflagration period will contribute more in the high frequency regime (smaller bubbles), but it turns out that the magnitude of the signal is too small to be observed, so we will proceed with Eq. (\[Vb1\]). Therefore, $V_0\approx \frac{4\pi}{3}(\frac{d_{nuc}}{2})^3$ is a single bubble’s final volume before it collides with another hadron bubble and merges with it. The anomaly $A_{anom}$ is given by Eq. (\[anomaly\]) with an overall negative sign because the process of bubble nucleation removes free gluons from the space instead of creating them. To introduce an optional cut-off in frequencies, we first perform a Fourier transform of the time domain function $f(t)$, $$\hat{f}(\omega) =\int_{-\infty}^{\infty}f(t)\exp(-i \omega t)\, dt .$$ For more accurate results, one should consider the bubble’s spatial distribution. But for a slowly expanding bubble, the spatial structure will not significantly affect the result. At $t=t_f$, the phase transition ends, and the space no longer contains free quarks and gluons, so $f(t_f)=1$. Therefore, we take only the time interval $t_i<t<t_f$. As we explained at the end of section \[sgv\], we might need to cut off the frequencies lower than the quark masses, so the window function is $$\label{wf} W(\omega) = \left\{ \begin{array}{lr} \Big(\frac{\omega}{2MeV}\Big)^2 & , |\omega| < 2MeV\\ 1 & , 2MeV<|\omega| \end{array} \right. .$$ This function should be applied to $\hat{f}$ to remove the low energy modes as in Eq. (\[limit\]). However, this is not necessary if we believe that the anomaly is unsuppressed at QCD temperatures (which are much higher than the quark masses), and also in the models in which quarks are still massless during the QCD phase transition, so we will work both with and without it, i.e. $$\label{fbar} \bar{f}(t) = \left\{ \begin{array}{lr} \frac{1}{2\pi}\int_{-\infty}^\infty\hat{f}W\exp(i \omega t) d\omega & \text{, if a window function is applied}\\ \frac{1}{2\pi}\int_{-\infty}^\infty\hat{f}\exp(i \omega t) d\omega & \text{, if a window function is not applied} \end{array} \right. ,$$ The scalar gravitational wave amplitude is now rewritten as $$\Upsilon_A=\Upsilon_C \approx \frac{G}{3rc^4}A_{anom}V_0 \bar{f}(t) .$$ This is the gravitational amplitude from one single bubble. We will now include the contribution from all of the bubbles, and the effect from the redshift. For stochastic gravitational waves, the characteristic strain $h_c$ can be obtained from the power spectral density $S_h$ [@Moore:2014lga] as $$h_c=\sqrt{S_h \, \nu} ,$$ where $\nu$ is the gravitational wave frequency. Since $S_h$ is closely related to the energy density of gravitational waves, we will derive the energy density first and then find out the characteristic strain at the present time. The energy momentum tensor for gravitational waves is $$T_{\mu\nu}=\frac{c^4}{32\pi G}\langle\partial_\mu h_{\alpha\beta}\partial_\nu h^{\alpha\beta}\rangle$$ where the angle brackets denote averaging over several wavelengths. The energy radiated by a single bubble can be estimated from the energy flux, $T_{tr}$, as $$\begin{aligned} E_b &=& \frac{c^2}{32\pi G}\int \Upsilon_A \Upsilon_A k\omega \, dt\, dS\nonumber\\ &\approx &\frac{G}{72\pi c^5}A_{anom}^2V_0^2\int_0^\infty \hat{f}(\omega)\hat{f}^*(\omega)W^2(\omega)\omega^2 d\omega ,\end{aligned}$$ where $k$ is the wavenumber. For the integrated signal, we have to take into account all the bubbles, and also an appropriate energy redshift from the time of the signal creation till today.
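As an illustration of how the spectral content of the transition enters, the sketch below (Python) evaluates the bag-model fraction $f(t)$ with the parameters quoted in the previous section ($r=3$, $\chi=(1/36\,\mu{\rm s})(T_c/200\,{\rm MeV})^2$), computes $\hat{f}(\omega)$ by direct numerical integration over $t_i<t<t_f$, and applies the optional window $W(\omega)$. The conversion of $\omega$ to MeV via $\hbar$, as well as the chosen frequency grid, are assumptions of this sketch.

```python
import numpy as np

# Bag-model hadronic volume fraction f(t) quoted in the text
T_c = 150.0                                   # MeV
chi = (T_c / 200.0) ** 2 / 36.0e-6            # 1/s
r   = 3.0
t_i = 0.0                                     # origin of time is arbitrary

def f(t):
    arg = np.arctan(np.sqrt(4 * r - 1)) + 3 * chi * (t_i - t) / (2 * np.sqrt(r - 1))
    return 1.0 - (np.tan(arg) ** 2 - 3.0) / (4 * (r - 1))

# Find t_f where f first reaches 1 (with these parameters, of order 10 microseconds)
t  = np.linspace(t_i, 5.0e-5, 200000)
ft = np.clip(f(t), 0.0, 1.0)
t_f  = t[np.argmax(ft >= 1.0)]
mask = t <= t_f

# fhat(omega) over t_i < t < t_f, on a logarithmic grid of emission-frame frequencies
omegas = np.logspace(3, 7, 200)               # rad/s
fhat = np.array([np.trapz(ft[mask] * np.exp(-1j * w * t[mask]), t[mask]) for w in omegas])

# Optional window: omega expressed in MeV via hbar (assumed interpretation of Eq. (wf))
hbar_MeV_s = 6.582e-22                        # MeV s
def W(omega_rad_s, cut_MeV=2.0):
    w_MeV = hbar_MeV_s * np.abs(omega_rad_s)
    return np.where(w_MeV < cut_MeV, (w_MeV / cut_MeV) ** 2, 1.0)

spectrum_no_window   = np.abs(fhat) ** 2
spectrum_with_window = np.abs(fhat) ** 2 * W(omegas) ** 2
```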
The scalar gravitational waves energy density at the time of the QCD phase transition was $$\rho = nE_b ,$$ where $n=d_{nuc}^{-3}$ is the bubble number density. Since the gravitons are massless particles, their energy density decreases as the universe expands. At the present time the energy density in gravitational waves, $\rho_0$, is $$\rho_0 = \frac{nE_b}{(1+z)^4} .$$ The power spectral density is $$\begin{aligned} S_h(f)&=&\frac{4G}{\pi c^2}\frac{\delta\rho_0}{f^2\delta f}\nonumber\\ &=&\frac{4\pi n G^2}{9 c^7 }A_{anom}^2V_0^2\frac{\hat{f}(\omega)\hat{f}^*(\omega)W^2(\omega)}{1+z}\end{aligned}$$ where we used $\nu =\frac{\omega}{2\pi (1+z)}$. To obtain numerical values, we set $z\approx 6.3\times 10^{11}$, which is the redshift at the QCD phase transition, as explained below Eq. (\[Vb\]). As we noted before, to cover all the cases in the literature, we use two possible $d_{nuc}$ values, $1$ cm and $1$ m. The value for $A_{anom}$ is given in Eq. (\[anomaly\]). ![The characteristic strain of the gravitational waves signal today, $h_c$, as a function of frequency, $\nu$. We set the value $d_{nuc} = 1$cm, which gives the smallest bubble volume and thus the weakest signal. The solid line is $h_c$ with the window function from Eq. (\[wf\]), while the dashed line is $h_c$ without this window function. The dotted curves show the sensitivity regions of the detectors – from low to high frequencies: SKA, LISA and BBO, respectively. []{data-label="strain2"}](strain2){width="12cm"} ![The characteristic strain of the gravitational waves signal today, $h_c$, as a function of frequency, $\nu$. We set the value $d_{nuc} = 1$m, which gives larger bubble volumes and thus a stronger signal. The solid line is $h_c$ with the window function from Eq. (\[wf\]), while the dashed line is $h_c$ without this window function. The dotted curves show the sensitivity regions of the detectors – from low to high frequencies: SKA, LISA and BBO, respectively. Part of the signal is detectable by BBO. However, since the detector sensitivities are shown for the tensor modes, while it is known that LISA has an order of magnitude higher sensitivity to the scalar than to the tensor modes, the signal most likely falls within the LISA sensitivity region as well.[]{data-label="strain3"}](strain3){width="12cm"} We plot the characteristic strain of the scalar gravitational wave signal today, $h_c$, as a function of frequency, $\nu$, in Figs. \[strain2\] and \[strain3\]. We give plots for two values of $d_{nuc}$, i.e. $1$ cm and $1$ m. The larger value of $d_{nuc}$ gives larger bubble volumes which in turn amplifies the anomaly effect, but reduces the bubble density. It turns out that the first effect is more important, so the larger value of $d_{nuc}$ gives a stronger signal (Fig. \[strain3\]). It is notable that our signal is weaker than the signal from the standard tensor modes [@Caprini:2007xq; @Huber:2008hg; @Jinno:2015doa]. This is because the tensor mode gravitational waves are created by a very sudden change in the bubbles’ energies and momenta during the collision. In contrast, the strength of the scalar mode depends on the phase transition rate rather than the rate of change of the matter energy and momentum. During the bubble’s motion, energy and momentum accumulate and get released at the moment of collision; however, one cannot accumulate the “amount" of the QCD phase transition in a similar way.
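Putting the pieces above together, the following sketch (Python) evaluates $S_h$ and $h_c=\sqrt{S_h\,\nu}$ in cgs units for the quoted inputs ($A_{anom}$, $z$, and the $d_{nuc}=1$ m case). The spectral shape $|\hat{f}(\omega)|^2$ is taken from the same bag-model $f(t)$ as in the previous sketch, and the sampling and frequency grid are choices of mine; this is an order-of-magnitude illustration of the unit handling, not an attempt to reproduce Figs. \[strain2\] and \[strain3\].

```python
import numpy as np

# Inputs quoted in the text (cgs units); d_nuc = 1 m case, no window function
G, c   = 6.674e-8, 2.998e10                  # cm^3 g^-1 s^-2, cm/s
A_anom = 4.8e36                              # erg/cm^3
z      = 6.3e11
d_nuc  = 100.0                               # cm
V0     = 4.0 * np.pi / 3.0 * (d_nuc / 2.0) ** 3
n      = d_nuc ** -3                         # bubble number density, cm^-3

# Spectral shape |fhat(omega)|^2 of the bag-model fraction f(t) (as in the previous sketch)
chi, r = (150.0 / 200.0) ** 2 / 36.0e-6, 3.0
t   = np.linspace(0.0, 5.0e-5, 100000)
arg = np.arctan(np.sqrt(4 * r - 1)) - 3 * chi * t / (2 * np.sqrt(r - 1))
ft  = np.clip(1.0 - (np.tan(arg) ** 2 - 3.0) / (4 * (r - 1)), 0.0, 1.0)
m   = t <= t[np.argmax(ft >= 1.0)]           # keep only t_i < t < t_f
omega = np.logspace(3, 7, 200)               # rad/s, emission frame
fhat2 = np.abs([np.trapz(ft[m] * np.exp(-1j * w * t[m]), t[m]) for w in omega]) ** 2  # s^2

# Power spectral density (Eq. for S_h above) and characteristic strain today
S_h = 4.0 * np.pi * n * G**2 / (9.0 * c**7) * A_anom**2 * V0**2 * fhat2 / (1.0 + z)   # 1/Hz
nu  = omega / (2.0 * np.pi * (1.0 + z))      # Hz, redshifted frequency today
h_c = np.sqrt(S_h * nu)
print(f"peak h_c ~ {h_c.max():.1e} over nu ~ {nu.min():.1e} to {nu.max():.1e} Hz")
```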
In principle, the motion of the bubbles could increase the signal frequency via a Doppler shift, but here we neglected this effect. One may also notice that the spectrum of the scalar mode decreases more slowly than for the usual transverse-tensor modes. This is because the QCD phase transition lasts longer than the bubble collision time scale, so it produces more low-frequency modes. We also show the case with the window function from Eq. (\[wf\]) which cuts off the frequencies lower than the quark masses, and also the case which includes all the frequencies (i.e. no window function). The region of the parameter space with no window function is much more likely to be observed, especially if $d_{nuc}$ is large enough, since both the strain and the frequencies fall within the sensitivity of the near-future gravitational wave experiments (e.g. The Big Bang Observer) (see e.g. Fig. A1 in [@Moore:2014lga]). In addition, the detector sensitivities in Figs. \[strain2\] and \[strain3\] are shown for the tensor modes. It is known that LISA has $10$ times higher sensitivity to the scalar mode than to the tensor modes [@Tinto:2010hz]. Thus, the signal most likely falls within the LISA sensitivity region as well. Conclusions =========== In this paper we tried to connect gravitational wave astronomy with fundamental particle physics. The standard model of particle physics in the presence of gravity suffers from the well-known trace anomaly. The origin of the anomaly is purely quantum. In the QCD sector, the anomaly gives rise to a new kind of (scalar) gravitational waves which are not present in the pure gravitational regime. The quantum anomaly was originally derived for massless fermions, while the standard model quarks are massive. During the QCD phase transition, at temperatures higher than $150$ MeV, one can effectively neglect the u- and d-quark masses, and anomaly effects should become fully unsuppressed. Using the details of the first order phase transition, in particular the mechanism of homogeneous bubble nucleation, we were able to calculate the parameters relevant for the produced gravitational waves. As the final result, we found the characteristic strain of the gravitational wave signal as it should appear today. To remain on the safe side, we introduced the window function which cuts off very low frequencies of the produced gravitational waves, where the anomaly calculations might not be completely trusted. For comparison, in Figs. \[strain2\] and \[strain3\] we show the characteristic strain both with and without the window function. The region with no window function (i.e. no suppression in frequencies) is much more likely to be observed in near-future gravitational wave experiments (e.g. LISA and The Big Bang Observer). The interesting bottom line is that we could in principle learn something about the obscure quantum aspects of the standard model of particle physics using gravitational wave astronomy. D.-C. Dai was supported by the National Science Foundation of China (Grant No. 11433001 and 11775140), National Basic Research Program of China (973 Program 2015CB857001) and the Program of Shanghai Academic/Technology Research Leader under Grant No. 16XD1401600. DS was partially supported by the US NSF grant PHY 1820738. [99]{} B. P. Abbott [*et al.*]{} \[LIGO Scientific and Virgo Collaborations\], Phys. Rev. Lett.  [**116**]{}, no. 6, 061102 (2016) doi:10.1103/PhysRevLett.116.061102 \[arXiv:1602.03837 \[gr-qc\]\]. V. Mandic, S. Bird and I. Cholis, Phys. Rev. Lett.  [**117**]{}, no.
20, 201102 (2016) doi:10.1103/PhysRevLett.117.201102 \[arXiv:1608.06699 \[astro-ph.CO\]\]. S. Bhagwat, D. A. Brown and S. W. Ballmer, Phys. Rev. D [**94**]{}, no. 8, 084024 (2016) doi:10.1103/PhysRevD.94.084024 \[arXiv:1607.07845 \[gr-qc\]\]. S. Clesse and J. García-Bellido, arXiv:1610.08479 \[astro-ph.CO\]. K. Crocker, T. Prestegard, V. Mandic, T. Regimbau, K. Olive and E. Vangioni, arXiv:1701.02638 \[astro-ph.CO\]. R. Dong, W. H. Kinney and D. Stojkovic, JCAP [**1610**]{}, no. 10, 034 (2016) doi:10.1088/1475-7516/2016/10/034 \[arXiv:1511.05642 \[astro-ph.CO\]\]. R. Anantua, R. Easther and J. T. Giblin, Phys. Rev. Lett.  [**103**]{}, 111303 (2009) doi:10.1103/PhysRevLett.103.111303 \[arXiv:0812.0825 \[astro-ph\]\]. L. Anchordoqui, D. C. Dai, M. Fairbairn, G. Landsberg and D. Stojkovic, Mod. Phys. Lett. A [**27**]{}, 1250021 (2012) doi:10.1142/S0217732312500216 \[arXiv:1003.5914 \[hep-ph\]\]. J. R. Mureika and D. Stojkovic, Phys. Rev. Lett.  [**106**]{}, 101101 (2011) doi:10.1103/PhysRevLett.106.101101 \[arXiv:1102.3434 \[gr-qc\]\]; Phys. Rev. Lett.  [**107**]{}, 169002 (2011) doi:10.1103/PhysRevLett.107.169002 \[arXiv:1109.3506 \[gr-qc\]\]. D. Stojkovic, Mod. Phys. Lett. A [**28**]{}, 1330034 (2013) doi:10.1142/S0217732313300346 \[arXiv:1406.2696 \[gr-qc\]\]. N. Yunes, K. Yagi and F. Pretorius, Phys. Rev. D [**94**]{}, no. 8, 084002 (2016) doi:10.1103/PhysRevD.94.084002 \[arXiv:1603.08955 \[gr-qc\]\]. E. Mottola and R. Vaulin, Phys. Rev. D [**74**]{}, 064004 (2006) doi:10.1103/PhysRevD.74.064004 \[gr-qc/0604051\]. M. Giannotti and E. Mottola, Phys. Rev. D [**79**]{}, 045014 (2009) doi:10.1103/PhysRevD.79.045014 \[arXiv:0812.0351 \[hep-th\]\]. E. Witten, Phys. Rev. D [**30**]{}, 272 (1984). doi:10.1103/PhysRevD.30.272 E. Mottola, arXiv:1606.09220 \[gr-qc\]. N. Yunes and X. Siemens, Living Rev. Rel.  [**16**]{}, 9 (2013) doi:10.12942/lrr-2013-9 \[arXiv:1304.3473 \[gr-qc\]\]. C. J. Hogan, Phys. Lett.  [**133B**]{}, 172 (1983). doi:10.1016/0370-2693(83)90553-1 T. A. DeGrand and K. Kajantie, Phys. Lett.  [**147B**]{}, 273 (1984). doi:10.1016/0370-2693(84)90115-1 D. Boyanovsky, H. J. de Vega and D. J. Schwarz, Ann. Rev. Nucl. Part. Sci.  [**56**]{}, 441 (2006) doi:10.1146/annurev.nucl.56.080805.140539 \[hep-ph/0602002\]. J. Ignatius, K. Kajantie, H. Kurki-Suonio and M. Laine, Phys. Rev. D [**49**]{}, 3854 (1994) doi:10.1103/PhysRevD.49.3854 \[astro-ph/9309059\]. H. Kurki-Suonio and M. Laine, Phys. Rev. D [**51**]{}, 5431 (1995) doi:10.1103/PhysRevD.51.5431 \[hep-ph/9501216\]. J. Ignatius, K. Kajantie, H. Kurki-Suonio and M. Laine, Phys. Rev. D [**50**]{}, 3738 (1994) doi:10.1103/PhysRevD.50.3738 \[hep-ph/9405336\]. K. Kajantie and H. Kurki-Suonio, Phys. Rev. D [**34**]{}, 1719 (1986). doi:10.1103/PhysRevD.34.1719 J. Ignatius and D. J. Schwarz, astro-ph/0011036. A. Kosowsky, M. S. Turner and R. Watkins, Phys. Rev. Lett.  [**69**]{}, 2026 (1992). doi:10.1103/PhysRevLett.69.2026 A. Kosowsky and M. S. Turner, Phys. Rev. D [**47**]{}, 4372 (1993) doi:10.1103/PhysRevD.47.4372 \[astro-ph/9211004\]. M. Kamionkowski, A. Kosowsky and M. S. Turner, Phys. Rev. D [**49**]{}, 2837 (1994) doi:10.1103/PhysRevD.49.2837 \[astro-ph/9310044\]. M. Hindmarsh, S. J. Huber, K. Rummukainen and D. J. Weir, Phys. Rev. Lett.  [**112**]{}, 041301 (2014) doi:10.1103/PhysRevLett.112.041301 \[arXiv:1304.2433 \[hep-ph\]\]. A. Kosowsky, A. Mack and T. Kahniashvili, Phys. Rev. D [**66**]{}, 024030 (2002) doi:10.1103/PhysRevD.66.024030 \[astro-ph/0111483\]. C. Caprini, R. Durrer and G. Servant, Phys. Rev. 
D [**77**]{}, 124015 (2008) doi:10.1103/PhysRevD.77.124015 \[arXiv:0711.2593 \[astro-ph\]\]. S. J. Huber and T. Konstandin, JCAP [**0809**]{}, 022 (2008) doi:10.1088/1475-7516/2008/09/022 \[arXiv:0806.1828 \[hep-ph\]\]. R. Jinno, K. Nakayama and M. Takimoto, Phys. Rev. D [**93**]{}, no. 4, 045024 (2016) doi:10.1103/PhysRevD.93.045024 \[arXiv:1510.02697 \[hep-ph\]\]. M. Ahmadvand and K. Bitaghsir Fadafan, Phys. Lett. B [**772**]{}, 747 (2017) doi:10.1016/j.physletb.2017.07.039 \[arXiv:1703.02801 \[hep-th\]\]. M. Aoki, H. Goto and J. Kubo, Phys. Rev. D [**96**]{}, no. 7, 075045 (2017) doi:10.1103/PhysRevD.96.075045 \[arXiv:1709.07572 \[hep-ph\]\]. C. Caprini, R. Durrer and X. Siemens, Phys. Rev. D [**82**]{}, 063511 (2010) doi:10.1103/PhysRevD.82.063511 \[arXiv:1007.1218 \[astro-ph.CO\]\]. D. J. Schwarz, Mod. Phys. Lett. A [**13**]{}, 2771 (1998) doi:10.1142/S0217732398002941 \[gr-qc/9709027\]. S. Schettler, T. Boeckel, T. & J. Schaffner-Bielich Phys. Rev. D, [**83**]{}, 064030 (2011) C. J. Moore, R. H. Cole and C. P. L. Berry, Class. Quant. Grav.  [**32**]{}, no. 1, 015014 (2015) doi:10.1088/0264-9381/32/1/015014 \[arXiv:1408.0740 \[gr-qc\]\]. P. R. Anderson, C. Molina-Paris and E. Mottola, Phys. Rev. D [**80**]{}, 084005 (2009) doi:10.1103/PhysRevD.80.084005 \[arXiv:0907.0823 \[gr-qc\]\]. S. Iso, P. D. Serpico and K. Shimada, Phys. Rev. Lett.  [**119**]{}, no. 14, 141301 (2017) doi:10.1103/PhysRevLett.119.141301 \[arXiv:1704.04955 \[hep-ph\]\]. M. Tinto and M. E. da Silva Alves, Phys. Rev. D [**82**]{}, 122003 (2010) doi:10.1103/PhysRevD.82.122003 \[arXiv:1010.1302 \[gr-qc\]\]. [^1]: corresponding authors: D. Stojkovic, D. Dai,\ email: [email protected] [email protected] \[fnlabel\]
--- abstract: 'Systems incorporating Artificial Intelligence (AI) and machine learning (ML) techniques are increasingly used to guide decision-making in the healthcare sector. While AI-based systems provide powerful and promising results with regard to their classification and prediction accuracy ( in differentiating between different disorders in human gait), most share a central limitation, namely their black-box character. Understanding which features classification models learn, whether they are meaningful and consequently whether their decisions are trustworthy is difficult and often impossible to comprehend. This severely hampers their applicability as decision-support systems in clinical practice. There is a strong need for AI-based systems to provide transparency and justification of predictions, which are necessary also for ethical and legal compliance. As a consequence, in recent years the field of *explainable AI* (XAI) has gained increasing importance. XAI focuses on the development of methods that enhance transparency and interpretability of complex ML models, such as Deep (Convolutional) Neural Networks. The primary aim of this article is to investigate whether XAI methods can enhance transparency, explainability and interpretability of predictions in automated clinical gait classification. We utilize a dataset comprising bilateral three-dimensional ground reaction force measurements from 132 patients with different lower-body gait disorders and 62 healthy controls. In our experiments, we included several gait classification tasks, employed a representative set of classification methods, and a well-established XAI method – Layer-wise Relevance Propagation (LRP) – to explain decisions at the signal (input) level. The classification results are analyzed, compared and interpreted in terms of classification accuracy and relevance of input values for specific decisions. The decomposed input relevance information are evaluated from a statistical (using Statistical Parameter Mapping) and clinical (by an expert) viewpoint. There are three dimensions in our comparison: (i) different classification tasks, (ii) different classification methods, and (iii) data normalization. The presented approach exemplifies how XAI can be used to understand and interpret state-of-the-art ML models trained for gait classification tasks, and shows that the features that are considered relevant for machine learning models can be attributed to meaningful and clinically relevant biomechanical gait characteristics.' author: - bibliography: - 'references.bib' title: On the Understanding and Interpretation of Machine Learning Predictions in Clinical Gait Analysis Using Explainable Artificial Intelligence --- Introduction ============ Artificial Intelligence (AI) and machine learning (ML) techniques have become almost ubiquitous in our daily lives by supporting or guiding our decisions and providing recommendations. Impressively, there are certain tasks, such as playing complex board games like chess and Go, or classifying images, that AI has already been solving more efficiently and effectively than humans do [@Ciresan2012; @Esteva2017]. It is not surprising that AI-based approaches are currently becoming increasingly popular in the healthcare sector [@topol_medicine_2019]. This trend has also been well recognized by the field of clinical gait analysis (CGA). 
CGA focuses on the quantitative description and analysis of human gait from a kinematic ( joint angles) and kinetic ( ground reaction forces and joint moments) point of view. Thereby, CGA produces a vast amount of data [@phinyomark_analysis_2018; @halilaj_machine_2018], which are difficult to comprehend due to their multi-dimensional and multi-correlated nature [@chau_review_2001; @wolf_automated_2006]. In the last years, ML techniques and AI-based decision-support systems have been successfully employed to guide decision-making in CGA for various patient groups [@schollhorn_applications_2004; @figueiredo_automatic_2018] such as stroke [@lau_support_2009], Parkinson’s disease [@wahid_classification_2015], cerebral palsy [@van_gestel_probabilistic_2011], multiple sclerosis [@alaqtash2011automatic], osteoarthritis [@nuesch_gait_2012], and patients suffering from different functional gait disorders [@slijepcevic2017automatic]. While AI-based systems offer powerful and promising results in regard to classification accuracy and prediction, most share a central limitation, which is their black-box character [@adadi_peeking_2018]. This means that even if the underlying mathematical principles in these systems are comprehensible, it is still very hard to understand if meaningful patterns and dependencies ( causalities) were learned and what the classification model has actually learned. In addition, the black-box character also hinders AI-based systems to provide justifications of their decisions. This is, however, necessary for compliance with legislation such as the General Data Protection Regulation (GDPR, EU 2016/679) [@regulation2016regulation; @adadi_peeking_2018; @He_practical_2019]. These factors currently limit the application of AI-based decision-support systems in medical practice [@holzinger_what_2017; @samek_explaining_2017]. Due to the aforementioned reasons, the field of *explainable Artificial Intelligence* (XAI) gained increasing attention in recent years. XAI aims to develop methods that improve the transparency, interpretability and explainability of complex ML models. The main goal is that (medical) professionals understand how and why a machine learning model resulted in a certain decision [@holzinger_causability_2019]. Even though research in XAI is still in an early stage, the application of such approaches to AI-based systems in medicine has already raised considerable interest [@holzinger_what_2017; @tjoa_survey_2019]. Similar to other domains, a trade-off between classification accuracy and transparency of the classification models has to be made in the medical field. One reason is that potentially more accurate models, such as Deep Neural Networks (DNNs) are more complex and thus lack interpretability, whereas simpler models like decision trees exhibit a higher level of transparency but often achieve lower prediction accuracy [@turner2016model]. Consequently, simpler models have often been preferred for clinical applications in the past. Due to the recent success of XAI methods, the application of more complex classification models is currently gaining momentum in the clinical field. Even though a variety of XAI methods are available for classification tasks (Section \[sec:relatedwork\]: Related work), their application to the field of CGA and motor rehabilitation has yet to be established. 
As [@horst2019explaining] demonstrated by distinguishing unique gait patterns of different individuals, the field of XAI offers great potential for future research and developments in CGA. Especially, the explanation of particular model decisions represents an indispensable form of transparency required by clinicians to build trust in AI-based systems [@tonekaboni2019clinicians]. Therefore, the primary aim of this article is to investigate to which degree XAI can make automatic decisions more explainable in the context of CGA. We aim at introducing XAI to clinical gait classification by facilitating explainability and traceability of automatically derived decisions. In detail, this study exemplifies how XAI can be used to make clinical gait classification and prediction results understandable and traceable for clinical experts. For this purpose, we define different gait classification tasks, employ a representative set of classification methods, and a well-established XAI method – Layer-wise Relevance Propagation (LRP) – to explain decisions at the signal level. Since there is no ground truth for automatically generated explanations in this context, we evaluate the quality of explanations from a clinical point of view by a clinical expert. In addition, as a second reference, we propose the use of Statistical Parameter Mapping (SPM) to verify the obtained results from a statistical point of view. Our investigation follows three leading research directions: - To what extent does a machine learning model for gait classification base its predictions on meaningful and clinically relevant biomechanical gait features? - What is the role of data normalization and how does it affect machine learning-based gait classification and automatically generated prediction explanations? - To what extent do XAI methods contribute to the improvement of transparency, understanding and interpretability of different classification methods? To answer these questions we pursue an empirical approach. Based on a dataset containing ground reaction force measurements from clinical practice, we train classification models for different gait classification tasks and demonstrate the capabilities of XAI for the explanation of particular model decisions. Furthermore, we investigate their robustness to data normalization and incorporate a clinical expert to interpret and verify the results. The presented approach exemplifies how XAI can be used to better understand and interpret state-of-the-art machine learning models trained for gait classification tasks and shows that the features of gait patterns that are considered relevant for machine learning models can be attributed to meaningful and clinically relevant biomechanical gait characteristics. Related Work {#sec:relatedwork} ============ Methods from XAI can be classified according to the type of explanation they provide. We distinguish between XAI approaches for (i) **data exploration**, (ii) **decision explanation** and (iii) **model explanation** based on an adaptation of the taxonomy introduced by [@arya2019one]. In the following we briefly introduce the three different types of approaches and their capabilities. **Data exploration** includes methods from the fields of visual analytics, statistics and unsupervised machine learning. As such, the methods are not capable of explaining a model but rather the data on which the model is trained. 
These methods focus on projecting the data into a space where it is possible to find meaningful structures or clusters and thus understand the data in more detail. A popular approach for data exploration introduced by [@maaten2008visualizing] is T-distributed Stochastic Neighbor Embedding (t-SNE), which projects high-dimensional data into a lower-dimensional and visualizable space. The projection is performed in a way that the cluster structure in the original data space is optimally exposed. Thereby, an understanding of the data and the identification of typical patterns and clusters in the data is facilitated. Other approaches in this category are visual analytics approaches that employ advanced techniques for the interactive visualization of data to support data exploration,  finding characteristic patterns or dependencies within data  [@wagner_KAVAG; @wilhelm2015furyexplorer]. **Decision explanation** aims at providing an explanation for the local behavior of a model,  the prediction for a given input instance. For a classification task, these methods can provide, for example, explanations about which part of the input influenced the classifier’s decision most. For classification of gait data, the explanation should highlight all relevant signal regions and characteristic signal shapes in the input data, which are associated with a particular gait disorder. Two main categories can be distinguished for explaining the local behavior of a machine learning model: i) *self-explaining* models and ii) *post-hoc* methods. Self-explaining models consist of components that learn relationships between input data and predictions during training. Simultaneously, they learn how these relationships relate to terms from a predefined dictionary and consequently generate explanations from them. A self-explaining approach which does not visually highlight relevant regions in input data but generates textual explanations was proposed by [@hendricks2016generating]. This self-explaining model combines a Convolutional Neuronal Network (CNN) and a Recurrent Neuronal Network (RNN). The CNN learns discriminative features to perform a classification task, while the RNN generates textual explanations of the prediction. This approach cannot be applied to a previously trained model in a post-hoc manner. This limits the practical applicability of such approaches. Post-hoc models provide much greater applicability as they can be applied to already trained models. Post-hoc methods can be further sub-divided into i) propagation-based, ii) perturbation-based, and iii) Shapley-value-based methods. Propagation-based methods determine the contributions of each input feature by (back-)propagating some quantity of interest from the model’s output layer to the input layer. Sensitivity Analysis [@zurada1994sensitivity] has been introduced to Support Vector Machines (SVM) [@baehrens2010explain] and CNNs [@simonyan2013deep] in form of saliency maps. Layer-wise Relevance Propagation (LRP) [@bach2015pixel; @montavon2019layer] and Deep Learning Important FeaTures (DeepLIFT) [@shrikumar2017learning] are methods that propagate importance scores from the output layer back to the input, thereby enabling the identification of positive and negative evidences for a specific prediction. Sensitivity Analysis and the therewith obtained explanations in general suffer from the effects of shattered gradients [@balduzzi2017shattered], especially so in more complex (deeper) networks. 
Modern approaches to DNN interpretability such as LRP or DeepLIFT do not suffer from shattered gradients and work well for a wider range of network architectures and models in general [@montavon2018methods; @kohlbrenner2019towards]. Perturbation-based methods, such as those introduced by [@fong2017interpretable] or [@zintgraf2017visualizing], treat the model as a black box and are applied to pairs of occluded inputs and the respective output values. While some methods produce explanations directly from a perturbation process, others employ a learning component – e.g. the Interpretable Model-agnostic Explanations (LIME) method [@ribeiro2016model] – to estimate locally interpretable surrogate models mimicking the decision process of the black-box model. Perturbation-based methods can be considered model-agnostic, as they do not require access to internal model parameters or structures to operate. However, this model-agnosticism comes at a considerable computational cost compared to propagation-based approaches. Shapley-value-based methods attempt to approximate the Shapley values of a given prediction. For this purpose, the effect of omitting an input feature is examined, taking into account all possible combinations of the other input features that can be included or excluded [@vstrumbelj2014explaining]. [@lundberg2017unified] proposed the SHapley Additive exPlanations (SHAP) method, which is a unified approach building upon the theory of Shapley values and existing propagation-based and perturbation-based methods, e.g. LIME, DeepLIFT, and LRP.

**Model explanation** provides an interpretation of what a trained model has learned, e.g. the most characteristic representations or prototypes for an entire class are visualized (e.g. a class of gait disorders in CGA). These methods can indicate which classes overlap and point out ambiguous input features. In addition to saliency maps, [@simonyan2013deep] proposed a method for generating a representative visualization of a specific class learned by a CNN. For this purpose, they applied activation maximization, i.e. starting with a blank image, each pixel is changed by means of backpropagation such that the activity of a neuron is increased. The resulting visualizations give a first impression of the patterns learned but are highly abstract and can only be interpreted to a limited extent. To generate visualizations that are easier to interpret, [@nguyen2016synthesizing] proposed a method to constrain the optimization process by image priors that were learned automatically. [@lapuschkin2019unmasking] proposed the Spectral Relevance Analysis (SpRAy), which summarizes a model’s learned strategies by analyzing similarities and dissimilarities over large quantities of input relevance maps computed with respect to a category of interest.

Methods
=======

Data Recording and Dataset
--------------------------

For the gait classification task we utilized a subset of a large-scale dataset which is currently being prepared for publication in an open-source online repository as the [GaitRec]{} dataset (reference to the dataset will be made public upon publication). This dataset is part of an existing clinical gait database maintained by a local Austrian rehabilitation center. Prior to all of our experiments, approval was obtained from the local Ethics Committee (\#GS1-EK-4/299-2014).
The employed dataset contains bilateral three-dimensional ground reaction force (GRF) recordings of patients and healthy controls walking unassisted at self-selected walking speed on an approximately 10 m walkway with two centrally-embedded force plates (Kistler, Type 9281B12, Winterthur, CH). Data were recorded at 2000 Hz, filtered with a zero-lag Butterworth filter of 2nd order with a cut-off frequency of 20 Hz, time-normalized to 101 points (100% stance), and amplitude-normalized to 100% body weight. During one session, subjects walked barefoot or in socks until a minimum of five valid recordings was available. Recordings were defined as valid by an experienced assessor. In total, the dataset comprises GRF measurements from 132 patients with lower-body gait disorders ($GD$) and data from 62 healthy controls ($HC$), both of varying physical composition and gender. The dataset includes three classes of orthopaedic gait disorders associated with the hip ($H$, N=37), knee ($K$, N=52), and ankle ($A$, N=43). For class-specific demographic details of the data refer to Table \[table:dataset\]. The dataset is balanced regarding the number of recorded sessions per person and the number of trials per person. Figure \[img:waveforms\] shows an overview of all GRF measurements of the affected side (except for healthy controls, where each step is visualized) per class and the associated mean and standard deviation. The $GD$ classes ($A$, $H$, and $K$) include patients after joint replacement surgery, fractures, ligament ruptures, and related disorders associated with the above-mentioned anatomical areas. A physical therapist with more than a decade of clinical experience manually labeled the dataset based on the available medical diagnosis of each patient.

| Class           | N       | Age (yrs.) Mean (SD) | Body Mass (kg) Mean (SD) | Sex (m/f)  | Num. Trials |
|-----------------|---------|----------------------|--------------------------|------------|-------------|
| Healthy Control | 62      | 36.0 (10.8)          | 72.3 (15.0)              | 28/34      | 310         |
| Hip             | 37      | 44.2 (12.5)          | 81.4 (14.1)              | 31/6       | 185         |
| Knee            | 52      | 43.5 (13.8)          | 85.6 (16.4)              | 37/15      | 260         |
| Ankle           | 43      | 42.6 (10.9)          | 91.6 (20.4)              | 36/7       | 215         |
| **Total**       | **194** | **41.1 (12.4)**      | **81.9 (18.0)**          | **132/62** | **970**     |

: Class-specific demographic details of the dataset.[]{data-label="table:dataset"}

![Visualization of vertical (left panel), anterior-posterior (central panel), and medio-lateral (right panel) force components of the body weight-normalized GRF measurements of the affected side available per subject and class. For healthy controls all available measurements are visualized. Mean and standard deviation signals (calculated per class) are highlighted as solid and dashed colored lines.[]{data-label="img:waveforms"}](figures/affected.pdf){width="1\linewidth"}

Decision Explanation
--------------------

As proposed by [@bach2015pixel], we employed Layer-wise Relevance Propagation (LRP) as a method for decision explanation. LRP decomposes the prediction $f(x)$ of a learned function $f$ given an input vector $x$ into time- and channel-resolved input relevance values $R_i$ for each discrete input unit $x_i$. This makes it possible to explain the prediction of a machine learning model as partial contributions of individual input values (see Figure \[img:lrp-overview\] for an overview of the approach). LRP indicates which information a model uses to predict in favor of or against an output class.
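To make the relevance redistribution concrete, the following minimal NumPy sketch illustrates the LRP-$\varepsilon$ rule for a single fully connected layer. It is an illustration only, with assumed array shapes and names, and not the implementation used for the analyses in this paper.

```python
import numpy as np

def lrp_epsilon_dense(a, w, b, r_out, eps=1e-5):
    """LRP-epsilon rule for one dense layer: redistribute the output
    relevance r_out onto the layer inputs a.
    a: input activations (n_in,), w: weights (n_in, n_out),
    b: biases (n_out,), r_out: relevance of the outputs (n_out,)."""
    z = a @ w + b                              # pre-activations z_j
    z = z + eps * np.where(z >= 0, 1.0, -1.0)  # epsilon stabiliser
    s = r_out / z                              # relevance per unit of z
    return a * (w @ s)                         # input relevance R_i
```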
LRP thereby enables the interpretation of input relevance values and their dynamics as a representation of a certain class (e.g. healthy controls or functional disorders of the ankle, knee or hip). Given that the models investigated in this study are comparatively shallow and are largely unaffected by detrimental effects such as gradient shattering [@balduzzi2017shattered; @montavon2019layer], we performed relevance decompositions according to the LRP-$\varepsilon$ rule with $\varepsilon=10^{-5}$ in all layers across the different model architectures [@kohlbrenner2019towards].

![Exemplary overview of our proposed workflow for data acquisition, prediction and decision explanation in automated gait classification, showing the data of subject 46 belonging to the knee disorder class. (A) The clinical gait analysis consists of five recordings of each subject walking barefoot (unassisted) a distance of 10 m at a self-selected walking speed. Two centrally-embedded force plates capture the three-dimensional ground reaction forces (GRFs) during the stance phase of the right and left foot. (B) The GRF comprising the medio-lateral ($F_{ML}$), anterior-posterior ($F_{AP}$), and vertical ($F_{V}$) force components of the affected and unaffected side are used as time-normalized and concatenated input vector $x$ (1$\times$606-dimensional) for the prediction of the knee disorder class using a classifier (e.g. a CNN). (C) Decomposition of input relevance values using LRP. The color spectrum for the visualization of input relevance values of the model predictions is shown in the bottom right corner. Black line segments are irrelevant to the model’s prediction. Warm hues identify input segments causing a prediction corresponding to the class label, while cool hues mark features contradicting the class label.[]{data-label="img:lrp-overview"}](figures/overview_46.pdf){width="1\linewidth"}

Another approach that has recently received increased attention in the gait analysis community is the application of Statistical Parameter Mapping (SPM). While standard inference statistical approaches tend to reduce time-continuous signals to single time-discrete values for statistical testing, SPM uses the entire 1D time-continuous signals to draw probabilistic conclusions based on the random behavior of a 1D observational unit. It follows the same notion and logic as classical inference statistics. The main advantages of SPM are that the statistical results are presented in the original sampling space and that there is no need for a (potentially biasing) parameterization technique [@pataky_one-dimensional_2012; @pataky_generalized_2010]. Therefore, SPM can serve as a valuable and statistically-based verification method in the context of XAI in clinical biomechanics. We used independent *t*-tests from the [SPM1D]{}[^1] package provided by [@pataky_one-dimensional_2012] for Matlab to investigate differences in the concatenated GRF signals between the classes. The alpha level was set a priori to 0.05. The output of SPM provides *t*-values for each point of the investigated time series and the threshold corresponding to the chosen alpha level. The *t*-values exceeding this threshold (marked as gray-shaded areas in our results in Figures \[img:cnn-nonorm-NGD\], \[img:cnn-norm-NGD\], and \[img:cnn-svm-mlp-norm-NGD\]) indicate statistically significant differences in the corresponding sections of the time series.
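The SPM inference itself relies on random field theory to control the family-wise error over the 1D continuum. The simplified SciPy sketch below only illustrates the underlying idea of computing a t-statistic at every point of the time-normalized signals; variable names are assumptions and no continuum-level correction is applied.

```python
import numpy as np
from scipy import stats

def pointwise_ttest(trials_a, trials_b):
    """Independent two-sample t-test at every position of the
    concatenated, time-normalized GRF vectors (trials x features)."""
    t_vals, p_vals = stats.ttest_ind(trials_a, trials_b, axis=0)
    return np.asarray(t_vals), np.asarray(p_vals)

# Illustrative use with hypothetical arrays of HC and GD trials:
# t_vals, p_vals = pointwise_ttest(grf_hc, grf_gd)
```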
Additionally, we computed the effect size by transforming the resulting *t*-values to Pearson’s correlation coefficient *r* using the definition by [@rosenthal_meta-analytic_1986]. In this context, we do not expect LRP and SPM to produce identical results as they assess the data from different perspectives, but they should reveal similar trends, as we assume that discriminatory information learned by a classifier should also be statistically significant.

Experimental Setup
------------------

The following classification tasks represent the basis of our investigation:

- binary classification between healthy controls and all gait disorders ($HC/GD$),
- binary classification between healthy controls and each gait disorder separately (i.e. $HC/H$, $HC/K$, and $HC/A$),
- multi-class classification between healthy controls and the three gait disorder classes ($HC/H/K/A$), and
- multi-class classification between the three gait disorder classes ($H/K/A$).

The six classification tasks are based on a concatenated input vector of the three-dimensional GRF signals from both force plates, resulting in a 1$\times$606-dimensional input vector per gait trial. The three-dimensional GRF signals are the medio-lateral shear force ($F_{ML}$), anterior-posterior shear force ($F_{AP}$), and vertical force ($F_{V}$). The dataset included only unilateral gait disorders, and the data of the affected side (input features: 1 to 303) was concatenated before the data of the unaffected side (input features: 304 to 606) in the input vector. For the healthy controls the order was randomly assigned, while ensuring an equal distribution, in order to avoid any bias regarding the side (as there is no affected and unaffected side in the data of this class). Normalization of input vectors is commonly applied to ensure an equal contribution of all six GRF signals to the classification models and thus prevents signals with larger numeric ranges from dominating those with smaller numeric ranges [@Hsu.2016; @francois2017deep]. In order to evaluate the robustness of our models’ predictions and input relevance estimates with respect to normalization, we first conducted experiments without normalization. In a second step, we conducted the same experiments applying a min-max normalization and thereby scaled each signal to the range $[-1,1]$. The global minimum and maximum values were determined separately for each GRF signal and over all trials. Although feature extraction is an established step in the (gait) classification pipeline [@halilaj_machine_2018; @slijepcevic2018p], the improved applicability and interpretability of the XAI methods led us to refrain from using methods such as Principal Component Analysis (PCA) in the present work. In our experiments, three representative machine learning approaches, i.e. (linear) Support Vector Machine (SVM), Multi-layer Perceptron (MLP), and Convolutional Neural Network (CNN), were compared in terms of prediction accuracy and learned input relevance patterns. The SVM models were trained using a standard quadratic optimization algorithm, with an error penalty parameter $C=0.1$ and $\ell_2$-constrained regularization of the learned weight vector $w$. The MLP models consisted of three consecutive fully connected layers with ReLU non-linearities activating the hidden neurons and a final SoftMax activation in the output layer. The size of both hidden layers is 768, whereas the size of the output layer is $c$, where $c$ is the number of target classes.
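For illustration, the MLP architecture just described can be written down as the following PyTorch sketch. This is an illustrative reconstruction under the stated architectural details only; the actual experiments were run with the authors' own implementation, and the default number of classes is an assumption that depends on the task.

```python
import torch.nn as nn

def build_mlp(n_inputs=606, n_classes=2):
    """Three fully connected layers: two hidden layers with 768 ReLU
    units each and a SoftMax output over the c target classes."""
    return nn.Sequential(
        nn.Linear(n_inputs, 768), nn.ReLU(),
        nn.Linear(768, 768), nn.ReLU(),
        nn.Linear(768, n_classes), nn.Softmax(dim=1),
    )
```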
The CNN models process the given data via three consecutive convolutional layers, with a *$<$filter size$>$*-*$<$stride$>$*-*$<$output channel$>$* configuration of 8-2-24, 8-2-24 and 6-3-48, and ReLUs for non-linear neuron activation. The resulting 48$\times$48 feature mapping is then unrolled into a 2304-dimensional vector and fed into a fully connected layer. This fully connected layer is topped with a SoftMax output activation, which acts as a multi-class predictor over the $c$ target classes. Both the MLP and CNN models were trained via standard error back-propagation using stochastic gradient descent [@lecun2012efficient] and a mean absolute ($\ell_1$) loss function. The training procedure was executed for $3\cdot10^{4}$ iterations with mini-batches of five randomly selected training samples and an initial learning rate of $5\cdot10^{-3}$. The learning rate was decreased after every $10^{4}$-th training iteration, first to $10^{-3}$ (by a factor of $0.2$) and then to $5\cdot10^{-4}$ (by a factor of $0.5$). Model weights were initialized with random values drawn from a normal distribution with $\mu=0$ and $\sigma=m^{-\frac{1}{2}}$, where $m$ is the number of inputs to each output neuron of the layer [@lecun2012efficient]. Since the CNN receives a 1$\times$606-dimensional input vector, its convolution operations can be understood as 1D convolutions, moving over the time axis only. We used 1D convolutions to maintain comparability with the two other classification methods (MLP and SVM). However, preliminary experiments demonstrated negligible differences between 1D and 2D CNNs. The prediction accuracies were reported over a stratified ten-fold cross-validation configuration, where eight parts of the data are used for training, one part as a validation set, and the remaining part is reserved for testing. The samples from each class were distributed evenly while ensuring that all gait trials from an individual subject are placed in the same partition of the data, to rule out subject-related information influencing the measured model performance during testing. All results are reported as mean with standard deviation (SD), unless otherwise stated. Additionally, we calculated the Zero Rule baseline (ZRB) for each classification task. The ZRB refers to the theoretical accuracy obtained by assigning class labels according to the prior probabilities of the classes, i.e. the target labels are always set to the class with the greatest cardinality in the training dataset. With respect to the decision explanations for a specific classification task, we decomposed the input relevance values of each gait trial with LRP. This was possible because each trial was tested exactly once within the cross-validation configuration described above. For the purpose of our analysis, however, we used LRP only to decompose the prediction value corresponding to the class label of the trial. Thus, for the visualizations, we averaged the underlying GRF signals as well as the resulting input relevance values over all trials of the corresponding class within a given classification task. The data analysis was conducted within the software frameworks of Matlab 2017b (MathWorks, USA) and Python 3.7 (Python Software Foundation, USA).

Results
=======

The mean prediction accuracy showed a clear superiority over the ZRB for all three classification methods (CNN, SVM, and MLP) and all classification tasks except for task $H/K/A$ (see Figure \[img:classacc\]).
A 3$\times$2 repeated measures analysis of variance (ANOVA) (classification method: CNN, SVM, and MLP; normalization: min-max and no normalization) conducted for each classification task indicated a significant difference in classification accuracy between the three classifiers for tasks $HC/GD$ ($F_{2,18}$ = 3.949, p = 0.038, $\eta_{p}^{2}$ = 0.305), $HC/K$ ($F_{2,18}$ = 16.803, p $<$ 0.001, $\eta_{p}^{2}$ = 0.651), and $HC/H/K/A$ ($F_{2,18}$ = 3.718, p = 0.045, $\eta_{p}^{2}$ = 0.292). Additional pairwise and Bonferroni-corrected post-hoc tests revealed that the CNN resulted in a marginally ($\sim$1%) but significantly (p = 0.046) lower accuracy than the MLP for task $HC/GD$, and that the accuracy of the CNN was significantly (p $<$ 0.04) lower compared to both other classifiers in task $HC/K$ ($\sim$3%). No other differences were found for the classifiers’ performances. Regarding the normalization, only for task $HC/H/K/A$ ($F_{1,9}$ = 5.281, p = 0.047, $\eta_{p}^{2} $ = 0.370) did the ANOVA indicate that the accuracy was overall $\sim$2.8% higher with normalization than without. For the other tasks, no further effects or differences were indicated.

![Overview of the prediction accuracy obtained for the three employed classification methods (CNN, SVM and MLP) and all six classification tasks with min-max normalized and non-normalized input signals, reported as mean (standard deviation) in percent over the ten-fold cross-validation.[]{data-label="img:classacc"}](figures/Table2-BarChart.pdf){width="0.5\linewidth"}

As an example of decision explanation using LRP, Figure \[img:cnn-nonorm-NGD\] shows the averaged signals together with the color-coded averaged relevance values for each of the 606 input values for task $HC/GD$ with non-normalized GRF signals. The input relevance values point out which GRF characteristics were most relevant for (or contradictory to) the classification of a certain class ($HC$ or $GD$). For visualization, input values neutral to the prediction ($R_i \approx 0$) are shown in black, while warm hues indicate input values supporting the prediction ($R_i \gg 0 $) of the analyzed class and cool hues identify contradictory input values ($R_i \ll 0 $). For the binary classification tasks ($HC/GD$, $HC/H$, $HC/K$, and $HC/A$), note that a high input relevance value for one class results in a contradictory input relevance value for the other class. Therefore, the total relevance is a good indicator of the overall relevance of an input value for a respective classification task.

![Results overview for the classification of healthy controls ($HC$) and the aggregated class of all three gait disorders ($GD$) based on non-normalized GRF signals using a CNN as classifier. (A) Averaged GRF signals for $HC$ and $GD$. The first three signals represent the three GRF components of the affected side and are followed by the three GRF components of the unaffected side. Note that the data for both sides is composed of three GRF components (i.e. input features of the affected side: 1 to 101 ($F_{ML}$), 102 to 202 ($F_{AP}$), and 203 to 303 ($F_{V}$)). This means, for example, that input features 21 ($F_{ML}$), 122 ($F_{AP}$) and 223 ($F_{V}$) all correspond to the relative time of 20% of the same stance phase. The shaded areas highlight areas in the input signals where SPM resulted in a statistically significant difference between both classes (i.e. $HC$ and $GD$).
(B) Averaged GRF signals of all test trials as a line plot for the healthy controls class, with a band of one standard deviation, color coded via input relevance values for the class ($HC$) obtained using LRP. (C) Averaged GRF signals of all test trials as a line plot for the class of all gait disorders ($GD$), in the same format as in (B). (D) Line plot showing the effect size obtained from SPM and the total relevance based on the absolute sum of the input relevance values of both classes ($HC$ and $GD$). The total relevance indicates the common relevance of the input signal for the classification task.[]{data-label="img:cnn-nonorm-NGD"}](figures/nonorm_1-234_Cnn1DC8_and_spmFrontiers_final.pdf){width="1\linewidth"}

The highest input relevance values were observed in $F_{V}$ of the affected side, as illustrated in Figure \[img:cnn-nonorm-NGD\]. Consistently, the mean GRF signals of both classes (see Figure \[img:cnn-nonorm-NGD\]A) as well as the SPM analysis (gray-shaded areas in Figure \[img:cnn-nonorm-NGD\]) indicate differences between both classes in the same regions of $F_{V}$ of the affected side, which the SPM analysis marks as statistically significant. While LRP points out that the input values of both horizontal shear forces ($F_{AP}$ and $F_{ML}$) appear to be mostly neutral to the prediction, it is noticeable that the SPM analysis highlighted considerably more regions within $F_{AP}$ and $F_{ML}$ as statistically significantly different between the two classes. This contradictory behavior is subject to further analysis below.

![Same experiment as shown in Figure \[img:cnn-nonorm-NGD\] but using min-max normalized GRF signals.[]{data-label="img:cnn-norm-NGD"}](figures/atMM_1-234_Cnn1DC8_and_spmFrontiers_final.pdf){width="1\linewidth"}

In a second experiment, the classification task $HC/GD$ based on min-max normalized GRF signals confirmed the identified regions of high relevance in $F_{V}$ observed in Figure \[img:cnn-nonorm-NGD\], but additionally highlighted regions of high relevance in both horizontal shear forces (see Figure \[img:cnn-norm-NGD\]). The regions with the highest input relevance values for the prediction are observed at approximately 20% of the stance phase in $F_{AP}$ of the unaffected side, at approximately 80% of the stance phase in $F_{AP}$ of the affected side, and in $F_{V}$ throughout the stance phase from approximately 20% to 80%. In addition, high input relevance values can be observed in $F_{V}$ during the initial and terminal contact of the affected and unaffected side. Decision explanation methods like LRP further allow us to compare and better understand different classification methods. Figure \[img:cnn-svm-mlp-norm-NGD\] shows the results for all three employed classification methods (CNN, SVM, and MLP) and confirms their general comparability for the task $HC/GD$ (with min-max normalized GRF signals as in Figure \[img:cnn-norm-NGD\]). However, with respect to $F_{V}$, the highest input relevance values can be observed in the peak regions for the CNN, while the highest input relevance values for SVM and MLP are present during the initial and terminal contact.

![Comparison of different methods (CNN, SVM, and MLP) for the classification of healthy controls and the class of all three gait disorders ($HC/GD$) based on min-max normalized GRF signals (only the signals of the affected side are shown).
The comparison is based on the decomposed input relevance values for both classes ($HC$ and $GD$) and their total relevance determined by LRP, as well as statistically significant differences (gray-shaded areas) and effect sizes obtained by SPM. Note that the effect size (green curve) is the same for all three classifiers but the total relevance varies.[]{data-label="img:cnn-svm-mlp-norm-NGD"}](figures/comparison_4.pdf){width="1\linewidth"}

Discussion
==========

The primary aim of this article is to investigate whether XAI methods can enhance transparency, explainability and interpretability of predictions in automated clinical gait classification. There are three dimensions in our experimental setup: (i) different classification tasks, (ii) different classification methods, and (iii) data normalization. The classification results are analyzed, compared and interpreted in terms of classification accuracy and the relevance of input values for specific decisions. This input relevance information is furthermore evaluated and compared from a statistical and clinical viewpoint.

Classification Results {#subsec:classificationResults}
----------------------

The results expressed in terms of classification accuracy (presented in Figure \[img:classacc\]) demonstrate a comparable level of performance between the three different machine learning methods (CNN, SVM, and MLP). An objective analysis of the explainability is only meaningful if a classifier robustly differentiates between the target classes. Therefore, we excluded the tasks $HC/H/K/A$ and $H/K/A$ from our further investigation. The inability to distinguish the classes in both multi-class classification tasks indicates that gait disorders exhibit complex compensation patterns across several joints, which are difficult to fathom in detail with a measure such as the GRF. Therefore, the tasks $HC/H/K/A$ and $H/K/A$ bear the potential risk that no robust and stable patterns can be found and that the influence of noise and spurious correlations biases the explainability analysis. For the binary classification tasks this risk is much lower, because the higher classification accuracies obtained (and deviations from the Zero Rule baseline) suggest that robust features can be found in the input data. Another aspect we assessed is the influence of normalization of the input data (see Figure \[img:classacc\]). The normalization of the input data is important for machine learning, since highly differing value ranges can have a negative influence on the classifier, i.e. input variables with a larger value range have a stronger influence on the decision [@Hsu.2016; @francois2017deep]. In addition, non-normalized data can lead to unstable and non-convergent learning. A comparison between the amplitude value ranges of the non-normalized shear forces ($F_{AP}$ and $F_{ML}$) and $F_{V}$ clearly shows a substantial difference (see Figure \[img:cnn-nonorm-NGD\]). Therefore, we min-max normalized the data to investigate the degree of influence of normalization on the classification results and the derived explanations. Surprisingly, min-max normalization does not significantly improve the classification results (see Figure \[img:classacc\]) for any of the investigated binary classification tasks ($HC/GD$, $HC/H$, $HC/K$, and $HC/A$). The absence of an increase in prediction accuracy raises the question whether the use of $F_{V}$ is already sufficient for the given classification tasks.
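For reference, the per-signal min-max scaling to $[-1,1]$ described in the experimental setup can be sketched as follows. This is a NumPy illustration only; the array layout with one axis per GRF component is an assumption.

```python
import numpy as np

def minmax_normalize(signals):
    """Scale each GRF component to [-1, 1] using its global minimum
    and maximum over all trials and time points.
    signals: array of shape (n_trials, n_components, n_time)."""
    lo = signals.min(axis=(0, 2), keepdims=True)
    hi = signals.max(axis=(0, 2), keepdims=True)
    return 2.0 * (signals - lo) / (hi - lo) - 1.0
```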
Although normalization does not improve the classification accuracies, the explainability results are strongly affected. We discuss this divergent behavior in the following.

Explainability Results {#subsec:explResults}
----------------------

In the following, we discuss the explainability results using the detailed example of the CNN as a classifier and $HC/GD$ as a classification task. Relative to this reference setting, we also summarize relevant differences in the results for the other classifiers and classification tasks. The visualizations for all classification tasks and classification methods can be found in the supplementary material (see Supplementary Figures S \[img:sup-cnn-nonorm-NGD\]–S \[img:sup-svm-norm-NA\]).

**Which features are most relevant?** For the classification of non-normalized GRF signals with a CNN (see Figure \[img:cnn-nonorm-NGD\]), the most relevant input values are mainly located in $F_{V}$ of the affected side; in particular, the two peaks and the valley in between are relevant for the decision. This shows that the CNN learned that classes $HC$ and $GD$ differ most in these three sections of the signals. These results were also confirmed in our earlier studies, e.g. where the examination of the most frequently used discrete parameters also showed high relevance for the peaks of $F_{V}$ and the valley in between [@slijepcevic2017automatic].

**Is the unaffected side important?** Identified relevant regions are considerably less pronounced in $F_{V}$ of the unaffected side, but they correlate to a large extent with those of the affected side, except that only the rear part of the valley and not the entire valley is relevant (best recognized from the total relevance curve in Figure \[img:cnn-nonorm-NGD\]D). In earlier studies [@slijepcevic2018p; @SLIJEPCEVIC2019], we showed that the omission of the unaffected side during classification negatively affected classification accuracy. The explainability results confirm this observation. Thus, the unaffected side seems to capture complementary information relevant to the classification task.

**Are the shear forces relevant for the task?** A minimal degree of relevance can also be observed in the peaks of the affected and unaffected $F_{AP}$ signals. The absence of evident relevant regions in the shear forces ($F_{AP}$ and $F_{ML}$) does not confirm our results from previous studies, where we showed that adding shear forces indeed improved classification performance (leading even to peak performance). From this experiment, the question whether or not shear forces are beneficial for the task cannot be answered conclusively. Interestingly, the statistical analysis via SPM highlights regions in the shear forces that differ statistically significantly between classes $HC$ and $GD$.

**What is the impact of normalization on explainability?** The reason for the absence of relevant regions in the shear forces could be their small value range. Signals with such a small value range compared to the $F_{V}$ component may have a negligible influence on the training process of the classifiers. We applied normalization to the inputs to answer this question. The results for the same classification task and CNN architecture with min-max normalized input data show that, with normalization, numerous relevant regions can be found in the shear forces of the affected and unaffected side (see Figure \[img:cnn-norm-NGD\]). Normalization amplifies the relevance of values in the shear forces and thereby makes them comparable in importance to $F_{V}$.
Thus, normalization is important to obtain unbiased explainability results.

**Are all identified relevant regions necessary for the task?** In general, with min-max normalized input, many regions of the GRF signals appear relevant for the classification of a particular class. The classification performance with and without normalization does, however, not vary significantly (see Figure \[img:classacc\]). This raises the question whether all identified regions are actually necessary to achieve peak performance in classification or whether some of them are redundant. Note that the assumption of redundancy is supported by the fact that the three force components represent individual dimensions of the same three-dimensional physical process. Thus, strong correlation in the data is a priori given. To answer this question, we occluded parts of the input vector in the classification experiment and evaluated the changes in classification performance. Occlusion is realized by replacing the shear forces ($F_{AP}$ and $F_{ML}$) of both sides with zero values and retraining the classifier. Table \[table:shearzero-classification-results\] shows the classification results for the occluded input. To enable easier comparison with the previous results, the deviations from the mean classification accuracy of the non-occluded experiments (from Figure \[img:classacc\]) are displayed for all binary classification tasks. In general, the results decrease on average when the shear forces are occluded, except for task $HC/A$ with min-max normalized input data. Furthermore, the decrease is more pronounced for min-max normalized input data than for non-normalized input data. This further corroborates our assumption that normalization is important to take information from the shear forces into account. However, the classification results of the binary classification tasks are not statistically significantly influenced by the occlusion of shear forces. This was also confirmed by several dependent t-tests (p $>$ 0.05). Our results indicate that the relevant regions identified by LRP may represent an over-complete set which exhibits a certain degree of redundancy. Removing one section does not necessarily reduce classification performance. However, model predictions that are based on a higher number of features have been shown to be more robust to noise and possibly also to outliers and missing data [@horst2019explaining].

**Are shear forces relevant for the task (question revisited)?** The effects of occluded shear forces are illustrated in Table \[table:shearzero-classification-results\]. Especially for experiments with min-max normalized data, this led to a decrease in performance (e.g. for tasks $HC/H$ and $HC/K$). Thus, the relevant regions in the shear forces cannot be completely redundant to those in $F_{V}$ and, therefore, also represent complementary information. This is also in line with our previous quantitative evaluations [@SLIJEPCEVIC2019].

| Task  | Normalization | ZRB  | SVM  | MLP  | CNN  |
|-------|---------------|------|------|------|------|
| HC/GD | no norm.      | 68.0 | -0.9 | -1.0 | -0.2 |
| HC/GD | min-max       | 68.0 | -1.1 | -2.0 | -0.5 |
| HC/H  | no norm.      | 62.6 | -1.8 | -2.1 | -2.0 |
| HC/H  | min-max       | 62.6 | -2.2 | -3.7 | -3.2 |
| HC/K  | no norm.      | 54.4 | -1.8 | -1.9 | -0.7 |
| HC/K  | min-max       | 54.4 | -4.1 | -5.0 | -2.1 |
| HC/A  | no norm.      | 59.0 | -1.6 | -1.8 | -1.6 |
| HC/A  | min-max       | 59.0 | 1.2  | 0.9  | -0.1 |

: Classification results with occluded shear forces: deviation from the mean classification accuracy of the corresponding non-occluded experiments (cf. Figure \[img:classacc\]), together with the Zero Rule baseline (ZRB), for all binary classification tasks.[]{data-label="table:shearzero-classification-results"}

**Do different classifiers rely on different patterns?** A condensed comparison of the three employed classification methods is depicted in Figure \[img:cnn-svm-mlp-norm-NGD\].
The LRP relevance values are consistent for both normalization modalities. For non-normalized data (e.g. for task $HC/GD$ see Supplementary Figures S \[img:sup-cnn-nonorm-NGD\], S \[img:sup-mlp-nonorm-NGD\], and S \[img:sup-svm-nonorm-NGD\]), the relevant regions for SVM and MLP largely correspond across all binary classification tasks. The CNN matches the relevant regions of SVM and MLP in broad terms. The relevant regions in $F_V$ of the unaffected side are, however, considerably less pronounced for the CNN than for SVM and MLP (compare Supplementary Figures S \[img:sup-cnn-nonorm-NGD\], S \[img:sup-mlp-nonorm-NGD\], and S \[img:sup-svm-nonorm-NGD\]); e.g. for task $HC/GD$, the valley of the unaffected side is hardly relevant and the second peak of the unaffected side is considerably less relevant than for SVM and MLP. For min-max normalized data (see Figure \[img:cnn-svm-mlp-norm-NGD\]), the relevant regions for SVM and MLP also coincide to a large extent. The relevant regions of the CNN correspond to those of SVM and MLP in terms of location, but carry considerably more relevance (best visible in the total relevance curves in the right part of Figure \[img:cnn-svm-mlp-norm-NGD\]). The most pronounced difference between the classification methods can be observed in the input relevance curves at the beginning and the end of $F_{V}$. While LRP indicates that those regions are relevant for SVM and MLP, the total relevance curve of the CNN does not show any correspondence in those regions. The remaining binary classification tasks, i.e. $HC/H$ (see Supplementary Figures S \[img:sup-cnn-nonorm-NH\]–S \[img:sup-svm-norm-NH\]), $HC/K$ (see Supplementary Figures S \[img:sup-cnn-nonorm-NK\]–S \[img:sup-svm-norm-NK\]), and $HC/A$ (see Supplementary Figures S \[img:sup-cnn-nonorm-NA\]–S \[img:sup-svm-norm-NA\]), generally confirm the discussed findings. Overall, with regard to our second research direction, XAI clearly shows the importance of data normalization for obtaining unbiased explanations. Furthermore, XAI allows us to compare classifiers and the patterns they rely on, even though some patterns are difficult to interpret (e.g. the strongly relevant patterns at the beginning and the end of $F_{V}$ for SVM and MLP). The beginning and the end of the stance phase are characterized by a higher degree of inter- and intra-subject variability [@BIZOVSKA2014399]. In the absence of ground truth information for automatically generated explanations, it is difficult to assess whether these relevant regions for SVM and MLP are related to meaningful gait characteristics (e.g. characteristics that are particularly evident in the more “unstable” initial and final stance phases with respect to balance) or to bias (e.g. related to the recording or processing of the GRF signals). Although XAI cannot explain these patterns in detail, it enables us to identify and compare the learning strategies of different classification methods and thus to point out potentially degenerate strategies, influences of noise, and spurious correlations.

Statistical Evaluation of Decision Explanations using Statistical Parameter Mapping {#subsec:statisticalResults}
------------------------------------------------------------------------------------

In the following, we compare the regions found to be statistically significantly different by SPM with those that received high relevance estimates from LRP. Our expectation was that these regions are related to some degree, as statistically significantly different features are more likely to be beneficial for classification.
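As a small illustration of how the two curves juxtaposed in this comparison are obtained, the sketch below computes the SPM-based effect size (Pearson's *r* from pointwise *t*-values) and the total relevance (the sum of the absolute class-wise LRP relevances shown in panel D of the result figures). Array names are hypothetical.

```python
import numpy as np

def effect_size_r(t_vals, df):
    """Pointwise conversion of t-values to Pearson's r."""
    t_vals = np.asarray(t_vals)
    return np.sqrt(t_vals**2 / (t_vals**2 + df))

def total_relevance(r_class_a, r_class_b):
    """Total relevance: sum of the absolute class-wise LRP relevances."""
    return np.abs(r_class_a) + np.abs(r_class_b)
```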
In the vast majority of cases, the SPM analysis shows statistically significant differences in regions which are also highly relevant for classification. Thus, for binary classification tasks, machine learning models base their predictions primarily on features that are significantly different between the two classes. This can be observed in the $HC/GD$ classification for both min-max normalized and non-normalized GRF signals, i.e. as the total relevance increases, the effect size usually also increases (see Figure \[img:cnn-nonorm-NGD\]D and Figure \[img:cnn-norm-NGD\]D). However, it is again noticeable that, especially for the classification of non-normalized GRF signals, considerably fewer features are relevant for the machine learning models than the SPM analysis identifies as statistically different. In this context, the SPM analysis proves to be a reliable reference for XAI approaches, since it is invariant to input data normalization and value ranges. For non-normalized signals, SPM provided a valuable indication of the presence of a bias. While LRP was not able to declare certain regions in the shear forces as relevant (actually due to biased machine learning models, not an insufficiency of LRP), SPM clearly showed that there are statistically validated differences. As mentioned above, input normalization leads to an increased number of relevant regions. Furthermore, there is a greater overlap between the results of LRP and SPM for the classification of min-max normalized data. This also implies an advantage in the use of normalized input vectors for gait classification (even though for the present dataset the classification accuracy does not increase), which is in accordance with the machine learning literature [@Hsu.2016; @francois2017deep].

Clinical Evaluation of Decision Explanations {#subsec:clinicalEval}
--------------------------------------------

The visualizations of the classification results derived from min-max normalized data illustrate certain clinically meaningful patterns. For classification task $HC/A$ (see Supplementary Figure S \[img:sup-cnn-norm-NA\]) one can identify pronounced peaks in the total relevance curves of $F_{AP}$ of the affected and unaffected side. These regions are highly relevant for the classification purpose, as indicated by LRP. From a clinical perspective this observation seems plausible, as an impaired ankle joint is likely to impact forward step propulsion in the terminal stance (TS) phase due to a limited range of motion, reduced muscle strength, and/or the presence of pain. As a consequence, the contralateral (unaffected) side also shows aberrations in $F_{AP}$ around the initial contact (IC) of the foot, caused by the lower impulse generated by the affected side. In contrast, for classification task $HC/K$ the highest LRP relevance values are present in $F_{V}$ and $F_{AP}$ (see Supplementary Figure S \[img:sup-cnn-norm-NK\]). Changes in $F_{V}$ may result from reduced knee flexibility that hinders typical knee dynamics over the entire course of the stance phase. More precisely, healthy walking requires a fully extended knee joint during IC and a slight knee flexion thereafter, which cushions the body weight during the mid-stance (MS) phase and is by definition called the loading response. Moreover, further knee extension is essential to enable forward propulsion in late MS and TS, which may be insufficient in the case of a decreased range of motion in the knee joint and/or a lack of muscle strength.
These altered dynamics also affect GRF measurements in $F_{AP}$ in a similar way to the changes described above for the ankle joint, specifically observable during IC due to a possibly flattened foot position and in TS due to reduced forward propulsion. The highest LRP relevance values for the classification task $HC/H$ are obtained during IC in $F_{V}$ of the affected side and in $F_{AP}$ of the unaffected side (see Supplementary Figure S \[img:sup-cnn-norm-NH\]). These results may be ascribed to lowered impact and weight bearing in the early stance phase (due to a more cautious walking strategy) in order to avoid excessive load on the affected hip joint. Furthermore, the lowered braking impulse in $F_{AP}$ on the unaffected side can be traced back to lowered forward propulsion induced by possibly insufficient hip extension on the affected side. However, in contrast to the LRP results for the knee joint, relevance values for MS in $F_V$ are low in this specific task. This particular observation could be explained by the role of the knee and ankle joints, which are essential for generating typical gait dynamics in MS but are not necessarily restricted in people with hip impairments. The classification task $HC/GD$ (see Figure \[img:cnn-norm-NGD\]) highlights once again the significance of the IC ($F_{V}$ of the affected side and $F_{AP}$ of the unaffected side), which is relevant for classification purposes across all groups. Other relevant areas – even if not as distinctive – reflect the general characteristics that were already presented within the pairwise comparisons described above, such as a pronounced second peak and altered gait dynamics in MS of $F_{V}$ as well as lessened forward propulsion in $F_{AP}$ of the affected side. As the results presented above are based on min-max normalized data, the question arises whether similar observations can be derived without normalization. The decision explanations for non-normalized data (see Figure \[img:cnn-nonorm-NGD\] and Supplementary Figures S \[img:sup-cnn-nonorm-NH\], S \[img:sup-cnn-nonorm-NK\], S \[img:sup-cnn-nonorm-NA\]) clearly show a different picture, even if the classification results are comparable to those obtained with min-max normalized data. Relevant regions can only be found in $F_{V}$ of the GRFs. These observations again highlight the fact that normalization affects explainability and, therefore, needs to be considered also from a clinical viewpoint. However, with regard to our first research direction, it can be concluded that the employed classifiers, when trained on min-max normalized data and combined with LRP, serve well for identifying clinically relevant features.

Conclusion
==========

The present findings highlight that complex machine learning models, such as CNNs, base their predictions on meaningful features of GRF signals in a clinical gait classification task (features that are in accordance with a statistical and clinical evaluation). Hence, XAI methods that make the decisions of machine learning models explainable, such as LRP, can be promising solutions to increase the transparency of automatic classification predictions in CGA and can help to make the decision processes comprehensible to clinical and legal experts. Thereby, XAI may facilitate the application of AI-based decision-support systems in clinical practice.
Within the scope of our analysis we were able to show that:

- Although the three classification methods investigated – CNN, MLP and SVM – achieved similar classification accuracies, minor differences can be observed in the regions that are relevant for their predictions. In addition, CNNs showed the greatest agreement with the statistical and clinical evaluation.
- Input data normalization allows machine learning models to consider features from various input signals for their predictions (especially if the value ranges differ as much as for the three force components of the GRF). Without normalization, only relevant regions in $F_V$ were identified.
- Highly relevant regions were identified in the signals of the affected and unaffected side. Thus, the unaffected side captures additional information that is relevant for automated gait classification.
- For the investigated binary gait classification tasks, machine learning models seem to learn an over-complete set of features that may contain redundant information. This might explain why the occlusion of shear forces had negligible influence on the classification accuracies and also why the classification accuracies for normalized and non-normalized GRF signals were comparable.

However, the present paper can only be considered a first step in this direction. Further research is necessary in order to compare different decision explanation methods and rules [@kohlbrenner2019towards] for different classification tasks and datasets. In addition, quantitative methods are needed to assess the quality of decision explanations [@samek_2017_evaluating]. For time-series data such as GRFs, SPM proved to be a suitable statistical reference for XAI methods.

Conflict of Interest Statement {#conflict-of-interest-statement .unnumbered}
==============================

The authors declare that the research was conducted in the absence of any commercial or financial relationships that could be construed as a potential conflict of interest.

Funding {#funding .unnumbered}
=======

This work was partly funded by the Austrian Research Promotion Agency (FFG) and the BMDW within the COIN-program (\#866855), by the Lower Austrian Research and Education Company (NFB), and by the Provincial Government of Lower Austria (\#FTI17-014). Further support was received from the German Ministry for Education and Research through the Berlin Big Data Centre (\#01IS14013A), the Berlin Center for Machine Learning (\#01IS18037I) and TraMeExCo (\#01IS18056A).

Acknowledgments {#acknowledgments .unnumbered}
===============

We want to thank Marianne Worisch, Szava Zoltán, and Theresa Fischer for their great assistance in data preparation and their support with clinical and technical questions.

Data Availability Statement {#data-availability-statement .unnumbered}
===========================

For our analysis, we used a subset of a large dataset that is currently being prepared for an open-source publication as the [GaitRec]{} dataset in an online repository. The data and the experimental code will be made publicly available on GitHub after publication.

Supplementary Material {#supplementary-material .unnumbered}
======================

The present Supplementary Material is intended to present additional results we generated for the paper **“On the Understanding and Interpretation of Machine Learning Predictions in Clinical Gait Analysis Using Explainable Artificial Intelligence”**.
The primary aim of this article is to investigate to what degree Explainable Artificial Intelligence (XAI) can increase the explainability and transparency of automatic decisions in the context of clinical gait analysis. In detail, this study exemplifies how XAI can be used to make clinical gait classification and prediction results understandable and traceable for clinical experts. For this purpose, we define several gait classification tasks, employ a representative set of classifiers – (linear) Support Vector Machine (SVM), Multi-layer Perceptron (MLP), and Convolutional Neural Network (CNN) – and a well-established XAI method – Layer-wise Relevance Propagation (LRP) – to explain decisions at the signal (input) level. In addition to an evaluation of the explanations by a clinical expert, as a second reference, we propose the use of Statistical Parameter Mapping (SPM) to verify the obtained results from a statistical point of view. The dataset employed comprises ground reaction force (GRF) measurements from 132 patients with gait disorders ($GD$) and data from 62 healthy controls ($HC$). The $GD$ class is furthermore differentiated into three classes of gait disorders associated with the hip ($H$), knee ($K$), and ankle ($A$). Due to the high classification accuracies obtained, the classification tasks that form the basis of the XAI investigation include a binary classification between healthy controls and all gait disorders ($HC/GD$) and binary classifications between healthy controls and each gait disorder separately, i.e. $HC/H$, $HC/K$, and $HC/A$. The following figures visualize decision explanations obtained with LRP. The input vector for the classifiers comprises concatenated affected and unaffected GRF signals. These GRF signals are time-normalized to 101 points (100% stance), thus the input vector contains 606 values. For each value, LRP indicates whether it is relevant to the classification or not. Sub-figure (A) shows mean GRF signals averaged over each class of the classification task. The shaded areas in all sub-figures highlight areas in the input signals where SPM resulted in a statistically significant difference between both classes. Sub-figure (B) shows mean GRF signals (including a band of one standard deviation) for the $HC$ class. The input relevance indicates which GRF characteristics were most relevant for (or contradictory to) the classification of a certain class. For visualization, input values neutral to the prediction ($R_i \approx 0$) are shown in black, while warm hues indicate input values supporting the prediction ($R_i \gg 0 $) of the analyzed class and cool hues identify contradictory input values ($R_i \ll 0 $). Sub-figure (C) depicts mean GRF signals averaged over a pathological class ($H$, $K$, or $A$) or all gait disorders ($GD$), in the same format as in sub-figure (B). Sub-figure (D) shows the effect size obtained from SPM and the total relevance, which is calculated as the sum of the absolute input relevance values of both classes. The total relevance indicates the common relevance of the input signal for the classification task.
Classification Task: $HC/GD$
============================

Classifier: CNN
---------------

![Result overview for the classification of healthy controls and the aggregated class of all three gait disorders ($HC/GD$) based on non-normalized GRF signals using a CNN as classifier.[]{data-label="img:sup-cnn-nonorm-NGD"}](figures/nonorm_1-234_Cnn1DC8_and_spmFrontiers_final.pdf){width="0.9\linewidth"}

![Result overview for the classification of healthy controls and the aggregated class of all three gait disorders ($HC/GD$) based on min-max normalized GRF signals using a CNN as classifier.[]{data-label="img:sup-cnn-norm-NGD"}](figures/atMM_1-234_Cnn1DC8_and_spmFrontiers_final.pdf){width="0.9\linewidth"}

Classifier: MLP
---------------

![Result overview for the classification of healthy controls and the aggregated class of all three gait disorders ($HC/GD$) based on non-normalized GRF signals using a MLP as classifier.[]{data-label="img:sup-mlp-nonorm-NGD"}](figures/nonorm_1-234_Mlp3Layer768Unit_and_spmFrontiers_final.pdf){width="0.9\linewidth"}

![Result overview for the classification of healthy controls and the aggregated class of all three gait disorders ($HC/GD$) based on min-max normalized GRF signals using a MLP as classifier.[]{data-label="img:sup-mlp-norm-NGD"}](figures/atMM_1-234_Mlp3Layer768Unit_and_spmFrontiers_final.pdf){width="0.9\linewidth"}

Classifier: SVM
---------------

![Result overview for the classification of healthy controls and the aggregated class of all three gait disorders ($HC/GD$) based on non-normalized GRF signals using a SVM as classifier.[]{data-label="img:sup-svm-nonorm-NGD"}](figures/nonorm_1-234_SvmLinearL2C1em1_and_spmFrontiers_final.pdf){width="0.9\linewidth"}

![Result overview for the classification of healthy controls and the aggregated class of all three gait disorders ($HC/GD$) based on min-max normalized GRF signals using a SVM as classifier.[]{data-label="img:sup-svm-norm-NGD"}](figures/atMM_1-234_SvmLinearL2C1em1_and_spmFrontiers_final.pdf){width="0.9\linewidth"}

Classification Task: $HC/H$
===========================

Classifier: CNN
---------------

![Result overview for the classification of healthy controls ($HC$) and hip injury class ($H$) based on non-normalized GRF signals using a CNN as classifier.[]{data-label="img:sup-cnn-nonorm-NH"}](figures/nonorm_1-4_Cnn1DC8_and_spmFrontiers_final.pdf){width="0.9\linewidth"}

![Result overview for the classification of healthy controls ($HC$) and hip injury class ($H$) based on min-max normalized GRF signals using a CNN as classifier.[]{data-label="img:sup-cnn-norm-NH"}](figures/atMM_1-4_Cnn1DC8_and_spmFrontiers_final.pdf){width="0.9\linewidth"}

Classifier: MLP
---------------

![Result overview for the classification of healthy controls ($HC$) and hip injury class ($H$) based on non-normalized GRF signals using a MLP as classifier.[]{data-label="img:sup-mlp-nonorm-NH"}](figures/nonorm_1-4_Mlp3Layer768Unit_and_spmFrontiers_final.pdf){width="0.9\linewidth"}

![Result overview for the classification of healthy controls ($HC$) and hip injury class ($H$) based on min-max normalized GRF signals using a MLP as classifier.[]{data-label="img:sup-mlp-norm-NH"}](figures/atMM_1-4_Mlp3Layer768Unit_and_spmFrontiers_final.pdf){width="0.9\linewidth"}

Classifier: SVM
---------------

![Result overview for the classification of healthy controls ($HC$) and hip injury class ($H$) based on non-normalized GRF signals using a SVM as classifier.[]{data-label="img:sup-svm-nonorm-NH"}](figures/nonorm_1-4_SvmLinearL2C1em1_and_spmFrontiers_final.pdf){width="0.9\linewidth"}

![Result overview for the classification of healthy controls ($HC$) and hip injury class ($H$) based on min-max normalized GRF signals using a SVM as classifier.[]{data-label="img:sup-svm-norm-NH"}](figures/atMM_1-4_SvmLinearL2C1em1_and_spmFrontiers_final.pdf){width="0.9\linewidth"}

Classification Task: $HC/K$
===========================

Classifier: CNN
---------------

![Result overview for the classification of healthy controls ($HC$) and knee injury class ($K$) based on non-normalized GRF signals using a CNN as classifier.[]{data-label="img:sup-cnn-nonorm-NK"}](figures/nonorm_1-3_Cnn1DC8_and_spmFrontiers_final.pdf){width="0.9\linewidth"}

![Result overview for the classification of healthy controls ($HC$) and knee injury class ($K$) based on min-max normalized GRF signals using a CNN as classifier.[]{data-label="img:sup-cnn-norm-NK"}](figures/atMM_1-3_Cnn1DC8_and_spmFrontiers_final.pdf){width="0.9\linewidth"}

Classifier: MLP
---------------

![Result overview for the classification of healthy controls ($HC$) and knee injury class ($K$) based on non-normalized GRF signals using a MLP as classifier.[]{data-label="img:sup-mlp-nonorm-NK"}](figures/nonorm_1-3_Mlp3Layer768Unit_and_spmFrontiers_final.pdf){width="0.9\linewidth"}

![Result overview for the classification of healthy controls ($HC$) and knee injury class ($K$) based on min-max normalized GRF signals using a MLP as classifier.[]{data-label="img:sup-mlp-norm-NK"}](figures/atMM_1-3_Mlp3Layer768Unit_and_spmFrontiers_final.pdf){width="0.9\linewidth"}

Classifier: SVM
---------------

![Result overview for the classification of healthy controls ($HC$) and knee injury class ($K$) based on non-normalized GRF signals using a SVM as classifier.[]{data-label="img:sup-svm-nonorm-NK"}](figures/nonorm_1-3_SvmLinearL2C1em1_and_spmFrontiers_final.pdf){width="0.9\linewidth"}

![Result overview for the classification of healthy controls ($HC$) and knee injury class ($K$) based on min-max normalized GRF signals using a SVM as classifier.[]{data-label="img:sup-svm-norm-NK"}](figures/atMM_1-3_SvmLinearL2C1em1_and_spmFrontiers_final.pdf){width="0.9\linewidth"}

Classification Task: $HC/A$
===========================

Classifier: CNN
---------------

![Result overview for the classification of healthy controls ($HC$) and ankle injury class ($A$) based on non-normalized GRF signals using a CNN as classifier.[]{data-label="img:sup-cnn-nonorm-NA"}](figures/nonorm_1-2_Cnn1DC8_and_spmFrontiers_final.pdf){width="0.9\linewidth"}

![Result overview for the classification of healthy controls ($HC$) and ankle injury class ($A$) based on min-max normalized GRF signals using a CNN as classifier.[]{data-label="img:sup-cnn-norm-NA"}](figures/atMM_1-2_Cnn1DC8_and_spmFrontiers_final.pdf){width="0.9\linewidth"}

Classifier: MLP
---------------

![Result overview for the classification of healthy controls ($HC$) and ankle injury class ($A$) based on non-normalized GRF signals using a MLP as classifier.[]{data-label="img:sup-mlp-nonorm-NA"}](figures/nonorm_1-2_Mlp3Layer768Unit_and_spmFrontiers_final.pdf){width="0.9\linewidth"}

![Result overview for the classification of healthy controls ($HC$) and ankle injury class ($A$) based on min-max normalized GRF signals using a MLP as classifier.[]{data-label="img:sup-mlp-norm-NA"}](figures/atMM_1-2_Mlp3Layer768Unit_and_spmFrontiers_final.pdf){width="0.9\linewidth"}

Classifier: SVM
---------------

![Result overview for the classification of healthy controls ($HC$) and ankle injury class ($A$) based on non-normalized GRF signals using a SVM as classifier.[]{data-label="img:sup-svm-nonorm-NA"}](figures/nonorm_1-2_SvmLinearL2C1em1_and_spmFrontiers_final.pdf){width="0.9\linewidth"}

![Result overview for the classification of healthy controls ($HC$) and ankle injury class ($A$) based on min-max normalized GRF signals using a SVM as classifier.[]{data-label="img:sup-svm-norm-NA"}](figures/atMM_1-2_SvmLinearL2C1em1_and_spmFrontiers_final.pdf){width="0.9\linewidth"}

[^1]: [SPM1D]{} v.0.4,
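The result overviews above compare three classifiers (CNN, MLP, SVM) trained on non-normalized and min-max normalized GRF signals. As a minimal sketch of this pipeline, assuming per-signal min-max scaling and a linear, L2-regularized SVM with $C=0.1$ (a value read off the result filenames, not stated in the text), one could proceed as follows; the arrays are placeholders for the actual GRF dataset.

```python
import numpy as np
from sklearn.model_selection import cross_val_score
from sklearn.svm import LinearSVC

# Placeholder data: 200 trials, 3 GRF components, 100 time points per stance
# phase.  These arrays stand in for the real dataset, which is not shown here.
rng = np.random.default_rng(0)
grf = rng.normal(size=(200, 3, 100))
labels = rng.integers(0, 2, size=200)   # 0 = healthy control, 1 = gait disorder

def minmax_normalize(signals):
    """Min-max scale each signal (per trial and component) to the range [0, 1]."""
    lo = signals.min(axis=-1, keepdims=True)
    hi = signals.max(axis=-1, keepdims=True)
    return (signals - lo) / (hi - lo + 1e-12)

features = minmax_normalize(grf).reshape(len(grf), -1)   # flatten to vectors

# Linear SVM with L2 penalty; C = 0.1 is inferred from the filenames
# (SvmLinearL2C1em1) and is an assumption rather than a documented setting.
clf = LinearSVC(C=0.1, penalty="l2", max_iter=10000)
print(cross_val_score(clf, features, labels, cv=5).mean())
```

The CNN and MLP variants would simply replace the `LinearSVC` estimator; whether the normalization is applied per trial or across the whole dataset is likewise an assumption of this sketch.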
--- abstract: 'We report six   observations of  during and after the outburst of Sep 23 1998. The outburst flux is lower than the quiescent flux in the entire observed energy band (0.1–10 keV), in agreement with earlier observations. The  spectra are fitted with two-temperature plasma and cooling flow spectral models. These fits show a clear spectral evolution in  for the first time in : the hard  turn-up after the outburst is reflected in the emission measure and the temperature. Moreover, during outburst the 1.5–10 keV flux decreases significantly. We argue that this is not consistent with the constant flux during a  outburst observation made eight years earlier. We conclude from this observation that there are significant differences between outburst  lightcurves of .' author: - 'H.W. Hartmann$^1$' - 'P.J. Wheatley$^2$' - 'J. Heise$^1$' - 'J.A. Mattei$^3$' - 'F. Verbunt$^4$' date: 'Received ; accepted ' title: The  spectra of VW Hydri during the outburst cycle --- Introduction {#intro} ============  from dwarf novae arise very near the white dwarf, presumably in a boundary layer between the white dwarf and the accretion disk surrounding it. Information on the properties of the  emitting gas as a function of the mass transfer rate through the accretion disk is provided by observations through the outburst cycle of dwarf novae. It may be hoped that such observations help to elucidate the nature of the  emission in cataclysmic variables, and by extension in accretion disks in general.  is a dwarf nova that has been extensively studied during outbursts and in quiescence, at wavelengths from optical to hard . It is a dwarf nova of the SU UMa type, i.e. in addition to ordinary dwarf nova outbursts it occasionally shows brighter and longer outbursts, which are called superoutbursts. Ordinary outbursts of  occur every 20–30d and last 3–5 days; superoutbursts occur roughly every 180d and last 10–14d (Bateson [@bateson77]). A multi-wavelength campaign combining data obtained with , , the International Ultraviolet Explorer, and by ground based optical observers covered three ordinary outbursts, one superoutburst, and the three quiescent intervals between these outbursts (Pringle et al. [@pringle87], Van Amerongen et al. [@amerongen87], Verbunt et al. [@verbunt87], Polidan, Holberg [@polidan87], van der Woerd & Heise [@woerd87]). The   data show that the flux in the 0.05–1.8keV range decreases during the quiescent interval; the flux evolution at lower energies and at higher energies (1–6keV) are compatible with this, but the count rates provided by   are insufficient to show this independently. Folding the  data of three outbursts showed that a very soft component appears early in the outbursts and decays faster than the optical flux (Wheatley et al. [@wheatley96]). The  Position Sensitive Proportional Counter (PSPC) and Wide Field Camera (WFC) covered a dwarf nova outburst of  during the  All Sky Survey (Wheatley et al. [@wheatley96]). The PSPC data show that the flux in the 0.1–2.5keV range is lower during outburst. The  data showed no significant difference between outburst and quiescent  spectrum. The best spectral constraints are obtained for the quiescent  spectrum by combining  WFC from the All Sky Survey with data from  PSPC and   pointings. A single temperature fit is not acceptable, the sum of two optically thin plasma spectra, at temperatures of 6keV and 0.7keV is somewhat better. 
The spectrum of a plasma which cools from 11keV and has emission measures at lower temperatures proportional to the cooling time, provides an acceptable fit of the spectrum in the 0.05–10keV energy range (Wheatley et al. [@wheatley96]). In this paper we report on a series of  observations of , which cover an ordinary outburst and a substantial part of the subsequent quiescent interval. The observations and data reduction are described in Sect.2, the results in Sect.3 and a discussion and comparison with earlier work is given in Sect.4. Observations and data reduction {#obs} ===============================  is monitored at optical wavelengths by the American Association of Variable Star Observers (AAVSO). On Sep 23 1998 the optical magnitude of   started to decrease. The outburst lasted for 5–6 days and reached a peak magnitude of 9.2. This outburst served as a trigger for a sequence of six observations by  between Sep 24 and Oct 18. As a result we have obtained one  observation during outburst and five observations during quiescence. Since  appears as an on-axis source, the Low Energy Concentrator Spectrometer (LECS, Parmar et al. [@parmar97]) source counts are extracted from a circular region with a 35 pixel radius centered at the source. We use the Sep 1997 LECS response matrices centered at the mean raw pixel coordinates (130,124) for the channel-to-energy conversion and to fold the model spectra when fitted to the data. The combined Medium Energy Concentrator Spectrometer (MECS2 and MECS3, Boella et al. [@boella97]) source counts are extracted from a circular region with a 4 (30 pixel) radius. The September 1997 MECS2 and MECS3 response matrices have been used. These matrices are added together. The background has been subtracted using an annular region with inner and outer radii of 35 and 49.5 pixels for the LECS and 30 and 42.5 pixels for the MECS, around the source region. We ignore the data of the High Pressure Gas Scintillation Proportional Counter (HPGSPC, Manzo et al. [@manzo97]) and the Phoswitch Detection System (PDS, Frontera et al. [@frontera97]) since their background subtracted spectra have a very low signal to noise ratio. The LECS and MECS data products are obtained by running the  Data Analysis System pipeline (Fiore et al. [@fiore99]). We rebin the energy channels of all four instruments to ${1 \over 3}\times \mbox{FWHM}$ of the spectral resolution and require a minimum of 20 counts per energy bin to allow the use of the chi-squared statistic. The total LECS and MECS net exposure times are 82.5 ksec. and 181.4 ksec. respectively. The factor 2.2 between the LECS and MECS exposure times is due to non-operability of the LECS on the daytime side of the earth. Results ======= [crrrrr]{} & & &\ & & & & &\ & & & & &\ 1 & 24–26/09/1998 & 34.1 & 0.024(3) & 76.6 & 0.016(2)\ 2 & 27–28/09/1998 & 7.2 & 0.098(8) & 20.2 & 0.109(8)\ 3 & 3–4/10/1998 & 9.1 & 0.089(8) & 17.4 & 0.094(6)\ 4 & 10–11/10/1998 & 9.4 & 0.072(7) & 20.0 & 0.075(5)\ 5 & 12/10/1998 & 11.1 & 0.072(6) & 21.6 & 0.085(5)\ 6 & 17–18/10/1998 & 11.6 & 0.077(6) & 25.6 & 0.078(5)\ Total & & 82.5 & & 181.4 &\ In Fig. \[lc\] we show the optical lightcurve, provided by the [*American Association of Variable Star Observers*]{} and the [*Variable Star Network*]{}, of  at the time of our  observations. These optical observations show that our first  observation was obtained during an ordinary outburst that peaked on Sep 24, whereas observations 2–6 were obtained in quiescence. 
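Returning briefly to the data reduction described above: the grouping criterion (channels rebinned to roughly $\frac{1}{3}\times$FWHM and then combined until every bin holds at least 20 counts, so that the $\chi^2$ statistic is applicable) can be sketched as below. This is a simplified stand-in for the actual grouping tool in the analysis pipeline, and the `counts` array is a placeholder rather than real LECS/MECS data; only the minimum-counts step is implemented.

```python
import numpy as np

def group_min_counts(counts, min_counts=20):
    """Merge adjacent spectral channels until every output bin holds at least
    `min_counts` counts (needed for Gaussian/chi-squared statistics).
    Returns a list of (first_channel, last_channel, summed_counts)."""
    bins, start, acc = [], 0, 0
    for i, c in enumerate(counts):
        acc += c
        if acc >= min_counts:
            bins.append((start, i, acc))
            start, acc = i + 1, 0
    if acc > 0 and bins:                    # fold a short trailing remainder
        s, _, a = bins.pop()
        bins.append((s, len(counts) - 1, a + acc))
    return bins

# Placeholder channel counts, for illustration only.
counts = np.random.default_rng(1).poisson(6.0, size=256)
grouped = group_min_counts(counts)
print(len(grouped), "bins, minimum counts per bin:", min(b[2] for b in grouped))
```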
The last ordinary outbursts preceding our first  observation was observed by the AAVSO to peak on Sep 8; the first outburst observed after our last   observation was a superoutburst that started on Nov 5 and lasted until Nov 19. Lightcurve {#rate_evolution} ---------- In Fig. \[lc\] we also show the count rates detected with the  LECS and MECS. For the latter instrument we show the count rates separately for the full energy range 1.5-10 keV, and for the hard energies only in the range 5-10 keV. In both LECS and MECS the count rate is lower during the outburst than in quiescence. In quiescence the count rate decreases significantly between our second and third (only in the MECS data), and between the third and fourth observations (both LECS and MECS data), but is constant after that (see Table \[table1\]). The MECS count rate decreases during our first observation, when  was in outburst, as is shown in more detail in Fig. \[decay\]. This decrease can be described as exponential decline $N_{\rm ph}\propto e^{-t/\tau}$ with $\tau\simeq 1.1$d. The count rates in the LECS are compatible with the same decline, but the errors are too large for an independent confirmation. The count rates at lower energies, 0.1–1.5keV, are compatible with both a constant value and the exponential decay during our first observation. Spectral fits ------------- [cllllllll]{}\ Obs. & T$_1$ & T$_2$ & E.M.$_1$ & E.M.$_2$ & L$_1$ & L$_2$ & ${\rm n_H}$ & $\chi^2$ (d.o.f.)\ & keV & keV & $10^{52}\mbox{ cm}^{-3}$ & $10^{52}\mbox{ cm}^{-3}$ & $10^{30} \mbox{erg s}^{-1}$ & $10^{30} \mbox{erg s}^{-1}$ & $10^{19}\mbox{cm}^{-2}$ &\ 1 & $0.68_{-0.11}^{+0.10}$ & $3.2_{-0.4}^{+0.6}$ & $2.0_{-0.5}^{+0.5}$ & $6.6_{-0.7}^{+0.7}$ & $0.7_{-0.2}^{+0.2}$ & $1.4_{-0.2}^{+0.3}$ & $4^*$ & 65 (61)\ 2 & $ 0.9_{-0.3}^{+0.8}$ & $3.7_{-0.3}^{+1.0}$ & $3_{-2}^{+11}$ & $40_{-10}^{+3}$ & $1.0_{-0.7}^{+4.2}$ & $9_{-3}^{+2}$ & $4^*$ & 92 (70)\ 3 & $ 1.3_{-0.3}^{+0.4}$ & $6.1_{-1.3}^{+2.1}$ & $8_{-5}^{+6}$ & $27_{-5}^{+5}$ & $2.0_{-1.4}^{+1.8}$ & $7_{-2}^{+3}$ & $4^*$ & 103 (75)\ 4 & $ 1.2_{-0.3}^{+0.3}$ & $6.0_{-1.5}^{+2.3}$ & $7_{-4}^{+5}$ & $23_{-4}^{+5}$ & $1.9_{-1.3}^{+1.5}$ & $6_{-2}^{+2}$ & $4^*$ & 75 (66)\ 5 & $ 1.2_{-0.3}^{+0.3}$ & $6.5_{-1.1}^{+1.6}$ & $5_{-3}^{+3}$ & $25_{-3}^{+3}$ & $1.4_{-0.8}^{+1.0}$ & $6.7_{-1.4}^{+1.6}$ & $4^*$ & 84 (74)\ 6 & $ 1.6_{-0.4}^{+0.4}$ & $6.5_{-1.8}^{+2.8}$ & $12_{-7}^{+5}$ & $20_{-5}^{+6}$ & $2.5_{-1.8}^{+1.4}$ & $5_{-2}^{+3}$ & $4^*$ & 78 (77)\ 3–6 & $1.28_{-0.16}^{+0.16}$ & $6.0_{-0.7}^{+0.9}$ & $7_{-2}^{+3}$ & $25_{-2}^{+2}$ & $1.9_{-0.7}^{+0.8}$ & $6.3_{-0.9}^{+1.0}$ & $4_{-2}^{+3}$ & 149 (113)\ \ Obs. & ${\rm T}_{\rm low}$ & ${\rm T}_{\rm high}$ & & & ${\rm n_H}$ & $\chi^2$ (d.o.f.)\ & keV & keV & & & $10^{19}\mbox{cm}^{-2}$ &\ 1 & $<0.4$ & $4.5^{+0.4}_{-0.6}$ & & & $4^*$ & 76 (62)\ 2 & $1.0^{+0.5}_{-0.3}$ & $6.8^{+1.1}_{-0.5}$ & & & $4^*$ & 92 (71)\ 3–6 & $0.66^{+0.18}_{-0.08}$ & $9.9^{+0.8}_{-0.8}$ & & & $4^*$ & 153 (115)\ [$^*$Fixed parameter c.f. the value obtained from the combined observations 3–6]{} We have made spectral fits to the combined MECS and LECS data for each of the six separate  observations and computed the luminosities assuming a distance of 65 pc to  (see Warner [@warner87]). As expected on the basis of earlier work, described in the introduction, we find that the observed spectra cannot be fitted with a single-temperature plasma. The combination of spectra of optically thin plasmas at two different temperatures does provide acceptable fits. 
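For illustration, the exponential decline $N_{\rm ph}\propto \mathrm{e}^{-t/\tau}$ quoted above for the MECS count rate during outburst can be recovered with a simple least-squares fit; the light curve below is synthetic and only mimics the shape of observation 1, it is not the actual MECS data.

```python
import numpy as np
from scipy.optimize import curve_fit

def decline(t, n0, tau):
    """Exponential decay N_ph(t) = n0 * exp(-t / tau); t and tau in days."""
    return n0 * np.exp(-t / tau)

# Synthetic light curve mimicking observation 1 (count rate vs. time in days).
t = np.linspace(0.0, 2.0, 12)
rate = decline(t, 0.03, 1.1) + np.random.default_rng(2).normal(0, 0.002, t.size)

popt, pcov = curve_fit(decline, t, rate, p0=(0.03, 1.0))
n0_fit, tau_fit = popt
print(f"fitted decay time tau = {tau_fit:.2f} d (about 1.1 d for the real data)")
```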
The parameters of these fits are listed in Table \[fitres2\], and their variation between the separate observations is illustrated in Fig. \[param\]. The need for a two-temperature fit is illustrated in Figs. \[2comp\_err\] and \[smooth\] for the outburst spectrum of observation 1 and for the quiescent spectrum of the combined observations 3–6: the low temperature component is required to explain the excess flux near 1 keV. The Fe-K emission line near $6.70\pm 0.05\mbox{ keV}$ is clearly present in our data, and is due to hydrogen or helium like iron from the hot component of the plasma. The LECS data in observations 3–6 are poorly fitted above $\sim 5$ keV which is probably due to calibration uncertainties of the instrument (Fiore et al. [@fiore99]). We fix $n_{\rm H}$ at $4\times 10^{19}\mbox{ cm}^{-2}$, the best-fit value of the combined observation 3–6. (Fixing $n_{\rm H}$ at $6\times 10^{17}\mbox{ cm}^{-2}$, which was found by Polidan et al. ([@polidan90]), does not change the fit parameters, except for the chi-squared values of observations 2, 3 and 3–6 which become slightly worse; 98, 111 and 158 respectively.) The temperature of both the cool and the hot component of the two-temperature plasma is higher during quiescence than during the outburst, increasing from respectively 0.7keV and 3.2keV in outburst to 1.3keV and 6keV in quiescence. The temperatures immediately after outburst – in our second observation – are intermediate between those of outburst and quiescence. The emission measure (i.e. the integral of the square of the electron density over the emission volume, $\int n_{\rm e}^2dV$) of both the cool and the hot component of the two-temperature plasma is also higher in quiescence; immediately after outburst the emission measure of the hot component is higher than during the later phases of quiescence. The temperatures and emission measures of the two-temperature plasma are constant, within the errors, in the later phases of quiescence of our observations 3–6. For that reason, we have also fitted the combined data of these four observations to obtain better constraints on the fit parameters (see Table \[fitres2\]). Note that the decrease of the count rate between observations 3 and 4, mentioned in Sect. \[rate\_evolution\], is significant even though it is not reflected in the emission measures and luminosities of the two components separately. This is due to the combined spectral fitting of the LECS and the MECS, since the decrease in count rate is less significant for the LECS. Moreover, the errors on the count rates are much smaller than those on the emission measures ($\la 10\%\mbox{ and }\ga 20\%$ respectively). [cllll]{} Obs. & T$_2$ & E.M.$_2$ & MECS & PSPC\ & keV & $10^{52}\mbox{ cm}^{-3}$ & cts s$^{-1}$ & cts s$^{-1}$\ 1a & $3.6_{-0.7}^{+1.3}$ & $8.4_{-1.3}^{+1.3}$ & $0.022_{-0.003}^{+0.003}$ & $0.35_{-0.03}^{+0.03}$\ 1b & $3.0_{-0.6}^{+1.3}$ & $5.4_{-0.8}^{+0.8}$ & $0.013_{-0.002}^{+0.002}$ & $0.23_{-0.02}^{+0.02}$\ We fit the first 31 ksec and the next 46 ksec of the outburst spectrum (1a and 1b) separately. Both fits are good with $\chi^2<1$. From the fit results we compute the MECS and  PSPC count rates. The results are shown in Table \[vwh\_splitobs1\]. We have only indicated the temperature and emission measure of the hot component since the cool component is responsible for the iron line emission outside the MECS bandwidth and does not have a large impact upon the continuum emission. 
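The luminosities $L_1$ and $L_2$ in Table \[fitres2\] follow from the fitted (unabsorbed) model fluxes, presumably via the standard isotropic relation $L=4\pi d^{2}F$ with $d=65$ pc; a quick order-of-magnitude check with a placeholder flux reads:

```python
import numpy as np

PC_CM = 3.0857e18                  # one parsec in cm
d_cm = 65.0 * PC_CM                # adopted distance to VW Hyi

flux = 2.0e-12                     # placeholder unabsorbed flux, erg s^-1 cm^-2
L = 4.0 * np.pi * d_cm**2 * flux   # isotropic luminosity, erg s^-1
print(f"L = {L:.2e} erg/s")        # ~1e30 erg/s, the order of magnitude in the table
```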
Note from Table \[vwh\_splitobs1\] that the decay in count rate is entirely due to the decrease of the emission measure. To compare our observations with the results obtained by Wheatley et al. ([@wheatley96]) we consider next the cooling flow model (cf. Mushotzky, Szymkowiak [@mushotzky88]) for our observations 1, 2 and 3–6. In this model the emission measure for each temperature is restricted by the demand that it is proportional to the cooling time of the plasma. The results of the fits are shown in Table \[fitres2\]. Note that these results are not better than the two-temperature model fits. Due to the poor statistics of the LECS outburst observation we cannot constrain the lower temperature limit. The MECS is not sensitive to this temperature regime at all. A contour plot of the upper and lower temperature limits for the combined quiescent observations 3–6 is shown in Fig. \[contour36\]. The boundaries of the low temperature in Fig. \[contour36\] are entirely determined by the Fe-L and Fe-M line emission; for a low temperature of $\la0.35$ keV the contributions to the line flux integrated over all higher temperatures exceeds the observed line flux. For a low temperature of $\ga1.2$ keV there is not sufficient line flux left in the model. The boundaries of the high temperature are determined by the continuum slope; for a high temperature of $\la8.5$ and $\ga11.5$ keV the model spectrum is too soft and too hard respectively to fit the data. Comparison with previous  observations {#vwh_discussion} ====================================== Time variability {#vwh_timevar} ---------------- We predict the  count rates of  during outburst and quiescence with the observed  flux from the two-temperature fit (see Table \[fitres2\]). Here we do apply $n_{\rm H}=6\times 10^{17}\mbox{ cm}^{-2}$ (Polidan et al. [@polidan90]) since  is probably more sensitive to $n_{\rm H}$ than . The predicted count rates during outburst and quiescence are 0.31 and $0.87\mbox{ cts s}^{-1}$. The  observed count rates are 0.4 and $1.26\mbox{ cts s}^{-1}$ respectively (Belloni et al. [@belloni91]; Wheatley et al. [@wheatley96]). Both predictions appear to be different from the observations by a factor $\sim0.75$. From Fig. \[decay\], we observe a decrease in MECS count rate by a factor of $\ga 4$ during outburst. This is inconsistent with the constant 0.4 $\mbox{ cts s}^{-1}$ observed by the  PSPC during outburst (Wheatley et al. [@wheatley96]). Using the LECS data during the outburst in a bandwidth (0.1–1.5 keV) comparable to the  PSPC we cannot discriminate observationally between a constant flux and the exponential decay observed by the MECS. However, our spectral fits to the data require that the 0.1-2.5 keV flux decreases in tandem with the hard flux. Thus the difference between the  PSPC and the  MECS lightcurves during outburst may either be due to variations between individual outbursts or to the different spectral bandwidths of the observing instruments. The predicted decay of the count rate significantly exceeds the range allowed by the ROSAT observations of the Nov 1990 outburst. We interpret the time variability of the count rate shown in Figs. \[decay\] and \[param\], as a change mainly in the amount of gas in the inner disk that emits keV photons. At the end of the outburst, while the inner disk is still predominantly optically thick, the mass accretion rate onto the white dwarf is decreasing. As a result, the amount of hot optically thin gas drops gradually. This is observed in Fig. \[decay\]. 
The transition to a predominantly optically thin inner disk occurs just before observation 2. As a result the amount of optically thin emitting material in the disk increases strongly. This is shown by the increase of the emission measure of the hot component in Fig. \[param\], observation 2, which even peaks above the quiescent value. The settling of the accretion rate towards quiescence is shown in Fig. \[param\], observations 3–6 for both the temperature and the emission measure. In contrast to the emission measure, the temperature of the hot component increases only gradually throughout observations 1–6 as it reflects the slowly decreasing accretion rate rather than the amount of optically thin emitting material in the disk. Spectral variability {#vwh_specvar} -------------------- Both a two-temperature plasma model and a cooling flow model fit the spectrum of our  observations of  better than a one-temperature model. The contribution of the cool component lies mainly in the presence of strong Fe-L line emission around 1 keV. The hot component contributes the continuum and the Fe-K line emission at $\sim 6.7$ keV. Adding a soft atmospheric component in the form of a $\la 10$ eV blackbody model does not improve our fits. This blackbody component, reported by Van der Woerd et al. ([@woerd86]) and Van Teeseling et al. ([@teeseling93]), is too soft to be detected by  LECS. Based upon the $\chi^2$-values, the  observation of  does not discriminate between a continuous temperature distribution (the cooling flow model) and a discrete temperature distribution (the two-component model) of the  emitting region. Wheatley et al. ([@wheatley96]) derive a lower and upper temperature of $\la 0.53\mbox{ and }11^{+3}_{-2}$ keV respectively for a cooling flow fit to the combined  PSPC and  LAC data during quiescence. These temperatures are consistent with our cooling flow fits to  data during quiescence; there is a small overlap between the 2 and 3$\sigma$ contours shown in Fig. 6 by Wheatley et al. and the contours of our Fig. \[contour36\]. Conclusions {#vwh_conclusions} ===========  does not discriminate between a continuous (cooling flow) and a discrete temperature distribution. Our observation of a decreasing count rate, followed by a constant count rate during quiescence is in contradiction with the disk instability models. These models predict a slightly increasing mass transfer onto the white dwarf which must show up as an [*increase*]{} in the  flux. [*Ad hoc*]{} modifications to disk instability models, such as interaction of the inner disk with a magnetic field of the white dwarf (Livio, Pringle [@livio92]), evaporation of the inner disk (Meyer, Meyer-Hofmeister [@meyer94]), or irradiation of the inner disk by the white dwarf (King [@king97]), possibly are compatible with the decrease of ultraviolet flux (e.g. Van Amerongen et al. [@amerongen90]) and  flux during quiescence. If we assume a continuous temperature distribution the upper temperature limit of our quiescence spectrum is consistent with the observations by Wheatley et al. ([@wheatley96]). The cooling flow model requires an accretion rate of $3\times 10^{-12}\mbox{ M}_\odot\mbox{ yr}^{-1}$ to explain the  luminosity late in quiescence. A similar result is obtained when we convert the luminosity derived from the two-temperature model to an accretion rate. Any outburst model must accommodate this accretion rate.  MECS observes a significant decrease in the count rate during outburst. 
Our simulations show a similar decrease for the  PSPC which would have been significantly detected. The fact that the  count rate during outburst was constant (Wheatley et al. [@wheatley96]) and the results from our cooling flow model fits suggest that the outburst of Sep 24 1998 behaved differently from the outburst of Nov 3 1990.

This work has been supported by funds of the Netherlands Organization for Scientific Research (NWO).

Bateson F.M. 1977, N.Z.J.Sci 20, 73

Belloni T., Verbunt F., Beuermann K. et al., 1991, A&A 246, L44

Boella G., Chiappetti L., Conti G. et al., 1997, A&AS 122, 327

Fiore F., Guainazzi M., Grandi P. 1999, [*Cookbook for  NFI spectral analysis*]{}

Frontera F., Costa E., Dal Fiume D. et al., 1997, A&AS 122, 357

King A. 1997, MNRAS 288, L16

Livio M., Pringle J.E. 1992, MNRAS 259, 23P

Manzo G., Giarrusso S., Santangelo A. et al., 1997, A&AS 122, 341

Meyer F., Meyer-Hofmeister E. 1994, A&A 288, 175

Mushotzky R.F., Szymkowiak A.E. 1988, in: [*Cooling flows in clusters and galaxies*]{}, ed: Fabian A.C., Kluwer Dordrecht, the Netherlands, 53

Parmar A.N., Martin D.D.E., Bavdaz M. et al., 1997, A&AS 122, 309

Polidan R.S., Holberg J.B. 1987, MNRAS 225, 131

Polidan R.S., Mauche C.W., Wade R.A. 1990, ApJ 356, 211

Pringle J.E., Bateson F.M., Hassall B.J.M. et al., 1987, MNRAS 225, 73

Van Amerongen S., Damen E., Groot M., Kraakman H., Van Paradijs J. 1987, MNRAS 225, 93

Van Amerongen S., Kuulkers E., van Paradijs J. 1990, MNRAS 242, 522

Van der Woerd H., Heise J., Bateson F. 1986, A&A 156, 252

Van der Woerd H., Heise J. 1987, MNRAS 225, 141

Van Teeseling A., Verbunt F., Heise J. 1993, A&A 270, 159

Verbunt F., Hassall B.J.M., Pringle J.E., Warner B., Marang F. 1987, MNRAS 225, 113

Warner B. 1987, MNRAS 227, 23

Wheatley P.J., Verbunt F., Belloni T. et al., 1996, A&A 307, 137
--- abstract: 'Using a certain well-posed ODE problem introduced by Shilnikov in the sixties, G. Minervini proved in his PhD thesis [@M], among other things, the Harvey-Lawson Diagonal Theorem but without the restrictive tameness condition for Morse flows. Here we combine the same techniques with the ideas of Latschev in order to construct local resolutions for the flow of the graph of a section of a fiber bundle. This is endowed with a vertical vector field which is horizontally constant and Morse-Smale in every fiber. The resolution allows the removal of the tameness hypothesis from the homotopy formula in [@Ci2]. We give one finite and one infinite dimensional application. For that end, we introduce closed smooth forms of odd degree associated to any triple $(E,U,\nabla)$ composed of a hermitian vector bundle, unitary endomorphism and metric compatible connection.' address: - 'Universidade Federal do Ceará, Fortaleza, CE, Brazil' - 'Universidade Estadual do Ceará, Limoeiro do Norte, CE, Brazil' author: - Daniel Cibotaru - Wanderley Pereira title: 'Non-tame Morse-Smale flows and odd Chern-Weil Theory' --- [^1] Introduction ============ The well-known Morse Lemma gives the canonical form of a Morse function $f$ on a compact, Riemannian manifold $(M,g)$ around a critical point but does not provide information about the gradient flow. On the other hand, the Hartman-Grobman Theorem gives the *topological* conjugacy class of the gradient flow around the critical point. However, there are important situations where both of these classical results are insufficient to answer the relevant questions. We have in mind the following context. Suppose one is interested in taking a smooth submanifold $S$ and “flow it through” the critical point. Let us think that $S$ lies within a regular level $c-\epsilon$ of the Morse function right “before” a critical level and we look at its “trace” at a regular level $c+\epsilon$, meaning the intersection of the (forward) flow lines determined by $S$ with level $c+\epsilon$. Obviously, this “trace” can be empty if $S$ is contained in the stable manifold of the critical point. So a transversality condition with the stable manifold is naturally imposed. The natural question is whether one say anything about the structure of the *closure* of the “trace” at level $c+\epsilon$? One expects to get at least a rectifiable set because of transversality. One might even hope to prove something stronger, namely the existence manifold with corners of the same dimension as the submanifold and a proper “projection” which maps to the closure of the trace and is one-to-one almost everywhere. It turns out that in order to make this rigorous a tameness condition on the triple $(M,g,f)$ is helpful. One such condition was introduced in [@HL1]. A Morse function $f$ is called *tame* if around each critical point one can find coordinates for which two requirements are met, the metric is flat and the Morse function has the canonical form of the Morse Lemma. An immediate consequence of tameness is that the eigenvalues of the Hessian are $\pm 1$. This gives an idea of how restrictive tameness is. On the positive side, the flow has the simplest form possible and one can prove quite easily, by performing a blow-up of the intersection of the submanifold $S$ with the stable manifold of the critical point that a resolution of the closure of the “trace” is available. 
In fact, one can prove that such a manifold with corners resolution is available also for the closure of the entire “flow-out” of the submanifold between levels $c-\epsilon$ and $c+\epsilon$. More general situations are contemplated in [@Ci2; @La]. The existence of such resolutions have important consequences. The Harvey-Lawson Diagonal Theorem says that for the gradient flow $\varphi:{{\mathbb R}}\times M{\rightarrow}M$ induced by a tame $f$, which additionally satisfies Smale’s transversality condition there exists a rectifiable current $T$ on $M\times M$ such that $$dT= \Delta-\sum_{p\in\operatorname{Crit}(f)}U_p\times S_p,$$ where $\Delta$ is the diagonal and $U_p$ and $S_p$ are the unstable, resp. the stable manifold of the critical point $p$. To be a bit more precise the submanifold in this case is the diagonal and the flow is on $M\times M$ via $\varphi$ in the first component of the product and keeping fixed the second component. In his PhD thesis, J. Latschev [@La] also used the resolution idea and extended the Harvey-Lawson Diagonal Theorem to Morse-Bott-Smale flows. The first author developed this point of view further in [@Ci2] in order to extend the results to sections of fiber bundles, satisfying adequate transversality conditions. Even with the tameness condition in place, the rigorous details of the construction of the resolution are quite involved. Moreover, special care needs to be taken for those points mapped by the section to the critical points, e.g. the points $(p,p)\in \Delta$ when $p\in \operatorname{Crit}(f)$. Completely new ideas are necessary in order to deal with the *non-tame* case. In his PhD thesis, Minervini [@M] used a combination of results of Shilnikov [@Sh] on a certain type of ODE problems together with objects he introduced, called horned stratified spaces in order to prove the Harvey-Lawson Theorem without the tameness condition. Applications to Morse-Novikov theory were given by Harvey and Minervini in [@HM]. In this article, we take the next natural step and remove the tameness condition from the currential homotopy formula of [@Ci2]. With one caveat, the (model) flows in each fiber are assumed here Morse as opposed to Morse-Bott in [@Ci2]. We plan to return to the Morse-Bott case somewhere else. We implement a combination of the two main ideas from [@La] and [@M] in our present approach. On one hand we use Shilnikov-Minervini local analysis of the closure of the graph of the flow which gives a local resolution (see Theorem \[teo.subvde\]), but use induction on the critical levels ala Latschev for the proof of the next homotopy formula. \[tlimxi00\] Let $\pi: P\longrightarrow B$ be a fiber bundle with compact fiber. Let $X$ be a horizontally constant Morse-Smale vertical vector field and denote by $\Phi:{{\mathbb R}}\times P{\rightarrow}P$ the flow induced by $X$. Let $s:B\longrightarrow P$ be a section transverse to all the stable manifolds $\mathrm{S}(F)$ associated to the critical manifolds $F$ of $X$ and let $\xi_t(b):=(\Phi_t(s(b)),s(b))$, $b\in B$. Then $$T=\xi([0,+\infty)\times B)$$ defines a $ (n + 1) $-dimensional rectifiable current of locally finite mass and if $B$ is compact then $T$ is of finite mass. Moreover, the following equality of currents holds in $P\times_BP$: $$\begin{aligned} \label{bordo.T} \mathrm{d} T=\sum_{F}\mathrm{U}(F)\times_{F}s(s^{-1}(\mathrm{S}(F)))-(\xi_0)_*(B). \end{aligned}$$ where $U(F)$ are the unstable manifolds of $X$. 
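To fix ideas, here is the simplest instance of the classical formula quoted above: $M=S^1$ with a Morse function having a single maximum $N$ and a single minimum $S$, and $\varphi$ the flow of $\nabla f$ (flowing towards the maximum). With signs and orientations suppressed (they depend on conventions not fixed here), one has $S_N=S^1\setminus\{S\}$, $U_N=\{N\}$, $S_S=\{S\}$, $U_S=S^1\setminus\{N\}$, and the flow-out $T$ of the diagonal satisfies, as currents in $S^1\times S^1$,
$$dT=\Delta-\bigl(\{N\}\times S^1+S^1\times\{S\}\bigr),$$
the closures $\overline{U_N\times S_N}=\{N\}\times S^1$ and $\overline{U_S\times S_S}=S^1\times\{S\}$ being the two product currents on the right-hand side.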
Recall that a vertical vector field $X$ on the total space of a fiber bundle $P{\rightarrow}B$ is called horizontally constant if there exist local trivializations of the fiber bundle such that $X$ has a zero horizontal component in this trivialization. As a consequence, the flow induce by $X$ is, up to diffeomorphism, the same in every fiber. It also means that if the flow in the fiber is Morse-Bott, then the critical sets $F$ of $X$ are manifolds and so are the sets $S(F)$ and $U(F)$. An immediate consequence (see Corollary \[c.principal\]) of Theorem \[tlimxi00\] is the explicit computation of limits in the weak sense $$\lim_{t{\rightarrow}\infty} s_t^*\omega$$ where $s_t:=\Phi_t\circ s$ and $\omega\in \Omega^*(P)$ while also justifying a transgression formula for closed forms $\omega$ $$\label{0eq0}\lim_{t{\rightarrow}\infty} s_t^*\omega-s^*\omega=dT(\omega).$$ This Poincaré duality type of result is a source of many applications (see [@Ci4]) even in the tame case. In an early pre-print of [@Ci2] posted on arXiv an application to (\[0eq0\]) concerning certain odd degree forms on the unitary group was included. The flow used however did not satisfy the tameness hypothesis and the application was removed from the published version. We present it here in a more general context, but not before revisiting a classical topic and introducing some new objects which seem of independent interest. Chern-Weil theory is an important source of closed forms arising from geometric data. To any complex vector bundle $E{\rightarrow}B$ of rank $n$ endowed with a connection $\nabla$ and a $GL(n)$ invariant polynomial $P$ in the entries of an $n\times n$ matrix one has an associated closed form $P(F(\nabla))$. For homogeneous $P$ one gets that $P(F(\nabla))$ is of *even* degree, more precisely twice the degree of $P$. The deRham cohomology class of $P(F(\nabla))$ does not depend on $\nabla$. In order to get odd degree forms we endow $E{\rightarrow}B$ with an automorphism $A:E{\rightarrow}E$. Then we associate to the quadruple $ (E,A,\nabla,P)$ a closed form $\operatorname{\mathrm{TP}}(E,A,\nabla)$ which satisfies the following properties: it is natural with respect to pull-back, the cohomology class determined by $\operatorname{\mathrm{TP}}(E,A,\nabla)$ does not depend on the connection $\nabla$, the same cohomology class does not change under deformations of $A$ in the same homotopy class. We prove all these properties in Section \[OCW\] for hermitian vector bundles but the interested reader can adapt the results without difficulty to other structure groups. Let $P=c_k$ be the invariant polynomial induced by the $k$-th elementary symmetric polynomial. The following statement, which generalizes a result of Nicolaescu ([@Ni], Prop. 57) also gives a description of the Poincaré duals to $\operatorname{Tc}_k(E,g,\nabla)$. \[Nico0\] Let $E{\rightarrow}B$ be a trivializable hermitian vector bundle of rank $n$ over an oriented manifold with corners $B$ endowed with a compatible connection. Let $g:E{\rightarrow}E$ be a smooth gauge transform. Suppose that a complete flag $E=W_0\supset W_1\supset \ldots \supset W_n=\{0\}$ (equivalently a trivialization of $E$) has been fixed such that $g$ as a section of $\mathcal{U}(E)$ is completely transverse to certain (see (\[DefS\])) submanifolds $S(U_{I})$ determined by the flag. 
Then, for each $1\leq k\leq n$ there exists a flat current $T_k$ such that the following equality of currents of degree $2k-1$ holds: $$\label{TCkE0} \operatorname{Tc}_k(E, g,\nabla)-g^{-1}(S(U_{\{k\}}))=dT_k.$$ where $$\begin{aligned} g^{-1}(S(U_{\{k\}}))=\{b\in B~|~\dim{\operatorname{Ker}(1+g_b)}=\dim{\operatorname{Ker}{(1+g_b)\cap (W_{k-1})_b}}=1,\qquad\\ \dim{\operatorname{Ker}{(1+g_b)}\cap (W_{k})_b}=0\}.\end{aligned}$$ In particular, when $B$ is compact without boundary, then $\operatorname{Tc}_k(E, g,\nabla)$ and $g^{-1}(S(U_{\{k\}}))$ are Poincaré duals to each other and (\[TCkE0\]) is a spark equation ([@HLZ4; @CS]). The condition that $E{\rightarrow}B$ be trivializable is related to the non-tame flow used in the proof which requires the existence of a complete flag $E=W_0\supset\ldots \supset W_n=\{0\}$ of vector subbundles. It is an interesting question of how one can describe the Poincaré duals to $\operatorname{Tc}_k(E,g,\nabla)$ for a general $E$. The next application is to families of self-adjoint Fredholm operators. Fix $H$ a Hilbert space. The space of unitary operators $U\in \mathcal{U}(H)$ such that $1+U$ is Fredholm is a classifying space for odd $K$-theory. This space is a Banach manifold but is “too big” to build smooth differential forms. Restricting the attention to the Palais classifying spaces $\mathcal{U}^p$ which are unitary operators of type $1+S$ where $S$ belongs to some Schatten ideal, e.g. trace class or Hilbert-Schmidt operators then Quillen [@Qu] was able to construct several families of smooth forms all representing the components of the odd Chern character. When one has a smooth family of Dirac operators $\mathcal{D}_{b\in B}$ parametrized by a smooth and finite dimensional manifold $B$ then by taking the Cayley transforms one gets a smooth map $\varphi:B{\rightarrow}\mathcal{U}^p$. The pull-backs of the Quillen forms compute the cohomological analytic index determined of the family. Let us remark that in the finite dimensional case, the Quillen forms on $U(n)$ have explicit formulas in terms of the odd Chern-Weil forms arising from the trivial vector bundle ${{\mathbb C}}^n$ over $U(n)$ endowed with the tautological unitary endomorphism and trivial connection, i.e. in terms of the standard deRham generators of the cohomology ring of $U(n)$. On the other hand, in [@Ci1] we produced explicit representatives for the Poincaré duals of these classes using the infinite dimensional analogues of the stable manifolds $S(U_{\{k\}})$ which appear in Theorem \[Nico0\]. We used sheaf theory in [@Ci1] in order to be able to define cohomology classes arising from certain stratified spaces, called quasi-manifolds on an infinite dimensional Banach manifold. Here we exchange the sheaf theoretical approach from [@Ci1] with the currential approach and show that under the expected transversality hypothesis one can produce a transgression formula, strengthening thus the results from [@Ci1]. \[0thm71\] Let $\varphi:B{\rightarrow}\mathcal{U}^p$ be a smooth map from a compact, oriented manifold $B$, possibly with corners such that $\varphi\pitchfork Z_I^p$ for every $I$. Let $\Omega_k$ be a Quillen form of degree $2k-1$ that makes sense on $\mathcal{U}^p$. 
Then for every such $\Omega_k$, there exists a flat current $T_k$ such that: $$\label{lasteq} \varphi^{-1}Z_{\{k\}}-(-1)^{k-1}(k-1)!\varphi^*\Omega_k=dT_k.$$ In particular, when $B$ has no boundary, $\frac{(-1)^{k-1}}{(k-1)!}\varphi^{-1}Z_{\{k\}}^p$ represents the Poincaré dual of $\operatorname{ch}_{2k-1}([\varphi])$, where $[\varphi]\in K^{-1}(B)$ is the natural odd $K$ theory class determined by $\varphi$. The proof of this result reduces to Theorem \[Nico0\] via symplectic reduction. A few more comments about the structure of the article are in order. Section \[s.BVP\] which revisits Shilnikov theory, also adds some details to Minervini’s presentation in [@M]. In particular, Theorem \[teo.vizinhanca\] introduces some flow-convex neighborhoods that are fundamental later on. The main technical part of the proof of the main Theorem \[tlimxi00\] is contained in the rather long Section \[tec.tools\]. We felt it necessary to present many complete arguments. The proof is by induction and the amount of data one has to carry from one step to another is quite substantial. That is why we paid special care in proving properties like properness or injectivity of the flow-resolution map. To get a feel for the level of technicality the reader can take a quick glance at Proposition \[model.prop\] which is the key step in the induction. In essence, the main idea of the proof of the main Theorem is to follow the same steps as the induction proof presented in the Appendix of [@Ci2] but to substitute the oriented blow-up technique which takes care of the local picture in [@Ci2] with Minervini’s Theorem 1.3.21 which appears here as Theorem \[teo.subvde\]. The advantage of the presentation in [@Ci2] via blow-ups is that several maps are explicit and several properties come for free (e.g. a blow-down map is proper). This, of course is a consequence of tameness. On the negative side, one works hard in [@Ci2] to show that the relevant maps have regularity $C^1$ while here the regularity is $C^{\infty}$ and it is a consequence of the Minervini-Shilnikov theory. The models in the fiber are classical Morse-Smale flows associated to gradients of Morse functions. The results ought to hold also for the Morse-Smale quasi-gradients as defined in [@LM]. One point that made us cautious is contained in Remark \[Xtrans\] and is related to the properties of the flow-convex neighborhoods of Theorem \[teo.vizinhanca\]. A proof of the main Theorem appeared in the PhD thesis [@Oli] of the second author. Some arguments have been simplified in this presentation. Minervini-Shilnikov theory {#s.BVP} ========================== We review some results about certain well-posed ODE problems studied by Shilnikov in the 60’s. We borrowed the terminology that gives the title of this section from the main reference [@M]. Where the complete proofs were skipped, the reader will find the details in Chapter 1 of [@M]. Let $(x,y)$ be coordinates in $\mathbb{R}^s\times\mathbb{R}^u$. With respect to this decomposition, let $L=\begin{bmatrix} L^- & 0 \\ 0 & L^+ \end{bmatrix}$ be a constant, real coefficients matrix, in which the real parts of the eigenvalues of $L^-$ are strictly negative, say $-\lambda_s\leq\ldots\leq-\lambda_1<0$, and those of $L^+$ are strictly positive, say $0<\mu_1\leq \ldots \leq \mu_u$. For the situation we are interested in, $L$ is symmetric. 
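As a guiding example, kept purely for illustration, take $s=u=1$, no nonlinear terms, $L^-=(-\lambda)$ and $L^+=(\mu)$ with $\lambda,\mu>0$. The linear flow is
$$\psi_t(x,y)=\bigl(\mathrm{e}^{-\lambda t}x,\;\mathrm{e}^{\mu t}y\bigr),$$
so the origin is a hyperbolic rest point whose stable and unstable subspaces are the $x$- and $y$-axes. Prescribing $x(0)=x_0$ and $y(\tau)=y_1$, as in the boundary value problems introduced below, gives the explicit solution $x^*(t)=\mathrm{e}^{-\lambda t}x_0$, $y^*(t)=\mathrm{e}^{-\mu(\tau-t)}y_1$, whose endpoints $x^*_1=\mathrm{e}^{-\lambda\tau}x_0$ and $y^*_0=\mathrm{e}^{-\mu\tau}y_1$ already display the exponential decay in $\tau$ that the estimates of this section establish for the nonlinear system.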
Consider the ODE system in ${{\mathbb R}}^s\times {{\mathbb R}}^u$: $$\label{sist.original} \left\{\begin{array}{lll} \dot{x}=L^-x+f(x,y)\\ \dot{y}=L^+y+g(x,y)\\ \end{array} \right.$$ where $F=(f,g):\mathbb{R}^s\times\mathbb{R}^u\longrightarrow\mathbb{R}^s\times\mathbb{R}^u$ is a differentiable function satisfying $$F(0,0)=(0,0) \; \; \mbox{and} \; \; dF(0,0)=(0,0).$$ Given a triple $(x_0,y_1,\tau)\in {{\mathbb R}}^s\times {{\mathbb R}}^u \times [0,+\infty)$ a Boundary Value Problem (BVP) for the ODE has the following form $$\label{PVB} \left\{\begin{array}{lll} \dot{x}=L^-x+f(x,y)\\ \dot{y}=L^+y+g(x,y)\\ x^{*}(0)=x_0\\ y^*(\tau)=y_1. \end{array} \right.$$ where the solution $(x^*(t),y^*(t))$ is defined in the interval $[0,\tau]$. The solution at time $t$ to the BVP with data $(x_0,y_1,\tau)$ is denoted $$(x^*(t,x_0,y_1,\tau),y^*(t,x_0,y_1,\tau)).$$ The “end point” $(x^*_1, y_0^*)$ for the BVP solution is $$\label{aplic.final} \begin{array}{lll} x^{*}_1(x_0,y_1,\tau)=x^*(\tau,x_0,y_1,\tau)\\ y^*_0(x_0,y_1,\tau)=y^*(0,x_0,y_1,\tau). \end{array}$$ We compare this with the solution at time $t$ of the Initial Value Problem with data $(x_0; y_0, t=0)$ for which the following notation is used $$(x(t,x_0,y_0),y(t,x_0,y_0)).$$ Notice that $$\label{Identidades} \begin{array}{lll} x(t,x_0,y_0) = x^*(t,x_0,y(\tau,x_0,y_0),\tau) \\ y(t,x_0,y_0) = y^*(t,x_0,y(\tau,x_0,y_0),\tau) \\ x^*(t,x_0,y_1,\tau) = x(t,x_0,y^*(x_0,y_1,\tau))\\ y^*(t,x_0,y_1,\tau) = y(t,x_0,y_0^*(x_0,y_1,\tau)). \end{array}$$ Let $$\begin{aligned} \delta_{\varepsilon}^k:=\sup_{|x,y|\leq \varepsilon}\sum_{|m|\leq k}\left|\frac{\partial^{|m|}F}{\partial (x,y)^m}\right|<+\infty,\end{aligned}$$ where $|x,y|:=\mathrm{max}\{|x|,|y|\}$ and $| \; \cdot\; |$ denotes the euclidian norm. Quite similarly to the Cauchy problem for ODE and proceeding in the standard way, i.e. writing the BVP as a system of integral equations and using Banach Fixed Point Theorem, the following general result holds: \[teoremasolucao\]Suppose $\varepsilon>0$ is such that the the estimate $\delta^1_{2\varepsilon}<\mathrm{min}\{\lambda_1, \mu_1\}$ holds. Then the BVP for the system is solvable for any data $(x_0,y_1,\tau)$ in the “ball” $|x_0,y_1|<\varepsilon$. The solution is unique, it depends smoothly on all its arguments and satisfies: $$|x^*(t),y^*(t)|\leq 2|x_0,y_1|, \; \; \forall t\in [0,\tau].$$ ![[Boundary Value Problem]{}[]{data-label="fig"}](PVB1) It turns out that the integral equations equivalent to the BVP make sense also for $\tau=\infty$ in which the only given spatial coordinate is $x^*(0):=x_0$. The correspondence $x_0{\rightarrow}y^*(0)$ is smooth and its graph is an invariant manifold of the flow, tangent to $y=0$ at the origin. This is in fact the stable manifold of the origin and using the obvious change in coordinates that takes the graph diffeomorphically to the domain of definition one notices that the original vector field $$X:=(X_1,X_2)=(L^-x+f(x,y),L^+y+g(x,y))$$ gets conjugated to one for which the stable and unstable manifolds coincide with the $x$ and the $y$ axes at least locally. \[coord.straighten\] There are smooth coordinates centered at the origin such that can be written as $$\label{sist.dmcoord} \left\{\begin{array}{lll} \dot{x}=L^-x+\tilde{f}(x,y)x\\ \dot{y}=L^+y+\tilde{g}(x,y)y\\ \end{array} \right.$$ where $\tilde{f}:\mathbb{R}^{s+u}\longrightarrow \operatorname{End}(\mathbb{R}^s)$ and $\tilde{g}:\mathbb{R}^{s+u}\longrightarrow \operatorname{End}(\mathbb{R}^u)$ are square matrices of functions that vanish at the origin. 
Moreover, in a neighborhood of the origin in the new coordinates, the stable and unstable manifolds are given by $\mathrm{S}_{0}=\{y=0\}$ and $\mathrm{U}_{0}=\{x=0\}$. Gronwall Lemma is used to prove some useful estimates for solutions of BVP in straighten coordinates. \[teo.estimativa\] Let $\varepsilon, \alpha >0$ be such that $\delta:=\delta^{1}_{2\varepsilon}< \alpha<\mbox{max}\{\lambda_1,\mu_1\}$. Then the solution of the BVP defined by system with spatial data $|x_0,y_1|\leq \varepsilon$ satisfies for any $\tau\in[0,\infty)$ and $t\leq \tau$ the following inequality $$\label{desigual.sol} \left\{\begin{array}{lll} |x^{*}(t,x_0,y_1,\tau)|\leq |x_0|\mathrm{e}^{-(\alpha-\delta)t}\\ |y^{*}(t,x_0,y_1,\tau)|\leq|y_1|\mathrm{e}^{(\alpha-\delta)(t-\tau)} \end{array} \right.$$ In particular, $$\label{desigual.ext} \left\{\begin{array}{lll} |x^{*}_1(x_0,y_1,\tau)|\leq |x_0|\mathrm{e}^{-(\alpha-\delta)\tau}\\ |y^{*}_0(x_0,y_1,\tau)|\leq|y_1|\mathrm{e}^{-(\alpha-\delta)\tau}. \end{array} \right.$$ Estimates are available also for any partial derivative $\frac{\partial^kx_1^*}{\partial (x_0,y_1,\tau)}$ or $\frac{\partial^ky_0^*}{\partial (x_0,y_1,\tau)}$ of order $k$ in the form: $$\label{desigual.ext1}\left|\frac{\partial^kx_1^*}{\partial (x_0,y_1,\tau)}\bigr|_{(x_0,y_1,\tau)}\right|\leq C_k e^{-(\alpha-k\delta)\tau},\qquad \left|\frac{\partial^ky_0^*}{\partial (x_0,y_1,\tau)}\bigr|_{(x_0,y_1,\tau)}\right|\leq C_k e^{-(\alpha-k\delta)\tau}$$ for some constant $C_k$ which does not depend on $(x_0,y_1,\tau)$. Let $\Omega$ be a neighborhood around the origin and suppose the vector field $X$ has the form . We will choose a “cube” ${C}_{\varepsilon}=\{(x,y)\in \mathbb{R}^s\times \mathbb{R}^u;\; |x,y|\leq\varepsilon \}\subset \Omega$ of radius $\varepsilon>0$ which satisfies the hypothesis of Theorem \[teoremasolucao\]. The cube has the following boundary pieces: $$\partial^{+}{C}_{\varepsilon}=\{(x,y)\in \mathbb{R}^s\times \mathbb{R}^u;\; |x|=\varepsilon,|y|\leq \varepsilon\}$$ and $$\partial^{-}{C}_{\varepsilon}=\{(x,y)\in \mathbb{R}^s\times \mathbb{R}^u;\; |x|\leq \varepsilon, |y|=\varepsilon\}.$$ Denote $$V_0^{\varepsilon}=\{(x,y)\in {C}_{\varepsilon}~|~ |x|\cdot |y|=0\}=\mathrm{S}_{0}\cup\mathrm{U}_{0}.$$ In order to state the next result the partial order notation for the flow determined by $X$ is useful, i.e. $$p_1\prec p_2$$ will say that there exists a *forward* flow line from $p_1$ to $p_2$. \[teo.vizinhanca\] Suppose $L^-$ and $L^+$ are symmetric. Then for ${\varepsilon}$ small enough $X$ is transverse to $\partial^{+}{C}_{\varepsilon}$ and to $\partial^{-}{C}_{\varepsilon}$. In addition, the following properties hold for $C_{\varepsilon}$ with respect to $X$: 1. **Flow-convexity**: for every pair $q_1\prec q_2$ with $q_{1}, q_2\in {C}_{\varepsilon}$ and every $q_1\prec p\prec q_2$ one has $p\in C_{\varepsilon}$; 2. **Dulac map**: there exists a “first encounter” diffeomorphism $$\mu=(\mu_1, \mu_2):\partial^+{C}_{\varepsilon}\setminus \mathrm{S}_{0}\longrightarrow \partial^-{C}_{\varepsilon}\setminus \mathrm{U}_{0}$$ induced by the flow that satisfies $$\mu(x,y)=(x,y), \; \;\qquad \forall (x,y)\in \partial^+{C}_{\varepsilon}\cap \partial^- C_{\varepsilon}$$ 3. **Continuity of $\mu_1$ close to $S_{0}$**: for every $0<\gamma\leq \varepsilon$, there exists $0<\gamma_0\leq \varepsilon$ such that $$\forall (x,y)\in \partial^+ C_{\varepsilon}\; \mbox{with}\; |y|\leq \gamma_0\; \mbox {one has} \; |\mu_1(x,y)|\leq\gamma;$$ 4. 
**Fundamental neighborhoods**: let $0<\gamma\leq \varepsilon$ and $$V_{\gamma}^{\varepsilon}:=\{p\in C_{\varepsilon}~|~\exists\; q=(x_1,y_1) \in \partial^-C_{\varepsilon},\; |x_1|<\gamma,\; p\prec q \}\cup V_0^{\varepsilon}$$ Then $V_{\gamma}^{\varepsilon}$ is a flow-convex neighborhood of $V_0^{\varepsilon}$ in $C_{\varepsilon}$ such that $$V^{\varepsilon}_{\gamma}{\rightarrow}V^{\varepsilon}_0,$$ i.e. for every neighbohood $U$ of $V^{\varepsilon}_0$ there exists $\gamma_0>0$ such that $V^{\varepsilon}_{\gamma_0}\subset U$. The following figures illustrate the properties of the Theorem \[teo.vizinhanca\]. ![[Dulac map and the neighborhoods $V_{\gamma}^{\varepsilon}$]{}[]{data-label="fig02"}](Desenho1.pdf) \[Xtrans\] Without the symmetry property of $L=L^-\oplus L^+$ the claim about transversality of $X$ with $\partial^+C_{\varepsilon}$, $\partial^-C_{\varepsilon}$ fails even in the linear case. This seems to have been overlooked in [@M]. Take $s=0$, $u=2$, $L^+:=\left(\begin{array}{cc} 1& 2\\ 0&1\end{array}\right)$, $\tilde{g}=0$. Then $$\langle L^+(y_1,y_2),(y_1,y_2)\rangle=(y_1+y_2)^2.$$ Hence $X=L^+$ on ${{\mathbb R}}^2$ is hyperbolic but has points of tangency along the anti-diagonal with any coordinate sphere. The tangent spaces to the cylinders $\partial^{+}C_{{\varepsilon}}$ and $\partial^{-}C_{{\varepsilon}}$ at points $(x,y)$ are described via: $$T_{(x,y)}\partial^{+}C_{{\varepsilon}}= \{(v_1,v_2)\in {{\mathbb R}}^n~|~\;\langle v_1,x\rangle=0\}$$ $$T_{(x,y)}\partial^{-}C_{{\varepsilon}}= \{(v_1,v_2)\in {{\mathbb R}}^n~|~\;\langle v_2,y\rangle=0\}$$ Hence we need to look at $$\langle X_1(x,y),x\rangle=\langle L^-x,x\rangle+\langle \tilde{f}(x,y)x,x\rangle$$ and $$\langle X_2(x,y),y\rangle=\langle L^+y,y\rangle+\langle \tilde{g}(x,y)y,y\rangle.$$ We have that $\langle \tilde{f}(x,y)x,x\rangle= o(|x|^2)$ uniformly in $y$ and $\langle \tilde{g}(x,y)y,y\rangle=o(|y|^2)$ uniformly in $y$ since $\tilde{f}$ and $\tilde{g}$ are continuous and vanish at the origin. The symmetry and definiteness of $L^-$ and $L^+$ imply now that there exists ${\varepsilon}_1$ and ${\varepsilon}_2$ such that $$\langle L^-x,x\rangle+\langle \tilde{f}(x,y)x,x\rangle <0 \,\qquad \forall\;0<|x|\leq {\varepsilon}_1,\forall\; |y|\leq {\varepsilon}_1$$ and $$\langle L^+y,y\rangle+\langle \tilde{g}(x,y)y,y\rangle>0\,\qquad \forall\; 0<|y|\leq {\varepsilon}_2, \forall\; |x|\leq {\varepsilon}_2$$ For ${\varepsilon}\leq \min\{{\varepsilon}_1,{\varepsilon}_2\}$ we get the transversality of $X$ with both $\partial^{\pm}C_{\varepsilon}$. **Proof of (1)**. Let $q_1=(x_1,y_1)\prec q_2=(x_2,y_2),$ $q_1,q_2\in {C}_{\varepsilon}$. Let $\tau>0$ be the time needed to “travel” from $q_1$ to $q_2$, i.e. $(x_2,y_2)=(x(\tau,x_1,y_1),y(\tau,x_1,y_1))$. Now, consider the BVP defined by $X$ with data $(x_1, y_2,\tau)$. Since $|x_1,y_2|\leq\varepsilon$, it follows from the Theorem \[teo.estimativa\] that for all $0\leq t\leq\tau$, the following holds: $$\begin{aligned} |x^{*}(t,x_1,y_2,\tau)| &\leq& |x_1|\mathrm{e}^{-(\alpha-\delta)t}<|x_1|<\varepsilon\\ |y^{*}(t,x_1,y_2,\tau)|&\leq& |y_2|\mathrm{e}^{(\alpha-\delta)(t-\tau)}<|y_2|<\varepsilon. \end{aligned}$$ Therefore, the portion of the trajectory comprised between $q_1$ and $ q_2$ is contained in ${C}_{\varepsilon}$. **Proof of (2)**. Consider $(x_0,y_0)\in \partial^{+}C_{\varepsilon}$ with $|y_0|<\varepsilon$. 
The normal vectors $v$ at $(x_0,y_0)$ to $\partial^{+}C_{\varepsilon}$ that point to the interior of $C_{\varepsilon}$ are described by the inequality $\langle v_1,x_0\rangle<0$ and we already chose ${\varepsilon}$ so that the vector field $X$ satisfies such an inequality. Hence either the trajectory $(x(t,x_0,y_0),y(t,x_0,y_0))$ will belong to the interior of $C_{{\varepsilon}}$ for small $t>0$ or $(x_0,y_0)\in \partial^+C_{{\varepsilon}}\cap \partial^-C_{{\varepsilon}}$ i.e. $|y_0|=\varepsilon$, which is not the case. Suppose $t>0$. For $y_0\neq0$ we have $\displaystyle \lim_{t\rightarrow+\infty}(x(t),y(t))\neq(0,0)$ and the trajectory cuts again the boundary of $ C_{\varepsilon}$. This happens because in the cube $C_{{\varepsilon}}$ the function $t{\rightarrow}|x(t)|$ is decreasing while $t{\rightarrow}|y(t)|$ is increasing. Indeed the derivatives of the square of these functions are equal to $$\langle X_1(x,y),x\rangle \;\;\mbox{and}\;\; \langle X_2(x,y),y\rangle \;\;\mbox{respectively}$$ and by our choice of ${\varepsilon}$ above the first one is negative while the second one is positive as long as they are not $0$ which would happen for a stable or unstable flow line. Let $(x_1, y_1)\in \partial^+C_{{\varepsilon}}\cup\partial^-C_{{\varepsilon}}$ be the first point where this trajectory hits the boundary again and $ \tau>0$ the time needed to get from $(x_0,y_0)$ to $(x_1,y_1)$. Since $\langle X_1(x,y),x\rangle$ is a continuous function we see that $(x_1,y_1)\notin \partial^{+}C_{\varepsilon}\setminus\mathrm{S}_{0}$ and moreover $|x_1|<\varepsilon$, because $$\label{esteq}|x_1|=|x^{*}(\tau,x_0,y_1,\tau)| \leq |x_0|\mathrm{e}^{-(\alpha-\delta)\tau}<|x_0|\leq\varepsilon.$$ Define $\mu$, for the time being, as the map that associates to every $(x_0,y_0)\in \partial^{+}C_{\varepsilon}\setminus(\mathrm{S}_{0}\cup \partial ^-C_{{\varepsilon}})$ the point of “first encounter” $(x_1,y_1)\in \partial^{-}C_{\varepsilon}\setminus(\mathrm{U}_{0}\cup \partial^+C_{{\varepsilon}})$ which lies on the same trajectory. Suppose now that $|x_0|=|y_0|=\varepsilon$. If there exist $\tau_0>0$ such that $(x(\tau_0,x_0,y_0),y(\tau_0,x_0,y_0))=(x_0',y_0')\in C_{\varepsilon}$, define the BVP with data $(x_0,y_0', \tau_0)$. Then we get a contradiction from the estimates $$\begin{aligned} \varepsilon= |y_0|=|y^*(0,x_0,y_0',\tau_0)|\leq |y_0'|\mathrm{e}^{-(\alpha-\delta)\tau_0}<|y_0'|<\varepsilon.\end{aligned}$$ Hence for $|x_0|=|y_0|=\varepsilon$ the trajectory determined by $(x_0,y_0)$ only intersects $C_{{\varepsilon}}$ in $(x_0,y_0)$. Therefore the natural extension of $\mu$ to $\partial^+C_{{\varepsilon}}\cap \partial^-C_{{\varepsilon}}$ is equal to the identity on this set. The map $\mu$ is clearly bijective. The differentiability of $\mu$ is standard and proved along the following lines. The essential part is to prove the differentiability of the time-function that associates to each $(x_0,y_0)$ the time $t(x_0,y_0)$ it takes to get to $\mu(x_0,y_0)$. Fix one such $(x_0,y_0)$ and use the diffeomorphism of the flow that corresponds to time $t(x_0,y_0)$ to flow a small open neighborhood of $(x_0,y_0)$ inside $\partial^+C_{\varepsilon}$ which does not contain points in the stable manifold to an $n-1$ dimensional manifold $H$ which contains $\mu(x_0,y_0)$ and is still transverse to $X$. Since $\mu(x_0,y_0)$ is not a critical point put coordinates in order to turn $X$ into the generator of the first coordinate-translation by unit-time. 
One arrives thus at the problem of having to prove differentiability of the time-function obtained by going from one hypersurface to another hypersurface through the origin and both transverse to the first coordinate. That is simply the difference in height functions where the height is the first coordinate, hence a differentiable function. **Proof of (3)**. Suppose that there exists $0<\gamma\leq\varepsilon$ and a sequence $(x_{0n},y_{0n})$ such that $|x_{0n}|=\varepsilon$, $|y_{0n}|\longrightarrow 0$ ($y_{0n}\neq 0$) when $n\rightarrow\infty$ and $|x_{1n}|\geq\gamma$ for all $n$ where $\mu(x_{0n},y_{0n})=(x_{1n},y_{1n})\in \partial^-{C}_{\varepsilon}$. Let $\tau_n>0$ be the sequence of moments such that $$(x(\tau_n,x_{0n},y_{0n}),y(\tau_n,x_{0n},y_{0n}))=(x_{1n},y_{1n}).$$ It follows from the estimates (see ) $$\begin{aligned} \label{est.x1} \displaystyle\gamma\leq|x_{1n}|\leq |x_{0n}|\mathrm{e}^{-(\alpha-\delta)\tau_n}\leq\varepsilon\mathrm{e}^{-(\alpha-\delta)\tau_n}\end{aligned}$$ that $\displaystyle\tau_n\leq \ln \left(\frac{\varepsilon}{\gamma}\right)\frac{1}{\alpha-\delta}$. We can therefore select a convergent subsequence $(x_{0n_k}, \tau_{n_k})\longrightarrow (x_0',\tau')$. Note that $\tau'>0$, because if $\tau'=0$ then $|y_{0n_k}|\longrightarrow \varepsilon$. Indeed, the only points for which it takes $0$-time to go from $\partial^+C_{{\varepsilon}}$ to $\partial^-C_{{\varepsilon}}$ are the points on $\partial^+C_{{\varepsilon}}\cap \partial^-C_{{\varepsilon}}$ and it is not hard to see that, due to the continuity of the time-function, the shorter the time it takes to get to $\partial^-C_{{\varepsilon}}$ the closer to $\partial^+C_{{\varepsilon}}\cap \partial^-C_{\varepsilon}$ the starting point has to be. Since $|y(\tau_{n_k},x_{0n_k},y_{0n_k})|=|y_{1n_k}|={\varepsilon}$ we get a contradiction with $$y(\tau_{n_k},x_{0n_k},y_{0n_k}){\rightarrow}y(\tau',x',0)=0.$$ The latter holds because the trajectory determined by $(x',0)$ is stable. **Proof of (4)**. The flow-convex property is immediate from the analogous property of $C_{\varepsilon}$. Suppose that $V^{\varepsilon}_{\gamma}$ is not a neighborhood of $V_0^{\varepsilon}$. Then there exists a sequence $(x_n,y_n)\in C_{\varepsilon}\setminus V_{\gamma}^\varepsilon$ with $(x_n,y_n)\longrightarrow (v_1,v_2)\in V_0^{\varepsilon}$. Thus either $x_n\longrightarrow 0$ or $y_n\longrightarrow 0$. As none of the points $(x_n,y_n)$ is on $\mathrm{S}_0\cup \mathrm{U}_0$ there exist points of “first encounter” $(x_{0n},y_{0n})\in\partial^+C_{{\varepsilon}}$ and $(x_{1n},y_{1n})\in \partial^-C_{{\varepsilon}}$ such that $$(x_{0n},y_{0n})\prec (x_n,y_n)\prec(x_{1n},y_{1n}).$$ Since $(x_n,y_n)\notin V^{\varepsilon}_{\gamma}$ it follows that $|x_{1n}|\geq \gamma$. The case $x_n{\rightarrow}0$ is disposed of immediately by considering the BVP $(x_n,y_{1n},\tau_n)$ where $\tau_n$ is the time it takes to go from $(x_n,y_n)$ to $(x_{1n},y_{1n})$. Then we get a contradiction in $$\gamma\leq|x_{1n}|\leq|x_{n}|e^{-(\alpha-\delta)\tau_n}\leq |x_{n}|{\rightarrow}0.$$ For the case $y_n{\rightarrow}0$ consider the BVP with data $(x_{0n},y_{1n},\tau_n')$ where $\tau_n'$ now is the time it takes to get from $(x_{0n},y_{0n})$ to $(x_{1n},y_{1n})$. Recall that $|x_{0n}|=|y_{1n}|=\varepsilon$. We obtain $$\gamma\leq|x_{1n}|\leq|x_{0n}|e^{-(\alpha-\delta)\tau_n'} \; \Rightarrow \; \tau_n'\leq \ln{\left(\frac{\varepsilon}{\gamma}\right)}\frac{1}{\alpha-\delta}.$$ Moreover, if $s_n\in(0,\tau_n']$ denotes the time needed to travel from $(x_{0n},y_{0n})$ to $(x_n,y_n)$, then $$|y_{0n}|\leq |y_n|\mathrm{e}^{-(\alpha-\delta)s_n}\leq |y_n|.$$
Since $|y_n|\longrightarrow 0$, we have $y_{0n}\longrightarrow 0$. Now, since $|x_{0n}|=\varepsilon$ and $\tau_n'$ are bounded and $y_{0n}{\rightarrow}0$, we are in the same scenario that led to a contradiction in the proof of item (3). In order to prove that $\lim_{\gamma {\rightarrow}0} V^{{\varepsilon}}_{\gamma}=V_0^{{\varepsilon}}$ as defined in the statement it is enough to prove that $$\label{limvre}\bigcap V^{{\varepsilon}}_{\gamma_n}=V_0^{{\varepsilon}}$$ for any decreasing sequence $\gamma_{n}{\rightarrow}0$. Indeed, suppose (\[limvre\]) holds, fix one such sequence $\gamma_n$ and let $U$ be an arbitrary neighborhood of $V_0^{{\varepsilon}}$ in $C_{{\varepsilon}}$. If $V^{{\varepsilon}}_{\gamma_n}\not\subset U$ for all $n$, pick $u_n\in V^{{\varepsilon}}_{\gamma_n}\setminus U$. Since $\overline{V^{{\varepsilon}}_{\gamma_{n+1}}}\subset V^{{\varepsilon}}_{\gamma_n}$ and $\overline{V^{{\varepsilon}}_{\gamma_{n+1}}}$ (the closure in $C_{{\varepsilon}}$) is compact, we can extract a convergent subsequence $u_{n_k}$ whose limit necessarily belongs to $\bigcap V^{{\varepsilon}}_{\gamma_n}=V_0^{{\varepsilon}}$. But this contradicts $U$ being a neighborhood of $V_0^{{\varepsilon}}$. In order to prove (\[limvre\]) take a point $a=(x,y)\in \bigcap V^{{\varepsilon}}_{\gamma_n}$ such that $x\neq 0\neq y$. Since $y\neq 0$ it follows that $a\notin S_0$ and therefore $a$ lies on a trajectory which hits $\partial ^-C_{{\varepsilon}}$ in a point $b=(x',y')\notin U_0$, i.e. $x'\neq 0$. But $a\in \bigcap V^{{\varepsilon}}_{\gamma_n}$ implies that $|x'|<\gamma_n$ for all $n$, hence $x'=0$. Contradiction.

The properties of these neighborhoods will enable us in Section \[Sec3\] to control the deformation of subsets by the flow of a vector field. The fundamental local tool we will use in the next sections is Theorem 1.3.21 in [@M] which we now state.

\[teo.subvde\] Let $C_{\varepsilon}\subset \Omega$ be a cube as in Theorem \[teo.vizinhanca\] and let $\mathring{C}_{{\varepsilon}}$ be its interior. If $\psi_t$ denotes the flow of the vector field $X=(L^-+f,L^++g)$, then the closure of the submanifold $$\begin{aligned} \label{subvde} \hspace{1cm} W=\{(t,\psi_{\frac{t}{1-t}}(x,y),x,y);\; (x,y)\in {{\mathbb R}}^{s}\times {{\mathbb R}}^u, \; 0<t<1\}\cap({{\mathbb R}}\times\mathring{C}_{{\varepsilon}}\times\mathring{C}_{{\varepsilon}}) \end{aligned}$$ inside $[0,1]\times\mathring{C}_{{\varepsilon}}\times\mathring{C}_{{\varepsilon}}$ is a smooth submanifold with boundary $$\begin{aligned} \label{b.subvde} \partial \overline{W}= \{1\}\times (\mathrm{U}_{0}\cap \mathring{C}_{{\varepsilon}})\times (\mathrm{S}_{0}\cap\mathring{C}_{{\varepsilon}}) \bigcup \{0\}\times \triangle_{\mathring{C}_{\varepsilon}}, \end{aligned}$$ where $\triangle_{\mathring{C}_{\varepsilon}}=\{(p,p)~|~ p\in \mathring{C}_{\varepsilon}\}$ denotes the diagonal in $\mathring{C}_{\varepsilon}\times \mathring{C}_{\varepsilon}$.

(Sketch) The idea is to pass to BVP coordinates and write $$W=\left\{\left(\tau, x_1^*\left(x_0,y_1,\frac{\tau}{1-\tau}\right),y_1,x_0,y_0^*\left(x_0,y_1,\frac{\tau}{1-\tau}\right)\right)~\biggr|~|x_0|,|y_1|<{\varepsilon}, \tau\in(0,1)\right\}.$$ Now use the BVP estimates to conclude that $x_1^*$ and $y_0^*$ converge uniformly to $0$ when $\tau{\rightarrow}1$ together with their derivatives, obtaining thus that $W$ is the graph of a smooth function over $\mathring{C}_{\varepsilon}\times [0,1]$.
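As a sanity check, and only as an illustration that is not used in the proofs, assume for simplicity that $L^{\mp}=\mp\alpha\,\mathrm{id}$ and $f=g=0$ (so $\delta=0$), in which case the flow is $\psi_t(x,y)=(\mathrm{e}^{-\alpha t}x,\mathrm{e}^{\alpha t}y)$ and everything above can be written in closed form. For $(x_0,y_0)\in \partial^{+}C_{\varepsilon}\setminus\mathrm{S}_0$ the exit time through $\partial^{-}C_{\varepsilon}$ is $\tau=\frac{1}{\alpha}\ln\frac{\varepsilon}{|y_0|}$ and the Dulac map is $$\mu(x_0,y_0)=\left(\frac{|y_0|}{\varepsilon}\,x_0,\ \frac{\varepsilon}{|y_0|}\,y_0\right),$$ so that $|x_1|=|x_0||y_0|/\varepsilon{\rightarrow}0$ as $y_0{\rightarrow}0$, in accordance with item (3), while $\mu$ extends as the identity on $\partial^{+}C_{\varepsilon}\cap\partial^{-}C_{\varepsilon}$. Likewise, in BVP coordinates $x^*(t,x_0,y_1,T)=\mathrm{e}^{-\alpha t}x_0$ and $y^*(t,x_0,y_1,T)=\mathrm{e}^{-\alpha(T-t)}y_1$, hence $$W=\left\{\left(\tau,\ \mathrm{e}^{-\alpha\frac{\tau}{1-\tau}}x_0,\ y_1,\ x_0,\ \mathrm{e}^{-\alpha\frac{\tau}{1-\tau}}y_1\right)~\biggr|~|x_0|,|y_1|<{\varepsilon},\ \tau\in(0,1)\right\},$$ and as $\tau{\rightarrow}1$ both exponential factors tend to $0$ together with all their derivatives, which exhibits the boundary piece $\{1\}\times (\mathrm{U}_{0}\cap \mathring{C}_{{\varepsilon}})\times (\mathrm{S}_{0}\cap\mathring{C}_{{\varepsilon}})$ of (\[b.subvde\]).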
\[rem12\] It is easy to see that the projection of $\overline{W}$ onto the $\mathring{C_{{\varepsilon}}}\times \mathring{C_{{\varepsilon}}}$ components coincides with the intersection $\overline{\bigcup_{t\geq 0}\tilde{\psi}_t(\Delta_{\mathring{C}_{\varepsilon}})}\cap(\mathring{C_{{\varepsilon}}}\times \mathring{C_{{\varepsilon}}})$ where $\tilde{\psi}$ is the flow on ${{\mathbb R}}^{s+u}\times{{\mathbb R}}^{s+u}$ that is equal to $\psi$ in the first component and leaves the points fixed in the second component. Take $(a,b)\in \overline{\bigcup_{t\geq 0}\tilde{\psi}_t(\Delta_{\mathring{C}_{\varepsilon}})}\cap(\mathring{C_{{\varepsilon}}}\times \mathring{C_{{\varepsilon}}})$. Then there exist $t_n\geq 0$ and $u_n\in {{\mathbb R}}^{s+u}$ such that $(\psi_{t_n}(u_n),u_n){\rightarrow}(a,b)$. Since $a,b\in \mathring{C_{{\varepsilon}}}$ one has that $u_n, \psi_{t_n}(u_n)\in \mathring{C_{{\varepsilon}}}$ for $n$ big enough. In fact, by passing to a subsequence we may assume that either $t_n$ converges or $t_n{\rightarrow}\infty$. Letting $s_n\in[0,1)$ be such that $t_n=\frac{s_n}{1-s_n}$, we get that $(a,b)=\lim z_n$ where $(s_n,z_n)\in \overline{W}$ converges in $[0,1]\times\mathring{C_{{\varepsilon}}}\times \mathring{C_{{\varepsilon}}}$. The other inclusion is also obvious.

\[rem001\] We will need a slight extension of this result in the simplest situation when there exists a central manifold. If the vector field $Y$ on ${{\mathbb R}}^s\times {{\mathbb R}}^u\times {{\mathbb R}}^c$ is of type $(X,0)$ with $X$ as before, not depending on $z\in {{\mathbb R}}^c$, then rather than taking the graph of the flow $\psi^Y$ of $Y$ in ${{\mathbb R}}^s\times {{\mathbb R}}^u\times {{\mathbb R}}^c\times {{\mathbb R}}^s\times {{\mathbb R}}^u\times {{\mathbb R}}^c$, it makes sense to forget about one stationary variable ${{\mathbb R}}^c$ and consider the corresponding $W^Y$ of Theorem \[teo.subvde\] to be $$W^Y:=\{(t,\psi^X_{\frac{t}{1-t}}(x,y),x,y,z)~|~t\in (0,1)\}\cap [0,1]\times\mathring{C}_{{\varepsilon}}\times \mathring{C}_{{\varepsilon}}\times {{\mathbb R}}^c.$$ The closure will be a manifold with boundary $$\partial \overline{W^Y}=\{1\}\times (\mathrm{U}_{0}\cap \mathring{C}_{{\varepsilon}})\times (\mathrm{S}_{0}\cap\mathring{C}_{{\varepsilon}})\times {{\mathbb R}}^c \bigcup \{0\}\times\triangle_{\mathring{C}_{\varepsilon}}\times {{\mathbb R}}^c$$ We do this rather than considering the graph of $\psi^Y$ in the full ambient space with an eye to keeping the bookkeeping simpler and avoiding fiber products later on.

We have a useful Corollary of Theorem \[teo.vizinhanca\].

\[cortv\] Let $(q,0), (0,r)\in C_{{\varepsilon}}$ be two points in the stable, respectively the unstable manifold of the origin. Let $B^u(q,\epsilon'):=\{(q,y)~|~|y|\leq \epsilon'\}$ be a small transverse slice to $S_0$ that passes through $(q,0)$. Then there exist two sequences of points $(q,y_n'), (x_n,y_n)\in C_{{\varepsilon}}$ such that - $(q,y_n')\in B^u(q,\epsilon')$ and $y_n'{\rightarrow}0$; - $(x_n,y_n){\rightarrow}(0,r)$; - $(q,y_n')\prec (x_n,y_n)$.

Finally, we will need the following consequence of the Flowout Theorem [@Le], which is proved with the same ideas as the differentiability of the Dulac map $\mu$ from Theorem \[teo.vizinhanca\]. Let $M$ be a compact smooth Riemannian manifold and let $\psi$ denote the gradient flow of a function $f$.

\[FlB\] Let $N\subset M$ be a submanifold which does not contain any critical points of $f$ and is transverse to the gradient vector field $\nabla f$.
Let $f^{-1}(\theta)$ be a regular level set of $f$ such that $N\cap f^{-1}(\theta)=\emptyset$ and suppose that for every $n\in N$ there exists a time $t_n\geq 0$ such that $\psi_{t_n}(n)\in f^{-1}(\theta)$. Then the function $n{\rightarrow}t_n$ is smooth and moreover the set $\{\psi_{t_n}(n)~|~n\in N\}$ is a smooth submanifold of $f^{-1}(\theta)$ diffeomorphic to $N$. A general homotopy formula {#Sec3} ========================== Consider $\pi:P\longrightarrow B$ a locally trivial fiber bundle with compact fiber $M$ over a $n$-dimensional, oriented manifold $B$. Let $X:P\longrightarrow VP$ be a vertical vector field on $P$ where the vertical tangent space $VP:=\operatorname{Ker}d\pi$ represents the collection of all the tangent spaces to the fibers. Suppose $X$ a horizontally constant Morse-Smale vector field. Recall that this means that for every $b\in B$ there exists an open set $b\in B_0\subset B$ and a local trivialization $\alpha:P|_{B_0}\longrightarrow M\times B_0$ such that $$\begin{aligned} \label{s3eq1}\alpha_*(X_p)=(\mathrm{grad}f_{\alpha_{1}(p)},0), \; \forall p\in P|_{B_0},\end{aligned}$$ where $f:M\longrightarrow {{\mathbb R}}$ is a Morse-Smale function for same Riemannian metric on $M$. This in particular implies that the critical set of $X$ is a fiber bundle over $B$ (with various components) and the same thing stays true about the stable and unstable manifolds. In fact the critical manifolds of $X$ in the local trivialization $\alpha$ are $F=\{p\}\times B_0$ with $p$ satisfying $\nabla_pf=0$ and the stable and unstable manifolds are $$S(F)=S(p)\times B_0,\qquad U(F)=U(p)\times B_0$$ Let $s: B\longrightarrow P$ be a transversal section to all the stable bundles $\mathrm{S}(F)$ relative to the critical manifold $F\subset P$. We follow the ideas in [@HL1] and [@Ci1]. Consider the fiber bundle $P\times_{B}P\longrightarrow B$ and the vertical vector field $\tilde{X}=(X,0)$ on $P\times_{B}P$. It is not difficult to verify that the vector field $ \tilde{X} $ is horizontally constant Morse-Bott-Smale and that its flow is $\Theta_t(v_1,v_2)=(\Phi_t(v_1),v_2)$, where $\Phi_t:P\longrightarrow P$ denotes the flow of the vector field $X$. Thus, the stable and unstable manifolds relative to a critical manifold $ \tilde{F} $ of $\tilde{X} $ are $$\begin{aligned} \mathrm{S} (\tilde{F}):=\mathrm{S}(F)\times_B P\; \; \mathrm{and}\; \; \mathrm{U} (\tilde{F}):=\mathrm{U}(F)\times_B P.\end{aligned}$$ Now define the family of sections $\xi:[0,\infty)\times B{\rightarrow}P\times_BP:$ $$\xi_t(b):= \Theta_t(s(b),s(b))=(\Phi_t(s(b)),s(b))$$ transverse to all stable manifolds of $\tilde{X}$. \[rem123\] The transversality of $s$ and $S(F)$ translates into the transversality of $\xi_0$ and $\mathrm{S} (\tilde{F})$ The question we will be concerned with in this section is whether the following family of currents in $P\times_BP$ has a limit: $$\displaystyle \lim_{t \rightarrow + \infty} (\xi_t)_*(B)?$$ For each $t>0$, Stokes Theorem and the commutativity of $d$ with push-forward implies: $$\begin{aligned} \label{eq.homo.xi} (\xi_t)_*(B)-(\xi_0)_*(B)=d[\xi_*([0,t]\times B)].\end{aligned}$$ From and of the continuity of the (exterior) differential operator of currents, we have reduced our analysis to the existence of the limit $\displaystyle \lim_{t\rightarrow+\infty} \xi_*([0, t] \times B)$. The following result presents a positive response to the existence of this limit without the tameness condition of [@Ci2]. 
\[tlimxi\] Let $\pi: P\longrightarrow B$ be a fiber bundle with compact fiber and let $X$ be a horizontally constant Morse-Smale vertical vector field. If $s:B\longrightarrow P$ is a section transversal to the stable manifolds $\mathrm{S}(F)$, then $$T=\xi([0,+\infty)\times B)$$ defines an $(n + 1)$-dimensional current of locally finite mass and if $B$ is compact then $T$ is of finite mass. Moreover, the following equality of kernels holds: $$\begin{aligned} \label{bordo.T} \mathrm{d} T=\sum_{F}\mathrm{U}(F)\times_{F}s(s^{-1}(\mathrm{S}(F)))-\xi_*(B). \end{aligned}$$

We deduce from Theorem \[tlimxi\] that $$\begin{aligned} \displaystyle \xi_{\infty}(B):=\lim_{t\rightarrow+\infty}(\xi_t)_*(B)= \sum_{F}\mathrm{U}(F)\times_{F}s(s^{-1}(\mathrm{S}(F))).\end{aligned}$$ Another consequence of Theorem \[tlimxi\] is expressed in terms of operators:

\[c.principal\] Let $\pi:P\longrightarrow B$ be a fiber bundle with compact oriented fiber over an $n$-dimensional smooth, oriented manifold $B$. Let $X: P\longrightarrow VP$ be a horizontally constant Morse-Smale vertical vector field and let $s: B\longrightarrow P$ be a section transversal to all stable manifolds $\mathrm{S}(F)$ of $X$. Assume that for each critical manifold $F\subset P$ the bundle $\mathrm{U}(F)\longrightarrow F$ is oriented. Let $s_t=\Phi_t\circ s:B\longrightarrow P$ be the induced family of sections. Then for each closed form $\omega$ on $P$ of degree $k\leq n$, the following identity of flat currents in $B$ holds: $$\begin{aligned} \displaystyle \lim_{t\rightarrow+\infty}s_t^{*}\omega=\sum_{\mathrm{codim}\mathrm{S}(F)\leq k}\mathrm{Res}^{u}_F(\omega)[s^{-1}(\mathrm{S}(F))], \end{aligned}$$ where $\displaystyle \mathrm{Res}^{u}_F(\omega)=\tau^{*}_F\left(\int_{\mathrm{U}(F)/F}\omega\right)$ and $\tau_F: s^{-1}(\mathrm{S}(F))\rightarrow F$ is the composition of $\pi^{s}_F:\mathrm{S}(F)\rightarrow F$ with $s:s^{-1}(\mathrm{S}(F))\longrightarrow S(F)$. Moreover, there is a flat current $\mathcal{T}_{\infty}(\omega)$ such that $$\begin{aligned} \label{for.transgressao} \sum_{\mathrm{codim} \mathrm{S}(F)\leq k}\mathrm{Res}^{u}_F(\omega)[s^{-1}(\mathrm{S}(F))]-s^{*}\omega= d[\mathcal{T}_{\infty}(\omega)]. \end{aligned}$$

One can extend Theorem \[tlimxi\] and Corollary \[c.principal\] without extra effort to the case when $B$ is a manifold with corners and $s:B{\rightarrow}P$ is a smooth section completely transverse to all $S(F)$. This happens because any smooth map on a manifold with corners can, by definition, be extended locally, in a neighborhood of the corner, to a smooth map defined on an open set in ${{\mathbb R}}^n$. Since the same applies to the trivialization maps, one can extend smoothly the whole fiber bundle structure and the section so that they are defined on an open set in ${{\mathbb R}}^n$. The transversality condition being open, it will hold on an open subset. Then one uses the corresponding results of Theorem \[tlimxi\] and Corollary \[c.principal\] on this open set and restricts afterwards. In order to prove Theorem \[tlimxi\] we notice that due to the sheaf property of currents it is enough to prove it for a convenient open covering of $B$. Namely, if there exists an open covering $B=\cup B_i$ such that each $T_i:=T\bigr|_{P_i}$ exists, where $P_i:=P\bigr|_{B_i}\times_{B_i}P\bigr|_{B_i}$, is of locally finite mass and satisfies (\[bordo.T\]), then the same is true over $B$. This happens first because on the overlap $P_i\cap P_j$ the restrictions of the currents $\xi_*([0,t]\times B_{i})$ and $\xi_*([0,t]\times B_{j})$ agree.
This allows the patching of $(T_i)_{i\in I}$ to a single current. A similar argument works for the right hand side of (\[bordo.T\]). One always has to worry whether an embedded oriented submanifold really determines a current, especially when it is not *properly* embedded, i.e. when the inclusion is not a proper map. This is the case for each $\mathrm{U}(F)\times_{F}s(s^{-1}(\mathrm{S}(F)))$ and it is not clear a priori that they exist globally as currents. But if (\[bordo.T\]) holds on an open covering $P_i$ then the patching of the $T_i$ to a single current implies the patching of the $\mathrm{U}(F)\times_{F}s(s^{-1}(\mathrm{S}(F)))\bigr|_{P_i}$ to a single current globally, and (\[bordo.T\]) stays true everywhere.

\[Fmax\] Notice that if $F_{\max}$ represents a “maximal” critical manifold, in other words a critical manifold for which $S(F_{\max})$ is an open subset of $P$, then the transversality condition $s\pitchfork S(F_{\max})$ is automatically satisfied for all points $b\in s^{-1}(S(F_{\max}))$. In the open set $s^{-1}(S(F_{\max}))\subset B$ relation (\[bordo.T\]) is trivial to prove and the limit $$\lim_{t{\rightarrow}\infty}\xi_t(s^{-1}(S(F_{\max})))=F_{\max}\times_{F_{\max}} s(s^{-1}(S(F_{\max})))$$ holds even in the $C^{\infty}$ sense. The open sets $s^{-1}(S(F_{\max}))$ will always be part of the open coverings of $B$ and we will not mention them again.

We will therefore work with a convenient covering of $B$, each of its members being contained in a trivializing neighborhood for $X$, by which we mean a neighborhood $U$ where (\[s3eq1\]) holds. The strategy is now *roughly* the following. We show that around each point $b_0\in B$ there exists a trivializing neighborhood $B_0$ and a finite open covering $M_j$ of the fiber $M:=P_{b_0}$ such that Theorem \[tlimxi\] holds for the restrictions of all the currents to the open sets: $$M_j\times M\times B_0\simeq(M_j\times B_0)\times_{B_0}(M\times B_0)$$ where we have already used a trivializing diffeomorphism $P\bigr|_{B_0}\simeq M\times B_0$. In order to achieve this, on each of the open sets $M_j\times M\times B_0$ we construct a “resolution of the flow”, namely a *proper* map $$\Psi: N{\rightarrow}M_j\times M\times B_0$$ from a manifold with corners $N$ of dimension $n+1$ such that - (a) $\operatorname{Im}\Psi =\overline{\xi([0,\infty)\times B_0)}\cap (M_j\times M\times B_0)$ - (b) $\Psi$ is a diffeomorphism from an open subset of full measure in $N$ to an open subset of full $\mathcal{H}^{n+1}$-measure in $\xi([0,\infty)\times B_0)\cap ( M_j\times M\times B_0)$. Point (b) makes sense in view of the fact that $\xi([0,\infty)\times (B_0\setminus Z))$ is a smooth submanifold of $P\times_B P$ of dimension $n+1$, where $Z:=\displaystyle\bigcup_F s^{-1}(F)$ is the set of points whose image under the section is fixed by the flow. We assume of course that $B_0$ is a small neighborhood of a point $b_0$ with $s(b_0)\notin F_{\max}$ and therefore $Z$ will have zero $\mathcal{H}^{n}$-measure. We then use the following

\[lema.ideia\] Let $N^{n+1}$ be a manifold with corners and let $\Psi: N\longrightarrow X$ be a smooth (Lipschitz is enough) map to a smooth manifold $X$. If $\Psi$ is a proper map, then $\Psi(N)$ has locally finite $(n+1)$-dimensional Hausdorff measure and $$d(\Psi_{*}(N))=\Psi_*(\partial N).$$

This Lemma will allow us not only to conclude that the restriction $T\bigr|_{M_j\times M\times B_0}$ is of locally finite mass, but also to compute its boundary as the push-forward of $\partial N$. In order to do that we will need a full understanding of $\partial N$ and of the map $\Psi$.
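For orientation, here is the standard formal computation behind the boundary formula of the Lemma, in the case of a smooth $\Psi$: for any test form $\omega$ of degree $n$ on $X$, $$\langle d(\Psi_{*}(N)),\omega\rangle=\langle \Psi_{*}(N),d\omega\rangle=\int_N\Psi^*(d\omega)=\int_N d(\Psi^*\omega)=\int_{\partial N}\Psi^*\omega=\langle\Psi_*(\partial N),\omega\rangle,$$ where the fourth equality is Stokes' Theorem for manifolds with corners and the first and last use the definition of the boundary of a current and of the push-forward by duality. The properness of $\Psi$ guarantees that $\Psi^*\omega$ has compact support on $N$, so all the push-forwards are well defined, while the locally finite mass of $\Psi(N)$ follows from the area formula; the Lipschitz case requires an approximation argument which we do not reproduce here.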
The local flow resolution {#tec.tools}
=========================

This section contains the heart of the proof of Theorem \[tlimxi\]. As we discussed in the previous section, it is enough to localize around each point $b_0$ in the base space $B$. We will therefore consider first an open neighborhood $B_0$ of $b_0$ where the fiber bundle $P\bigr|_{B_0}$ is trivial and the vector field $X$ is horizontally constant and given by the gradient of a function $f:P_{b_0}{\rightarrow}{{\mathbb R}}$. With the notation $M:=P_{b_0}$ already introduced we have the following data - a flow on $M$ induced by the gradient of $f$ and denoted $\psi$. - a family of (local) sections $s_t:B_0{\rightarrow}M\times B_0$ $$s_t(b)=(\psi_t(s_0(b)),b)$$ originating from $s\bigr|_{B_0}=(s_0,\operatorname{id}_{B_0}):B_0{\rightarrow}P\bigr |_{B_0}=M\times B_0$. We are interested in the closure of the forward flow-out of the graph of $s$. In other words, let $\xi_t:B_0{\rightarrow}M\times M\times B_0$ be the family of sections: $$\xi_t(b)=(\psi_t(s_0(b)),s_0(b),b).$$ The function $\tilde{f}:M\times M\times B_0{\rightarrow}{{\mathbb R}}$, $\tilde{f}(m_1,m_2,b):=f(m_1)$ will be used occasionally. We will aim to construct a “manifold with corners resolution” of a piece of $\overline{\bigcup_{t\geq 0} \xi_t(B_0)}$, to be described momentarily. Let $${\bf{p}}_1:=\lim_{t{\rightarrow}\infty} \psi_t(s_0(b_0))\in M$$ be a critical point. We will assume next that ${\bf{p}_1}$ is not a point of local maximum for $f$, i.e. $\dim{U_{{\bf{p}_1}}}>0$. The case $\dim{U}_{{\bf{p}_1}}=0$ can be treated quite easily, as locally everything flows when $t{\rightarrow}\infty$ to the critical manifold determined by $\{{\bf{p}_1}\}$, as already noticed in Remark \[Fmax\]. For simplicity, we will also assume that $f({\bf{p}_1})=0$. We allow the situation ${\bf{p}_1}=s_0(b_0)$. At the other extreme, $s_0(b_0)$ might be “far” from ${\bf{p}_1}$. If that is the case, we notice that nothing interesting happens with the flow-out of $\xi_0(B_0)$ before we get close to $\{{\bf{p}_1}\}\times M\times B_0$. So we may assume without loss of generality that $s_0(b_0)$ is in a neighborhood $D$ of ${\bf{p}_1}$ where the Straighten Coordinates Theorem is valid. Then we will also assume that $B_0$ was first chosen so that $s_0(B_0)\subset \mathring{C_{\varepsilon}}$ for some fixed $\varepsilon$, where $C_{\varepsilon}\subset {{\mathbb R}}^s\times {{\mathbb R}}^{u}$ satisfies the hypothesis of Theorem \[teo.vizinhanca\]. Hence $\xi_0(B_0)\subset \mathring{C_{\varepsilon}}\times M\times B_0$. We will need to work with certain particular neighborhoods of $\{|x|\cdot |y|=0\} \times M\times B_0$, of type $V_{\gamma}^{{\varepsilon}}\times M\times B_0$ where $V_{\gamma}^{{\varepsilon}}$ is as in Theorem \[teo.vizinhanca\]. The next technical statement prepares the ground for the next step of the induction, namely when we will go from the first critical level (that of ${\bf{p}_1}$) to the second critical level in the direction of the flow.

\[prop.triplo\] - (a) There exist $\gamma\leq \varepsilon$ and a regular value $\theta>0$ of $f$ such that the trajectory determined by any $p\in V^{\varepsilon}_{\gamma}\cap f^{-1}(-\infty,\theta]$ that intersects $f^{-1}\{\theta\}$ does so before intersecting $\partial C_{\varepsilon}^{-}$ when $t{\rightarrow}\infty$.
- (b) If ${\bf{p}_1}$ is not a point of minimum then we can take $\theta$ and $\gamma$ such that the trajectory determined by any $p\in V^{\varepsilon}_{\gamma}\cap f^{-1}[-\theta,\theta]$ that intersects $f^{-1}\{-\theta\}$ does so before intersecting $\partial C_{\varepsilon}^{+}$ when $t{\rightarrow}-\infty$. Moreover, after shrinking $B_0$ the following hold: - (i) $s_0(B_0)\subset V_{\gamma}^{\varepsilon}\cap f^{-1}(-\infty,\theta')$ for some $\theta'$ with $0<\theta'<\theta$, and - (ii) $\left(\bigcup_{t\geq0}\xi_t(B_0)\right)\cap \tilde{f}^{-1}(\theta)\subset V_{\gamma}^{\varepsilon}\times M\times B_0$.

In the cube $C_{\varepsilon}$ of Theorem \[teo.vizinhanca\] there exists $0<\gamma<\varepsilon$ such that $$\inf_{u\in T_{\gamma}}{f}(u)>0,$$ where $T_{\gamma}=\{(x,y)\in \partial^{-}{C}_{\varepsilon}; |x|<\gamma, |y|=\varepsilon\}$, since $f>0$ on the compact set $\partial^{-}{C}_{\varepsilon}\cap U_{{\bf{p}_1}}$. Fix such a $\gamma$. Choose now $\theta>0$ with $$\label{defthe} 0<\theta< \inf_{u\in T_{\gamma}}f(u).$$ Notice that each trajectory that starts inside $V^{\varepsilon}_{\gamma}$ either ends up at the critical point or leaves $V^{\varepsilon}_{\gamma}$ through $T_{\gamma}$. Let $p\in V^{\varepsilon}_{\gamma}\cap f^{-1}(-\infty,\theta]$. On one hand $f$ is increasing and continuous along the trajectories and on the other hand $V^{{\varepsilon}}_{\gamma}\cap\gamma_p$[^2] is connected by the flow-convex property of $V^{\varepsilon}_{\gamma}$. It follows that $\gamma_p$ meets $f^{-1}(\theta)$ before reaching $T_{\gamma}$, by (\[defthe\]). Part (b) is analogous. Since $s_0(b_0)\in S_{{\bf{p}_1}}$ we have that $f(s_0(b_0))\leq 0$ and therefore one can choose $B_0$ with $s_0(B_0)\subset V_{\gamma}^{{\varepsilon}}\cap f^{-1}(-\infty,\theta')$. Part (ii) is an immediate consequence of (i) and the first part of the proof.

From now on the neighborhood $B_0$ of $b_0$ will satisfy the properties of Proposition \[prop.triplo\] for a certain $\theta$ and $\gamma$. We define now the first piece of the transverse intersection we will use later. Let $\tilde {C_{\varepsilon}}:=C_{\varepsilon}\times C_{{\varepsilon}}\times B_0$ and $\mathring{\tilde{C_{\varepsilon}}}:=\mathring{C_{\varepsilon}}\times \mathring{C_{{\varepsilon}}}\times B_0$ and let $W_1\subset [0,1]\times\mathring{\tilde {C_{\varepsilon}}}$ be the analogue of $W$ from Theorem \[teo.subvde\] in this context: $$\begin{aligned} \label{W1def} W_1 &=&\left\{\left(t,\psi_\frac{t}{1-t}(x,y),x,y,b\right); \; 0<t<1\right\}\cap \left(\mathbb{R}\times\mathring{{C_{\varepsilon}}}\times\mathring{{C_{\varepsilon}}}\times B_0\right).\nonumber\end{aligned}$$ In other words, modulo a permutation of the last two coordinates we have: $$W_1= W\times B_0\subset ([0,1]\times\mathring{C_{\varepsilon}}\times \mathring{C_{\varepsilon}})\times B_0$$ where $W$ is as in Theorem \[teo.subvde\].
It follows then from Theorem \[teo.subvde\] that $\overline{W_1}$, the closure inside $[0,1]\times\mathring{ {C_{\varepsilon}}}\times \mathring{ {C_{\varepsilon}}}\times B_0$, is a smooth $(m + 1+n)$-dimensional manifold with boundary: $$\begin{aligned} \partial\overline{W_1}=\underbrace{\{1\}\times({\mathrm{U}}_{{\bf{p}_1}}\cap \mathring{{C_{\varepsilon}}}) \times ({\mathrm{S}}_{{\bf{p}_1}}\cap\mathring{{C_{\varepsilon}}})\times B_0}_{\partial_1{\overline{W_1}}}\bigcup\underbrace{\{0\}\times \Delta_{\mathring{{C_{\varepsilon}}}}\times B_0}_{\partial_0\overline{W_1}}.\end{aligned}$$ For future reference, let $$\label{Wp1}\overline{W}_{{\bf{p}_1}}:=[0,1]\times\{{\bf{p}_1}\}\times\{{\bf{p}_1}\}\times B_0\subset \overline{W_1}$$ be the set of fixed points in $\overline{W}_1$. We look now at the second piece of the transverse intersection mentioned before. Let $$V_{\theta}:= \mathring{V}^{{\varepsilon}}_{\gamma}\cap f^{-1}((-\infty,\theta])\subset \mathring{C_{\varepsilon}}$$ where ${\varepsilon}$, $\gamma$ and $\theta$ are as in Proposition \[prop.triplo\]. ![[The neighborhoods $V_{\theta}$]{}[]{data-label="fig02"}](Desenho2.pdf) Consider the following set: $$Z_1:={{\mathbb R}}\times {V}_{\theta}\times s(B_0) \subset {{\mathbb R}}\times \mathring{{{C}_{\varepsilon}}}\times (\mathring{{{C}_{\varepsilon}}}\times B_0).$$

\[lem.Zbordo\] The set $Z_1$ is a manifold of dimension $m+n+1$ with boundary $$\partial Z_1= {{\mathbb R}}\times(\mathring{V}^{{\varepsilon}}_{\gamma}\cap {f}^{-1}(\theta))\times s(B_0).$$ For any regular value $\theta$ the intersection $U\cap {f}^{-1}(-\infty, \theta]$ is a manifold with boundary for any open $U\subset M$ such that $f^{-1}(\theta)\cap U\neq \emptyset$.

\[s4l1\] The manifolds with boundary $Z_1$ and $\overline{W_1}$ are completely transverse inside ${{\mathbb R}}\times\mathring{{{C}_{\varepsilon}}}\times \mathring{{{C}_{\varepsilon}}}\times B_0$, meaning that the different strata are all transverse. This implies that $Z_1\cap \overline{W_1}$ is a manifold with corners of dimension $\dim{B}+1$.

Notice that $\mathring{\overline{W}_1}=W_1$. We start with the intersection ${W}_1\cap \mathring{Z}_{1}$. The transversality is immediate from the fact that $W_1$ is a graph over the first plus the last two variables, i.e. over ${{\mathbb R}}\times\mathring{{{C}_{\varepsilon}}}\times B_0$, while the factor of $\mathring{Z}_{1}$ in the remaining (graph) direction, namely $V_{\theta}$, is an open subset of $\mathring{{{C}_{\varepsilon}}}$. A similar reasoning applies for $q\in \partial_0 \overline{W_1}\cap \mathring{Z}_{1}$. In fact, due to the transversality of the flow to a regular level set $f^{-1}(\theta)$, this also proves the transversality of ${W_1}$ with $\partial {Z}_{1}$. Notice that $\partial_0{\overline{W_1}}\cap \partial {Z}_{1}=\emptyset$ since a point $q\in\xi_0(B_0)$ cannot satisfy $\tilde{f}(q)=\theta$, due to property (i) of Proposition \[prop.triplo\]. The transversality of $\partial_1\overline{W_1}$ with $\mathring{Z_1}$ follows from the transversality of $s(B_0)$ and $S_{{\bf{p}_1}}\times B_0$. Finally the transversality of $\partial_1\overline{W_1}$ with $\partial Z_1$ follows from the transversality of $\mathrm{U}_{{\bf{p}_1}}$ and ${f}^{-1}(\theta)$ together with the transversality of $s(B_0)$ and $S_{{\bf{p}_1}}\times B_0$.

\[lema.transv\] Let $N_1$ and $N_2$ be submanifolds with corners of type $k$ and $l$ respectively, inside a manifold (with no corners) $N$. If they are completely transverse then $N_1\cap N_2$ is a manifold with corners of type at most $k+l$. We use a standard trick.
Clearly, $N_1\times N_2$ is a manifold with corners of type $k+l$. Then the complete transversality of $N_1$ and $N_2$ is equivalent to the complete transversality of $N_1\times N_2$ and the diagonal submanifold $\Delta$ in $N\times N$. The conclusion then is that $(N_1\times N_2)\cap \Delta$ is a manifold with corners (for this see Proposition A.3 in [@Ci3]). Following Lemma \[lem.Zbordo\] denote $$\mathcal{A}_1:= Z_1\cap \overline{W_1}.$$ The codimension $1$ boundary has the following decomposition in components: $$\begin{aligned} \partial^1\mathcal{A}_1&=&\underbrace{\{1\}\times(\mathrm{U}_{{\bf{p}_1}}\cap f^{-1}(-\infty,\theta])\times s(s^{-1}(\mathrm{S}_{{\bf{p}_1}}\times B_0))}_{\partial^1_1\mathcal{A}_1}\bigcup \underbrace{\{0\}\times {\xi_0(B_0)}}_{\partial^1_0\mathcal{A}_1}\bigcup\nonumber\\ &&\bigcup \underbrace{\overline{W_1}\cap ([0,1]\times{f}^{-1}(\theta)\times s(B_0))}_{\partial^1_2\mathcal{A}_1}\label{bordo.A}.\end{aligned}$$ where $$\partial^1_1\mathcal{A}_1:=\partial_1\overline{W_1}\cap Z_1, \quad \partial^1_0\mathcal{A}_1:=\partial_0\overline{W_1}\cap Z_1, \quad \partial^1_2\mathcal{A}_1:=\overline{W_1}\cap \partial Z_1.$$ The codimension $2$ stratum is given by $$\label{bordo.A3} \partial^2\mathcal{A}_1: = \partial^1_1\mathcal{A}_1\cap\partial^1_2\mathcal{A}_1=\{1\}\times(\mathrm{U}_{{\bf{p}_1}}\cap {f}^{-1}(\theta))\times s(s^{-1}(\mathrm{S}_{{\bf{p}_1}}\times B_0)).$$ We define the resolution map now. Let $$\mathcal{R}:\mathcal{A}_1{\rightarrow}V_{\theta}\times M\times B_0$$ be the restriction to $\mathcal{A}_1$ of the projection onto the three spatial coordinates. Notice that in fact the image of $\mathcal{R}$ is contained in $V_{\theta}\times V_{\theta}\times B_0$.

\[prop.propria\] The map $\mathcal{R}$ is proper.

Recall that for locally compact metric spaces $X,Y$, a map $F:X{\rightarrow}Y$ is proper if and only if for any sequence $(x_n)_{n\in {{{\mathbb N}}}}\in X$ such that $\displaystyle\lim_{n{\rightarrow}\infty}x_n=\infty$ one has $\displaystyle\lim_{n{\rightarrow}\infty} F(x_n)=\infty$. By definition, $$\lim_{n{\rightarrow}\infty} x_n=\infty$$ if for every $K\subset X$ compact there exists $n_0\in {{{\mathbb N}}}$ such that $x_n\in X\setminus K$ for all $n\geq n_0$. Notice that such a sequence does not have any convergent subsequence in $X$, and the converse is also true. It follows easily then that a map $F:X{\rightarrow}Y$ is *not* proper if and only if there exists $(x_n)_{n\in {{{\mathbb N}}}}\in X,$ $x_n{\rightarrow}\infty$, such that $F(x_n){\rightarrow}y\in Y$. Assume therefore that $(u_n)_{n\in {{{\mathbb N}}}}\in \mathcal{A}_1$ satisfies $u_{n}{\rightarrow}\infty$ while $\mathcal{R}(u_{n}){\rightarrow}\tilde{u}$. Now $u_n$ has $4$ components: $$\label{equn} u_n=(t_n,a_n',a_n,b_n)\in [0,1]\times V_{\theta}\times V_{\theta}\times B_0$$ By passing to a subsequence of $u_n$ we can assume $t_n$ converges. There are two possibilities: either $t_n{\rightarrow}t'<1$ or $t_n{\rightarrow}1$. We analyze them separately. First, note that in both cases the triple $\mathcal{R}(u_{n})=(a_n',a_n,b_n)$ converges in $V_{\theta}\times V_{\theta}\times B_0$. From $u_n\in Z_1$ we get $(a_n,b_n)=(s_0(\beta_n),\beta_n)$ for some $\beta_n\in B_0$. Hence $\beta_n$ converges to $\beta\in B_0$ and $s_0(\beta_n){\rightarrow}s_0(\beta)$.
If $t_n{\rightarrow}t'<1$ then for $n$ big enough we have that $u_n\in \overline{W_1}\setminus \{t=1\}$ and therefore $$a_n'=\psi_{\frac{t_n}{1-t_n}}(a_n)$$ Hence $a_n=s_0(\beta_n){\rightarrow}s_0(\beta)$ and $a_n'{\rightarrow}\psi_{\frac{t'}{1-t'}}(s_0(\beta))$, and by hypothesis this is in $V_{\theta}$. We conclude that $$u_n{\rightarrow}\left(t', \psi_{\frac{t'}{1-t'}}(s_0(\beta)),s_0(\beta),\beta\right)$$ and this limit belongs to $Z_1\cap \overline{W_1}=\mathcal{A}_1$. Contradiction with $u_n{\rightarrow}\infty$. If $t_n{\rightarrow}1$ we have that $u_n=(t_n,a_n',s_0(\beta_n),\beta_n)$ and the convergence of $(a_n',s_0(\beta_n),\beta_n)$ to a point in $V_{\theta}\times V_{\theta}\times B_0$ implies again that $\beta_n{\rightarrow}\beta\in B_0$. Since $s_0(B_0)\subset V_{\theta}$ we have that $s_0(\beta_n){\rightarrow}s_0(\beta)\in V_{\theta}$. We get therefore that $u_n$ converges in $[0,1]\times V_{\theta}\times V_{\theta}\times B_0$ since all its coordinates converge. In order to reach a contradiction we need only check that it converges to some element of $Z_1\cap \overline{W_1}$. We have that $$u_n{\rightarrow}u\in {{\mathbb R}}\times\mathring{{{C}_{\varepsilon}}}\times \mathring{{{C}_{\varepsilon}}}\times B_0.$$ On the other hand, since $u_n\in\overline{W_1}$ and the closure of $W_1$ is taken within $ {{\mathbb R}}\times\mathring{{{C}_{\varepsilon}}}\times \mathring{{{C}_{\varepsilon}}}\times B_0$ we get that $u\in \overline{W_1}$. One sees easily that $u\in Z_1$ since $(a_n',s_0(\beta_n),\beta_n)=\mathcal{R}(u_n)$ converges to a point $(a',s_0(\beta),\beta)\in V_{\theta}\times V_{\theta}\times B_0$.

Let $\tilde{V}_{\theta}:=V_{\theta}\times M\times B_0$ be the codomain of $\mathcal{R}$. We show two things: - the currential formula (\[bordo.T\]) holds on $\mathring{\tilde{V}}_{\theta}$, the interior of $\tilde{V}_{\theta}$; - there exists a map from a manifold with corners $N$ to $\partial\tilde{V}_{\theta}=(\mathring{V}^{{\varepsilon}}_{\gamma}\cap f^{-1}(\theta))\times M\times B_0$ that allows us to continue the process. First we list some set-theoretic and differential properties of $\mathcal{R}$.

\[prop.1\] The map $\mathcal{R}:\mathcal{A}_1\longrightarrow \tilde{V}_{\theta}$ satisfies: 1. $\mathcal{R}(\mathcal{A}_1)=\overline{\displaystyle\bigcup_{t\geq 0}\xi_t(B_0)}\bigcap \tilde{V}_{\theta}=:A_{\theta}$ where the closure is taken within $M\times M\times B_0$. 2. $\mathcal{R}(\partial^{1}_{1}\mathcal{A}_1)=(\mathrm{U}_{{\bf{p}_1}}\cap f^{-1}(-\infty,\theta])\times s(s^{-1}(\mathrm{S}_{{\bf{p}_1}}\times B_0))$; 3. $\mathcal{R}(\partial_0^1\mathcal{A}_1)=\xi_0(B_0)$; 4. $\mathcal{R}(\partial_2^1\mathcal{A}_1)=A_{\theta}\cap \tilde{f}^{-1}(\theta)$; 5. the restriction of $\mathcal{R}$ is a bijection from $\mathcal{A}_1\setminus \overline{W}_{{\bf{p}_1}}$ (see (\[Wp1\])) onto its image. 6. the restriction of $\mathcal{R}$ to $\partial^1_2\mathcal{A}_1$ is a bijection onto the image.

Let $q\in \overline{W_1}\cap Z_1$. On one hand, since $q\in \overline{W_1}$, $$q=\lim_{n{\rightarrow}\infty}\left (t_n,\psi_{\frac{t_n}{1-t_n}}\left(x_n,y_n\right),x_n,y_n,b_n\right)$$ where $b_n\in B_0$, $(x_n,y_n)\in \mathring{C}_{{\varepsilon}}$, $t_n\in[0,1]$. On the other hand, from $q\in Z_1$ it follows that $(x_n,y_n,b_n){\rightarrow}(s_0(\beta),\beta)$ for some $\beta\in B_0$. Suppose $t_n{\rightarrow}t'<1$. Then $$\psi_{\frac{t_n}{1-t_n}}(x_n,y_n){\rightarrow}\psi_{\frac{t'}{1-t'}}(s_0(\beta))$$ Hence, in this case $\mathcal{R}(q)=\left (\psi_{\frac{t'}{1-t'}}(s_0(\beta)),s_0(\beta),\beta\right)\in \xi_{\frac{t'}{1-t'}}(B_0)$.
When $t_n{\rightarrow}1$, $q\in \overline{W_1}$ implies that $(x_n,y_n,b_n){\rightarrow}(s_0(\beta),\beta)\in S_{{\bf{p}_1}}\times B_0$, i.e. $\beta\in s^{-1}(S_{{\bf{p}_1}}\times B_0)$, and $\psi_{\frac{t_n}{1-t_n}}\left(x_n,y_n\right){\rightarrow}q_1\in U_{{\bf{p}_1}}\cap V_{\theta}$. We argue that, due to the transversality of $s$ with $S_{{\bf{p}_1}}\times B_0$, all points in $(U_{{\bf{p}_1}}\cap V_{\theta})\times s(s^{-1}(S_{{\bf{p}_1}}\times B_0))$ are limits of points of type $\xi_{t_n}(\beta_n)=(\psi_{t_n}(s_0(\beta_n)),s_0(\beta_n),\beta_n)$, $t_n\geq 0$. Fix first $q_2=(x_2,0,b_2)\in s(s^{-1}(S_{{\bf{p}_1}}\times B_0))$. Since $s\pitchfork S_{{\bf{p}_1}}\times B_0$ we can find a submanifold $B_0'\subset B_0$ of dimension equal to $\dim{U}_{{\bf{p}_1}}$ such that $s_0\bigr|_{B_0'}\pitchfork S_{{\bf{p}_1}}$ and $q_2\in s(B_0')$. Take then a small transverse disk $D$ in $q_2+T_{q_2}s(B_0')$ centered at $q_2$. The trajectories originating in this disk will cut $s(B_0')$ exactly once due to the transversality of $s(B_0')$ to the flow. This stays true even if $q_2=(0,0,b_2)$ is critical. By Corollary \[cortv\], which can be applied also to “slanted” disks, given any point $(0,y_2)\in U_{{\bf{p}_1}}$ there exists a sequence of points $u_n\in D$ with $u_n{\rightarrow}q_2$ and a corresponding sequence of points on the trajectories determined by the $u_n$ that converges to $(0,y_2)$. This finishes the inclusion $\mathcal{R}(\mathcal{A}_1)\subset \overline{\bigcup_{t\geq 0}\xi_t(B_0)}$. Conversely, let $(a,b,c)\in \overline{\bigcup_{t\geq 0}(\xi_t(B_0))}\bigcap \tilde{V}_{\theta}$. Then there exist $t_n\geq 0$ and $b_n\in B_0$ such that $(\psi_{t_n}(s_0(b_n)),s_0(b_n), b_n){\rightarrow}(a,b,c)$ with $c\in B_0$. By passing to a subsequence one can assume that $t_n{\rightarrow}t_0$ or $t_n{\rightarrow}\infty$. Since $a\in V_{\theta}\subset \mathring{V}^{{\varepsilon}}_{\gamma}$ and the latter is open in $M$ we can consider $\psi_{t_n}(s_0(b_n))\in \mathring{V}^{{\varepsilon}}_{\gamma}$ for $n$ big enough. However $\psi_{t_n}(s_0(b_n))$ might not be in $V_{\theta}$ for infinitely many $n$, since it could happen that $f(\psi_{t_n}(s_0(b_n)))>\theta$ for a subsequence. Let $r_n:=\frac{t_n}{1+t_n}$. We have that $$u_n:=\left(r_n,\psi_{t_n}(s_0(b_n)),s_0(b_n), b_n\right)\in \overline{W_1}$$ Since $r_n$, $b_n$ and $\psi_{t_n}(s_0(b_n))$ all converge, we have that in fact $u_n$ converges to a point $u\in [0,1]\times{V}_{\theta}\times\mathring{{C_{{\varepsilon}}}}\times B_0$ that necessarily lies in $\overline{W_1}$. The limit $u=(r,a,b,c)$ will also be a point in $Z_1$ since $(b,c)\in s(B_0)$ and $a\in V_{\theta}$. Hence $(a,b,c)\in \mathcal{R}(\mathcal{A}_1)$. Statements (2) and (3) are trivial from the description of $\partial^{1}_{1}\mathcal{A}_1$ and $\partial_0^1\mathcal{A}_1$. For (4) the inclusion $\subset$ is straightforward. For the inclusion $\supset$ notice that if $(a,b,c)\in A_{\theta}$ then either ${f}(a)<\theta$, in which case $a\in \mathring{{V_{\theta}}}$ and any preimage of $(a,b,c)$ under $\mathcal{R}$ lies in $\mathring{Z_1}\cap \overline{W_1}$, or ${f}(a)=\theta$, in which case any preimage lies in $\partial {Z_1}\cap \overline{W_1}=:\partial^1_2\mathcal{A}_1$. For (5) one notices that for $t\neq 1$ the (restriction of the) map $\mathcal{R}$ is injective away from the points corresponding to $s^{-1}(\{{\bf{p}_1}\}\times B_0)$. Moreover $\mathcal{R}$ is injective when $t=1$. For $p\neq q$ with $t_p\neq 1$ and $t_q=1$, $\mathcal{R}(p)\neq \mathcal{R}(q)$ unless $p,q\in \overline{W}_{{\bf{p}_1}}$. For (6) one notices that $\overline{W}_{{\bf{p}_1}}\cap \partial^1_2\mathcal{A}_1=\emptyset$.
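The appearance of the extra boundary term can already be seen in the linear model, again only as an illustration and under the same simplifying assumption $\psi_t(x,y)=(\mathrm{e}^{-\alpha t}x,\mathrm{e}^{\alpha t}y)$: take, for the sake of the example, $B_0$ a small disk of dimension $\dim U_{{\bf{p}_1}}$ and $s_0(b)=(x_0,b)$ a slice transverse to $S_{{\bf{p}_1}}$. For a fixed $y\in U_{{\bf{p}_1}}$ in the relevant range and $b_n:=\mathrm{e}^{-\alpha t_n}y$ with $t_n{\rightarrow}\infty$ we get $$\xi_{t_n}(b_n)=\left(\left(\mathrm{e}^{-\alpha t_n}x_0,\,y\right),\ (x_0,b_n),\ b_n\right){\longrightarrow}\left((0,y),(x_0,0),0\right)\in (\mathrm{U}_{{\bf{p}_1}}\cap V_{\theta})\times s(s^{-1}(\mathrm{S}_{{\bf{p}_1}}\times B_0)),$$ so the closure of $\bigcup_{t\geq0}\xi_t(B_0)$ indeed picks up the set appearing in item (2), which, after applying $d$, is the local incarnation of the first term on the right hand side of (\[bordo.T\]).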
\[coreqfund\] The equality of currents (\[bordo.T\]) holds on the open set $\mathring{V}_{\theta}\times M\times B_0$.

The intersection $\mathring{\mathcal{A}}_{1}:=\mathcal{A}_1\cap \mathcal{R}^{-1}(\mathring{V}_{\theta}\times M\times B_0)$ is a manifold with boundary since $\partial^1_2\mathcal{A}_1$ gets eliminated in the intersection. Since $\mathcal{R}$ is proper we can push forward currents. Use: $$d (\mathcal{R}_*(\mathring{\mathcal{A}}_{1}))=\mathcal{R}_*(d\mathring{\mathcal{A}}_{1})=\mathcal{R}_*(\partial^1_1\mathcal{A}_1)-\mathcal{R}_*(\partial^1_0\mathcal{A}_1).$$ Now $\mathcal{R}\bigr|_{\mathring{\mathcal{A}}_{1}}$, away from a set of zero measure[^3], is a bijection onto $$\xi([0,\infty)\times B_0)\cap (\mathring{V}_{\theta}\times M\times B_0).$$ It follows from the area formula that $$\mathcal{R}_*(\mathring{\mathcal{A}}_{1})=T\bigr|_{\mathring{V}_{\theta}\times M\times B_0}$$ where $T$ is the current appearing in (\[bordo.T\]).

This is the first step. In order to proceed further we will use the restriction $\mathcal{R}:\partial^1_2\mathcal{A}_1{\rightarrow}f^{-1}(\theta)\times M\times B_0=\tilde{f}^{-1}(\theta)$. We fix now another critical point ${\bf p}_2$ with $f({\bf p}_2)>f({\bf{p}_1})$. In order to implement the program we need the following.

\[otranslem\] The restriction of $\mathcal{R}$, denoted $\sigma: \partial_{2}^{1}\mathcal{A}_1\longrightarrow \tilde{f}^{-1}(\theta)$, is completely transverse to $\mathrm{S}_{{\bf p}_2}\times M\times B_0$ within $\tilde{f}^{-1}(\theta)$ for all critical points ${\bf p}_2$.

First a clarification. The complete transversality for the map $\sigma$ is meant here both within the ambient space $\tilde{f}^{-1}(\theta)$ (with $(\mathrm{S}_{{\bf p}_2}\times M\times B_0)\cap \tilde{f}^{-1}(\theta)$) and within the ambient space $M\times M\times B_0$. The two statements are clearly equivalent due to the transversality of $S_{{\bf p}_2}$ to ${f}^{-1}(\theta)$ for every regular value $\theta$. Since $\partial_{2}^{1}\mathcal{A}_1$ is a manifold with boundary $\partial^2\mathcal{A}_1$, as defined in (\[bordo.A3\]), we need to show transversality at points - (1) $q\in \sigma(\partial^2\mathcal{A}_1)\cap (\mathrm{S}_{{\bf p}_2}\times M\times B_0)$ - (2) $q\in \sigma(\partial_{2}^{1}\mathcal{A}_1\setminus \partial^2\mathcal{A}_1) \cap (\mathrm{S}_{{\bf p}_2}\times M\times B_0).$ For both situations, take the unique (by (6) of Prop. \[prop.1\]) $q'\in \partial_{2}^{1}\mathcal{A}_1$ such that $\sigma(q')=q$. The two situations are distinguished by $t_{q'}=1$ (for (1)) or $t_{q'}\neq 1$ (for (2)). From the explicit expression of $\partial^2\mathcal{A}_1$ in (\[bordo.A3\]) we see that transversality for (1) is implied by the Smale property of the flow. For (2), since $t\neq 1$ we can describe $\partial^1_2\mathcal{A}_1\setminus\{t=1\}$ as the graph of the time map, defined over $\xi_0(B_0\setminus s^{-1}(S_{{\bf{p}_1}}\times B_0))$, that associates to a point $p$ the time $t_p$ it needs to reach $\tilde{f}^{-1}(\theta)$. We deduce that the intersection of this time-map graph with the flow-invariant $S_{{\bf p}_2}\times M\times B_0$ can be described as the flow-out to the level set $\tilde{f}^{-1}(\theta)$ of the intersection $\xi_0(B_0\setminus s^{-1}(S_{{\bf{p}_1}}\times B_0))\cap (S_{{\bf p}_2}\times M\times B_0)$. By the transversality of $s$ with $S_{{\bf p}_2}\times B_0$ we get that this intersection is transverse. Moreover, since the flow preserves transversality, we get by Proposition \[FlB\] the transversality condition we are after within $\tilde{f}^{-1}(\theta)$.
We summarize what we have done so far. We started with the proper submanifold $\xi_0(B_0)$ of $V_{\theta}\times M\times B_0$ and we constructed a resolution of its flow-out in the open set $\mathring{V}_{\theta}\times M\times B_0$. The resolution took the form of a map $\mathcal{R}$ from a manifold with corners $\mathcal{A}_1$ to $V_{\theta}\times M\times B_0$. Of course this only solves the problem of the flow-out of $\xi_0(B_0)$ for the first critical point encountered. How do we proceed from here? The map $\sigma:\partial^1_2\mathcal{A}_1{\rightarrow}\tilde{f}^{-1}(\theta)$ will now play the role of $\xi_0:B_0{\rightarrow}M\times M\times B_0$ and we would like to apply the same ideas to $\sigma$. So we would like to see which properties of $\sigma$ can be preserved when going through the next critical level. Clearly, by composing with the flow-diffeomorphisms we can assume that the image of $\sigma$ is contained in a regular level of $\tilde{f}$ close to the next critical level. There is no harm in assuming that the next critical level lies within $\tilde{f}^{-1}(0)$, by changing $f$ to $f+c$ for some constant $c$. Moreover we will be using certain neighborhoods of the critical sets. First, since the Shilnikov-Minervini results are local around the critical points we will use different neighborhoods around the critical sets in $\tilde{f}^{-1}(0)$. - We will denote $\tilde{C}_{{\varepsilon}}:= \bigcup_{p\in \operatorname{Crit}(f)\cap f^{-1}(0)} C_{{\varepsilon}}(p)\times M\times B_0$, where ${\varepsilon}$ is chosen so that the results of Section \[s.BVP\] hold for each cube $C_{{\varepsilon}}(p)$ with $p$ in the finite set $\operatorname{Crit}(f)\cap f^{-1}(0)$. - For each ${\bf p}\in \operatorname{Crit}(f)\cap f^{-1}(0)$ we will choose $\theta_{\bf p}$ and $\gamma_{\bf p}$ small enough so that item (ii) of Proposition \[prop.triplo\] is satisfied (observe that the points ${\bf p}\in \operatorname{Crit}(f)\cap f^{-1}(0)$ are not points of local minimum in our context). Then let $\theta_1:=\min {\theta_{\bf p}}$, $\gamma:=\min {\gamma_{\bf p}}$, $V_{\theta_{\bf p}}:=V^{{\varepsilon}}_{\gamma}({\bf p})\cap f^{-1}[-\theta_1,\theta_1]$, $V_{\theta_1}=\cup_{{\bf p}} V_{\theta_{\bf p}}$ and $$\tilde{V}_{\theta_1}:=V_{\theta_1}\times M\times B_0.$$ Here is the model result we are after.

\[model.prop\] Let $N$ be an oriented manifold with corners of dimension $n$ and type $k\geq1$[^4], let $0$ be a critical level of $\tilde{f}$ and let $-\theta'$ be a regular level with $0<\theta'<\theta_1$ small enough. Let $\sigma:N\rightarrow \tilde{f}^{-1}(-\theta' )\cap \tilde{V}_{\theta_1}$, $\sigma=(\sigma_1,\sigma_2,\sigma_3)$, be a smooth map such that - (a) $\sigma$ is proper - (b) $\sigma$ is completely transverse to all stable manifolds $\mathrm{S}_{\bf p}\times M\times B_0$ for ${\bf p}\in \operatorname{Crit}(f)$; equivalently $\sigma_1\pitchfork \mathrm{S}_{\bf p}$, for all ${\bf p}\in\operatorname{Crit}(f)$. - (c) $\sigma\bigr|_{N\setminus \partial^2N}$ is injective, where $\partial^2N$ is the collection of strata of codimension at least $2$. - (d) $\sigma_1\bigr|_{N^0}= \alpha((\sigma_2,\sigma_3)\bigr|_{N^0})$ for some function $\alpha$, where $N^0$ is the top stratum of $N$.
Then there exists a smooth map, called the flow resolution, $\mathcal{R}_{\sigma}: \mathcal{A}_\sigma\longrightarrow \tilde{V}_{\theta_1}$, defined over an oriented manifold with corners $\mathcal{A}_{\sigma}$ of dimension $n+1$ and type $k+2$, such that - (i) $\mathcal{R}_{\sigma}$ is proper; - (ii) $\mathcal{R}_{\sigma}(\mathcal{A}_\sigma)=\overline{\left(\bigcup_{t\geq 0}\xi_t(\sigma(N))\right)}\cap \tilde{V}_{\theta_1}$ where the closure is taken in $M\times M\times B_0$. - (iii) $\mathcal{R}_{\sigma}$ is a bijection from an open subset of $\mathcal{A}_{\sigma}$ of full measure to an open subset of full measure of $\left(\bigcup_{t\geq 0}\xi_t(\sigma(N))\right)\cap \tilde{V}_{\theta_1}.$ - (iv) $\mathcal{A}_{\sigma}$ has a distinguished boundary $N':=\partial^1_{\theta_1}\mathcal{A}_{\sigma}$ of dimension $n$ and type $k+1$ such that $\sigma':=\mathcal{R}_{\sigma}|_{N'}: N' \longrightarrow \tilde{f}^{-1}(\theta_1)\cap \tilde{V}_{\theta_1}$ is completely transverse to all the stable manifolds $S_{\bf p}\times M\times B_0$, is injective on $N'\setminus\partial^2N'$, and the components of $\sigma'$ satisfy the property of item (d) when restricted to $(N')^0$.

The word distinguished is related to the fact that $\partial^1_{\theta_1}\mathcal{A}_{\sigma}$ is not the full boundary of $\mathcal{A}_{\sigma}$, but a boundary piece that has a collar neighborhood. It is important to specify the codomain of $\sigma$ in order to state the properness property. In the induction process we are using, the original map $\mathcal{R}\bigr|_{\partial^1_2\mathcal{A}_{1}}$ is proper when the codomain is $(V_{\theta}\times M\times B_0)\cap \tilde{f}^{-1}(\theta)$. The latter is an open set inside $\tilde{f}^{-1}(\theta)$. Clearly the inclusion of an open set into the ambient space is not proper. The injectivity property stated in item (c) appears because we want the current $(\mathcal{R}_{\sigma})_*(\mathcal{A}_{\sigma})$ to be determined by the image of $\mathcal{R}_{\sigma}$. Otherwise one could have multiplicities or worse things happening. One cannot expect injectivity to hold everywhere. The seemingly strange property (d) is there to ensure the “replication” of the injectivity property away from the codimension $2$ stratum. Property (d) is fulfilled for the initial $\sigma$, the restriction of $\mathcal{R}$ to $\partial^1_2\mathcal{A}_1$. In that case, $N^0$ is the graph of the time map $p{\rightarrow}t_p$ for $p\in \xi_0(B_0\setminus \xi_0^{-1}(S_{{\bf p}_1}\times M\times B_0))$, as described in the proof of Lemma \[otranslem\], while $\sigma$ projects $(t_p,p)$ to $p$. Since the first component of $\xi_0$ is dependent on the other two, we get the claim. In order to prove Proposition \[model.prop\] we also need to deal with the fact that $\sigma(N)$ is not necessarily a submanifold of $V_{\theta_1}\times M\times B_0$. Hence rather than “flowing” $\sigma(N)$ we will consider the graph $\Gamma_{\sigma}$. It is convenient to consider first a proper embedding $N\hookrightarrow {{\mathbb R}}^j$ and look at the closure of the set $$W_2:=\left\{ \left(t,\psi_{\frac{t}{1-t}}(m_1),m_1,m_2,b,n\right)~|~ 0<t<1\right\}\cap {{\mathbb R}}\times\mathring{C}_{{\varepsilon}}\times \mathring{C}_{{\varepsilon}}\times M\times B_0\times {{\mathbb R}}^j$$ inside $ {{\mathbb R}}\times\mathring{C}_{{\varepsilon}}\times \mathring{C}_{{\varepsilon}}\times M\times B_0\times {{\mathbb R}}^j$.
By Remark \[rem001\], this closure is a manifold of dimension $2m+n+j+1$ with boundary $$\begin{aligned} \partial^1_1 \overline{W_2}:= \bigcup_{{\bf p}\in f^{-1}(0)\cap \operatorname{Crit}(f)} \{1\}\times(U_{{\bf p}}\cap \mathring{C}_{{\varepsilon}}({\bf p}))\times (S_{{\bf p}}\cap \mathring{C}_{{\varepsilon}}({\bf p}))\times M\times B_0\times {{\mathbb R}}^j \bigcup \\ \partial^1_0 \overline{W_2}:= \{0\}\times \triangle_{\mathring{C}_{{\varepsilon}}}\times M\times B_0\times {{\mathbb R}}^j.\qquad\qquad\qquad\end{aligned}$$ Let $$Z_{\sigma}:= {{\mathbb R}}\times V_{\theta_1}\times \Gamma_{\sigma}\subset {{\mathbb R}}\times V_{\theta_1}\times V_{\theta_1}\times M\times B_0\times N.$$ Since $V_{\theta_1}$ is a manifold with boundary we get that $Z_{\sigma}$ is a manifold with corners of dimension $m+n+1$ and type $k+1$. The codimension $1$ boundary components of $Z_{\sigma}$ are $$\begin{aligned} \partial^1_0Z_{\sigma}:={{\mathbb R}}\times(V_{\theta_1}\cap f^{-1}(-\theta_1))\times \Gamma_{\sigma} \qquad \quad \qquad\\ \partial^1_1Z_{\sigma}:={{\mathbb R}}\times(V_{\theta_1}\cap f^{-1}(\theta_1))\times \Gamma_{\sigma} \qquad \qquad\qquad \\ \partial^1_{j+1}Z_{\sigma}:={{\mathbb R}}\times V_{\theta_1}\times \Gamma_{\sigma\bigr|_{\partial^1_{j} N}},\;\; 1\leq j\leq f_{N} \end{aligned}$$ where $f_N$ is the number of codimension $1$ boundary components of $N$, i.e. the number of connected components of $\partial^1 N\setminus \partial^2 N$, assumed finite.

\[Lem\] The manifolds $\overline{W_2}$ and $Z_{\sigma}$ are completely transverse inside ${{\mathbb R}}\times\mathring{C}_{{\varepsilon}}\times \mathring{C}_{{\varepsilon}}\times M\times B_0\times {{\mathbb R}}^j$. This implies that $Z_{\sigma}\cap \overline{W_2}$ is a manifold with corners of dimension $n+1$ and type at most $k+2$. Moreover, the codimension $1$ boundary of $Z_{\sigma}\cap \overline{W_2}$ has the following components which are themselves manifolds with corners $$\begin{aligned} \quad \partial^1_1\overline{W_2}\cap Z_{\sigma}\quad&=&\bigcup_{{\bf p}\in \operatorname{Crit}(f)\cap f^{-1}(0)}\{1\}\times (U_{{\bf p}}\cap f^{-1}([0,\theta_1]))\times \Gamma_{\sigma_{|\sigma^{-1}(S_{\bf p}\times M\times B_0)}} \\ \label{levelthetaprim} \partial^1_0\overline{W_2}\cap Z_{\sigma}\quad&= &\{0\}\times\Gamma_{\tilde{\sigma}}\\ \label{boundtheta} \overline{W_2}\cap \partial^1_1Z_{\sigma}\quad&&\\ \overline{W_2}\cap \partial^1_{j+1}Z_{\sigma}&\mbox{for}&1\leq j\leq f_N.\end{aligned}$$ where $\tilde{\sigma}=(\sigma_1,\sigma_1,\sigma_2,\sigma_3)$ given that $\sigma=(\sigma_1,\sigma_2,\sigma_3)$. Finally, $$\label{emptyset}\overline{W_2}\cap \partial^1_0Z_{\sigma}\quad=\emptyset.$$

Analogous to the proof of Lemma \[s4l1\]. The reason for (\[emptyset\]) is that the $f$-value of the second component of $\overline{W_2}$ is at least as big as the $f$-value of the third component, while this is not the case for an element of $\partial^1_0Z_{\sigma}$, due to $f \circ\sigma_1=-\theta'>-\theta_1$.

The reason we chose $\operatorname{Im}\sigma\subset \tilde{f}^{-1}(-\theta')$ is that we did not want $\operatorname{Im}\sigma \subset\partial \tilde{V}_{\theta_1}$. That would render Lemma \[Lem\] false. An alternative approach, if $\operatorname{Im}\sigma\subset \tilde{f}^{-1}(-\theta_1)$, would be to replace $V_{\theta_1}$ in the definition of $Z_{\sigma}$ with $V^{\theta'}_{\theta_1}=f^{-1}([-\theta',\theta_1])\cap V^{{\varepsilon}}_{\gamma}$ where $\theta'<\theta_1$.
Then one has to be content with the construction of the resolution for the flow-out of $\sigma(N)$ in between levels $-\theta'$ and $\theta_1$. Let $\mathcal{A}_{\sigma}:=\overline{W_2}\cap Z_{\sigma}$. This is an oriented manifold with corners because $\overline{W_2}$ and $Z_{\sigma}$ are both oriented. The convention here is that the components which correspond to graphs of smooth functions over a certain oriented base manifold $B'$ inherit the orientation of the manifold $B'$. Hence the direction of the flow gives the first vector of a positively oriented basis. Other than this, we respect the order of factors in a product. Consider now $$\mathcal{R}_{\sigma}:\mathcal{A}_{\sigma}{\rightarrow}V_{\theta_1}\times M\times B_0$$ to be the restriction of the projection onto the second, fourth and fifth components of the product ${{\mathbb R}}\times\mathring{C}_{{\varepsilon}}\times \mathring{C}_{{\varepsilon}}\times M\times B_0\times {{\mathbb R}}^j$.

\[proper2\] The map $\mathcal{R}_{\sigma}$ is proper.

Suppose, just like in Proposition \[prop.propria\], that there exists a sequence $u_n\in\mathcal{A}_{\sigma}$ such that $u_n{\rightarrow}\infty$ and $\mathcal{R}_{\sigma}(u_n)$ converges. Now, $u_n$ has six components $$u_n=(t_n,a_n',a_n,m_n,b_n, z_n)$$ with $t_n\in [0,1]$. By passing to a subsequence we can assume that $t_n{\rightarrow}t'\in[0,1]$. We will show that in both cases $t'=1$ and $t'\neq 1$ one reaches a contradiction by showing that there exists a subsequence of $u_n$ which converges in $\mathcal{A}_{\sigma}$. For the case $t'=1$ we will use the fact that $\{t=1\}\cap \overline{W}\cap ([0,1]\times V_{\theta_1}\times V_{\theta_1})$ is compact and thus has a compact neighborhood. In fact, one notices that $\overline{W}$ is completely transverse to ${{\mathbb R}}\times V_{\theta_1}\times V_{\theta_1}$ and their intersection is a manifold with corners, with one of the codimension $1$ boundaries being contained in $\{t=1\}$: $$\bigcup_{{\bf p}\in\operatorname{Crit}(f)\cap f^{-1}(0)}\{1\}\times(U_{\bf p}\cap V_{\theta_1})\times(S_{\bf p}\cap V_{\theta_1}).$$ This is compact. We conclude that from $(a_n',a_n)$ we can extract a subsequence, denoted again $(a_n',a_n)$, that converges to a point in $V_{\theta_1}\times V_{\theta_1}$. On the other hand, we have by hypothesis that $(a_n',m_n,b_n)$ converges. Hence by passing to a subsequence we conclude that $(a_n,m_n,b_n)$ converges to some point $(a,m,b)$. But $(a_n,m_n,b_n)=(\sigma_1(z_n),\sigma_2(z_n),\sigma_3(z_n))$ and $\sigma$ is proper. It follows that we can extract yet another subsequence, this time from $z_n$, that converges to $z\in N$ (just take $\sigma^{-1}(K)$ where $K$ is a compact neighborhood of $(a,m,b)$). But then by the continuity of $\sigma$ we get that $(a_n,m_n,b_n, z_n)$ converges to $(a,m,b,z)\in\Gamma_{\sigma}$. Since $a_n'$ converges to a point in $V_{\theta_1}$ we conclude that $u_n$ converges to a point in $\mathcal{A}_{\sigma}$, a contradiction with $u_n{\rightarrow}\infty$. For $t'\neq 1$ we use that $a_n'=\psi_{\frac{t_n}{1-t_n}}(a_n)$ and, since $a_n'$ converges and $\frac{t_n}{1-t_n}{\rightarrow}\frac{t'}{1-t'}$, we conclude that $a_n$ also converges. We claim that $a_n$ converges to a point $a\in V_{\theta_1}$. First, $a_n'{\rightarrow}a'\in V_{\theta_1}$ by hypothesis. Now $V_{\theta_1}$ is a flow-convex neighborhood and $a'=\psi_{\frac{t'}{1-t'}}(a)$. It follows that $a\in V_{\theta_1}$. Now the contradiction is obtained as before: $(a_n,m_n,b_n)=(\sigma_1(z_n),\sigma_2(z_n),\sigma_3(z_n))$, etc.
We now complete the proof of Proposition \[model.prop\]. We need only be concerned with items (ii)-(iv). The distinguished boundary is defined as: $$N':=\partial_{\theta_1}^1\mathcal{A}_{\sigma}:=\overline{W_2}\cap \partial^1_1Z_{\sigma}=\overline{W_2}\cap( [0,1]\times {f}^{-1}(\theta_1)\times \Gamma_{\sigma}).$$ Clearly $\mathcal{R}_{\sigma}({\partial_{\theta_1}^1\mathcal{A}_{\sigma}})\subset \tilde{f}^{-1}(\theta_1)$. In order to prove (ii) it is useful to consider the projection $\tilde{\mathcal{R}}_{\sigma}$ from $\mathcal{A}_{\sigma}$ onto the second, fourth, fifth and sixth components. Then the image of this map gives the closure of the flow-out of $\Gamma_{\sigma}$ inside $V_{\theta_1}\times M\times B_0\times {{\mathbb R}}^j$; the proof of this fact follows the same lines as the proof of item (1) in Proposition \[prop.1\]. The projection of this closure onto $V_{\theta_1}\times M\times B_0$ equals $\overline{\left(\bigcup_{t\geq 0}\xi_t(\sigma(N))\right)}\cap \tilde{V}_{\theta_1}$ and this takes care of (ii). The map $\tilde{\mathcal{R}}_{\sigma}$ mentioned in the previous paragraph is injective on $\mathcal{A}_{\sigma}\setminus \{t=1\}$, since on this set all points are of type $$(t,\psi_{\frac{t}{1-t}}(\sigma_1(z)),\sigma_1(z),\sigma_2(z),\sigma_3(z),z),\qquad z\in N,\ t\in [0,1),$$ which get projected to $(\psi_{\frac{t}{1-t}}(\sigma_1(z)),\sigma_2(z),\sigma_3(z),z)$. Clearly all the points in the forward flow-out of $\Gamma_{\sigma}$ that are inside $V_{\theta_1}\times M\times B_0\times {{\mathbb R}}^j$ are also in the image of this projection. In order to obtain a set where $\mathcal{R}_{\sigma}$ is injective one needs to take out more points. Since $\sigma\bigr|_{\partial^2N}$ is completely transverse when restricted to all distinguished boundary pieces $\partial^2_i N$ of $\partial^2N$, it follows that we can define a natural subspace $\mathcal{A}_{\sigma\bigr|_{\partial^2N}}$ of $\mathcal{A}_{\sigma}$ by taking the union of the corresponding sets $\overline{W_2}\cap Z_{\sigma\bigr|_{\partial^2_iN}}$. Since this $\mathcal{A}_{\sigma\bigr|_{\partial^2N}}$ is a union of manifolds with corners of lower dimension, it has measure zero inside $\mathcal{A}_\sigma$. Then $\mathcal{R}_{\sigma}$ is injective on $\mathcal{A}_{\sigma}\setminus \left(\{t=1\}\cup \mathcal{A}_{\sigma\bigr|_{\partial^2N}}\right)$ and this takes care of (iii). In order to prove transversality we can use Lemma \[transvlem\] to reduce to proving the transversality of $\tilde{\mathcal{R}}_{\sigma}\bigr|_{\partial^1_{\theta_1}\mathcal{A}_{\sigma}}$ with $S_{\bf p}\times M\times B_0\times {{\mathbb R}}^j$. This then follows the same scheme as Lemma \[otranslem\]. For injectivity we note that $\partial^1_0\overline{W_2}\cap\partial^1_1Z_{\sigma}=\emptyset$ and we separate $N'=\partial^1_{\theta_1}\mathcal{A}_{\sigma}$ into two parts: $$N_1:=(\overline{W_2}\setminus \{t=1\})\cap \partial^1_1Z_{\sigma} \;\;\mbox{and}\;\; N_2:=\partial^1_1\overline{W_2}\cap\partial^1_1Z_{\sigma}.$$ We observe that $\mathcal{R}_{\sigma}$ takes $N_1$ and $N_2$ to two disjoint sets in $\tilde{V}_{\theta_1}$, distinguished by the fact that the first component belongs to $\cup_{{\bf p}\in \operatorname{Crit}(f)\cap f^{-1}(0)} U_{\bf p}$ (in the case of points in $\mathcal{R}_{\sigma}(N_2)$) or does not belong to the same set (for $\mathcal{R}_{\sigma}(N_1)$). We have that $N_2$ is a distinguished boundary of $N'$ while $N_1$ contains the top stratum of $N'$.
Hence it is enough to prove the injectivity of $\mathcal{R}_{\sigma}$ separately on $N_1\setminus \partial^2 N_1$ and on $N_2\setminus \partial^1N_2$. The points in $N_1\setminus \partial^2 N_1$ are of type $\left(t_{0},\psi_{\frac{t_0}{1-t_0}}(\sigma_1(z)),\sigma_1(z),\sigma_2(z),\sigma_3(z),z\right)$ with $z\in N\setminus \partial^2N$ and they get mapped to $(a,m,b):=\left(\psi_{\frac{t_0}{1-t_0}}(\sigma_1(z)),\sigma_2(z),\sigma_3(z)\right)\in f ^{-1}(\theta_1)\times M\times B_0$. Then $t_0$ is the unique time it takes to flow backwards from point $a\in f ^{-1}(\theta_1)$ to level $f^{-1}(-\theta')$, $\sigma_1(z)$ is the point of intersection with level $f^{-1}(-\theta')$ of the trajectory determined by $a$, and $z\in N\setminus \partial^2N$ is uniquely determined by $\sigma(z)$ due to the hypothesis. The description $N_2=\displaystyle\bigcup_{{\bf p}\in f^{-1}(0)\cap \operatorname{Crit}(f)}\{1\}\times (U_{\bf{p}}\cap f^{-1}(\theta_1))\times \Gamma_{\sigma\bigr|_{\sigma^{-1}(S_{\bf p}\times M\times B_0)}}$ is useful. For the top stratum $(N_2)^0$ of $N_2$, one restricts $\Gamma_{\sigma}$ to $\sigma^{-1}(S_{\bf p}\times M\times B_0)\cap N^0$. If $p=(u,\sigma_1(z),\sigma_2(z),\sigma_3(z),z)\in (N_2)^0$ then $u$ does not determine $\sigma_1(z)$ anymore, but property (d) says that $\sigma_1(z)$ is determined by $(\sigma_2(z),\sigma_3(z))$. Since $\sigma$ is also injective on $N^0$ we get that $p$ determines $z$. Hence the map on $(N_2)^0$ that takes $p$ to $(u,\sigma_2(z),\sigma_3(z))$ is injective and this finishes the proof of this issue. Finally, property (d) itself holds for $\sigma':N'{\rightarrow}\tilde{V}_{\theta_1}\cap ({f}^{-1}({\theta_1})\times M\times B_0)$. This follows from the description of $(N')^0=(N_1)^0$ above and the fact that $\sigma$ is injective on $N^0$ and also satisfies property (d). \[transvlem\] Let $\tilde{\sigma}:\tilde{N}{\rightarrow}\tilde{M}\times {{\mathbb R}}^j$ be a smooth map. Then the (complete) transversality of $\tilde{\sigma}$ with $\tilde{S}\times {{\mathbb R}}^j$, for some submanifold $\tilde{S}\subset \tilde{M}$ implies the (complete) transversality of $\tilde{\pi}\circ\sigma$ with $\tilde{S}$ where $\tilde{\pi}:\tilde{M}\times {{\mathbb R}}^j{\rightarrow}\tilde{M}$ is the projection. Straightforward. We now derive an important consequence of Proposition \[model.prop\] in terms of currents. Let $T_{\sigma}=\mathcal{R}_{\sigma}(\mathcal{A}_{\sigma})$ be the flow-out of $\sigma(N)$ between levels $-\theta'$ and $\theta_1$. It is a rectifiable current, once it is endowed, over the points where $\mathcal{R}_{\sigma}$ is a bijection, with the orientation induced by the direction of the flow and the orientation of $N^0$. Let $T_{\sigma\bigr|_{\partial^1_j N}}:=\mathcal{R}_{\sigma}(\overline{W_2}\cap \partial^1_{j+1} Z_{\sigma})$ be the flow-out of $\sigma(\partial^1_j N)$ for the $j$-th codimension $1$ boundary of $N$ also between levels $-\theta'$ and $\theta_1$. This is a rectifiable current of dimension $n$. \[Impcor\]Let $\sigma$ be a smooth map as in Proposition \[model.prop\]. The following currential equation holds in the open set $\tilde{V}_{\theta_1}\cap\tilde{f}^{-1}(-\theta',\theta_1)$. 
$$\begin{aligned} \quad \label{cordT}dT_{\sigma}=\sum_{{\bf p}\in \operatorname{Crit}(f)\cap f^{-1}(0)} (U_{\bf p}\cap f^{-1}[0,\theta_1))\times (\sigma_2,\sigma_3)(\sigma^{-1}_1(S_{\bf p})) +\sum_{j=1}^{f_N}T_{\sigma\bigr|_{\partial^1_j N}} \end{aligned}$$ One uses Lemma \[Lem\], Proposition \[model.prop\] and Stokes on the manifold with corners $\mathcal{A}_{\sigma}$ pushed forward via the proper map $\mathcal{R}_{\sigma}$. The injectivity property identifies $(\mathcal{R}_{\sigma})_*(\mathcal{A}_{\sigma})$ with $T_{\sigma}$ and $(\mathcal{R}_{\sigma})_* (\overline{W_2}\cap \partial^1_{j+1}Z_{\sigma})$ with $T_{\sigma\bigr|_{\partial^1_j N}}$ for all $j$. It is important to understand why equation (\[cordT\]) was not stated as an identity directly in the open set $\tilde{f}^{-1}(-\theta',\theta_1)$. The conditions in the statement of Proposition \[model.prop\] do not exclude the possibility that the image of $\sigma$ oscillates wildly close to the topological boundary of $\tilde{V}_{\theta_1}\cap \tilde{f}^{-1}(-\theta')$ inside $\tilde{f}^{-1}(-\theta')$, so that $\sigma_*(N)$ might not be extendable as a current outside this neighborhood. This is of course not the case for the situation where we will apply Proposition \[model.prop\]. In the first step of the induction, $\sigma$ is the restriction of $\mathcal{R}$ to $\partial^1_2\mathcal{A}_{\theta}$ and the image of this map “away” from $U_{{\bf p}_1}\times S_{{\bf p}_1}\times B_0$ is simply the flow-out to level $\theta$ of $\xi_0(B_0\setminus B_0')$ where $B_0'\subset B_0$ is a smaller neighborhood around the point of interest $b_0$. Hence close to the topological boundary of $\tilde{V}_{\theta}$, or “away” from $U_{{\bf p}_1}\times S_{{\bf p}_1}\times B_0$, the image of the map is really an embedded submanifold, extendable beyond the topological boundary. For the other steps of the induction we make the following observation. Proposition \[model.prop\] is what allows us to cross critical levels, at least if the image of the map lands close to the stable manifold(s) of the critical point(s) at level $0$. But when flowing between two consecutive critical levels, nothing guarantees that the flow-out of the image of the resolution at the first critical level will end up within a neighborhood of type $\tilde{V}_{\theta_1}$ so as to satisfy the hypothesis of Proposition \[model.prop\]. One solution is to do the following. Suppose that in fact $\sigma:N{\rightarrow}\tilde{f}^{-1}(-\theta_1)$ and for each ${\bf p}\in \operatorname{Crit}(f)\cap f^{-1}(0)$ there are neighborhoods $D_{\bf p}$ of $(f^{-1}(\theta_1)\cap S_{\bf p})\times M\times B_0$ such that $\sigma\bigr|_{\sigma^{-1}(D_{\bf p})}$ is proper. Then one gets a restriction map $\hat{\sigma}$ of $\sigma$ to an open set of $N$, which is proper and whose image is contained in $\tilde{V}_{\theta_1}$ as in Proposition \[model.prop\]. What about the rest of $N$? Take $D_{-\theta_1}$ to be *the complement* of $\bigcup_{{\bf p}\in \operatorname{Crit}(f)\cap f^{-1}(0)}(f^{-1}(\theta_1)\cap S_{\bf p})\times M\times B_0$ in $\tilde{f}^{-1}(\theta_1)$ and let $D_N:=\sigma^{-1}(D_{-\theta_1})$. Look at $\sigma_1:=\sigma\bigr|_{D_N}$. Use the flow diffeomorphism to get from $\sigma_1$ a map $\tilde{\sigma}_1:D_N{\rightarrow}\tilde{f}^{-1}(\theta_1)$.
We claim that there exists an open subset of $D_N$ and an open subset of $\partial^1_{\theta_1}\mathcal{A}_{\sigma}$ which are diffeomorphic via a diffeomorphism $\alpha$ such that $$\mathcal{R}_{\sigma}\circ\alpha=\tilde{\sigma}_1.$$ Take $\sigma^{-1}(U)$ where $U:=D_{-\theta_1}\cap \tilde{V}_{\theta_1}$. This is obviously an open set diffeomorphic with an open subset of $D_N$. On the other hand, by taking $\overline{W_2}\cap \left({{\mathbb R}}\times f^{-1}(\theta_1)\times \Gamma_{\sigma\bigr|_{\sigma^{-1}(U)}}\right)$ one obtains an open subset of $\partial^1_{\theta_1}\mathcal{A}_{\sigma}$ which is obviously diffeomorphic with $\sigma^{-1}(U)$. We can then use the diffeomorphism $\alpha$ in order to “glue” $\mathcal{R}_{\sigma}$ and $\sigma_1$ to a smooth map going from a manifold with corners $\tilde{N}$ to $\tilde{f}^{-1}(\theta_1)$ and flow to the next critical level and apply again Proposition \[model.prop\] and this Remark and so on. *Proof of Theorem* \[tlimxi\]. We have already sketched the proof strategy at the end of Section \[Sec3\]. We only need to describe the open sets $M_j\times M\times B_0$ that cover $M\times M\times B_0$ and on which the currential identity is true. We will discuss only the situations where $B_0$ is a small neighborhood around a point $b_0$ such that the forward trajectory determined by $s(b_0)$ ends at a non-maximal point. The remaining situation was already discussed in Remark \[Fmax\]. Let $c_0<\ldots <c_l$ be the consecutive critical levels of $f$ excluding the maximum, with $c_0$ the first critical level encountered by the forward trajectory of $s(b_0)$. It might even be a minimal level of $f$, i.e. a level that contains a local minimum, if $s(b_0)$ is a local minimum. Let $c_0<\delta_1<\delta_2<c_1<\delta_3<\delta_4<\ldots <c_{l-1}<\delta_{2l-1}<\delta_{2l}<c_l< \delta_{2l+1}$ be regular level sets such that $\delta_1=c_0+\theta_0$ and for $k\geq 1$, $\delta_{2k}=c_{k}-\theta_k$ and $\delta_{2k+1}=c_k+\theta_k$, where $\theta_k$ is chosen small enough so that we can choose neighborhoods $V_{\theta_k}$ around the critical points of level $c_k$ satisfying the conditions of Proposition \[prop.triplo\]. Corollary \[coreqfund\] takes care of the first step of the induction and implies the formula for $${f}^{-1}(-\infty,\delta_1)\times M\times B_0.$$ By property (4) of Proposition \[prop.1\] and Lemma \[Lem\] there exists, for some $\epsilon_1>0$, a map $$\sigma:\partial^1_2\mathcal{A}_1{\rightarrow}{f}^{-1}(\delta_1-\epsilon_1)\times M\times B_0$$ whose image contains the closure of the forward-flow of $\xi_0(B_0)$ intersected with level $\delta_1-\epsilon_1$ and is transverse to all $S_{\bf {p}}\times M\times B_0$ for all critical ${\bf p}$. Moreover $\partial^1_2\mathcal{A}_1$ is a manifold with boundary $\partial^2\mathcal{A}_1=(U_{{\bf p}_1}\cap {f}^{-1}(\delta_1-\epsilon_1))\times s(s^{-1}(S_{{\bf p}_1})\times B_0)$ and $\sigma$ satisfies the conditions of Proposition \[model.prop\]. We let $T^1_{\sigma}$ be the current determined by the flow-out of the image of $\sigma$ in the open set ${f}^{-1}(\delta_1-\epsilon_1,\delta_2+\epsilon_2)\times M\times B_0$, for some small $\epsilon_2$ such that this open set contains no critical points. Let $T^1_{\partial \sigma}$ be the flow-out of the image of $\sigma\bigr|_{\partial^2\mathcal{A}_1}$. Both are rectifiable currents with the obvious orientation.
The following identity of currents holds on ${f}^{-1}(\delta_1-\epsilon_1,\delta_2+\epsilon_2)\times M\times B_0$: $$dT^1_{\sigma}=T_{\partial \sigma}^1=(U_{{\bf p}_1}\cap {f}^{-1}(\delta_1-\epsilon_1,\delta_2+\epsilon_2))\times s(s^{-1}(S_{{\bf p}_1})\times B_0)$$ thus proving the theorem on ${f}^{-1}(\delta_1-\epsilon_1,\delta_2+\epsilon_2)\times M\times B_0$. From this point on we repeatedly apply Proposition \[model.prop\] and Corollary \[Impcor\]. We notice that $(\sigma_2,\sigma_3)(\sigma_1^{-1}(S_p))$ can be substituted with $(\sigma_2,\sigma_3)(\sigma_1^{-1}(S_p)\cap N^0)$ and these points are easy to describe as $s(s^{-1}(S_p\times B_0))$ where $p$ is any of the critical points at the $i$-th step. It is not difficult to see that $T_{\sigma\bigr|_{\partial^1_j N}}$ is a sum: $$\sum_{p} (U_p\cap f^{-1}(c_i-\theta_i,c_i+\theta_i))\times s(s^{-1}(S_p\times B_0))$$ where the sum here runs over the critical points that have already been crossed. \[FinHm\] There is a subtlety in the proof of Theorem \[tlimxi\] that is easy to miss. In order to prove an equality of currents, these have to exist to begin with. In particular $T$ and $U(F)\times_Fs(s^{-1}(S(F)))$ should be shown to have finite local $(n+1)$- and, respectively, $n$-Hausdorff measures. But this is a straightforward corollary of the existence of the flow resolutions we have constructed. As an immediate consequence we get via Fubini that $U(F)$ and $s(s^{-1}(S(F)))$ have finite Hausdorff measures if $B$ is compact. Odd Chern-Weil theory {#OCW} ===================== We now start a new topic altogether. Let $E{\rightarrow}B$ be a hermitian vector bundle over a compact manifold $B$. Denote by $\mathscr{U}(E)$ the fiber bundle of unitary isomorphisms and let $U\in\Gamma(\mathscr{U}(E))$ be a section of this bundle. Let $\nabla$ be a connection compatible with the metric. Given any invariant polynomial $P$, we introduce odd degree forms $\operatorname{\mathrm{TP}}(E,U,\nabla)\in \Omega^*(B)$, called odd Chern-Weil forms, which satisfy the following properties: - $d\operatorname{\mathrm{TP}}(E,U,\nabla)=0$; - $\operatorname{\mathrm{TP}}(E,U,\nabla)-\operatorname{\mathrm{TP}}(E,U,\nabla')$ is exact for any two metric compatible connections $\nabla,\nabla'$; - if $U_0,U_1\in \Gamma(\mathscr{U}(E))$ are homotopic then $\operatorname{\mathrm{TP}}(E,U_0,\nabla)-\operatorname{\mathrm{TP}}(E,U_1,\nabla)$ is exact; - if $\varphi:B_1{\rightarrow}B$ is a smooth map then $\varphi^*\operatorname{\mathrm{TP}}(E,U,\nabla)=\operatorname{\mathrm{TP}}(\varphi^*E,\varphi^*U,\varphi^*\nabla)$. Recall the fundamental Theorem of Chern-Weil theory. If $P$ is an invariant polynomial and $\nabla_0$ and $\nabla_1$ are compatible connections then one can associate to $\nabla_0$ and $\nabla_1$ two forms $P(E,\nabla_0)$ and $P(E,\nabla_1)$ and there exists a non-unique form $\operatorname{\mathrm{TP}}(\nabla_0,\nabla_1)$ such that $$P(E,\nabla_1)-P(E,\nabla_0)=d\operatorname{\mathrm{TP}}(\nabla_0,\nabla_1).$$ The construction of a particular such form $\operatorname{\mathrm{TP}}(\nabla_0,\nabla_1)$ goes as follows. Let $\pi_2^*E{\rightarrow}[0,1]\times B$ be the pull-back of $E$ with respect to the projection $[0,1]\times B{\rightarrow}B$. Consider the following connection on $\pi_2^*E$: $$\tilde{\nabla}:=\frac{d}{dt}+(1-t)\nabla_0+t\nabla_1.$$ Then the standard homotopy formula implies that $$d\left(\int_{[0,1]}P(\pi_2^*E,\tilde{\nabla})\right)=P(E,\nabla_1)-P(E,\nabla_0),$$ where integration on the left is over the fibers of $\pi_2$.
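To make the fiber integral concrete we record, only as an illustration, the standard expansion of the curvature of the interpolating connection. Writing $\eta:=\nabla_1-\nabla_0\in\Omega^1(B,\operatorname{End}(E))$ and $\nabla_t:=(1-t)\nabla_0+t\nabla_1$ one has $$F(\tilde{\nabla})=dt\wedge\eta+F(\nabla_t),$$ so that in $P(\pi_2^*E,\tilde{\nabla})$ only the terms containing $dt$ survive the integration over the fibers of $\pi_2$. For $P=c_1=\frac{i}{2\pi}\operatorname{tr}$ this gives the explicit transgression $$\int_{[0,1]}c_1(\pi_2^*E,\tilde{\nabla})=\frac{i}{2\pi}\operatorname{tr}(\nabla_1-\nabla_0),$$ whose differential is indeed $c_1(E,\nabla_1)-c_1(E,\nabla_0)$.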
Define $\operatorname{\mathrm{TP}}(\nabla_0,\nabla_1)$ to be $\int_{[0,1]}P(\pi_2^*E,\tilde{\nabla})$. It is not hard to see that $$\operatorname{\mathrm{TP}}(\nabla_0,\nabla_1)=-\operatorname{\mathrm{TP}}(\nabla_1,\nabla_0).$$ This is because the diffeomorphism $[0,1]\times B{\rightarrow}[0,1]\times B$, $(t,b){\rightarrow}(1-t,b)$ reverses the orientation of the fiber and fiber integration is sensitive to this. If one takes different paths between the connections $\nabla_0$ and $\nabla_1$, by a result of Simons and Sullivan [@SS] the transgression forms obtained for two different paths differ by an exact form. In our context, consider the connections $\nabla_0:=\nabla$ and $\nabla_1=U^{-1}\nabla U$ on $E$ and write down the transgression formula from Chern-Weil theory: $$P(F(U^{-1}\nabla U))- P(F(\nabla))=d\operatorname{\mathrm{TP}}(\nabla,U^{-1}\nabla U).$$ Notice that $F(U^{-1}\nabla U)=U^{-1}F(\nabla)U$. It follows from the fact that $P$ is invariant that $P(F(\nabla))=P(F(U^{-1}\nabla U))$, hence the form $\operatorname{\mathrm{TP}}(\nabla,U^{-1}\nabla U)$ satisfies property (a) above. The next lemma helps prove property (b) for $\operatorname{\mathrm{TP}}(\nabla,U^{-1}\nabla U)$. If $\nabla_i$, $i=1,\ldots,4$, are four metric compatible connections then $$\sum_{i}\operatorname{\mathrm{TP}}(\nabla_i,\nabla_{i+1})$$ is exact, where $\nabla_5:=\nabla_1$. If $H:C\times B{\rightarrow}M$ is a smooth map, $C$ is an oriented, compact manifold with corners of dimension $c$ and $\omega$ is a smooth form on $M$ of degree $k\geq c-1$ then: $$\label{chom} \int_{C}H^*d\omega+(-1)^{c-1}d\int_{C}H^*\omega=\int_{\partial C}H^*\omega.$$ To see (\[chom\]), apply first Stokes on $C\times B$ to $$d(H^*\omega\wedge \pi_2^*\eta)=dH^*\omega\wedge \pi_2^*\eta+(-1)^{k}H^*\omega\wedge\pi_2^*d\eta$$ where $\eta\in\Omega^{n-k+c-1}(B)$ is a smooth test form and $\pi_2:C\times B{\rightarrow}B$ is the projection, and then integrate over the fiber. On the closed, oriented $B$ one has: $$\label{StB}0=d\left(\int_CH^*\omega\right)\wedge \eta+ (-1)^{k-c}\left(\int_CH^*\omega\right)\wedge d\eta$$ Hence, using the fiber-first orientation convention, we get from (\[StB\]) $$(-1)^{k}\int_CH^*\omega\wedge\pi_2^*d\eta=(-1)^{k}\left(\int_CH^*\omega\right)\wedge d\eta=(-1)^{c-1}d\left(\int_CH^*\omega\right)\wedge \eta.$$ Take $C=\Delta^2$ to be the standard simplex in ${{\mathbb R}}^2$ with coordinates $(s,t)$ and on $\pi_2^*E{\rightarrow}C\times B$ consider the connection that “interpolates” between $\nabla_1$, $\nabla_2$, $\nabla_3$: $$\tilde{\nabla}:=\frac{d}{ds}+\frac{d}{dt} +\nabla_1+s(\nabla_2-\nabla_1)+t(\nabla_3-\nabla_1)$$ where $\frac{d}{ds}+\frac{d}{dt}$ is the differential on $C$. The form $P(\pi_2^*E,\tilde{\nabla})$ on $C\times B$ is closed and $$\int_{\partial C}P(\pi_2^*E,\tilde{\nabla})=\operatorname{\mathrm{TP}}(\nabla_1,\nabla_2)+\operatorname{\mathrm{TP}}(\nabla_2,\nabla_3)+\operatorname{\mathrm{TP}}(\nabla_3,\nabla_1).$$ Use now (\[chom\]) for $H=\operatorname{id}_{C\times B}$ to conclude that the Lemma works for three connections. Using that $\operatorname{\mathrm{TP}}(\nabla,\nabla')=-\operatorname{\mathrm{TP}}(\nabla',\nabla)$ one can extend by induction to any finite number of connections. One applies the lemma with $\nabla_1:=\nabla$, $\nabla_2:=U^{-1}\nabla U$, $\nabla_3=U^{-1}\nabla' U$ and $\nabla_4:=\nabla'$ in order to conclude property (b).
Indeed one has $$\operatorname{\mathrm{TP}}(\nabla_2,\nabla_3)=\operatorname{\mathrm{TP}}(\nabla,\nabla')=-\operatorname{\mathrm{TP}}(\nabla_4,\nabla_1).$$ Property (c) for $\operatorname{\mathrm{TP}}(\nabla,U^{-1}\nabla U)$ is proved as follows. Consider the vector bundle $\pi^*E{\rightarrow}\mathscr{U}(E)$ where $\pi:\mathscr{U}(E){\rightarrow}B$ is the natural projection. Then $\pi^*E$ has a connection $\pi^*\nabla$ and it also has a tautological unitary isomorphism $U^{\tau}:\mathscr{U}(E){\rightarrow}\mathscr{U}(\pi^*E)$. Therefore there exists a natural transgression closed form $\operatorname{\mathrm{TP}}(\pi^*\nabla,(U^{\tau})^{-1}(\pi^*\nabla) U^{\tau})\in\Omega^*(\mathscr{U}(E))$. Let $U_t$ be a smooth homotopy between $U_0$ and $U_1$. Then $U_t$ is a section of $$\mathscr{U}(\pi_2^*E){\rightarrow}[0,1]\times B.$$ Then for any closed form $\omega$ on $\mathscr{U}(\pi_2^*E)$ the standard homotopy formula implies that $U_0^*\omega$ and $U_1^*\omega$ differ by an exact form on $B$. Take $\omega:= p^*\operatorname{\mathrm{TP}}(\pi^*\nabla,(U^{\tau})^{-1}\pi^*\nabla U^{\tau})$ where $p:\pi_2^*\mathscr{U}(E){\rightarrow}\mathscr{U}(E)$ is the natural projection. It is not hard to check that $$\begin{aligned} U_0^*\omega=U_0^*\operatorname{\mathrm{TP}}(\pi^*\nabla,(U^{\tau})^{-1}\pi^*\nabla U^{\tau})=\operatorname{\mathrm{TP}}(\nabla,U^{-1}_0\nabla U_0)\quad \mbox{and} \\ U_1^*\omega=\operatorname{\mathrm{TP}}(\nabla,U^{-1}_1\nabla U_1)\qquad \end{aligned}$$ and this finishes the proof of the third property. One checks rather immediately the naturality of $\operatorname{\mathrm{TP}}$. Hence $\operatorname{\mathrm{TP}}(\nabla,U^{-1}\nabla U)$ satisfies the four properties above. Define then the odd Chern-Weil forms associated to $(P,E,U,\nabla)$ by $$\operatorname{\mathrm{TP}}(E,U,\nabla):=\operatorname{\mathrm{TP}}(\nabla,U^{-1}\nabla U).$$ We will also denote this by $\operatorname{\mathrm{TP}}(U,\nabla)$ when the bundle is clear from the context. \[clutQ\] There is an alternative way of thinking about $\operatorname{\mathrm{TP}}(E,U,\nabla)$ that reminds one of the clutching construction. Consider the fiber bundle $\pi_2^*E{\rightarrow}{{\mathbb R}}\times B$ where now $\pi_2:{{\mathbb R}}\times B{\rightarrow}B$. Then ${{\mathbb Z}}$ acts on $\pi_2^*E\rightarrow {{\mathbb R}}\times B$ as follows: $$k*(t,b,v)=(t-k,b,U_b^kv).$$ We thus get a vector bundle $\tilde{E}=T(E,U)=\pi_2^*E/{{\mathbb Z}}$. A smooth section of $\tilde{E}$ is a smooth family of sections $(s_t)_{t\in{{\mathbb R}}}\in\Gamma(E)$ satisfying: $$s_{t-k}(b)=U_b^ks_{t}(b),\quad\quad \forall b\in B,\; k\in{{\mathbb Z}},\; t\in {{\mathbb R}}.$$ Suppose $\nabla_t$ is a smooth family of connections on $E{\rightarrow}B$ satisfying: $$\label{eqUnab} \nabla_{t+k}=U^{-k}\nabla_t U^k.$$ Then the connection $T(\nabla_t)=\frac{d}{dt}+\nabla_t$ on $\pi_2^*E$ “descends” to a connection on $\tilde{E}$ as the next computation shows: $$(T(\nabla_t)s)_{t-k}(b)=\frac{\partial s}{\partial t}(t-k,b)~dt+\nabla_{t-k}s_{t-k}(b)=$$$$=U_b^k\frac{\partial s}{\partial t}(t,b)~dt+ U_b^k\nabla_t (U_b^{-k}U_b^ks_t)(b)=U_b^k(T(\nabla_t)s)_t(b).$$ Now, the affine family $(1-t)\nabla+tU^{-1}\nabla U$ defined for $t\in[0,1]$ satisfies (\[eqUnab\]) for $k=1$ and $t=0$. Unfortunately, one cannot extend it to a *smooth* family satisfying (\[eqUnab\]) so one cannot say that $\tilde{\nabla}$ is a smooth connection on $\tilde{E}$.
However, one can show that $\operatorname{\mathrm{TP}}(E,U,\nabla)$ represents the cohomology class $$\int_{S^1}P(\tilde{E})$$ where $P(\tilde{E})$ is the deRham cohomology class of the vector bundle $\tilde{E}{\rightarrow}S^1\times B$. This is because the transgressions between $\nabla$ and $U^{-1}\nabla U$ determined by the affine path and by a path satisfying (\[eqUnab\]) differ by an exact form. If one starts with a connection on $\tilde{E}$ of type $T(\nabla_t)$, the integral over $S^1$ of $P(F(T(\nabla_t)))$, is really the integral over $[0,1]$ of the same quantity and is just the transgression between $\nabla$ and $U^{-1}\nabla U$ given by the path of connections $\nabla_t$. This justifies the claim. All vector bundles over $S^1\times B$ when $B$ is compact arise up to isomorphism via the clutching construction. This is because the pull-back of such a bundle to $[0,1]\times B$ is isomorphic with the pull-back of a vector bundle from $B$. The isomorphism of the fiber at $0$ with the fiber at $1$ gives the desired gauge transformation. Consider the trivial vector bundle $\underline{{{\mathbb C}}^n}{\rightarrow}U(n)$. It has a tautological gauge transformation $\widetilde{U}(U):=U$. Let $d$ be the trivial connection. Then the difference between the $\widetilde U^{-1}d \widetilde{U}$ and $d$ is just the Maurer-Cartan $1$-form of $U(n)$ usually denoted $g^{-1}(dg)$. Then the transgression forms $\operatorname{Tc}_k(\tilde{U},d)$ corresponding to the elementary symmetric polynomials $c_k$ in the eigenvalues of a matrix, or if you want to the standard Chern classes are constant multiples of the forms $$\operatorname{tr}\wedge ^{2k-1}g^{-1}(dg).$$ We determine these constants now. Let $\omega:=g^{-1}(dg)$. Consider the family of connections over $\underline{{{\mathbb C}}^n}$: $$d+tg^{-1}(dg)=d+t\omega$$ Then the induced connection on $\underline{{{\mathbb C}}^n}{\rightarrow}[0,1]\times B$ is $\tilde{\nabla}=d+t\pi_2^*\omega$. Using the Maurer-Cartan identity $d\omega+\omega\wedge\omega=0$ we get that $$F(\tilde{\nabla})=dt\wedge \pi_2^*\omega+(t^2-t)\pi_2^*\omega\wedge \pi_2^*\omega.$$ Then, by definition, the component of degree $k$ of the Chern character is $$\operatorname{ch}_k(\tilde{\nabla})=\left(\frac{i}{2\pi}\right)^{k}\frac{1}{k!}\operatorname{tr}(F(\tilde{\nabla})^k).$$ But letting $A=dt\wedge \pi_2^*\omega$ and $B=(t^2-t)\omega\wedge \omega$ we see that $A^2=0$ and $AB=BA$ and so $(A+B)^k= B^k+kB^{k-1}A$. We therefore get $$\operatorname{ch}_k(\tilde{\nabla})=\left(\frac{i}{2\pi}\right)^{k}\frac{1}{(k-1)!} (t^2-t)^{k-1}dt\wedge \operatorname{tr}(\wedge^{2k-1}\pi_2^*\omega)+\left(\frac{i}{2\pi}\right)^{k}\frac{1}{k!}\operatorname{tr}(\wedge^{2k} \pi_2^*\omega)$$ The last term vanishes before integration because $\wedge^{2k} \pi_2^*\omega=\frac{1}{2}[ \pi_2^*\omega, \pi_2^*\omega]$ and trace vanishes on commutators. Hence $$\label{chk}\operatorname{ch}_k(\tilde{\nabla})=\left(\frac{i}{2\pi}\right)^{k}\frac{1}{(k-1)!} (t^2-t)^{k-1}dt\wedge \operatorname{tr}(\wedge^{2k-1}\pi_2^*\omega).$$ On the other hand, $\int_{[0,1]} (t^2-t)^{k-1}dt=(-1)^{k-1}B(k,k)=(-1)^{k-1}\frac{[(k-1)!]^2}{(2k-1)!}$. We conclude that $$\label{chkint} \operatorname{Tch}_k(\tilde{U},d)= \int_{[0,1]}\operatorname{ch}_k(\tilde{\nabla})=(-1)^{k-1}\left(\frac{i}{2\pi}\right)^k\frac{(k-1)!}{(2k-1)!}\operatorname{tr}(\wedge^{2k-1}g^{-1}(dg))$$ We emphasize that this is the same form as $(-2\pi i)^{-k+1/2}\gamma_{2k-1}$ where $\gamma_{2k-1}$ appears in Definition 5.1 of [@Qu]. 
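As a quick specialization, recorded here only for the reader's convenience, the case $k=1$ of (\[chkint\]) reads $$\operatorname{Tch}_1(\tilde{U},d)=\frac{i}{2\pi}\operatorname{tr}(g^{-1}(dg))=-\frac{1}{2\pi i}\operatorname{tr}(g^{-1}(dg)),$$ the familiar normalized Maurer-Cartan form; pulled back by a loop $g:S^1{\rightarrow}U(n)$ it integrates to $\pm$ the winding number of $\det g$, the sign depending on the orientation conventions.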
In fact, Quillen shows in his Proposition 5.23, by a rather long argument, that the form $(-2\pi i)^{-k+1/2}\gamma_{2k-1}$ represents the integral of $\operatorname{ch}_k(\tilde{E})$ over $S^1$, where $\tilde{E}$ is the vector bundle on $S^1\times U(n)$ constructed from the trivial bundle via the clutching construction with respect to the tautological gauge transform. Remark \[clutQ\] gives a more straightforward justification of this fact. Now, the symmetric polynomials $\operatorname{ch}_k$ and $c_k$ are related via the Newton identities. One has an identity of type: $$c_k=(-1)^{k-1}(k-1)! \operatorname{ch}_k+ R$$ where $R$ stands for a sum of products of $\operatorname{ch}_{i}$ with $i<k$. A quick glance at (\[chk\]) convinces us that $R(\tilde{\nabla})=0$. Hence after integration over $[0,1]$ we get that $$\label{tck1} \operatorname{Tc}_k(\tilde{U},d)=\left(\frac{i}{2\pi}\right)^k\frac{[(k-1)!]^{2}}{(2k-1)!}\operatorname{tr}(\wedge^{2k-1}g^{-1}(dg)).$$ For $k=1$ one gets $\operatorname{Tc}_1(\tilde{U},d)=-\frac{1}{2\pi i}\operatorname{tr}(g^{-1}(dg))$. Let us end this example by giving a simple application. \[eqikn\] Let $k\leq n$ and $\iota_{k-1,n}:U(k-1){\rightarrow}U(n)$ be the natural inclusion. Then $\iota_{k-1,n}^*\operatorname{Tc}_l(\tilde{U},d)$ is exact for $l\geq k$. It is enough to prove the Lemma for $l=k$, and derive the general property from the inclusions $U(k-1)\hookrightarrow U(l-1)\hookrightarrow U(n)$. We look at the “clutching” bundle determined by ${{\mathbb C}}^{n}={{\mathbb C}}^{k-1}\times {{\mathbb C}}^{n-k+1}$ over $S^1\times U(k-1)$ where $U(k-1)$ acts trivially on ${{\mathbb C}}^{n-k+1}$. Denote this bundle by $\tilde{E}_{k,n}$. Clearly $\tilde{E}_{k,n}$ will have $n-k+1$ linearly independent sections, namely the vectors $e_k,\ldots,e_n$ of the canonical basis get glued to themselves and hence determine a trivial rank $n-k+1$ subbundle of $\tilde{E}_{k,n}$. It follows that $c_k(\tilde{E}_{k,n})=0$ in cohomology and hence also $\int_{S^1}c_k(\tilde{E}_{k,n})=0$. By Remark \[clutQ\] and naturality of the odd Chern-Weil forms, this is the same as the class of $\iota_{k-1,n}^*\operatorname{Tc}_k(\tilde{U},d)$. The odd Chern-Weil theory associates to a gauge transformation certain odd cohomology classes. For simplicity we presented it here for structure group $U(n)$, but it can be easily adapted to principal bundles $P{\rightarrow}B$ endowed with a gauge transform $g$ and principal connection $1$-form $\omega$. Then one gets a new principal connection $g^{-1}\omega g$ and integrating over $[0,1]$ the invariant polynomial in the curvature entries of $(1-t)\omega+tg^{-1}\omega g$ gives a transgression closed form on the *base space* that satisfies properties analogous to those of $\operatorname{\mathrm{TP}}(U,\nabla)$. Recall that Chern-Simons theory typically constructs *non-closed* odd forms living in the *total space* of the principal bundle as follows. Given a principal bundle $\pi:P{\rightarrow}B$ with structure group $G$, a connection $1$-form $\omega$ and an invariant polynomial $Q$, the pull-back $\pi^*P=P\times_BP{\rightarrow}P$ is a trivial principal bundle with the trivializing section given by the diagonal embedding. It is endowed with two connections: $\pi^*\omega$ and the pull-back of the Maurer-Cartan connection $1$-form $\pi_2^*g^{-1}(dg)$ via the projection $\pi_2:\pi^*P{\rightarrow}G$ induced by the trivialization. Using the affine family of connections between $\pi_2^*g^{-1}(dg)$ and $\pi^*\omega$ one gets the Chern-Simons form $TQ(\omega)$ on $P$.
Since the curvature of $g^{-1}(dg)$ is zero it follows that $d TQ(\omega)=\pi^*Q(\omega)$. The form $TQ(\omega)$ “descends” to a form on $B$ only under very special circumstances. The odd Chern-Weil theory on the other hand can be constructed in the same spirit, only that one is using the bundle of *gauge transformations* which is the bundle associated to $P{\rightarrow}B$ via the adjoint action of $G$ onto itself. The Chern classes of a gauge transform ====================================== In this section we show how the main transgression identity from Corollary \[c.principal\] can be applied to simplify the proof of, and give an extension to, a result of Nicolaescu in [@Ni]. This application was previously announced in a pre-print of the first named author, posted on arxiv, using the *tame* version of Corollary \[c.principal\]. However, the flow used there did not satisfy the tameness condition. We revisit this application under a renewed framework. The generalization has to do with the use of the odd Chern-Weil forms introduced in the previous section. The starting question is the following. Let $E{\rightarrow}B$ be a hermitian vector bundle of rank $n$ endowed with a gauge transform $U\in \Gamma(\mathcal{U}(E))$ and a compatible connection $\nabla$. Give a description of the Poincaré dual of $\operatorname{Tc}_k(U,\nabla)$ in terms of pointwise spectral data of $U$. In order to achieve this purpose we will use a horizontally constant, vertical Morse-Smale vector field on the fiber bundle over $B$ with total space $\mathscr{U}(E)$. Let $\pi:\mathscr{U}(E){\rightarrow}B$ be the projection and $\pi^*\mathscr{U}(E)=\mathscr U(E)\times_B\mathscr{U}(E){\rightarrow}\mathscr{U}(E)$ be the pull-back. It has a tautological section $U^{\tau}$ (the diagonal embedding of $\mathscr{U}(E)$ in $\pi^*\mathscr{U}(E)$) which is obviously a gauge transform of $\pi^*E$. With the help of the connection $\pi^*\nabla$ we can construct $\operatorname{Tc}_k(U^{\tau},\pi^*\nabla)\in\Omega^*(\mathscr{U}(E))$ and the naturality of $\operatorname{Tc}_k$ shows that $$U^*\operatorname{Tc}_k(U^{\tau},\pi^*\nabla)=\operatorname{Tc}_k(U,\nabla).$$ The form $\omega\in \Omega^*(\mathscr{U}(E))$ to be “flown” will be $\omega=\operatorname{Tc}_k(U^{\tau},\pi^*\nabla)$. What is the flow then? In order to define it we need to fix a complete flag of subbundles $$E=W_0\supset \ldots\supset W_n=\{0\}.$$ For $1\leq i\leq n$ let $E_i:=W_{i-1}/W_{i}$ (better said the orthogonal of $W_i$ in $W_{i-1}$) and let $$A:=\bigoplus_{1\leq i\leq n}i\operatorname{id}_{E_i}$$ be a self-adjoint endomorphism of $E$ with distinct eigenvalues. Then the function $$f:\mathscr{U}(E){\rightarrow}{{\mathbb R}},\qquad f(U)=\operatorname{Re}\operatorname{Tr}(AU)$$ has a fiberwise or vertical gradient which can be described as $\operatorname{grad}^vf(U)=A-UAU$. The restriction to each fiber $\mathscr{U}(E_b)$ is a Morse-Smale function on the unitary group, thoroughly explored in [@Ni].[^5] We recall the essential properties. The critical points are in one-to-one correspondence with the invariant subspaces of $A$. More precisely, let $\langle e_1,\ldots, e_n\rangle$ be a basis of $E_b$ such that $(W_k)_b^{\perp}=\langle e_1,\ldots, e_k\rangle$. The critical points are reflections $U_{I}:=-\operatorname{id}_{V}\oplus \operatorname{id}_{V^{\perp}}$ where $V=\langle e_{i_1},\ldots,e_{i_k}\rangle$ for some ordered set $I=\{i_1,\ldots,i_k\}\subset\{1,\ldots,n\}$.
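Let us verify, as a quick illustration, that the reflections $U_{I}$ are indeed critical. Since $A$ preserves $V$ and $V^{\perp}$, it commutes with $U_{I}$, hence $U_{I}AU_{I}=AU_{I}^2=A$ and $\operatorname{grad}^vf(U_{I})=A-U_{I}AU_{I}=0$. The corresponding critical values are $$f(U_{I})=\operatorname{Re}\operatorname{Tr}(AU_{I})=\sum_{i\notin I}i-\sum_{i\in I}i,$$ which already makes the identification of the absolute minimum and maximum below plausible.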
There are $2^n$ such critical points with the absolute minimum (of $f\bigr|_{E_b}$) corresponding to $V=E_b$ and the absolute maximum to $V=\{0\}$. The flow is given by the expression [@DV] $$(t,U){\rightarrow}(\sinh{(tA)}+\cosh{(tA)}U)(\cosh{(tA)}+\sinh{(tA)}U)^{-1}$$ From this we deduce that the stable and unstable manifolds can be described by the following incidence relations (compare with Corollary 16 in [@Ni]): $$\begin{aligned} \label{DefS} \qquad \;\;S(U_{I})=\{U\in\mathcal{U}(E_b)~|~\dim[{\operatorname{Ker}{(1+U)}\cap W_m}]=k-p,\;\;\forall 0\leq p\leq k,\;\qquad\quad\\\nonumber \forall i_p\leq m< i_{p+1}\}\\ \nonumber U(U_{I})=\{U\in\mathcal{U}(E_b)~|~\dim[{\operatorname{Ker}{(1-U)}\cap W_m}]=n-k-q,\;\;\forall 0\leq q\leq n-k,\;\\ \nonumber\forall j_q\leq m< j_{q+1}\}\end{aligned}$$ where we set $i_0:=0$, $i_{k+1}:=\infty$ and $\{j_1<\ldots <j_{n-k}\}=I^c$ is the complement of $I$. We note that $\dim{U(U_I)}=\operatorname{codim}{S(U_I)}=\sum_{i\in I}2i-1$.[^6] \[Pseudo\] It is shown in [@Ni] (Proposition 17 and Corollary 18) that the flow satisfies the Smale property. Moreover, their closures $\overline{U(U_I)}$ and $\overline{S(U_I)}$ are real algebraic sets. One can show that in fact $\overline{S(U_I)}$ has a stratification with no codimension $1$ strata (see the comments after Corollary 5.1 in [@Ci1]) and by reversing the flow (or using the involution $U{\rightarrow}-U$) the same is true about $\overline{U(U_I)}$. In other words, $\overline{S(U_I)}$ and $\overline{U(U_I)}$ are pseudo-manifolds and $S(U_I)$ and $U(U_I)$ determine *closed* currents once an orientation is chosen. Their volume is finite by the results of [@HL1] (see Remark \[FinHm\]). This also implies in particular that the Morse-Witten complex associated to the Morse-Smale flow induced by $f$ is perfect, i.e. all the differentials are zero. This is not the case for the analogous flow on the real orthogonal group. We will need the following two computational results interesting in their own right. Recall the forms $\operatorname{Tc}_k (\tilde{U},d)$ from (\[tck1\]). We use the same notation for the analogous forms on $\mathcal{U}(E_b)$. In order to keep the notation simple, for this part, we will forget about the point $b$ and use $E$ and $W$ for $E_b$ and $W_b$, etc. \[UI\] Let $I\neq \{k\}$. Then $$\int_{U(U_I)}\operatorname{Tc}_k (\tilde{U},d)=0$$ Clearly this is true by definition if $\dim{U(U_I)}\neq \dim{U(U_{\{k\}})}=2k-1$. If $\dim{U(U_I)}=2k-1$ but $U_I\neq U_{\{k\}}$ then we infer that $\iota:=\max{\{i\in I\}}\leq k-1$. We claim that this implies that $$\operatorname{Ker}(1-U)\supset W_{k-1},\qquad \forall U\in U(U_{I}).$$ Indeed, let $m\geq \iota=\max{I}$. We first estimate $p$ which satisfies $j_p \leq m< j_p+1$, where $j_p\in I^c$. Let $l:=|I|$. Then, for some $s\geq 0$, we have $$\label{miota} m=\iota+s<j_{\iota-l+s+1}.$$ To see this more clearly consider first $s>0$ and notice that in fact $\iota + s = j_{\iota-l+s}$ as there are exactly $\iota-l+s$ of $j$’s in $I^c$ which are smaller or equal than $\iota+s$, which itself lies in $I^c$. For the case $s=0$ one still has $m=\iota< j_{\iota-l+1}$ since there are exactly $\iota- l$ numbers smaller or equal $\iota$ which are not in $I$. Then (\[miota\]) implies that $p\leq \iota-l+s=m-l$. Therefore for $U\in U(U_{I})$ one has: $$\dim{[\operatorname{Ker}{(1-U)}\cap W_{m}]}=n-l-p\geq n-m=\dim{W_m}$$ Hence $U\bigr|_{W_m}=\operatorname{id}_{W_m}$ for all $m\geq \max{I}$ and this applies to $m=k-1$. 
It follows then that the (proper) inclusion map: $$\iota_{k-1,n}:\mathcal{U}(W_{k-1}^{\perp}){\rightarrow}\mathcal{U}(E), \qquad U{\rightarrow}U\oplus \operatorname{id}_{W_{k-1}}$$ takes $U(U_{I})\subset \mathcal{U}(W_{k-1}^{\perp})$ diffeomorphically to $U(U_{I})\subset \mathcal{U}(E)$.[^7] Now $U(U_I)$ is a closed current in $\mathcal{U}(W_{k-1}^{\perp})$ irrespective of the orientation. Then Lemma \[eqikn\] and Remark \[Pseudo\] finish the proof. In order to compute $\operatorname{Tc}_k(\tilde{U},d)$ over $U(U_{\{k\}})$ we need to fix an orientation. Notice first that $U(U_{\{k\}})=\iota_{k,n}(U(U_{\{k\}}))$, where the latter lies within $\mathcal{U}(W_k^{\perp})$ by the same type of argument that was used in Lemma \[UI\] for $U(U_{I})$. Moreover, by the naturality of $\operatorname{Tc}_k(\tilde{U},d)$ one has $\iota_{k,n}^*\operatorname{Tc}_k(\tilde{U},d)=\operatorname{Tc}_k(\tilde{U},d)$. It is only natural then to work on $\mathcal{U}(W^{\perp}_k)$. Notice that we have a natural flag on $W^{\perp}_k$ defined by $W_i':=W_i/W_k$, $0\leq i\leq k$. Then $$U(U_{\{k\}})=\{U\in\mathcal{U}(W^{\perp}_k)~|~\dim[\operatorname{Ker}{(1-U)\cap W_m}]=k-1-m,\;\; \forall 0\leq m\leq k-1 \}.$$ In fact $U(U_{\{k\}})$ is an open dense subset of the following manifold defined by a single incidence relation: $$U(U_{\kappa}):=\{U\in\mathcal{U}(W^{\perp}_k)~|~\dim \operatorname{Ker}{(1-U)}=k-1 \}.$$ This is because, generically, a hyperplane of $W^{\perp}_k$ (like $\operatorname{Ker}{(1-U)}$ for $U\in U(U_{\kappa})$) will intersect $W_m'$ in dimension $k-1-m$ for $m\geq 1$. Now $$\overline{U(U_{\kappa})}=\{U\in\mathcal{U}(W^{\perp}_k)~|~\dim \operatorname{Ker}{(1-U)}\geq k-1 \}=U(U_{\kappa})\cup \{\operatorname{id}_{W_k^{\perp}}\}.$$ We use the following map $$\phi:S^1\times {{{\mathbb P}}}(W_k^{\perp}){\rightarrow}\mathcal{U}(W_k^{\perp}),\qquad (\lambda,L){\rightarrow}\lambda \operatorname{id}_{L}\oplus \operatorname{id}_{L^{\perp}}$$ and note that on $\{\lambda\neq 1\}$ this map is a smooth bijection onto ${U(U_{\kappa})}$, while on $\{\lambda =1\}$ it collapses ${{{\mathbb P}}}(W_k^{\perp})$ to $\operatorname{id}_{W_k^{\perp}}$. Notice that the map $\phi$ induces a homeomorphism between $\Sigma{{\mathbb C}}{{{\mathbb P}}}^{k-1}$ (the Thom space of the trivial real line bundle over ${{\mathbb C}}{{{\mathbb P}}}^{k-1}$) and $\overline{U(U_{\kappa})}$. Now $S^1\times {{{\mathbb P}}}(W_k^{\perp})$ has a canonical orientation. We put the orientation on $U(U_{\kappa})$ (implicitly also on $U(U_{\{k\}})$) that makes $\phi$ *orientation reversing*. The reason is the next result. $$\int_{U(U_{\{k\}})}\operatorname{Tc}_k(\tilde{U},d)=\int_{\overline{U(U_{\kappa})}}\operatorname{Tc}_k(\tilde{U},d)=1$$ Fix $L\in {{{\mathbb P}}}(W_k^{\perp})$. We write $\phi$ in the “chart” $S^1\times \operatorname{Hom}(L,L^{\perp})$: $$\phi(\lambda,A)=\left(\begin{array}{cc}\frac{\lambda+A^*A}{1+A^*A} & (\lambda-1)(1+A^*A)^{-1}A^*\\ (\lambda-1)(1+AA^*)^{-1}A&\frac{1+\lambda AA^*}{1+AA^*}\end{array}\right)$$ where the decomposition of $\phi(\lambda,A)$ on the right is relative to $L\oplus L^{\perp}$. The differential at the point $(\lambda,0)$ in this chart is: $$d\phi_{\lambda,L}(w,S)=\left(\begin{array}{cc} w& (\lambda-1)S^*\\ (\lambda-1)S&0\end{array} \right)$$ Hence $$\phi^{-1}(\lambda,L)d\phi_{\lambda,L}(w,S)=\left(\begin{array}{cc} \bar{\lambda}w& (1-\bar{\lambda})S^*\\ (\lambda-1)S&0\end{array} \right)$$ The right hand side is always a skew-hermitian matrix.
This is of course the pull-back of $g^{-1}(dg)$ to $S^1\times {{{\mathbb P}}}(W_{k}^{\perp})$; it should be looked at as a $1$-form with values in $\mathfrak{u}(k)=\mathfrak{u}(\tau\oplus \tau^{\perp})$ where $\tau$ is the tautological bundle over ${{{\mathbb P}}}(W_{k}^{\perp})$ pulled back to $S^1\times {{{\mathbb P}}}(W_{k}^{\perp})$. Then we see that $$\phi^*(g^{-1}dg)(\lambda,L)=\left(\begin{array}{cc} \lambda^{-1}d\lambda& -\overline{\alpha(\lambda)}dS^*_L\\ \alpha(\lambda)dS_L&0\end{array} \right)\qquad \alpha(\lambda)=\lambda-1$$ where $dS$ denotes (the pull-back of) the $1$-form with values in the bundle $\operatorname{Hom}(\tau,\tau^{\perp})\simeq T^{(1,0)}{{{\mathbb P}}}(W_k^{\perp})$, obtained by differentiating $\operatorname{id}_{{{{\mathbb P}}}(W_{k}^{\perp})}$, and $dS^*$ is the conjugate of $dS$. The important point is that $dS$ is globally defined, not just in the chart centered at $L$. We need to compute $\wedge^{2k-1}\phi^*(g^{-1}dg)$. Write then $$\phi^*(g^{-1}dg)=C+B$$ where $$C:=\left(\begin{array}{cc} \lambda^{-1}d\lambda &0\\ 0 & 0\end{array}\right)\qquad B:=\left(\begin{array}{cc}0 & -\overline{\alpha(\lambda)}dS^*\\ \alpha(\lambda)dS& 0\end{array}\right)$$ We need also $C_1=\left(\begin{array}{cc} 0 &0\\ 0 & -\lambda^{-1}d\lambda\otimes\operatorname{id}\end{array}\right)$. The following relations are straightforward: $$C^2=0,\quad C_1^2=0,\quad B^2C=CB^2,\quad B^2C_1=C_1B^2,$$ $$CC_1=C_1C=0,\quad BCB=B^2C_1,\quad C_1BC=0,\quad CBC_1=0.$$ Let $\wedge^0B:=\operatorname{id}$. We prove by induction that for all $j\geq 1$ the following holds. $$\label{wedge-1}\wedge^{2j-1}\phi^*(g^{-1}dg)=\wedge^{2j-2}B\wedge[jC+(j-1)C_1]+\wedge^{2j-1}B$$ Indeed the equality is trivially true for $j=1$. Let $\omega:=\phi^*(g^{-1}dg)$ and write $\omega^{j}:=\wedge^j\omega$. Then the following computation finishes the proof of (\[wedge-1\]): $$\omega^{2j-1}=\omega^{2j-3}\wedge\omega^2=[B^{2j-4}((j-1)C+(j-2)C_1)+B^{2j-3}](CB+BC+B^2)=$$ $$=B^{2j-4}((j-1)C + (j-2)C_1)B^2 + B^{2j-3}CB + B^{2j-2}C + B^{2j-1}=$$ $$B^{2j-2}((j-1)C+(j-2)C_1)+B^{2j-2}C_1+B^{2j-2}C+B^{2j-1} = B^{2j-2}(jC+(j-1)C_1)+B^{2j-1}.$$ Now $B^{2k-1}$ is block anti-diagonal, hence $\operatorname{tr}B^{2k-1}=0$ and we conclude that: $$\operatorname{tr}\wedge^{2k-1}\omega=\operatorname{tr}B^{2k-2}\wedge (kC+(k-1)C_1)=$$ $$\qquad\qquad\qquad\quad=[(-1)^{k-1}|\alpha(\lambda)|^{2k-2}\lambda^{-1}d\lambda] \cdot\operatorname{w-str}(D^{k-1})$$ where $\operatorname{w-str}\left(\begin{array}{cc} T_1&0\\ 0& T_2 \end{array}\right)=k\operatorname{tr}T_1 -(k-1)\operatorname{tr}T_2$ and $D=\left(\begin{array}{cc} dS^*\wedge dS & 0\\ 0& dS\wedge dS^* \end{array}\right).$ We have thus written $\operatorname{tr}\wedge^{2k-1}\phi^*g^{-1}(dg)$ as a product of pull-backs of forms from $S^1$ and ${{{\mathbb P}}}(W_{k}^{\perp})$ respectively.
Since $\phi$ is orientation reversing we have $$\label{eqUka}\int_{U(U_{\kappa})}\operatorname{tr}\wedge^{2k-1}g^{-1}(dg)=\int_{S^1}(-1)^{k}|\alpha(\lambda)|^{2k-2}\lambda^{-1}d\lambda\cdot\int_{{{{\mathbb P}}}({W_{k}^{\perp}})}\operatorname{w-str}(D^{k-1})$$ We use the orientation preserving Cayley transform $t{\rightarrow}\frac{t-i}{t+i}$ to turn the integral: $$\int_{S^1}(-1)^{k}|\alpha(\lambda)|^{2k-2}\lambda^{-1}d\lambda=(-1)^k2^{k-1}\int_{S^1}(1-\operatorname{Re}\lambda)^{k-1}\lambda^{-1}d\lambda$$ into $$\label{intS1}2^{2k-1}(-1)^{k}i\int_{{{\mathbb R}}}\frac{1}{(1+t^2)^{k}}~dt=(-1)^k2\pi i{2k-2 \choose k-1}$$ In order to compute the integral of $\operatorname{w-str}(D^{k-1})$ we notice first that $\operatorname{w-str}(D^{k-1})$ is a $\mathcal{U}(W_k^{\perp})$ invariant form on ${{{\mathbb P}}}(W_k^{\perp})$ and therefore has to equal a constant multiple times the Fubini-Study volume form of ${{{\mathbb P}}}(W_k^{\perp})$. If we fix a point $L_0\in {{{\mathbb P}}}(W_k^{\perp})$ we can describe $dS$ and $dS^*$ in terms of a canonical basis of the chart centered at $L_0$ as $$dS_{L_0}=\left(\begin{array}{c} dz_1 \\ dz_2\\ \ldots\\ dz_{k-1} \end{array}\right)\qquad dS_{L_0}^*=\left(\begin{array}{cccc} d\bar{z}_1 & d\bar{z}_2 & \ldots & d\bar{z}_{k-1} \end{array}\right)$$ Therefore $$dS_{L_0}^*\wedge dS_{L_0}=\sum_{i=1}^{k-1}d\bar{z}_i\wedge dz_i,\;\;\mbox{and}\;\; (dS_{L_0}\wedge dS_{L_0}^*)_{ij}=dz_i\wedge d\bar{z}_j,\quad \forall\; 1\leq i,j\leq k-1.$$ We get $$(dS_{L_0}^*\wedge dS_{L_0})^{k-1}=(k-1)!(-1)^{k-1}dz_1\wedge d\bar{z}_1\wedge\ldots\wedge dz_{k-1}\wedge d\bar{z}_{k-1}.$$ One checks rather easily that $(dS_{L_0}\wedge dS_{L_0}^*)^{k-1}$ is diagonal and each diagonal entry is up to a sign equal to $(k-2)!dz_1\wedge d\bar{z}_1\wedge\ldots\wedge dz_{k-1}\wedge d\bar{z}_{k-1}$. In fact $$(dS_{L_0}\wedge dS_{L_0}^*)^{k-1}=(-1)^{k-2}(k-2)!dz_1\wedge d\bar{z}_1\wedge\ldots\wedge dz_{k-1}\wedge d\bar{z}_{k-1}\otimes \operatorname{id}$$ and thus $$\operatorname{w-str}{D^{k-1}_{L_0}}=(-1)^{k-1}(2k-1)(k-1)!dz_1\wedge d\bar{z}_1\wedge\ldots\wedge dz_{k-1}\wedge d\bar{z}_{k-1}$$ The Kähler form at the point $L_0$ on ${{{\mathbb P}}}(W_{k}^{\perp})$ (see [@GH], page 31) is $$\eta_{L_0}:=\frac{i}{2\pi}\sum_{j=1}^{k-1}dz_j\wedge d\bar{z}_j$$ and $$\int_{{{{\mathbb P}}}(W_{k}^{\perp})}\wedge^{k-1}\eta=1$$ We compare $\wedge^{k-1}\eta_{L_0}$ and $\operatorname{w-str}{(D^{k-1}_{L_0})}$ and deduce, due to the fact that they are invariant forms, that $$\operatorname{w-str}{(D^{k-1})}=(-1)^{k-1}(2k-1)\left(\frac{2\pi}{i}\right)^{k-1}\wedge^{k-1}\eta$$ and therefore $$\label{intCP1}\int_{{{{\mathbb P}}}(W_{k}^{\perp})}\operatorname{w-str}{(D^{k-1})}=(-1)^{k-1}(2k-1)\left(\frac{2\pi}{i}\right)^{k-1}$$ Putting together (\[eqUka\]), (\[intS1\]) and (\[intCP1\]) we conclude that $$\int_{U(U_{\kappa})}\operatorname{tr}\wedge^{2k-1}g^{-1}(dg)=-2\pi i{2k-2\choose k-1}(2k-1)\left(\frac{2\pi}{i}\right)^{k-1}=\left(\frac{2\pi}{i}\right)^{k}\frac{(2k-1)!}{[(k-1)!]^2}.$$ Just as a curiosity, note that $$\overline{S(U_{\{k\}})}=\{U\in \mathcal{U}(W_k^{\perp})~|~\dim{\operatorname{Ker}(1+U)}\geq 1,\; W_{k-1}/W_{k}\subset {\operatorname{Ker}(1+U)}\}= \mathcal{U}(W_{k-1}^{\perp}),$$ with $\mathcal{U}(W_{k-1}^{\perp})$ lying inside $\mathcal{U}(W_k^{\perp})$, via $U{\rightarrow}-\operatorname{id}_{W_{k-1}/W_k}\oplus U$. This embedding of $\mathcal{U}(W_{k-1}^{\perp})$ inside $\mathcal{U}(W_k^{\perp})$ is the isotropic space of $(-1,0,\ldots, 0)\in S^{2k-1}$, the unit sphere inside $W_k^{\perp}$.
It is no wonder then that the map $$S^1\times {{\mathbb C}}{{{\mathbb P}}}^{k-1}{\rightarrow}S^{2k-1},\qquad(\lambda,L){\rightarrow}\phi(\lambda,L)(-1,0,\ldots,0)$$ is a map of degree $1$. We will keep the notation $U_{I}$, $S(U_{I})$ and $U(U_{I})$ for the corresponding critical/stable/unstable manifolds in $\mathcal{U}(E)$. \[Nico\] Let $E{\rightarrow}B$ be a trivializable hermitian vector bundle of rank $n$ over an oriented manifold with corners $B$ endowed with a compatible connection. Let $g:E{\rightarrow}E$ be a smooth gauge transform. Suppose that a complete flag $E=W_0\supset W_1\supset \ldots \supset W_n=\{0\}$ (equivalently a trivialization of $E$) has been fixed such that $g$ as a section of $\mathcal{U}(E)$ is completely transverse to all the manifolds $S(U_{I})$ determined by the flag. Then, for each $1\leq k\leq n$ there exists a flat current $T_k$ such that the following equality of currents of degree $2k-1$ holds: $$\label{TCkE} \operatorname{Tc}_k(E, g,\nabla)-g^{-1}(S(U_{\{k\}}))=dT_k.$$ where $$\begin{aligned} g^{-1}(S(U_{\{k\}}))=\{b\in B~|~\dim{\operatorname{Ker}(1+g_b)}=\dim{\operatorname{Ker}{(1+g_b)\cap (W_{k-1})_b}}=1,\qquad\\ \dim{\operatorname{Ker}{(1+g_b)}\cap (W_{k})_b}=0\}.\end{aligned}$$ In particular, when $B$ is compact without boundary, then $\operatorname{Tc}_k(E, g,\nabla)$ and $g^{-1}(S(U_{\{k\}}))$ are Poincaré duals to each other. We use Corollary \[c.principal\] for the fiber bundle $\mathcal{U}(E)$ with the flow described in this section and form $\omega=\operatorname{Tc}_k(U^{\tau},\pi^*\nabla)$ where $\pi:\mathcal{U}(E){\rightarrow}B$. All the necessary residue computations have been performed. The current $T_k$ is a spark in the terminology of Harvey and Lawson. \[transcon\] While the transversality condition of $g$ with the stable manifolds $S(U_{I})$ such that $S(U_{I})\subset \overline{S(U_{\{k\}})}$ is a reasonable requirement for the existence of the current $g^{-1}(S(U_{\{k\}}))$, it seems unnatural that one needs to impose the transversality of $g$ with *all* stable manifolds $S(U_{I})$ in order to obtain (\[TCkE\]) as one does in Theorem \[Nico\]. We conjecture that (\[TCkE\]) is true under the weaker hypothesis. A Fredholm transgression formula ================================ In [@Qu], Quillen introduced various smooth differential forms that live on (infinite dimensional) Banach manifolds that are classifying for even and odd $K$-theory. Fix $H$ a complex, separable Hilbert space and let $\mathscr{L}$, $\mathscr{L}^+$, $\mathscr{K}$ be the space of bounded, bounded and self-adjoint, respectively compact operators on $H$. Inside $\mathscr{K}$ there exists a sequence of two-sided ideals, called Schatten spaces, denoted $\operatorname{Sch}^p$. The Palais spaces are the spaces of unitary operators $\mathcal{U}^p:=\mathcal{U}(H)\cap (\operatorname{id}_H+\operatorname{Sch}^p)$ and it is well-known that they are smooth Banach manifolds (modelled on $\operatorname{Sch}^p\cap\mathscr{L}^+$) and also classifying for odd $K$-theory, i.e. they have the weak homotopy type of the topological direct limit of spaces $U(\infty):=\lim U(n)$. Quillen defined different families of smooth closed forms $\gamma_{2k-1}^t$, $\gamma_{2k-1,q}^t$, $\Phi_{2k-1}^u$ where $t\in{{\mathbb C}},\; \operatorname{Re}t>0$, $n\in {{{\mathbb N}}}$, $u>0$ which are *well-defined* on the finite dimensional unitary groups $U(n)$ and on certain Palais spaces as follows (see Theorem 5 in op. 
cit.): - $\gamma_{2k-1}^t$ on $\mathcal{U}^p$ when $p\leq 2k-1$; - $\gamma_{2k-1,q}^t$ on $\mathcal{U}^{p}$ when $p\leq 2k-1+2q$; - $\Phi_{2k-1}^u$ on $\mathcal{U}^{p}$ for all $p$. The forms $\gamma_{2k-1}^t$, $\gamma_{2k-1,q}^t$ and $\Phi_{2k-1}^1$ are all cohomologous and represent the degree $2k-1$ component of the odd Chern character $\operatorname{ch}_{2k-1}$ of the universal $K^{-1}$-class, i.e. the class induced by the identity map $\operatorname{id}_{U(\infty)}$. We will call each of them a Quillen form. In [@Ci1] we gave an alternative construction of the pull-backs $\varphi^*{\operatorname{ch}_{2k-1}}$ for $\varphi: B{\rightarrow}\mathcal{U}^{p}$, when $B$ is a compact oriented manifold and $\varphi$ is smooth. In fact, the theory works for maps $\varphi:B{\rightarrow}\mathcal{U}^{-}$, where $\mathcal{U}^-$ is the open subset of unitary operators $U$ such that $1+U$ is Fredholm. This is another manifold classifying space for $K^{-1}$ that contains $\mathcal{U}^p$ for every $p$; however, it does not come with any easy to describe smooth differential forms on it. Under a certain finite set[^8] of transversality conditions, the classes $\varphi^*{\operatorname{ch}_{2k-1}}$ were described (up to multiplication by a rational number) via preimages $\varphi^{-1}\overline{Z_{\{k\}}}$ where $\overline{Z_{\{k\}}}$ are stratified subspaces of codimension $2k-1$ in $\mathcal{U}^-$. In fact, the Schubert cell $Z_{\{k\}}$ is defined by the same incidence relations as the stable manifold $S(U_{\{k\}})$ we saw in the last section. We used local (sheaf) cohomology in order to associate to a finite codimensional, cooriented stratified space a cohomology class, which behaves well under *transverse* pull-back. We take a different path here and show that in fact, under a different but still finite set of transversality conditions, one can define the current $\varphi^{-1}Z_{\{k\}}$ and this is Poincaré dual to $\frac{(-1)^{k-1}}{(k-1)!} \varphi^*\Omega_k$ where $\Omega_k$ is a Quillen form. In fact something stronger is true. We will assume that a complete flag $$H\supset W_0\supset\ldots \supset W_k\supset\ldots$$ has been fixed with $\operatorname{codim}{W_k}=k$. For every $I=\{i_1<\ldots<i_k\}$ a $k$-tuple of positive integers, let $$\begin{aligned} Z_{I}^p:=\{U\in \mathcal{U}^p~|~\dim{\operatorname{Ker}(1+U)}=k, \dim{\operatorname{Ker}(1+U)\cap W_{m}}=k-q, \; \forall 0\leq q\leq k,\; \\ \forall i_q\leq m<i_{q+1}\}\end{aligned}$$ where as usual $i_0=0$, $i_{k+1}=\infty$. We notice that for every $p$ and every smooth map $\varphi:B{\rightarrow}\mathcal{U}^p$ from a compact manifold $B$ there exists a subspace $W_N$ of the flag such that ${\operatorname{Ker}(1+\varphi(b))\cap W_N}=\{0\}$ for all $b\in B$. This is because if $U\in \mathcal{U}^p$ then $1+U$ is Fredholm and this is an open condition. It follows that the collection of transversality conditions $\varphi\pitchfork Z_I^p$ is trivially satisfied if there exists $a>N$ such that $a\in I$, since then $\varphi^{-1}(Z_I^p)=\emptyset$. Hence $\varphi\pitchfork Z_I^p$ for every $I$ is a generic condition. \[thm71\] Let $\varphi:B{\rightarrow}\mathcal{U}^p$ be a smooth map from a compact, oriented manifold $B$, possibly with corners, such that $\varphi\pitchfork Z_I^p$ for every $I$. Let $\Omega_k$ be a Quillen form of degree $2k-1$ that makes sense on $\mathcal{U}^p$.
Then for every such $\Omega_k$, there exists a flat current $T_k$ such that: $$\label{lasteq} \varphi^{-1}Z_{\{k\}}-(-1)^{k-1}(k-1)!\varphi^*\Omega_k=dT_k.$$ In particular, when $B$ has no boundary, $\frac{(-1)^{k-1}}{(k-1)!}\varphi^{-1}Z_{\{k\}}^p$ represents the Poincaré dual of $\operatorname{ch}_{2k-1}([\varphi])$, where $[\varphi]\in K^{-1}(B)$ is the natural odd $K$ theory class determined by $\varphi$. We use symplectic reduction. For each linear subspace $W\subset H$ of finite codimension there exists a smooth (even real analytic) map $\mathcal{R}^{W}:\mathcal{U}^p_W{\rightarrow}U(W^{\perp})$, where $$\mathcal{U}^p_W:=\{U\in \mathcal{U}^p~|~\operatorname{Ker}(1+U)\cap W=\{0\}\}$$ is an open subset of $\mathcal{U}^p$. The expression of $\mathcal{R}^W$ relative to the decomposition $U=\left(\begin{array}{cc} X& Y\\ Z& T\end{array}\right)$ vis-a-vis $H=W\oplus W^{\perp}$ is: $$\mathcal{R}^W(U)=T-Z(1+X)^{-1}Y.$$ The map $\mathcal{R}^W$[^9] has some nice properties. For example, it can be shown that, together with the “0-section” $$\iota:U(W^{\perp})\hookrightarrow \mathcal{U}^p_W,\qquad U{\rightarrow}-\operatorname{id}_{W}\oplus U,$$ the open set $\mathcal{U}^p_W$ is diffeomorphic to a vector bundle over $U(W^{\perp})$ (see Corollary 4.1 in [@Ci1]). Hence, by choosing $W=W_N$ to be a subspace of the flag as mentioned before the proof, we get that $\operatorname{Im}\varphi\subset \mathcal{U}^p_W$ and there exists a smooth homotopy $h:[0,1]\times B{\rightarrow}\mathcal{U}^p$ between $\psi:=\mathcal{R}^W\circ \varphi$ and $\varphi$. All the Quillen forms $\Omega_k$ have finite dimensional counterparts $\Omega^{W^{\perp}}_k$ such that $\iota^*\Omega_k=\Omega^{W^{\perp}}_k$. Hence there exists a smooth form $\beta(\Omega_k)$ on $B$ such that $$\varphi^*\Omega_k- \psi^*\Omega_k=d(\beta(\Omega_k)).$$ Another important property is that $\mathcal{R}^W(Z_I^p)=S(U_{I})$ and in fact $(\mathcal{R}^W)^{-1}(S(U_{I}))=Z_I^p$. It follows that $\varphi^{-1}(Z_{\{k\}}^p)=\psi^{-1}(S(U_{\{k\}}))$. Therefore we can use Theorem \[Nico\] to conclude that (\[lasteq\]) holds for $\Omega_k=\gamma_{2k-1}^1$, which coincides with $\operatorname{Tch}_k$. Since the Quillen forms of degree $2k-1$ are all cohomologous in the finite dimensional case we get the result for such forms. The coorientation (implicitly the orientation) of $\varphi^{-1}Z_{\{k\}}$ is discussed in detail in [@Ci1]. It seems that there are too many transversality conditions in Theorem \[thm71\] (see Remark \[transcon\] and compare with Proposition 7.1 from [@Ci1]). [99]{} J. Cheeger, J. Simons, *Differential characters and geometric invariants*, Lecture Notes in Math., vol. 1167, Springer-Verlag, New York, 1985, pp. 50-80. D. Cibotaru, *The odd Chern character and index localization formulae*, Comm. Anal. Geom. [**19**]{} (2011), 209-276. D. Cibotaru, *Vertical flows and a general currential homotopy formula*, Indiana Univ. Math. J. [**65**]{} (2016), 93-169. D. Cibotaru, *Chern-Gauss-Bonnet and Lefschetz Duality from a currential point of view*, Adv. Math. [**317**]{} (2017), 718-757. D. Cibotaru, *Vertical Morse-Bott-Smale Flows and Characteristic Forms*, Indiana Univ. Math. J. [**65**]{} (2016), 1089-1135. I.A. Dynnikov, A.P. Veselov, *Integrable gradient flows and Morse Theory*, St. Petersburg Math. J. [**8**]{} (1997), 429-446. H. Federer, *Geometric measure theory*, Grundlehren der mathematischen Wissenschaften, Springer-Verlag, New York, 1969. P. Griffiths, J. Harris, *Principles of Algebraic Geometry*, John Wiley & Sons, 1978. R. Harvey, B. Lawson Jr.,
*Finite Volume Flows and Morse Theory*, Annals of Math. [**153**]{} (2001), no. 1, 1-25. R. Harvey, B. Lawson Jr., *A Theory of Characteristic Currents Associated with a Singular Connection*, Astérisque [**213**]{}, Soc. Math. de France, Montrouge, France, 1993. R. Harvey, B. Lawson Jr., *Geometric Residue Theorems*, Amer. J. Math. [**117**]{} (1995), no. 4, 829-873. R. Harvey, B. Lawson Jr., J. Zweck, *The de Rham-Federer theory of differential characters and character duality*, Amer. J. Math. [**125**]{} (2003), no. 4, 791-847. R. Harvey, G. Minervini, *Morse Novikov theory and cohomology with forward supports*, Math. Ann. [**335**]{}, 787-818. J. Latschev, *Gradient flows of Morse-Bott functions*, Math. Ann. [**318**]{} (2000), 731-759. J. Lee, *Introduction to smooth manifolds*, Sec. Ed., Springer, 2013. T. Lidman, C. Manolescu, *The equivalence of two Seiberg-Witten Floer homologies*, Astérisque [**399**]{}, 2018. G. Minervini, *A current approach to Morse and Novikov Theories*, Rend. Mat. [**37**]{} (2015), 95-195. L. Nicolaescu, *Schubert calculus on the Grassmannian of hermitian lagrangian spaces*, Adv. Math. [**224**]{} (2010), 2361-2434. W. Pereira, *Fluxos não-tame de correntes e teoria Chern-Weil impar*, PhD Thesis (in Portuguese), Universidade Federal do Ceará, Fortaleza, 2018. D. Quillen, *Superconnection character forms and the Cayley transform*, Topology [**27**]{} (1988), 211-238. L. Shilnikov, *Methods of Qualitative Theory in Nonlinear Dynamics, Part I*, Nonlinear Science, World Scientific, 1998. J. Simons, D. Sullivan, *Structured Vector Bundles Define Differential $K$-Theory*, Quanta of Maths, Clay Math. Proc. [**11**]{}, AMS, Clay Math. Inst., 2010. F. Treves, *Topological Vector Spaces, Distributions and Kernels*, Academic Press, 1967. [^1]: Partially supported by the CNPq Universal Project [^2]: the trajectory $\gamma_p$ is determined by $p$ [^3]: with respect to any measure induced by the volume form of a Riemannian metric on $\mathcal{A}_1$ [^4]: The type of a manifold with corners gives the codimension of the smallest strata in the boundary. Some authors prefer to call this “depth”. [^5]: The analysis in [@Ni] is on the Grassmannian of hermitian Lagrangians but, by a theorem of Arnold, a clever way of writing the Cayley transform makes this space diffeomorphic to the unitary group. [^6]: We ignore the indication of the point $b$ in the flag so as not to complicate notation. It should be clear from the context whether we refer to the fiber component or the entire fiber bundle. [^7]: Of course we abused notation by not making any difference between the unstable manifold corresponding to $U_{I}$ in $\mathcal{U}(E)$ and in $\mathcal{U}(W_{k-1}^{\perp})$, respectively. [^8]: hence it applies to “generic” smooth maps [^9]: It is called symplectic reduction because, under Arnold’s theorem which identifies the unitary group $\mathcal{U}^p$ with the (Hermitian) Lagrangian Grassmannian of Schatten class $p$, it corresponds to the homonymous process well-known in symplectic topology.
{ "pile_set_name": "ArXiv" }
--- abstract: | Hive is the most mature and prevalent data warehouse tool providing SQL-like interface in the Hadoop ecosystem. It is successfully used in many Internet companies and shows its value for big data processing in traditional industries. However, enterprise big data processing systems as in Smart Grid applications usually require complicated business logics and involve many data manipulation operations like updates and deletes. Hive cannot offer sufficient support for these while preserving high query performance. Hive using the Hadoop Distributed File System (HDFS) for storage cannot implement data manipulation efficiently and Hive on HBase suffers from poor query performance even though it can support faster data manipulation. There is a project based on Hive issue Hive-5317 to support update operations, but it has not been finished in Hive’s latest version. Since this ACID compliant extension adopts same data storage format on HDFS, the update performance problem is not solved. In this paper, we propose a hybrid storage model called DualTable, which combines the efficient streaming reads of HDFS and the random write capability of HBase. Hive on DualTable provides better data manipulation support and preserves query performance at the same time. Experiments on a TPC-H data set and on a real smart grid data set show that Hive on DualTable is up to 10 times faster than Hive when executing update and delete operations. author: - | Songlin Hu[$^{\#,1}$]{}, Wantao Liu[$^{\#,2}$]{}, Tilmann Rabl[$^{\dagger,3}$]{}, Shuo Huang[$^{\#,4}$]{},\ Ying Liang[$^{\#,5}$]{}, Zheng Xiao[$^{\S,6}$]{}, Hans-Arno Jacobsen[$^{\dagger,7}$]{}, Xubin Pei[$^{\ddagger,8}$]{}, Jiye Wang[$^{\ast,9}$]{}\ *$^{\#}$Institute of Computing Technology, Chinese Academy of Sciences, China\ $^{\ddagger}$Zhejiang Electric Power Corporation, China\ $^{\S}$State Grid Electricity Science Research Institute, China\ *$^{\dagger}$Middleware Systems Research Group, University of Toronto, Canada\ $^{\ast}$Dept. of Information Technology, State Grid Corporation of China, China\ {husonglin$^1$,liuwantao$^2$,liangy$^5$}@ict.ac.cn, [email protected]$^3$, [email protected]$^4$,\ [email protected]$^6$, [email protected]$^7$, [email protected]$^8$, [email protected]$^9$\ ** bibliography: - 'DualTable.bib' title: 'DualTable: A Hybrid Storage Model for Update Optimization in Hive' --- Introduction ============ The Hadoop ecosytem is the quasi-standard for big data analytic applications. It provides HDFS as a new file system treating files as consistency unit, which makes it possible to significantly improve batch data reading and writing [@Borthakur2007]. Hive is a data warehouse system based on Hadoop for batch analytic query processing [@Thusoo2009]. It has become very popular in Internet companies. The success and ease of deployment of Hive attracts attention from traditional industries, especially when facing large data processing challenges. Smart Grid applications, as typical use cases, have to deal with enormous amounts of data generated by millions of smart meters. For instance, the Zhejiang Grid, a province-level company in China, currently owns about 17 million deployed smart meters, which will be increased to 23 million within 2 years. According to the China State Grid standard, each of these meters needs to record data and send it to the data center 96 times per day. 
The system has to support efficient querying, processing and sharing of these enormous amounts of data, which add up to 60 billion measurements per month only on province level. The whole system needs to support user electricity consumption computing, district line loss calculating, statistics of data acquisition rates, terminal traffic statistics, exception handling, fraud detection and analysis, and more amounting to around 100,000 lines of SQL stored procedures in total. As requested by the State Grid, the computing task must be finished from 1am to 7am every day, or it will affect the business operations in working hours. In fact, the processing cost of these stored procedures is so high that current solutions based on relational database management systems (RDBMS) deployed on a high performance 2\*98 core cluster and an advanced storage system can hardly complete the analysis in time. Even with the current number of smart meters and a comparably low frequency of data collection of a single measurement per day, the performance of current commercial solutions is not acceptable after careful system optimizations carried out by professional database administrators and business experts. For instance, due to sophisticated join operations on 5 tables that contain 60G data, around 1 billion data records in total, the average processing time of the user electricity consumption is around 3 hours. With increasing collection frequencies and a growing number of installed meters the capacity of the current solution will be exceeded soon. Considering the advantages of Hadoop and Hive, such as superior scalability, fault tolerance, and low cost of deployment they were chosen for the Zhejiang Grid. The use of Hive makes pure statistical applications in Zhejiang Grid more efficient. The performance of some statistical query executed in a Hive cluster is significantly better than that of current RDBMS cluster. The main challenge in this use case is that current Hive lacks the capability of supporting efficient data manipulation operations. Although HIVE-5317 aims at implementing insert, update, and delete in Hive with full ACID support, it has not been released yet [@Hive-5317]. Meanwhile, judging from its design document, its main focus is on full ACID guarantee rather than performance optimization of update operation. This makes it very difficult for current RDBMS-based applications to be migrated to a Hadoop environment. Traditional enterprises have to process complicated business logic functions rather than only pure statistical applications. Many of the enterprise level data processing applications are built using complex stored procedures. Besides sophisticated analysis on huge data, they contain a high ratio of update and delete operations. As shown in our analysis of the Zhejiang Grid smart electricity consumption information system, a typical application, which can have more than 10,000 lines of stored procedure code, includes 70% data manipulation operations [@Liu:2014]. Without full update support, specifically missing UPDATE, DELETE and the proprietary MERGE INTO operations, Hive has to use *INSERT OVERWRITE* to rewrite huge HDFS files even if only 1% of the complete data set is modified. As a result, the lack of update support in Hive results in huge I/O costs, which will cancel out all the performance benefits. The weakness of the Hive data manipulation operations lies in its storage subsystem: HDFS or HBase. 
HDFS is designed for a *write once read many* scenario and is good at batch reading. It treats a whole file as consistency unit without any support of random writes. HBase provides record level consistency to support efficient random reads and writes at the cost of batch reading efficiency. Choosing either one of these two as the underlying storage will sacrifice the benefits of the other, resulting in severe side-effect when facing complex workloads. As described by the design document, the ongoing implementation of Hive-5317 proposes an approach to support data manipulation by using a base table and several delta tables. Unmodified data is stored in the base table, and each transaction creates a delta table. The read operation retrieves a record from base table and *merges* it with corresponding records in delta tables to get the up-to-date data view. However, due to the usage of same storage format, the performance problem is not solved in this approach. To combine the benefits of file-level consistency and record-level consistency, and thus to support high throughput batch reads and efficient random writes in a unified way, DualTable, a hybrid storage model is proposed in this paper. It enables efficient reads and random writes through integration of two different storage formats. A cost model-based adaptive mechanism dynamically selects the most efficient storage policy at run-time. The data consistency can transparently be maintained by our *UNION READ* approach. The use of random read capability of HBase makes the *UNION READ* efficient. With the support of DualTable, update capability of Hive can be enhanced without losing its batch read efficiency. The Smart Grid use-case is presented in Section \[sec:Motivaton\]. We then give a detailed analysis of the weakness of data manipulation operations in Hive in Section \[sec:limitation\]. Section \[sec:dualtable\] presents Dualtable. Section \[sec:costmodel\] discusses DualTable’s cost model. The implementation and evaluation will be given in Section \[sec:impl\] and Section \[sec:eval\] respectively. We will introduce related work on Hive optimization in Section \[sec:related\]. Finally, conclusions and future work are presented in Section \[sec:conclusion\]. Smart Grid {#sec:Motivaton} ========== The smart electricity consumption information collection system is an very important part of smart grid, which acts as a mediator between electricity consumers and the grid. Smart Electricity Consumption Information Collection System {#sec:collection_system} ----------------------------------------------------------- The smart electricity consumption information system makes it possible for the energy provider to be aware of the quasi-real-time electricity consumption and to improve its businesses such as electricity supply and pricing policy through deeply analyzing and utilizing the data it collects. Different from traditional collection systems, which mostly focus on support of billing processes based on once-per-month data collection, it collects data hundreds of times per day and serves as an intelligent service for diverse applications in the life cycle of marketing, production, and overhauling of the grid as well as a data source for interactive user service. Considering its advantages, like cost-efficiency, fault tolerance, and scalability, the Zhejiang Grid introduced Hive into its information collection system and leveraged it as its big data processing platform. 
The platform contains 5 subsystems as pictured in Figure \[SystemArch\]: the communication system that collects data from smart meters and sends them to the cloud after encoding, the information collection cluster, i.e., front end PC (FEP), that receives the data and does pre-processing like decoding, the cloud data storage system that receives data from the FEP cluster and stores it, the Hive and MapReduce environment that processes analytic procedures on the cloud storage, and an RDBMS-based archive database that copes with daily data management transactions on archive information of devices (smart metering devices and inter-media devices), users, organizations, etc.

![The Architecture and Data Flow of the Smart Electricity Consumption Information Collection System[]{data-label="SystemArch"}](./images/SGapplication.pdf){width="\columnwidth"}

The data flow within the system is illustrated in Figure \[SystemArch\]. The collection system gets smart metering data at a fixed frequency, currently every 15 minutes. In cases of missing data or errors, the system manager re-collects data from specified smart meters. The FEP cluster usually appends huge amounts of collected data to the cloud data storage, so it needs a very fast storage system to store the data. When recollection happens, it needs to update the data set, which is marked as (1). The archive database communicates with data managers and maintains data according to their requests. To support grid data analysis, the archive information involved is copied to the cloud storage, and the modified data needs to be forwarded to the cloud storage via data synchronization, which is marked as (2). The computing environment executes all data processing procedures several times per day, and the results it generates are written back by the cloud storage system to the RDBMS database for query and management. It reads data from the cloud data storage and overwrites tables when needed. Moreover, as in the existing RDBMS implementation, it needs to update or delete only a small part of a table during data processing. This is marked as (3). From the perspective of the cloud storage system, the computing environment has to cope with data updating and deleting besides the data appending and inserting that is currently supported by Hive. In HiveQL, these operations must be implemented using the current INSERT OVERWRITE operation. Since a table in the collection system is very big, the OVERWRITE operation is very costly, which in turn heavily affects efficiency and sometimes exhausts resources and blocks the whole system.

Hive Data Manipulation Limitations {#sec:limitation}
----------------------------------

In real-world enterprise data analysis use-cases, there is a high ratio of update operations, as shown in Table \[tab:dml-ratio\]. The following paragraphs discuss the three update cases shown in Figure \[SystemArch\] in detail. First, the SQL DML operations DELETE, UPDATE, and MERGE INTO, which updates existing records and inserts new ones, are extensively used in smart grid applications as illustrated in Figure \[SystemArch\] (3). For example, in the Zhejiang Grid data processing system, there are 5 important application scenarios: power line loss analysis, electricity consumption statistics, data integrity ratio analysis, end point traffic statistics, and exception handling.
Each of these was implemented in stored procedures in a traditional RDBMS; the total count of SQL code lines is more than 10,000 per application scenario. Each of the operations is executed more than 3 times per day. Table \[tab:dml-ratio\] summarizes the amount of DML statements in the five core business scenarios. The table shows that DML operations (UPDATE, DELETE, and MERGE) amount to at least 50% in every scenario. Note that Hive features efficient INSERT operations, which is why we do not list INSERT in this table.

  Scenario   Total   Delete   Update   Merge   % DML
  ---------- ------- -------- -------- ------- -------
  1          133     15       52       15      62
  2          75      25       20       9       72
  3          174     27       97       13      79
  4          12      3        3        0       50
  5          41      3        23       0       63

  : Ratio of DML Operations in Grid Scenarios[]{data-label="tab:dml-ratio"}

Due to its initial target use cases and the limitations of HDFS, Hive lacks adequate support for DML operations. Hive only supports *complete overwrite* (INSERT OVERWRITE), *append* (INSERT INTO), and *delete* (DROP) at table or partition level. Although row-level UPDATE and DELETE operations can be transformed into equivalent INSERT OVERWRITE statements, it is a tedious and error-prone process, let alone the complex logic correlations and huge number of DML statements in an enterprise data analysis system. To illustrate the challenges of transforming a data manipulation statement from SQL to HiveQL, we show a typical UPDATE statement in Listing \[lst:RDBMSsql-update\] and its corresponding HiveQL translation in Listing \[lst:hive-update\], both taken from the electricity information collection system. The UPDATE statement is part of the application scenario which computes the total line loss of an organization on a specific date from table tj\_tqxs\_r and changes the value of column QRYHS in table tj\_tqxsqk\_r. As a comparison, in order to update only one column, Hive reads every record and a total of 22 columns from table tj\_tqxsqk\_r, conducts a left outer join with table tj\_tqxs\_r, and finally writes back all 23 columns of every record into table tj\_tqxsqk\_r using INSERT OVERWRITE. It is obvious that accessing the irrelevant columns and records incurs high overhead. Using INSERT OVERWRITE, the cost of an update operation is always proportional to the total amount of data instead of the amount of modified data. This leads to a significant performance penalty when Hive is used in enterprise data analysis systems, which typically contain tables with a huge number of records and columns, especially when only a small portion of records and columns are updated or deleted per operation. In addition, the use of INSERT OVERWRITE is not as intuitive as SQL's counterpart UPDATE.
Listing \[lst:RDBMSsql-update\] (SQL UPDATE in the RDBMS):

    UPDATE tj_tqxsqk_r t
    SET t.QRYHS = (SELECT SUM(k.tqyhs)
                   FROM tj_tqxs_r k
                   WHERE t.rq = k.tjrq
                     AND k.glfs = t.glfs
                     AND k.zjfs = t.cjfs
                     AND k.dwdm = t.dwdm
                     AND k.sfqr = 1)
    WHERE t.rq = v_date;

Listing \[lst:hive-update\] (equivalent HiveQL translation):

    INSERT OVERWRITE TABLE tj_tqxsqk_r
    SELECT t.dwdm, t.rq, t.jb, t.xslzctqs, t.xslcdtqs,
           t.xslwftqs, t.ztqs, t.xslzcyhs, t.zyhs,
           t.tqxsksl, t.glfs, t.cjfs, t.qfgl,
           t.ljqfgyhs, t.ljfgyhs, t.xslbkstqs1,
           t.xslbkstqs2, t.xslksyhs, t.xslzcyhs_x,
           t.xslksyhs_x,
           IF (t.rq = ${v_date}, g.qryhs, t.qryhs) AS qryhs,
           t.gxdyyhs
    FROM tj_tqxsqk_r t
    LEFT OUTER JOIN (
        SELECT SUM(k.tqyhs) AS qryhs,
               k.tjrq, k.glfs, k.zjfs, k.dwdm
        FROM tj_tqxs_r k
        WHERE k.sfqr = 1
        GROUP BY k.tjrq, k.glfs, k.zjfs, k.dwdm) g
    ON t.rq = g.tjrq AND g.glfs = t.glfs
       AND g.zjfs = t.cjfs AND g.dwdm = t.dwdm

Unfortunately, in real enterprise data processing systems, data columns and records are fairly big, while the number of columns and records that need to be modified in one statement is limited. In our analysis, we found that most of the tables in the smart grid system contain more than 50 columns, but fewer than 3 columns are modified in one statement on average. In most cases, the ratio of records that need to be modified is less than 1%. Therefore, the INSERT OVERWRITE strategy will cause a large percentage of redundant writes. For example, in an energy consumption table or partition containing 1.8 billion records, only a few hundred records need to be modified for a recurrent processing task. Overwriting the whole table file will heavily degrade the efficiency of the query.

Second, upgrading devices or modifying user information leads to changes of archive data as shown in Figure \[SystemArch\] (2). In the Zhejiang Grid information system, even in extreme cases, no more than 500 out of 22 million devices are changed on a single day. Thus the ratio of updated device information is also very small. However, it takes more than 15 minutes to rewrite the device information in the Cloud Data Storage System if Hive's overwrite operation is utilized.

Finally, the update operations of data recollection shown in Figure \[SystemArch\] (1) also only affect a very small amount of data, approximately less than 2000 records in a single update operation, which yields an update ratio of less than 0.01%. However, rewriting the electricity consumption table in the Cloud Data Storage System will take more than half an hour with our current cluster setting.

Apache Hive also plans to support data modification operations [@Hive-5317]. We will give a detailed comparison to our technique in Section \[sec:impl\].

All these instances reflect the same general problem of the lack of efficient update operations in Hive. In the next section, we present DualTable, our solution to this problem.

DualTable {#sec:dualtable}
=========

![DualTable Architecture[]{data-label="fig:dualtable"}](./images/Architecture1.pdf){width="0.9\columnwidth"}

DualTable is a system that combines the strong read performance of HDFS with the update performance of HBase. An abstract view of the architecture can be seen in Figure \[fig:dualtable\]. Data is stored in two locations, the *Master Table* and the *Attached Table*. The Master Table is the main data storage; it is optimized for batch reading and initially contains all the records in the table. The Attached Table is an additional storage location for maintaining information about the updated or deleted records.
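To make the division of labor between the two storage locations concrete, the following minimal Java sketch shows how a row obtained from a batch scan of the Master Table can be overlaid with the modification information kept in the Attached Table. This is our own illustration under the assumptions stated in the comments, not code from the DualTable implementation; the class and field names are hypothetical.

    import java.util.List;
    import java.util.Map;

    /** Illustrative sketch (not the authors' code): how a DualTable read can
     *  combine a Master Table row with its Attached Table entry. */
    class DualTableView {

        /** Modification info kept per record ID in the Attached Table. */
        static final class Delta {
            final Map<Integer, String> newValues; // column index -> new value
            final boolean deleted;                // DELETE marker
            Delta(Map<Integer, String> newValues, boolean deleted) {
                this.newValues = newValues;
                this.deleted = deleted;
            }
        }

        /** Returns the up-to-date row, or null if the record was deleted.
         *  masterRow comes from a sequential HDFS scan; delta comes from a
         *  random lookup keyed by the record ID (null if the record was
         *  never touched). */
        static List<String> unionRead(List<String> masterRow, Delta delta) {
            if (delta == null) return masterRow;         // untouched record
            if (delta.deleted) return null;               // filtered out
            for (Map.Entry<Integer, String> e : delta.newValues.entrySet()) {
                masterRow.set(e.getKey(), e.getValue());  // overlay updated cells
            }
            return masterRow;
        }
    }

The point of the split is that the Master Table row comes from a sequential scan, which HDFS handles well, while the delta is fetched by a random lookup on the record ID, which HBase handles well.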
Besides Hive’s original data manipulation operations INSERT INTO, CREATE, DROP, and LOAD, DualTable also supports the additional operations UNION READ, UPDATE, DELETE and COMPACT; the *Cost Model* is used to select an implementation plan for UPDATE and DELETE operations and it includes separate cost models for each operation. Each DualTable contains one Master Table and one Attached Table. Each row recorded in DualTable has a unique ID in the table scope, which links the record data in the Master Table and the Attached Table. When an UPDATE or DELETE operation is executed, the system will choose an implementation plan based on the cost model, either an *OVERWRITE Plan* or an *EDIT Plan*. The OVERWRITE Plan rewrites the Master Table using Hive’s original INSERT OVERWRITE syntax while the EDIT Plan writes modification information to the Attached Table. In order to combine the data from the Master Table and the Attached table, a UNION READ operation is used, it generates a merged view. When the Attached Table is too large and the merging becomes too expensive, the Attached Table is compacted into the Master Table and cleared using the COMPACT operation. Master Table ------------ The Master Table stores the main part of the data. The storage used for this table must provide high performance batch read and write. It can be implemented using HDFS, Google File System [@Ghemawat2003], the Quantcast File System [@QFS], or optimized file formats such as ORC [@Leverenz2013]. Attached Table -------------- The Attached Table stores information about the new values for updated record fields or delete markers for deleted records. All these record modification data are associated with their record IDs so that they can be merged with the record data in the Master Table. The storage used for this table should provide high performance random read and write. Possible candidates are, for example, HBase, MySQL, and MongoDB. Due to the good integration, we will only discuss the HBase-based implementation. DualTable Operations -------------------- Below we will characterize the basic storage operations in DualTable. - **CREATE and DROP:** Using the CREATE operation, DualTable will create an Attached Table and a Master Table. Analogously, it will delete the Master Table and the Attached Table when a DROP operation is issued. - **LOAD and INSERT:** LOAD and INSERT are the same operations as in original Hive. Data are loaded and inserted into the Master Table. A unique ID will be assigned to each Master Table file, which is necessary to generate a unique ID for each row record. See Section \[sec:impl\] for further details. - **UPDATE and DELETE:** When an UPDATE or DELETE operation is issued, the cost model will be used to choose the most efficient implementation plan from OVERWRITE Plan and EDIT Plan. The OVERWRITE Plan will execute Hive’s INSERT OVERWRITE and replace the existing Master Table and Attached Table with a newly generated Master Table and an empty Attached Table, while the Edit Plan will add the updated information into the Attached Table. For an UPDATE operation, it will add the new value for the updated record fields. For a DELETE operation, it will add a DELETE marker to a corresponding record ID. In both UPDATE and DELETE the Master Table will not be changed. - **UNION READ:** UNION READ reads and merges data from Master Table and Attached Table using the record ID. 
In order to make the merge process more efficient, we keep record IDs in the Master Table and Attached Table sorted, and implement a simple Map Reduce algorithm using a divide-and-conquer strategy. - **COMPACT** As more data modification information is stored in the Attached Table with the EDIT Plan, the Attached Table grows. The more data is in the Attached Table, the higher the cost of the UNION READ operation, since it needs to read and merge data in the Master Table and the Attached Table. COMPACT does a UNION READ through the existing tables and creates a new Master Table by using INSERT OVERWRITE operation, which replaces the existing Master Table and Attached Table. All the other operations will be blocked during COMPACT. COMPACT can be scheduled to off-line hours or issued manually if the cost of a UNION READ is too expensive. Cost Model {#sec:costmodel} ========== DualTable uses a cost model to choose the most efficient implementation plan, OVERWRITE or EDIT, for UPDATE and DELETE operations. The cost of a plan consists of two parts: 1. cost of reading and writing the Master Table 2. cost of reading and writing the Attached Table. To determine the best implementation, the cost model estimates the costs of both OVERWRITE and EDIT by computing the cost of data reading and writing separately. By subtracting one from the other, the best plan can be found. If the result is positive, it means that EDIT plan is cheaper and thus it will be chosen. Otherwise, the OVERWRITE plan will be used. To calculate the costs, we make the following assumptions: #### Notation 1 In a storage table $S$, the cost of reading or writing data of the amount $D$ is denoted as $\mathcal{C}_\text{Read}^{S}(D)$ and $\mathcal{C}_\text{Write}^{S}(D)$, where $S$ can be $M$ (Master Table) or $A$ (Attached Table). #### Assumption 1 We assume that the cost of reading and writing is directly proportional to the data volume read/written. This is denoted as $\mathcal{C}_\text{Read}^{S}(\lambda D) \approx \lambda\mathcal{C}_\text{Read}^{S}(D)$, where $\lambda \in (0,1)$. The same holds for $\mathcal{C}_\text{Write}^{S}(D)$. #### Notation 2 The total cost of a plan $P$ is denoted as $\text{Cost}_P$, where $P$ can be OVERWRITE or EDIT. #### Assumption 2 $\text{Cost}_P$ equals “modification cost” plus “following read cost”, where “modification cost” indicates the total cost to execute UPDATE or DELETE using plan $P$; “following read cost” indicates the cost to read the whole table for $k$ times after UPDATE or DELETE completes. Given a DualTable $T$ containing data of size $D$, suppose we execute one modification on $T$ and then read the table $k$ times, the corresponding cost models for UPDATE and DELETE are illustrated as follows. 
#### UPDATE Cost Model

Suppose the ratio of updated data is denoted as $\alpha$, $\alpha \in (0,1)$. Following Assumption 2, the cost of each plan consists of the modification cost and the cost of the $k$ subsequent reads. Let $\text{Cost}_U$ be the cost of the OVERWRITE plan minus the cost of the EDIT plan:
$$\begin{aligned} \text{Cost}_U &=& \text{Cost}_\text{OVERWRITE} - \text{Cost}_\text{EDIT} \nonumber \\
&=& \mathcal{C}_\text{Write}^{M}(D) + k\mathcal{C}_\text{Read}^{M}(D) - \mathcal{C}_\text{Write}^{A}(\alpha D) \nonumber \\
&-& k(\mathcal{C}_\text{Read}^{A}(\alpha D) +\mathcal{C}_\text{Read}^{M}(D)) \nonumber \\
&=& \mathcal{C}_\text{Write}^{M}(D) + k\mathcal{C}_\text{Read}^{M}(D) - \alpha\mathcal{C}_\text{Write}^{A}(D) \nonumber \\
&-& k\alpha\mathcal{C}_\text{Read}^{A}(D) - k\mathcal{C}_\text{Read}^{M}(D) \nonumber \\
&=& \mathcal{C}_\text{Write}^{M}(D) - \alpha(\mathcal{C}_\text{Write}^{A}(D) + k\mathcal{C}_\text{Read}^{A}(D)) \label{eq:costu}\end{aligned}$$
The update ratio $\alpha$ can be estimated using historical analysis of the execution log or can directly be given by the designer. The number of successive read operations after an update, $k$, can directly be set by the designer, or inferred from the HiveQL code. Using the model, it is clear that when $\alpha$ and $k$ are small, $\text{Cost}_U$ can be positive. This means that the EDIT plan is more efficient when the update ratio and the number of consecutive reads are small. On the other hand, when the update ratio and the number of consecutive reads become too large, the OVERWRITE plan is a better choice.

As an example, suppose we use HDFS for hosting the Master Table $M$ and HBase for the Attached Table $A$, the data volume is $D$ = 100GB, and the update ratio is $\alpha=0.01$. The aggregate rate of HDFS writes using multiple Map tasks is 1GB/s, and the aggregate HBase write and read rates are 0.8GB/s and 0.5GB/s, respectively. Suppose we read the table 30 times after the update operation; the cost model can then be computed as follows:
$$\begin{aligned} \text{Cost}_U &=& \text{Cost}_\text{OVERWRITE} - \text{Cost}_\text{EDIT} \nonumber \\
&=& \mathcal{C}_\text{Write}^{M}(D) - \alpha(\mathcal{C}_\text{Write}^{A}(D) + k\mathcal{C}_\text{Read}^{A}(D)) \nonumber \\
&=& 100GB / 1GBps - 0.01 \cdot (100GB/ 0.8 GBps \nonumber \\
&+& 30 \cdot 100GB/0.5GBps) \nonumber \\
&=& 38.75s \nonumber \label{eq:costuexample}\end{aligned}$$
In this example, the EDIT plan takes less time than the OVERWRITE plan, so EDIT will be chosen.

#### DELETE Cost Model

Suppose the ratio of records being deleted is $\beta$, $\beta \in (0,1)$. Suppose the average data size of each row is $d$ and the size of a DELETE marker is $m$; then the total size of the deleted rows is $\beta D$, and the corresponding DELETE markers written to the Attached Table occupy $\frac{\beta D}{d}m$. As before, each plan's cost consists of the modification cost and the cost of the $k$ subsequent reads. Let $\text{Cost}_D$ be the cost of the OVERWRITE plan minus the cost of the EDIT plan.
It can be computed as follows:
$$\begin{aligned} \text{Cost}_D &=& \text{Cost}_\text{OVERWRITE} - \text{Cost}_\text{EDIT} \nonumber \\
&=& \mathcal{C}_\text{Write}^{M}((1-\beta)D) + k\mathcal{C}_\text{Read}^{M}((1-\beta)D) \nonumber \\
&-& \mathcal{C}_\text{Write}^{A}(\frac{\beta Dm}{d}) - k(\mathcal{C}_\text{Read}^{A}(\frac{\beta Dm}{d}) + \mathcal{C}_\text{Read}^{M}(D)) \nonumber \\
&=& (1-\beta)\mathcal{C}_\text{Write}^{M}(D) + k(1-\beta)\mathcal{C}_\text{Read}^{M}(D) - \frac{\beta m}{d}\mathcal{C}_\text{Write}^{A}(D) \nonumber \\
&-& \frac{k\beta m}{d}\mathcal{C}_\text{Read}^{A}(D) - k\mathcal{C}_\text{Read}^{M}(D) \nonumber \\
&=& \mathcal{C}_\text{Write}^{M}(D) - \beta(\mathcal{C}_\text{Write}^{M}(D) + k\mathcal{C}_\text{Read}^{M}(D) \nonumber \\
&+& \frac{m}{d}\mathcal{C}_\text{Write}^{A}(D)+k\frac{m}{d}\mathcal{C}_\text{Read}^{A}(D)) \label{eq:costd}\end{aligned}$$
where $m$ is a constant value, which can be determined via data sampling. Estimation of $\beta$ is similar to that of $\alpha$ in the UPDATE cost model. $\mathcal{C}_\text{Write}^{M}(D)$, $\mathcal{C}_\text{Write}^{A}(D)$, and $\mathcal{C}_\text{Read}^{A}(D)$ can be computed in the same way as in the UPDATE cost model. Using the model, it is obvious that when $\beta$ and $k$ are small, $\text{Cost}_D$ is positive. This means that the EDIT plan is more efficient when the delete ratio and the number of consecutive reads are small. On the other hand, when the delete ratio and the number of consecutive reads become larger, the OVERWRITE plan becomes more efficient.

Implementation Details {#sec:impl}
======================

![DualTable Implementation on Hive[]{data-label="fig:dtimpl"}](./images/Implementation.pdf){width="0.8\columnwidth"}

We have implemented DualTable with Apache HBase, HDFS and Hive. In this section, we discuss technical details about our extensions to Hive, the data layout and the record ID management.

Extensions to Hive
------------------

Hive provides multiple abstractions that enable extensions: the InputFormat and OutputFormat classes are used in a MapReduce job to read and write data rows; Serializer and Deserializer classes are used to parse records from data rows; and *user defined table functions* (UDTFs) can add new data manipulation functionality to statements. As shown in Figure \[fig:dtimpl\], we use HDFS for the Master Tables and HBase for the Attached Tables and a system-wide metadata table. Each DualTable contains an HDFS-based *Master Table* and an HBase-based *Attached Table*. We developed custom InputFormat, OutputFormat, Serializer, and Deserializer classes with UNION READ and record ID management logic for DualTable. Additionally, two UDTFs implement the EDIT Plans for UPDATE and DELETE. The UPDATE UDTF takes the name of the updated table, the updated columns and the new values as input and stores the update information in HBase. The DELETE UDTF only takes the name of the table and puts a DELETE marker for each deleted row in HBase. We have added UPDATE and DELETE commands to HiveQL. If a HiveQL statement contains an UPDATE or DELETE command, it will be sent to the DualTable parser; otherwise, it will go through the original Hive parsing procedure. For these two DML commands, the parser will choose to generate either a Hive-compatible statement using INSERT OVERWRITE or a statement using our UDTFs, based on the cost evaluator. The former corresponds to an OVERWRITE plan, the latter to an EDIT plan. The cost evaluator is in charge of cost evaluation based on our cost model as described above.
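As a concrete reading of Eq. (\[eq:costu\]) and Eq. (\[eq:costd\]), the following Java sketch shows how a cost evaluator of this kind could choose between the two plans. It is our own illustration under the paper's throughput-proportional cost assumption; the class, parameter names and rate values are hypothetical and not taken from the DualTable sources.

    /** Illustrative cost evaluator implementing Eq. (costu) and Eq. (costd).
     *  Rates are in bytes per second; parameter names are hypothetical. */
    class CostEvaluator {
        double masterWriteRate;   // HDFS write throughput
        double masterReadRate;    // HDFS read throughput
        double attachedWriteRate; // HBase write throughput
        double attachedReadRate;  // HBase read throughput

        CostEvaluator(double mw, double mr, double aw, double ar) {
            masterWriteRate = mw; masterReadRate = mr;
            attachedWriteRate = aw; attachedReadRate = ar;
        }

        /** Cost_U = C_Write^M(D) - alpha * (C_Write^A(D) + k * C_Read^A(D)).
         *  Positive result means the EDIT plan is cheaper. */
        boolean preferEditForUpdate(double d, double alpha, int k) {
            double costU = d / masterWriteRate
                         - alpha * (d / attachedWriteRate + k * d / attachedReadRate);
            return costU > 0;
        }

        /** Cost_D = C_Write^M(D) - beta * (C_Write^M(D) + k*C_Read^M(D)
         *           + (m/d)*C_Write^A(D) + k*(m/d)*C_Read^A(D)). */
        boolean preferEditForDelete(double dataSize, double beta, int k,
                                    double markerSize, double rowSize) {
            double md = markerSize / rowSize;
            double costD = dataSize / masterWriteRate
                         - beta * (dataSize / masterWriteRate
                                 + k * dataSize / masterReadRate
                                 + md * dataSize / attachedWriteRate
                                 + k * md * dataSize / attachedReadRate);
            return costD > 0;
        }

        public static void main(String[] args) {
            // Numbers from the paper's example: D = 100 GB, alpha = 0.01, k = 30.
            // The HDFS read rate is a hypothetical value; it cancels out in Cost_U.
            double GB = 1e9;
            CostEvaluator c = new CostEvaluator(1 * GB, 1 * GB, 0.8 * GB, 0.5 * GB);
            System.out.println(c.preferEditForUpdate(100 * GB, 0.01, 30)); // true -> EDIT
        }
    }

Called with the numbers from the example above ($D$ = 100GB, $\alpha$ = 0.01, $k$ = 30), the evaluator prefers the EDIT plan, matching the 38.75s advantage computed in Eq. (\[eq:costuexample\]).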
The DualTable metadata manager collects and manages information required for cost evaluation. Data Layout and Record ID Management ------------------------------------ We use the ORC file format in HDFS for the Master Table and one Master Table may consist of multiple ORC files in an HDFS folder. Besides the ORC file format’s handy features like compression and Hive type support, we chose it for two important reasons: 1. We maintain an incremental integer file ID for each DualTable in the system wide metadata table. Whenever a MapReduce mapper creates a new file, it retrieves and stores a unique ID in the file metadata. 2. We can retrieve row numbers when reading data rows. The row numbers are computed during reading operations and have no storage cost, which makes it a perfect way to maintain unique IDs for each DualTable row record. A DualTable record ID is generated on read by concatenating the file ID and record’s row number, which makes the record ID unique in one DualTable. In the HBase-backed Attached Table, we use DualTable record IDs as HBase row keys. For UPDATE information, the updated field’s column number (as maintained by Hive) serves as HBase column qualifier and the new field value as HBase cell value. For DELETE information, only a delete marker (a special HBase cell) is stored in the deleted record’s ID row. With the data layout and record ID generation policy above, sequential record IDs within an ORC file are in ascending order. Meanwhile, record IDs stored as row keys in HBase are already sorted. This makes it simple and straightforward for a Mapper to merge data in the Master Table and the Attached Table for UNION READ operations because it only needs to read through and merge two sorted ID lists. Comparison to Hive ACID Extensions {#sec:HiveACIDvsDualTable} ---------------------------------- Apache Hive also plans to support data modification operations. They published a design document in 2013, but the up-to-date version Hive-0.13, which is released in April 2014, does not support data update or delete yet. The feature is still under development [@Hortonworks_trans]. Due to the fact that Hive-0.13 does not support UPDATE/DELETE statement, we could only compare the two systems from conceptual perspective. DualTable puts the data modification information into a HBase table, which is called Attached Table; The original data is saved into Master Table; Each Master Table has only one Attached Table. The read operation accesses both Master Table and Attached Table to get the original data and its modification information, then combines them to get the up-to-date data view; For write operation (UPDATE or DELETE), DualTable could either overwrite the whole Master Table or just update the Attached Table, and it makes use of a cost-model to make decision. When the size of Attached Table exceeds a threshold, DualTable merges it with its Master Table. Hive puts both the original data and modification information into the same Hive database [@Leverenz2014]. They are called base table and delta tables, respectively. Each transaction creates a new delta table for a base table. Therefore, a base table could have multiple delta tables. The read operation retrieves a record from base table and merge sorts it with corresponding records in delta tables to get the up-to-date data view. The write operation puts the whole updated record into delta tables, even if only one cell is changed. 
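Returning to DualTable's own layout described above (ascending record IDs within each ORC file, and record IDs reused as sorted HBase row keys), the following Java sketch illustrates one possible ID encoding and the two-sorted-list merge that a mapper can perform during a UNION READ. The bit layout, the names and the row-number limit are our own assumptions for illustration, not the actual DualTable code.

    import java.util.ArrayList;
    import java.util.Iterator;
    import java.util.List;

    /** Illustrative sketch of a record-ID scheme and the sorted merge used by
     *  UNION READ; the encoding is an assumption, not the exact implementation. */
    class RecordIdMerge {

        /** Pack file ID and row number into one sortable 64-bit record ID.
         *  Assumes rowNumber < 2^40, i.e. at most ~10^12 rows per file. */
        static long recordId(int fileId, long rowNumber) {
            return ((long) fileId << 40) | rowNumber;
        }

        /** Merge a sorted stream of Master Table record IDs with a sorted list
         *  of record IDs present in the Attached Table (two-pointer merge).
         *  Returns the IDs whose rows must be patched or dropped by the reader. */
        static List<Long> idsNeedingLookup(Iterator<Long> masterIds, List<Long> attachedIds) {
            List<Long> hits = new ArrayList<>();
            int j = 0;
            while (masterIds.hasNext() && j < attachedIds.size()) {
                long m = masterIds.next();
                while (j < attachedIds.size() && attachedIds.get(j) < m) j++;
                if (j < attachedIds.size() && attachedIds.get(j) == m) {
                    hits.add(m);   // this row has update/delete info in HBase
                    j++;
                }
            }
            return hits;
        }
    }

Rows whose IDs appear in the returned list are then patched or dropped using their Attached Table entries, as in the earlier merged-view sketch; all other rows stream through unchanged.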
Hive supports two compact modes, minor compact merges all delta tables belonging to the same base table into a single delta table; and major compact merges the delta tables with their corresponding base table. We compare DualTable and Hive from three aspects: First, their objectives are different. Hive aims to support transaction and full ACID guarantee [@Leverenz2014][@Hortonworks_ACID]. DualTable focuses on optimization of data update performance for our smart grid industrial scenarios. Secondly, their storage policies are different. DualTable employs hybrid storage architecture to make full advantage of both HDFS and HBase. In this way, DualTable could improve random write performance significantly without obvious negative impact on sequential read. While Hive puts both the original data and modified information into HDFS. For data read operation, Hive merge sorts the base table with all relevant delta tables to get the up-to-date view. Since delta table is stored as plain Hive tables and updated records are all appended to the tables, the reader has to scan them sequentially and selects latest updated values for particular record. On the contrary, DualTable retrieves a row from master table, then randomly accesses HBase based Attached table to get changed record and its latest value according to the row ID. They are combined in the UnionRead operation. In addition, DualTable can make use of HBase’s multiple-version feature to track data change history. Third, DualTable supports runtime selection of update policy. Our experiments find that overwriting the whole table with INSERT OVERWRITE statement sometimes performs better when update ratio exceeds a threshold. Therefore, DualTable incorporates a cost model to decide whether to put data modification information into the Attached Table or overwrite the whole table. However, Hive always updates the delta tables. It could not make better decisions at runtime. Evaluation {#sec:eval} ========== In following sections, we compare DualTable with Hive in terms of query performance and performance of update and delete operations by experiments. We conduct two sets of experiments. The first set of experiments uses a dataset from the Zhejiang Grid and runs on a cluster of 26 nodes; In order to further assess the generic applicability of DualTable, we perform the second set of experiments with TPC-H dataset on a 10-node cluster. Each node is equipped with 8 cores, 16 GB memory, and 250 GB hard disk. All nodes run CentOS 6.2, Java 1.6.0-41, Hadoop-1.2.1 and HBase-0.94.13. We implement DualTable based on Hive-0.11. Since DualTable is implemented based on ORC file format, we set Hive to use the same file format for fair comparison. JobTracker, Namenode and HMaster run on the same node. TaskTracker, Datanode, and RegionServer run on other nodes. Every worker in Hadoop is configured with up to 6 mappers and 2 reducers. HDFS is configured with 3 replicas and 64 MB chunk size. We run all experiments three times and report the average result. The metric used in our experiment is run time. Evaluation of Real Grid Workloads --------------------------------- We carried out our performance evaluation with production data collected from electricity information collection system deployed in the Zhejiang Grid of the China State Grid, which is the largest electric utilities company in the country. In order to make the experiment easy to conduct and the run time controllable, the total data set we use is around 64 GB. 
To avoid use of memory cache, we reset the system every time when we finish one experiment. Since the IO cost of Hive will increase nearly linearly with the growth of data size, it is obvious that the performance trend using this workload is typical and will reflect the trend in bigger or smaller workloads. The six tables involved are listed in Table \[tab:GridSchema\]. We also list some representative columns involved in our experiments. We have also tested HBase-based Hive, which can also support update and delete operations rather than Hive’s default INSERT OVERWRITE operation. The TPC-H workload running on a 10-node setting shows that HBase-based Hive is much slower than Hive itself and DualTable, respectively. This is the reason why we do not consider HBase-based Hive as a comparison target system in this section. [l|r|X]{} Table & \# Records & Columns in Experiments\ yh\_gbjld & 7112576 & dwdm: organization code; gddy: voltage; hh: family id; sfyzx: withdrawn or not\ zd\_gbcld & 7963648 & cldjh: measure point id; zdjh: terminal code; dwdm: organization code;\ zc\_zdzc & 74104736 & dwdm: organization code; zdjh: terminal code; zzcjbm: manufacture code; cjfs: collection method; zdlx: terminal type;\ rw\_gbrw & 34045664 & xfsj: issued time; rwsx: task property; cldh: measure point id;\ tj\_gbsjwzl\_mx & 239032928 & yhlx: user type; rq: date; dwdm: organization code; cjbm: manufacture code;\ tj\_dzdyh & 9805312 & zdjh: terminal code; [**Performance Overhead of Queries:**]{} In the first experiment, we assess read performance of DualTable and Hive using two typical SELECT statements of State Grid business logic. The first statement retrieves records from table yh\_gbjld according to some predicates, in which yh\_gbjld joins with table zc\_zdzc and table zd\_gbcld. The second statement calculates total number of records in table tj\_gbsjwzl\_mx. The Attached Table of DualTable is empty in this experiment. Both Hive and DualTable scan the whole table to filter records. Since the Attached table is empty, DualTable does not need to merge the original record from Master Table with data modification information. Figure \[fig:Select1\] shows the results. For statement \#1, Hive takes 111 minutes and DualTable takes 120 minutes. The performance difference is about 8%, which is attributed to the overhead incurred by the Attached Table (although it does not contain any data, the function invocation is inevitable). For Statement \#2, Hive takes 89 seconds and DualTable takes 101 seconds. Hive outperforms DualTable about 12%, which again is attributed to overhead of the Attached Table. This experiment shows that the overhead of the Attached Table is fairly low. ![image](./images/fig4_5.pdf){width="\columnwidth"} ![image](./images/query3_update.pdf){width="\columnwidth"} ![image](./images/query3_delete.pdf){width="\columnwidth"} ![image](./images/exp4_update_then_select.pdf){width="\columnwidth"} ![image](./images/exp4_update_plus_select.pdf){width="\columnwidth"} ![image](./images/exp4_delete_then_select.pdf){width="\columnwidth"} [**Performance of Updates:**]{} This experiment demonstrates how Hive and DualTable perform when handling update and delete operation. It is very common in the business logic of State Grid to change or remove records of some specific dates. We mimic this behavior in this test. The tables involved contain roughly uniformly distributed data of 36 days, and the experiment starts by changing data of one day ($\frac{1}{36}$) until data of 18 days ($\frac{18}{36}$). 
In order to verify the effectiveness of the cost model, we first run DualTable with cost-model and, as a comparison, DualTable in EDIT mode, which means DualTable always writes data modification information into the Attached Table. Figure \[fig:Update1\] shows the performance of DualTable and Hive for an update operation. It can be seen that Hive’s execution time does not fluctuate much with variation of data modification ratio, since Hive always overwrites the whole table. For DualTable, the cost of writing update information into the Attached Table is proportional to the amount of updated data. When the update ratio is smaller than $\frac{6}{36}$, the cost model selects *EDIT* instead of *OVERWRITING*, so *DualTable EDIT* overlaps with *DualTable cost-model*; the cost of writing into Attached table is less than overwriting the whole data, which makes DualTable perform significantly better than Hive. When data update ratio increases, the execution time of DualTable EDIT mode grows drastically. When data update ratio exceeds $\frac{6}{36}$, DualTable switches to *OVERWRITE* mode, and DualTable takes a little longer than Hive to run the UPDATE statement due to its own overhead. Figure \[fig:Delete1\] depicts a performance comparison of delete operations on Hive and DualTable with various data deletion ratios. Hive’s *overwrite the whole table* approach results in reduction of data written into HDFS when data deletion ratio increases, so its run time is inversely proportional to the delete ratio. Hive’s run time drops from 772 seconds to 572 seconds when the ratio raises from $\frac{1}{36}$ to $\frac{18}{36}$. In the other hand, *DualTable EDIT* puts a *DELETE marker* for each removed row into the Attached Table. As a result, its run time increases with the data deletion ratio. When $\frac{1}{36}$ of the data is deleted, DualTable outperforms Hive by a factor of 3. With a delete ratio smaller than $\frac{10}{36}$, the cost model selects *EDIT* instead of *OVERWRITE*. Therefore, *DualTable EDIT* overlaps with *DualTable cost-model*. After that, the cost of writing DELETE markers into HBase exceeds the overhead of overwriting the whole table, and DualTable starts to adopt the overwriting approach to accomplish data deletion. There is a small overhead to run the DELETE statement. [**Impact of Size of Attached Table:**]{} The previous experiment evaluates update performance of DualTable and Hive with various data modification ratios. We analyzed the State Grid workload and found that changed tables will be retrieved in subsequent operations to get the latest values. To reflect this in the experiments, we issue a SELECT query after UPDATE and DELETE operations like we did in last experiment, to show how the size of Attached table impacts performance of following UnionRead operations. Figure \[fig:Update2\] shows the run time of a SELECT query following the UPDATE operation used in the previous experiment. Hive performance does not fluctuate much with the UPDATE ratio, since the UPDATE operation does not change data amount in the related Hive tables. In this experiment, DualTable is always slower than Hive. The performance difference is very small when only one specific day’s data is updated; however, DualTable takes more time for UnionRead operation with raising UPDATE ratio, and it is 2.7 times slower than Hive when the UPDATE ratio grows to 18/36. 
This is because DualTable EDIT mode puts all UPDATE information into the Attached Table, and the following UnionRead operation needs to first read the original record from the Master Table, then merge with the corresponding record in the Attached Table to get the latest value. Figure \[fig:UpdateSelectTotalTime\] demonstrates the total time taken by the UPDATE operation and the following SELECT query. The trend shown in this figure and its explanation is similar to Figure \[fig:Update1\] and, therefore, we do not repeat it here for space limitation. Figure \[fig:Delete2\] and Figure \[fig:DeleteSelectTotalTime\] depict the run time of a SELECT query following the DELETE operation used in the previous experiment. These results are similar to the last one we just explained, therefore, we do not repeat it here for space limitation. [l|r|X]{} Table & \# Records & Columns in Experiments\ tj\_tdjl & 58494976 & tdsj: outage time; qym: area code; zdjh: terminal code;\ tj\_td & 33036288 & hfsj: recovery time; tdsj: outage time;\ tj\_sjwzl\_r &73569360 & rq: date; rcjl: sampling rate of a day; yhlx: user type;\ tj\_dysjwzl\_mx & 382890014 & rq: date; sfld: miss a point or not; cjfs: collection method;\ tj\_sjwzl\_y & 2586120 & rq: date\ tj\_gk & 30655920 & rq: date; dwdm: organization code;\ [l|X|r|r|r|r]{} Stmt & Semantics & Update Ratio & Hive (sec) & DualTable (sec) & Improvement\ U \#1 & Set the area code in which an outage event happens at some specified time to a new value. & 2% & 159.81 & 51.39 & 311%\ U \#2 & When the outage recovery time is earlier than the start time, set the outage recovery time to a value which indicates an error. & 5% & 104.90 & 60.81 & 173%\ U \#3 & set the sampling rate of a day to a new value for a specified date and specified user type. & 0.1% & 389.19 & 47.52 & 819%\ U \#4 & Set the collection method of a specified day and specified user type to a new value. & 3% & 1577.87 & 161.73 & 976%\ D \#1 & Delete records from table tj\_sjwzl\_y for a specified month. & 4% & 46.26 & 22.47 & 206%\ D \#2 & Delete records from table tj\_tdjl for a specified area code. & 5% & 102.04 & 47.26 & 216%\ D \#3 & Delete records from table tj\_gk for a specified organization code and a marker. & 3% & 147.87 & 34.97 & 423%\ D \#4 & Delete records from table tj\_tdjl for a specified terminal code and outage time. & 0.01% & 140.94 & 29.47 & 478% [**More Experiments:**]{} As mentioned above, the data modification ratio is rarely higher than 10% in the data analysis system of the State Grid. In order to further verify the effectiveness of DualTable regarding State Grid workload, we extracted four representative UPDATE statements and DELETE statements from line loss and low voltage calculation modules. The six tables involved are listed in Table \[tab:GridSchema2\]. Their total size is 70 GB. We also list some representative columns involved in the experiments. Their data modification ratio ranges from 0.01% to 5%. We compare DualTable and Hive in terms of query run time, and calculate the performance improvement of DualTable in Table \[tab:realResults\](U is abbreviation of UPDATE, D is abbreviation of DELETE in the table). We can see that DualTable outperforms Hive an order of magnitude for all the 8 operations thanks to its cost model and the Attached Table storage model. 
TPC-H Workloads Evaluation -------------------------- Besides the above performance evaluation conducted with real production data, we further assessed the generic applicability of DualTable using the standard TPC-H queries and data. We conduct a number of experiments to measure the read and update performance of DualTable. When we use update or delete in HBase-based Hive, we implement the EDIT plan similar to DualTable using user defined functions instead of relying on the INSERT OVERWRITE statement. The tables lineitem and order of TPC-H were used, these are the two largest tables of the TPC-H data set. In the TPC-H 30GB data set that was used, they have 0.18 billion rows (i.e., 23GB) and 45 million rows (i.e., 5GB) respectively. We modify TPC-H queries to add update and delete operations. ![image](./images/exp4_delete_plus_select.pdf){width="\columnwidth"} ![image](./images/fig3.pdf){width="\columnwidth"} ![image](./images/fig4.pdf){width="\columnwidth"} ![image](./images/fig5.pdf){width="\columnwidth"} ![image](./images/fig6.pdf){width="\columnwidth"} ![image](./images/fig7.pdf){width="\columnwidth"} ![image](./images/fig8.pdf){width="\columnwidth"} ![image](./images/fig9.pdf){width="\columnwidth"} ![image](./images/fig10.pdf){width="\columnwidth"} In the first experiment, we use 3 different queries to estimate the read efficiency of DualTable. Query a is TPC-H query Q1, Query b is TPC-H Q12, and Query c is a count on the whole lineitem table. The Attached Table is empty in this experiment. Thus, we measure DualTable’s basic overhead, which is negligible as can be seen in Figure \[fig:Chart1\]. In the second experiment, we run 3 typical update statements. DML-a updates 5% of lineitem, DML-b deletes 2% of lineitem, and DML-c joins lineitem and order and updates 16% of order. At the beginning of the experiment, the Attached Table is empty. The performance results can be seen in Figure \[fig:Chart2\]. As can be seen in the figure, DualTable is most efficient for all updates, since it avoids unnecessary writes that Hive on HDFS would have to perform, but features faster reads than HBase. To assess the cost of DualTable’s performance for different ratios of deletes and updates, we perform an additional experiment. Starting with an empty Attached Table, we execute updates, which randomly update one field in 1% to 50% of the records in lineitem. In Figure \[fig:Chart3\], the performance for Hive, DualTable in EDIT mode, and DualTable with the cost model can be seen. As expected, the performance of updates in Hive is constant for all update ratios, while it increases with the amount of data changed in both versions of DualTable. The cross-over point is reached at an update ratio of 35%, when overwriting becomes cheaper than storing delta records in the Attached Table. The cost model based DualTable changes to the OVERWRITE plan when the Attached Table becomes too costly and thus has a similar performance to Hive from that point while the pure EDIT plan version gets more expensive. In Figure \[fig:Chart4\], the same experiment is repeated for deletes. Unlike in the update case, the workload for Hive becomes less with increasing delete ratio, since less data has to be written. Therefore, the cross-over point is reached at a lower delete ratio. The delete cost model again finds the correct ratio to switch plans. In Figure \[fig:Chart5\], the overhead of reading data from the Attached Table is shown. In the experiment, we executed a full table scan after updating 1% to 50% of the lineitem table. 
While the read performance of Hive is unaffected by the updates, since data is always rewritten, the DualTable UNION READ operation incurs additional load to read data from both HDFS and HBase and merge it. The overhead in the update case is linear to the amount of data in the Attached Table. In this experiment, no cost model was used. In Figure \[fig:Chart6\], the total cost of the update operation and an additional read are shown. This is the most realistic case, where updates are performed and then the updated data set is analyzed. The results are similar to the pure update experiments, with the difference that the cross over point is slightly below 35% update ratio, which is due to the additional overhead incurred for merging the data from the Master Table and the Attached Table in the read query. The more often the data is read the lower the cross over point will be, which underlines the importance of the cost model to ensure the best possible plan. We repeated this experiment for the delete operation. The results can be seen in Figure \[fig:Chart7\] and Figure \[fig:Chart8\]. The results confirm the results from previous experiments. Entries in the Attached Table incur an overhead for read operations, which is more pronounced for high delete ratios since in Hive less data has to be read for the query part, while DualTable keeps the original records and adds delete markers. Nevertheless, for delete ratios below 30% DualTable is always more efficient than Hive. The cost model always chooses the best plan. Related Work {#sec:related} ============ Hive provides HiveQL, a declarative query language, which exposes an SQL-like interface for Hadoop [@Thusoo2009]. Internally, Hive first translates HiveQL into a directed acyclic graph (DAG) of MapReduce jobs and then executes the jobs in a MapReduce environment. From this point of view, there are three aspects or levels of optimization goals in Hive: optimization of the query plan, especially when general SQL needs to be run in this environment; optimization of the execution system, mostly including optimization of MapReduce and development of compatible systems; and I/O optimization, which may include optimized data placement, index creation, etc. Even though work in one aspect may also involve contributions to some other aspects, related work can be categorized into these three classes. Query Plan Optimization ----------------------- Hive itself only supports some basic rule-based optimization such as predicate push down and multiple join strategies including MAP-join and Sort-Merge-Bucket join. YSmart can detect correlated operations within a complex query, and use a rule-based approach to simplify the whole query structure to generate a MapReduce plan with minimal tasks [@Lee2011]. YSmart has been merged into the official Hive version[^1]. Sai Wu proposed a Hive optimizer called AQUA [@Wu2011], which can categorize join operations in one query into several groups and choose the optimal execution plan of the groups based on a predefined cost model. Xiaofei Zhang presented an approach to optimize multiple path join operations in order to improve the overall parallelization [@Zhang2012]. Harold Lim presented a MapReduce workflow optimizer called Stubby, which uses a series of transformation rules to generate a set of query plans and find the best one [@Lim2012]. All of them attempt to solve the problem of translating SQL to MapReduce and reorganizing the MapReduce DAG to yield better performance, focusing on optimization at MapReduce level. 
Furthermore, QMapper considers variations of SQL queries and their influences on query performance [@Xu:2013]. QMapper uses a query rewrite-based approach to guide the translation procedure from SQL to a variation of Hive queries and selects the best plan based on a modified cost model. These works involve SQL-MapReduce or SQL-HiveQL-MapReduce translation, using techniques like query graph analysis, query rewriting, and optimization of the DAG structure. Their approach focuses on the MapReduce flow or higher layers and none of them considers data manipulation within one MapReduce task. All of them choose to use Hive-friendly storage, like HDFS, by default. Neither UPDATE nor DELETE operations are discussed. Execution Environment Optimization ---------------------------------- To improve the performance or features of Hive, many HiveQL compatible systems have been developed, like Shark [@Engle2012] based on Spark [@Zaharia2012], Cloudera Impala [@Impala], and others. Technologies for in-memory processing, more efficient data reading and writing, and partial DAG execution are utilized to enhance the whole system or just particular kinds of applications like recursive data mining and ad-hoc queries. Besides, by designing and analyzing MapReduce cost models, a large body of research has been done to enable execution level optimization of MapReduce. Starfish, as an example, focuses on automatic MapReduce job parameter configuration [@Herodotou2011]. It makes use of a profiler to collect detailed statistics from MapReduce executions and utilizes a what-if engine to stimulate the execution and estimate the cost. An optimizer is utilized to minimize the cost of finding a good configuration in a search space with combinatorial explosion. The aforementioned Stubby also uses the what-if engine here to estimate cost for a MapReduce workflow [@Lim2012]. MRShare aims at task sharing among queries that contain similar subtasks [@Nykiel2010]. Optimal grouping of relevant queries based on the MapReduce cost model minimizes redundant processing cost and improves the overall efficiency. From the perspective of data manipulation, these works are similar to those of query plan optimization. They optimize MapReduce tasks and plans, either through intelligent configuration of environment settings or just by improving sharing among MapReduce tasks. Data manipulation operations are out of their scope. I/O Optimization ---------------- Optimized data placement is a common way to reduce data loading and reading cost. The RCFile splits a data file into a set of row groups, each group places data in a column-wise order [@He2011]. With the help of the RCFile, a Hive application can efficiently locate its inputs onto several data groups while avoiding reading redundant columns of necessary rows. RCFile and Hortonworks’ ORC (Optimized RCFile) are widely used in the Hive environment [@Leverenz2013]. Different from RCFile, LLama divides data columns into groups, and provides another kind of data format, CFile, to store them [@Lin:2011]. An index mechanism is used for efficient data look up. It is shown that data loading and join performance can be improved. Driven by the requirement of Smart Grid data process, we have also proposed DGFIndex, a new multiple range index technology [@Liu:2014], which significantly improved the overall efficiency of multiple range query with a fair low cost of space occupation. Similar to RDBMS, creating indexes can also be of great value for I/O performance improvement. 
For now, Hive itself can support a compact index called CIndex [@HIVE-417]. CIndex can enable multi-dimensional queries at the cost of a large disk space for the index structure. Hadoop++ also provides an index-structured file format to reduce the I/O cost during data processing [@Dittrich:2010]. Data placement and index technologies try to minimize I/O to improve the query performance, but they do not improve update operation support. On the other hand, Hive indexes will result in additional cost of reconstructing index structures for applications with update operations implemented with INSERT OVERWRITE statement. Conclusion {#sec:conclusion} ========== In this paper, we have presented DualTable, a novel storage model for Hive. DualTable stores data selectively in HDFS or in HBase. While new records are always stored in HDFS, updates are either directly executed on HDFS or stored as delta records in HBase. The storage location is dynamically chosen by a cost model. Our experiments with standard industry benchmarks and real data and workloads from the China State Grid show that DualTable outperforms Hive by orders of magnitude in realistic settings. In future work, we will evaluate other storage options for the Attached Table, and compare the performance of DualTable with that of Hive ACID once it is available. Furthermore, we will investigate how the proposed storage model can be incorporated in other big data analytic systems such as Impala. Additionally, we will investigate multi-query optimization in Hive, which we expect to yield significant performance improvements in enterprise use cases such smart grid data management. **Acknowledgements**: This work is supported by the National Natural Science Foundation of China under Grant No.61070027, 61020106002, 61161160566. [^1]: <https://issues.apache.org/jira/browse/HIVE-2206>
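As a rough illustration of the cost-model decision summarized above, the sketch below contrasts a full INSERT OVERWRITE-style rewrite on HDFS with writing delta records to the Attached Table that every later read must merge in. This is a minimal toy model, not DualTable's actual cost model: the per-row cost constants, the linear cost form, and the function name are all hypothetical, chosen only to reproduce the qualitative trend that the cross-over update ratio drops as the data is read more often.

```python
# Toy sketch (not DualTable's actual cost model): choose between rewriting the
# whole Master Table on HDFS (Hive-style INSERT OVERWRITE) and writing delta
# records to the Attached Table in HBase, which each later read must merge in.
# All per-row cost constants below are hypothetical tuning parameters.

def choose_plan(total_rows, update_ratio, expected_reads,
                hdfs_row_cost=1.0, hbase_write_cost=2.5, merge_row_cost=0.8):
    """Return the cheaper plan ('rewrite' or 'delta') under a linear toy cost model."""
    updated_rows = total_rows * update_ratio

    # Full rewrite: every row is written back to HDFS once; reads stay HDFS-only.
    rewrite_cost = (total_rows * hdfs_row_cost
                    + expected_reads * total_rows * hdfs_row_cost)

    # Delta plan: only changed rows go to HBase, but each subsequent read scans
    # HDFS *and* merges the Attached Table rows (the UNION READ overhead).
    delta_cost = (updated_rows * hbase_write_cost
                  + expected_reads * (total_rows * hdfs_row_cost
                                      + updated_rows * merge_row_cost))

    return ('delta', delta_cost) if delta_cost < rewrite_cost else ('rewrite', rewrite_cost)

# With these constants the cross-over sits near a 30% update ratio for a single
# read and moves to lower update ratios as the table is read more often.
for reads in (1, 5):
    for ratio in (0.1, 0.3, 0.5):
        plan, cost = choose_plan(1_000_000, ratio, reads)
        print(f"reads={reads}, update_ratio={ratio:.1f} -> {plan} (cost={cost:,.0f})")
```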
{ "pile_set_name": "ArXiv" }
--- abstract: 'We have utilized neutron powder diffraction to probe the crystal structure of layered Na$_{x}$CoO$_{2}$ near the half doping composition of $x=$0.46 over the temperature range of 2 to 600K. Our measurements show evidence of a dynamic transition in the motion of Na-ions at 300K which coincides with the onset of a near zero thermal expansion in the in-plane lattice constants. The effect of the Na-ordering on the CoO$_{2}$ layer is reflected in the octahedral distortion of the two crystallographically inequivalent Co-sites and is evident even at high temperatures. We find evidence of a weak charge separation into stripes of Co$^{+3.5+\epsilon}$ and Co$^{+3.5-\epsilon}$, $\epsilon\sim$0.06$e$ below =150K. We argue that changes in the Na(1)-O bond lengths observed at the magnetic transition at [$T_{m1}$]{}=88K reflect changes in the electronic state of the CoO$_{2}$ layer.' author: - 'D. N. Argyriou' - 'O. Prokhnenko' - 'K. Kiefer' - 'C. J. Milne' title: 'Emergent charge ordering in near half doped Na$_{0.46}$CoO$_{2}$ ' --- Introduction ============ The alkali cobaltates Na$_{x}$CoO$_{2}$ have been the subject of intense interest as they are a rare example of competing interactions on a triangular lattice that can be easily tuned by chemical means. Varying the amount of Na ($x$) produces a rich phase diagram which exhibits spin dependent thermopower ($x=$0.75)[@Wang:2003rj], metal-insulator transitions ($x=$0.5)[@Foo:2004cx; @Huang:2004cx], antiferromagnetism and 5K superconductivity at $x=$0.3 for a hydrated compound[@Takada] . More recently it has been realized both experimentally and theoretically that the role of the Na ions goes beyond providing a simple means to electronically dope the CoO$_{2}$ layer[@Roger:2007lr; @Zhou:2007fk; @Marianetti:2007ly]. Rather, the ordering of Na-ions leads to a potential that perturbs the CoO$_{2}$ layer to produce strong electronic correlations[@Roger:2007lr]. The role of these correlations is still under investigation but it demonstrates that these materials can exhibit frustration in two different ways, one by the triangular topology of the CoO$_{2}$ layer and the other by the Na induced potential. This double frustration is best exhibited at half-doping. Here the Na ordering results in a relatively simple orthorhombic distortion of the parent hexagonal phase in sharp contrast to the complex incommensurate structures found for higher $x$ compounds[@Roger:2007lr]. For $x=$0.5 Na-ions order as to form stripes as shown in fig.\[strt\] while the magnetic susceptibility shows two abrupt decreases (see for example inset in fig.\[powder\]) at [$T_{m1}$]{}=88K and at [$T_{m2}$]{}=52K[@Huang:2004cx; @Foo:2004cx]. The first transition is associated with the onset of a long range antiferromagnetic ordering[@Huang:2004cx; @Foo:2004cx; @gasparovic] while the second transition coincides with a sharp rise in the resistivity[@Huang:2004cx; @Foo:2004cx]. This second transition has been ascribed to be driven by charge ordering (CO) of a $t_{2g}$ electron to form distinct LS Co$^{3+}$ ($t_{2g}^{6}, S=0$) and LS Co$^{4+}$ ($t_{2g}^{5}, S=1/2$) ions [@Foo:2004cx]. Recent $\mu$SR and neutron diffraction measurements[@gasparovic; @Mendels:2005cx; @Yokoi:2005cx] propose a magnetic structure consistent with this picture, as the magnetic lattice comprises of stripes of magnetically inactive Co$^{3+}$ and antiferromagnetic (AF) coupled Co$^{4+}$ (see fig. \[strt\](a)). ![(Color online) Crystal structure of the Na$_{0.46}$CoO$_{2}$. 
(a) a projection of the $ab-$plane showing the crystallographically inequivelant Co(1) (dark blue) and Co(2) (light blue) sites and the Na(1) (yellow) and Na(2) (orange) sites. The Na-atom reside alternatively above and below the CoO$_{2}$ sheet. The Na(1) atom is directly above or below the Co(1) atom. The red arrows indicate the idealized magnetic ordering of nominal low spin Co$^{4+}$ sites for the $x=$0.5 composition proposed on the basis of neutron and $\mu$SR measurements[@gasparovic]. (b) A portion of the CoO$_{2}$ sheet. Note that the Na(2) site resides above a triangle defined by the edges of CoO$_{6}$ octahedra. Selected Co-O bond lengths determined at 2K are shown.[]{data-label="strt"}](fig1.eps) What is striking in this cobaltate is that the sequence of charge ordering and Nèel transitions is reversed ([$T_{m2}$]{}$<$[$T_{m1}$]{}), compared to classic charge ordered systems such as the manganites[@Argyriou:2000lx] or magnetite[@wright] and has brought the charge ordering picture into some doubt. To reconcile the fact that [$T_{m2}$]{}$<$[$T_{m1}$]{}, Bobroff [*et al.*]{}[@bobroff] propose on the basis of NMR measurements a scenario of a successive nesting of the Fermi surface (FS) that is coupled to a spin density wave in a way in which charge carriers are localized successively with decreasing temperature. However, this idea has been more recently disputed as the role of the Na-ions and the crystal potential that they impose on the CoO$_{2}$ layer has been theoretically treated better. For example stripe Na ordering induces a weak incipient charge ordering on the CoO$_{2}$ layer[@Choy:2007ve; @Zhou:2007fk; @Marianetti:2007ly], however the mechanism of a progressive nesting of the FS with decreasing temperature arises directly from the Na-ordering as shown by a Hubbard model using the spatially unrestricted Gutzwiller approximation[@Zhou:2007fk]. Here the charge ordering is viewed to be driven by the ordering of Na above the antiferromagnetic ordering at [$T_{m1}$]{}=88K [@Choy:2007ve; @Zhou:2007fk]. Nevertheless this model successfully predicts the correct antiferromagnetic ordering[@Choy:2007ve] and suggests a charge separation into stripes of Co$^{+3.5+\epsilon}$ and Co$^{+3.5-\epsilon}$, with $\epsilon\sim$0.06$e$[@Zhou:2007fk]. This value represents a very weak charge ordering and is below the lower limit of detection for charge separation by NMR[@bobroff], but in agreement with powder diffraction measurements that suggest $\epsilon\sim$0.12$e$ at 10K[@williams]. In this paper we use neutron powder diffraction (NPD) over a wide temperature range (2-600K) to probe the crystal structure of layered Na$_{x}$CoO$_{2}$ near the half doping composition of $x=$0.46 over the temperature range of 2 to 600K. Our NPD measurements show evidence of a dynamic transition in the motion of Na-ions at 300K which coincides with the onset of a near zero thermal expansion in the in-plane lattice constants of our Na$_{0.46}$CoO$_{2}$ sample. The effect of the Na-ordering on the CoO$_{2}$ layer is reflected in the octahedral distortion of the two crystallographically inequivalent Co-sites and is evident even at high temperatures. We find evidence of a weak charge separation into stripes of Co$^{+3.5+\epsilon}$ and Co$^{+3.5-\epsilon}$, $\epsilon\sim$0.06$e$ below =150K, thus confirming a more physical sequence of charge ordering and magnetic transitions for this compound. 
We argue that changes in the Na(1)-O bond lengths observed at the magnetic transition at [$T_{m1}$]{}=88K reflect changes in the electronic state of the CoO$_{2}$ layer. The paper is structured in the following way. In section III we discuss the evidence for a weak charge ordering as determined from the temperature dependent NPD data. The dynamic behavior of Na-ions at high temperature and changes in Na-O bond lengths close to the magnetic transition [$T_{m1}$]{} are discussed in section IV, while the unusual temperature dependence of the lattice constants in section V. Discussion and summary are found in section VI and VII respectively. Experimental ============ ![Rietveld refinement of NPD data measured at 1.8K from our $x$=0.46 sample. Here crosses represent the measured NPD data, while the continuous line through the points represents the calculated diffraction pattern. The difference between the data and the model is shown at the bottom of the figure. Vertical bars represent expected Bragg reflections for the P$nmm$ structure of this compound. The weighted R-factor here is $wR_{p}$=4.22% and the Bragg R factor 4.3%. In the inset we show the magnetic susceptibility of our Na$_{0.46}$CoO$_{2}$ powder sample measured on a SQUID magnetometer under a field of 1kOe. Arrows indicate two anomalies that have been ascribed to [$T_{m1}$]{}=88K and [$T_{m2}$]{}=52K.[]{data-label="powder"}](fig2.eps) Polycrystalline samples were prepared using standard solid state synthesis techniques. The starting stoichiometry for these samples was Na$_{0.75}$CoO$_{2}$. In order to deintercalate Na from the lattice to achieve a $x\sim$1/2 composition, a 5g portion of the $x=$0.75 sample was immersed in a bromine-acetonitrile solution with a 1:1 Na to Br$_{2}$ ratio, stirred in solution for 7-14 days and washed. The Na/Co ratio of the product was measured using neutron activation analysis (NAA) giving a composition $x=$0.46(1). Magnetic susceptibility as a function of temperature ($\chi(T)$) was measured using a Quantum design MPMS and was found to be identical to the published literature as shown in the inset of fig. 2 [@Foo:2004cx; @Huang:2004cx]. Rapid measurements of high resolution neutron powder diffraction data were collected from the $x$=0.46(1) sample using the HRPD diffractometer ($\Delta d/d\sim5\times 10^{-4}$) at the ISIS-facility, Rutherford Appleton Laboaratory. Higher statistics data suitable for Rietveld refinement were measured between 2 to 600K using the high resolution powder diffractometer E9 ($\Delta d/d\sim2 \times 10^{-3}$, $\lambda$=1.7973Å), located at the Berlin Neutron Scattering Center, at the Hahn-Meitner-Institut (HMI). Supplementary temperature dependent data were also measured from the $x=$0.75 sample between 5-300K. All NPD data were analyzed using the Rietveld method which allowed us to measure lattice parameters, atomic positions and atomic displacement parameters as a function of temperature. A typical Rietveld refinement of the NPD data is shown in fig. \[powder\]. Evidence of Weak Charge Ordering ================================ ![(Color online) (a,b) Temperature dependence of Co-O bond lengths computed from Rietveld analysis of NPD data measured on the E9 diffractometer.(c) Bond valence sums for the Co(1) and Co(2) sites (filled circles) as a function of temperature computed from Co-O bond lengths determined from Rietveld analysis of the NPD data. The dashed line through the data is the average BVS for the two Co sites. BVS values obtained from ref. are also shown. 
On the same figure we plot the octahedral distortion parameter $\Delta$ for the CoO$_{6}$ octahedra centered in the Co(1) and Co(2) sites. Dashed lines are guides to the eye.[]{data-label="bonds"}](fig3.eps) The validity of the reported orthorhombic P$nmm$ structure for $x=\frac{1}{2}$[@Huang:2004cx; @williams] was tested by considering space groups that arise from distortions of the parent hexagonal structure P$6_{3}/mmc$ which are consistent with the reported orthorhombic unit cell. The program ISODISPLACE was used for this purpose[@cam]. This approach for the symmetry analysis resulted in space groups that were either centered or primitive monoclinic, in both cases incompatible with the diffraction data. Alternately, using as a starting point the reported space group P$nmm$, and testing for related space groups that are compatible with the diffraction data resulted in primitive non-centrosymmetric solutions such as P$mm2$ or P$mm2_{1}$. Rietvelds analysis on the basis of these space groups resulted in somewhat poorer fits than the reported structure. Our best modeling of the NPD data between 2-450K were obtained using the P$nmm$ model [@Huang:2004cx; @williams], producing refinements with $wR_{p}$ of approximately 4% (between 2- 450K). Contrary to the observation for higher $x$ compositions, we find no evidence of incommensurability in the NPD data[@Roger:2007lr]. The crystallography of the P$nmm$ structure has been published elsewhere[@Huang:2004cx; @williams], however for clarity we illustrate it in fig. \[strt\] and remind the reader that here there are two symmetry in-equivalent sites for Na (labeled Na(1) and Na(2)) and two for Co (labeled Co(1) and Co(2)). The local symmetry for the Co(1) and Co(2) sites differs in that the Na(1)-atom lies directly above or below the Co(1)-atom while the Na(2)-atom resides above a space formed between CoO$_{6}$ octahedra as illustrated in fig. \[strt\](b). The effect of the Na-ion potential correlates with the distortion of the CoO$_{6}$ octahedra. This distortion is shown in fig. \[strt\](b) where we illustrated the difference on the CoO$_{6}$-Na coordination and selected Co-O bond lengths. Here the Na-ion potential results in a distortion of the Co(1)O$_{6}$ octahedron, so at 1.8K the six equal Co-O bonds found in metallic $x=0.75$ distort to form three long bonds (1.90-1.93 Å) and three shorter bonds (1.84-1.88 Å). In sharp contrast the Co(2)O$_{6}$ octahedron is more regular with bond lengths varying between 1.88 to 1.91 Å. This indicates that the Co(1) ions experience essentially a different crystalline potential than the Co(2) ions, as a direct consequence of the Na ordering. In fig. \[bonds\](a,b) we show the temperature dependence of the Co-O bond lengths for the two Co sites. From these data it is evident that the larger distortion of the Co(1)O$_{6}$ octahedron is maintained from low until high temperature, while the spread of bond lengths for the Co(2)O$_{6}$ is smaller and relatively temperature invariant. These octahedral distortions can be quantified by the parameter $\Delta$, where $\Delta=\sqrt\frac{\Sigma(\langle Co-O\rangle-(Co-O)_{i})^{2}}{\langle Co-O\rangle^{2}}$, $\langle Co-O\rangle$ is the average Co-O bond length and the summation is done over the 6 Co-O bonds. Here $\Delta$ would be zero for a regular octahedron with 6 equivalent Co-O bonds. We find that the Co(1)O$_{6}$ octahedron is more distorted by a factor of 3 at high temperatures (see fig. \[bonds\](c)), compared to the octahedron centered on the Co(2) site. 
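To make these quantities concrete, the short Python sketch below evaluates the distortion parameter $\Delta$ exactly as defined above (i.e. without a $1/6$ prefactor) together with a Brown–Altermatt bond valence sum, BVS$=\sum_i \exp[(R_0-d_i)/b]$, using the $R_0=1.70$ and $b=0.37$ parameters quoted below. The six bond lengths are illustrative values picked from the 1.8K ranges given above, not the refined distances, so the printed numbers only indicate the relative size of the Co(1)/Co(2) distortion and valence difference rather than reproducing Fig. \[bonds\].

```python
import numpy as np

def octahedral_distortion(bonds):
    """Delta = sqrt( sum_i (<d> - d_i)^2 / <d>^2 ), summed over the six Co-O
    bond lengths d_i, following the definition given in the text."""
    d = np.asarray(bonds, dtype=float)
    return np.sqrt(np.sum((d.mean() - d) ** 2) / d.mean() ** 2)

def bond_valence_sum(bonds, R0=1.70, b=0.37):
    """Brown-Altermatt bond valence sum, BVS = sum_i exp[(R0 - d_i)/b] (Angstrom)."""
    d = np.asarray(bonds, dtype=float)
    return np.sum(np.exp((R0 - d) / b))

# Illustrative 1.8 K Co-O bond lengths (Angstrom): three long and three short
# bonds for Co(1), six more nearly equal bonds for Co(2), taken from the ranges
# quoted above rather than from the refinement itself.
co1_bonds = [1.93, 1.91, 1.90, 1.88, 1.86, 1.84]
co2_bonds = [1.91, 1.90, 1.90, 1.89, 1.89, 1.88]

for label, bonds in (("Co(1)", co1_bonds), ("Co(2)", co2_bonds)):
    print(f"{label}: Delta = {octahedral_distortion(bonds):.4f}, "
          f"BVS = {bond_valence_sum(bonds):.2f}")
```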
With decreasing temperature however this difference increases to a factor of 5 while the distortion saturates below $\sim$150K. As the data indicate on the same figure, the distortion of the Co(2) octahedron is much less sensitive to temperature. At first sight, comparison of these bond lengths at 1.8K to the ideal LS Co$^{3+}$ and Co$^{4+}$-O bond lengths of 1.93 and 1.83Årespectively would suggest that there is no evidence for integer charge separation between Co(1) and Co(2) sites. The analysis of the experimentally determined Co-O bond lengths using the bond valence sum (BVS) method allows us to estimate the difference in the valance of the two Co-ions. For the calculation of the BVS we used $b=$0.37 and $R_{o}=$1.70 [^1][@Brese:st0462]. The BVS as a function of temperature for the two Co sites is shown in fig. \[bonds\](c). For high temperatures we find that the BVS shows some scatter that reflects changes in the mobility and ordering of Na atoms between CoO$_{2}$ sheets (see below) and possibly to geometrical differences that arises from the Na-ion potential imposed on the CoO$_{2}$ sheet over the same temperature range.[^2] However, below $\sim$150K we find a small but measurable and consistent difference of $\epsilon\sim$0.06$e$ in the BVS for the two Co atoms. Although the difference is comparably smaller than what is found in conventional charge ordered systems, the separation of the data into two values (one low and one high) below 150K is statistically significant. These data would suggest that below =150K there is a separation of charge into Co$^{3.5+\epsilon}$ and Co$^{3.5-\epsilon}$ stripes running along the $b-$axis as shown in fig. \[strt\](a). This charge ordered structure is in agreement with the magnetic neutron diffraction measurements, where the magnetically active Co-site would correspond to the Co(1) site with the slightly higher BVS and octahedral distortion. The values of the BVS obtained are in good agreement with both theoretical predictions[@Zhou:2007fk] and recently reported values at 10 and 300K respectively[@williams] which are also plotted on fig. \[bonds\](c) for comparison. The weak charge ordering found here is consistent with the relatively low resistivity of this material at low temperatures ( $\sim 100\ m\Omega\ cm$ at 2K)[@Huang:2004cx; @Foo:2004cx; @gasparovic]. That the average BVS is approximately 3.3$e$ reflects the mixed valent nature of this compound.[^3]. Na Ordering and behavior of Na-O bond lengths ============================================= ![ (Color online) Isotropic atomic displacement parameters ($U_{iso}$) for the Na, Co and O-atoms determined from the Rietveld analysis of the NPD data. In the analysis the following constraints were used $U_{iso}$(Co(1))=$U_{iso}$(Co(2)), $U_{iso}$(Na(1))=$U_{iso}$(Na(2)),$U_{iso}$(O(1))=$U_{iso}$(O(2))=$U_{iso}$(O(3)). Lines through the data are guides to the eye. A slope change in the $U_{iso}$(Na) is evident around 300K. In the inset we show the temperature dependence of the (111) reflection in the orthorhombic P$nmm$ setting. This reflection is a superlattice reflections with respect to the parent P$6_{3}/mmc$ crystal structure and arises from the ordering of Na ions. The temperature $T_{s}\sim$460K marks the decomposition of the sample in hexagonal Na$_{x}$CoO$_{2}$ and Co$_{2}$O$_{3}$. []{data-label="uiso"}](fig4.eps) We now turn our attention to the behavior of the Na-layer for this composition. In fig. 
\[uiso\] we plot the temperature dependence of the atomic displacement parameters $U_{iso}$ (Debye-Waller factor) determined from our Rietveld analysis. We find that the $U_{iso}$ values of the O- and Co-atoms to be in general of the expected amplitude and show a linear behavior with temperature. The behavior of $U_{iso}$ for the Na-ions however is unusual in that there is a clear change in slope at 300K separating a low and a high temperature behavior, while for T$>$300K the $U_{iso}$ values for Na become large. Such behavior is indicative of a dynamical transition occurring at 300K involving only the motion of Na-ions, as similar signatures are absent for the Co- and O-atoms[@Huang:2004prb]. Indeed such large vales of $U_{iso}$ suggest that Na-ions may become mobile between CoO$_{2}$ layers above 300K. For higher temperatures we find that our sample decomposes at $T_{s}\sim$460K to [Na$_{x}$CoO$_{2}$]{} and Co$_{2}$O$_{3}$. This transition is quantified by tracking the intensity of the (111) reflection as shown in the inset of fig. \[uiso\]. This reflection is a superlattice reflection with respect to the parent P$6_{3}/mmc$ structure and arises from the ordering of Na-ions [@Huang:2004cx]. Our neutron powder data measured at 475K indicate the loss of this and other superstructure reflections and a return to P$6_{3}/mmc$ symmetry with the addition of Co$_{2}$O$_{3}$ reflections. Within this perspective we now look more closely to the temperature dependence of the Na-O bond lengths show in fig. \[Na\_bonds\](a,b). For both Na sites the Na-O bond lengths show a set of short bonds ($\sim$2.36Å) and a set of long bonds ($\sim$2.44Å). While the high temperature behavior is complicated by the high Na-ion motion as discussed above, on cooling below 300K the long bonds decrease, while the short bonds show an increase down to 150K. This correlated behavior in general indicates a displacement of Na-ions along the $a-$axis. At 150K we find that the small charge disproportionation in the CoO$_{2}$ layer is not reflected in the Na-O bonds. Surprisingly however we find that at [$T_{m1}$]{} a decrease of $\sim$0.01Å of the long Na(1)-O bond and a correlated increase in the short Na(1)-O bond, while for the Na(2)-O bond lengths an opposite and less clear effect can be seen in the data. ![ Temperature dependence of and Na-O bond lengths computed from Rietveld analysis of NPD data measured on the E9 diffractometer. Dashed lines are guides to the eye.[]{data-label="Na_bonds"}](fig5.eps) We interpret these changes in the Na-O bond lengths as reflecting changes in the electronic state of the CoO$_{2}$ layer at [$T_{m1}$]{}. The nature of any coupling between changes in the electronic state of the CoO$_{2}$ and Na can arises from the orbital configuration of the Co-ion itself. It is argued by Kroll [*et al.*]{}[@Kroll] that the edge-sharing Co$^{4+}$O$_{6}$ octahedra are compressed along the $c-$axis reduces the point group symmetry to $D_{3d}$. The $t_{2g}$ orbital of the Co$^{4+}$ is split in $D_{3d}$ as $t_{2g}$ = $a^{\prime}1_{g}$ + $e^{\prime}_{g}$, giving a fully occupied $e^{\prime}_{g}$ and a half filed $a^{\prime}1_{g}$. The latter orbital looks like a $3d_{z^{2}-r^{2}}$ orbital and points along the $c-$axis[@Kroll]. For the case of the higher valent Co(1)-ion (nominally Co$^{4+}$), this orbital would point in between the O-atoms and towards the Na(1)-atom as shown in the inset of fig. \[Na\_bonds\](a). 
Since the charge disproportionation here is small each Co ions will have a similar electronic and orbital configuration. Therefore the orbital configuration of the Co-ions provides a means to couple electronically the CoO$_{2}$ and Na layers. The nature of the coupling is electrostatic and would arise from the occupation of the $e^{\prime}_{g}$ orbital. We would expect that changes in the electronic configuration of the CoO$_{2}$ sheet to be reflected also in the relative positions of the Na-ions as indicated by the Na-O bond lengths. More precisely changes in the electronic state of Co should be clearest for the Na(1)-O bonds as the Na(1)-ion sits directly above (or below) a Co(1)-ion. The same argument would suggest a less pronounced effect for the Na(2)-O bond lengths as the Na(2) ion resides above and between CoO$_{6}$ octahedra. This is indeed reflected in the bond length data where a strong response is found in the Na(1)-O bonds and a less clear responce in the Na(2)-O bonds[^4]. At [$T_{m2}$]{} we find no clear evidence of changes in the Na-O bond lengths. This is expected as the changes in charge separation at this lower transition are computed to be much smaller than those at [$T_{m1}$]{}[@Zhou:2007fk]. Anomalous behavior of Lattice Constants ======================================= In fig. \[lps\](a-b) we show the temperature dependence of the lattice constants determined from Rietveld refinement of the NPD data. These data show a positive thermal expansion (TE) for the $c-$axis between 2 and 450K, but for the $a-$ and $b-$axis we find an almost constant TE between 2 and 300K; here for $T<$300K linear TE expansion coefficients are $-9(3)\times^{-7}$/K and $1.1(4)\times^{-6}$/K for $a$ and $b$ respectively. Such small TE was also discussed in ref. for a much more limited number of temperatures and smaller range in temperature. For $T>300$K there is a return to positive TE for both in-plane parameters. This crossover coincides with the dynamic transition in the motion of the Na-ions as indicated by the behavior of $U_{iso}$ for Na. ![(Color online) (a,b) Lattice constants and unit cell volume (inset in panel (a)) obtained from the Rietveld refinement of NPD data measured on HRPD at ISIS (filled circles) and E9 (open circles) over the temperature range of 2 to 450K. In panel (a) and inset the solid lines represent the thermal expansion obtained from a fit to the data using the second order Grüneisen approximation.(c) Temperature dependence of the additional component to the thermal expansion $\alpha$ for Na$_{0.46}$CoO$_{2}$ obtained by subtracting the temperature behavior of the in-plane lattice constant of the $x=$0.75 compound. In the inset of panel (c) we show lattice parameters measured as a function of temperature from a hexagonal $x=$0.75 sample. []{data-label="lps"}](fig6.eps) The behavior of the lattice constants for this $x=$0.46 sample is in sharp contrast to our $x=$0.75 sample (shown in the inset fig. \[lps\]), were we find a positive TE for both $a-$ and $c-$axes between 5 and 300K. Assuming that TE is dominated by acoustic phonons below 300K for Na$_{x}$CoO$_{2}$ materials and whose frequency is relatively invariant between $x=$0.75 to $x=$0.46, the $T-$dependence of the lattice constants for the $x=$0.75 sample (see inset in fig. \[lps\](f)) can be used to quantify the difference in the in-plane TE between these two samples. 
Here we define the term $\alpha=(a'-a_{hex})/a'$ where $a'=(\sqrt{3}a+b/2)/2$, $a$ and $b$ are the orthorhombic lattice constants of the $x=$0.46 compound and $a_{hex}$ is the hexagonal lattice constant of the $x=0.75$ compound.[^5] For the higher temperature data (T$>$300K) $a_{hex}$ was assumed to vary linearly with temperature. Here $\alpha$ represents an additional temperature dependent contribution to the expected in-plane lattice constants (as defined by $a_{hex}$) and its temperature dependence in shown is fig \[lps\](c). For T$>$ 300K $\alpha$ is near zero, although the data in this region are more limited. However for T$<$300K $\alpha$ increases with decreasing temperature and reaches a value of $\sim40\times10^{-3}\%$ at 1.8K. The maximum value of $\alpha$ is $\sim50\times10^{-3}\%$ at 150K, close to . The decrease below this temperature is due to an increase in the $a_{hex}$ for the $x=$0.75 as seen in the inset of fig. \[lps\](c). We speculate that the anomalous TE below 300K may be driven by electronic correlations induced by Na-ions. Discussion ========== The measurement we present here suggest a picture of incipient charge ordering for near half doped Na$_{x}$CoO$_{2}$. At high temperatures the ordering of Na-ions defines two different CoO$_{6}$ octahedra, one that is relatively undistorted and one that more distorted. On cooling the differences in octahedral distortions between these two different Co-sites becomes larger and may reflect an increasing influence of the Na-ion potential on the CoO$_{2}$ sheet, as Na ions become more localized around their mean crystallographic positions. Indeed the near zero TE for the in-plane lattice constants coincides with a dynamical transition in the displacement parameter $U_{iso}$ of the Na-ion. Further the physical meaning of $\alpha$ can be interpreted as a measure of the Na-induced electronic correlations onto the CoO$_{2}$ layer which in this view saturate at =150K. While our NPD work can correlate the distortion of the CoO$_{6}$ octahedra and the Na-ordering even at high temperatures, it is not until  that we find evidence for a weak charge separation into stripes. Indeed it is possible that this pattern of charge ordering is present in the lattice from the onset as a direct result of the Na-ordering but it in effect is hidden by the dynamic behavior of the Na-ions. Therefore we argue that the charge ordering emerges at low temperatures as Na-motion becomes more confined. More critically in terms of the physics of these materials we demonstrate that charge ordering occurs at a higher temperature that the magnetic ordering and electronic transitions at [$T_{m1}$]{} and [$T_{m2}$]{} respectively. This is consistent with recent theoretical models that suggest that the Na potential imposes a degree of charge ordering to the lattice[@Choy:2007ve; @Zhou:2007fk; @Marianetti:2007ly]. Indeed Zhou and Wang [@Zhou:2007fk], suggest that as much as half of the expected charge disproportionation would occur at a temperature above [$T_{m1}$]{}[^6], consistent with our observations. At lower temperature the changes in the CoO$_{2}$ layer may be inferred indirectly by monitoring the Na(1)-O bonds. Here the orbital configuration of the Co provides for charge density pointing directly to the Na(1)-ions thus providing a sensitive parameter to electronic changes in the CoO$_{2}$ layer. Indeed changes in the Na-O bond lengths may be more sensitive than changes in Co-O bonds as $e_{g}$ axial orbitals are empty. 
Our measurements find that that changes in the Na(1)-O bond lengths correlate with the magnetic transition at [$T_{m1}$]{} suggestive of further changes in the electronic state of the CoO$_{2}$ layer. It is predicted that charge separation is enhanced gradually below [$T_{m1}$]{}[@Zhou:2007fk] but the changes here are overall again small and may fall outside the limits of our sensitivity. At [$T_{m2}$]{} we find no evidence of changes in the lattice or changes in the lattice symmetry. The prediction of a modulation of the amplitude of antiferromagnetically coupled spins as well as the charge within a Co$^{+3.5+\epsilon}$ stripe is much smaller than our detection limit ($\sim$0.02$e$)[@Zhou:2007fk]. The structural observations at  we report here correlate with features in the charge dynamics. For example Quian [*et al.*]{}[@qian:046407] report from ARPES measurements that with increasing $T$ from the insulating region (were a clear gap is found) the size of the gap and the spectral weight around the gap decrease. Although the gap closes at [$T_{m1}$]{}the spectral weight does not completely vanish until approximately 120K, a behavior that is attributed to the formation of quasiparticles that gain significant weight due to coupling along the $c-$axis. For similar temperatures optical spectroscopy measurements find a broad feature that is associated with fluctuating charge ordering or a CDW in both anhydrous[@Wang:2004cx; @jhwang] and hydrated superconducting samples[@lemmens:167204]. These observation together with our structural measurements point towards a picture where at 150K an incipient charge ordering forms. Summary ======= In summary this work establishes that $(a)$ the Na-ordering on the CoO$_{2}$ layer is reflected in the octahedral distortion of the two crystallographically in-equivalent Co-sites and is evident even at high temperatures; $(b)$ The charge ordering occurs below =150K, a temperature higher than the magnetic ordering found at [$T_{m1}$]{}=88K, consistent with theoretical models that suggest that the Na potential imposes a degree of charge ordering to the lattice[@Choy:2007ve; @Zhou:2007fk; @Marianetti:2007ly]; $(c)$ Below  we find a weak charge ordering into stripes of Co$^{3.5+\epsilon}$ and Co$^{3.5-\epsilon}$ with a $\epsilon\sim$0.06 $e$, a value in good agreement with that obtained from a Hubbard model using the Gutzwiller approximation[@Zhou:2007fk]; $(d)$ A dynamic transition in the motion of Na-ions occurs at 300K and coincides with the onset of a near zero thermal expansion for the in-plane lattice constants of our Na$_{0.46}$CoO$_{2}$ sample. The authors thank P.G. Radaelli, and L.C. Chapon for helpful discussions and W.S. Howells for assistance in the collection and reduction of the HRPD data. [22]{} natexlab\#1[\#1]{}bibnamefont \#1[\#1]{}bibfnamefont \#1[\#1]{}citenamefont \#1[\#1]{}url \#1[`#1`]{}urlprefix\[2\][\#2]{} \[2\]\[\][[\#2](#2)]{} , , , , ****, (). , , , , , , , ****, (). , , , , , , , , , , ****, (), , , , , , , ****, (). , , , , , , , , , , , ****, (), , ****, (), , ****, (), , , , , , , , ****, (), , , , , , , , , , , , ****, (). , , , , , , , ****, (), , , , , , , , , , ****, (). , , , ****, (). , , , , , ****, (), , , , ****, (), , , , , , ****, (), , ****, (). ****, (). , , , , , , , ****, (), , , , ****, (), , , , , , , , , , , , ****, (), , , , , , , ****, (), , , , , ****, (). , , , , , , , , ****, (), [^1]: These values correspond to Co$^{3+}$-O bonds. 
Reliable values of Co$^{4+}$-O bonds are not available. [^2]: Overall, the validity of the BVS method may not hold in the case of dynamic effects. At high temperatures Na is mobile, which is reflected in a large Debye-Waller factor ($U_{iso}\sim30\times 10^{-3}\AA^{2}$) at around 300K which decreases smoothly and rapidly to values similar to those found for Co and O ($U_{iso}\sim7\times 10^{-3}\AA^{2}$) below 100K. [^3]: The average BVS value lower than 3.5 reflects the absence of reliable parameters for Co$^{4+}$. A similar BVS number is noted in ref. [^4]: The Na(2)-O response may arise from the repulsion of Na-ions. [^5]: Here $a'$ was normalized to equal $a_{hex}$ at 300K. [^6]: We compute these changes by averaging over site 1 and site 3 in reference
{ "pile_set_name": "ArXiv" }
--- abstract: 'Quantum dots are useful model systems for studying quantum thermoelectric behavior because of their highly energy-dependent electron transport properties, which are tunable by electrostatic gating. As a result of this strong energy dependence, the thermoelectric response of quantum dots is expected to be nonlinear with respect to an applied thermal bias. However, until now this effect has been challenging to observe because, first, it is experimentally difficult to apply a sufficiently large thermal bias at the nanoscale and, second, it is difficult to distinguish thermal bias effects from purely temperature-dependent effects due to overall heating of a device. Here we take advantage of a novel thermal biasing technique and demonstrate a nonlinear thermoelectric response in a quantum dot which is defined in a heterostructured semiconductor nanowire. We also show that a theoretical model based on the Master equations fully explains the observed nonlinear thermoelectric response given the energy-dependent transport properties of the quantum dot.' author: - Artis Svilans - 'Adam M. Burke' - Sofia Fahlvik Svensson - Martin Leijnse - Heiner Linke bibliography: - 'References.bib' title: 'Nonlinear thermoelectric response due to energy-dependent transport properties of a quantum dot' --- Introduction ============ Quantum dots (QDs) are known for their tunable and strongly energy-dependent electron transport properties, which result in a nonlinear response to an applied electrical bias $V_{SD}$. Nonlinear conductance due to the Coulomb blockade [@VanHouten1992] is perhaps the most well known example of such nonlinear behavior. It is also well established that the energy-dependent electron transport properties of QDs strongly influence their thermoelectric behavior [@Beenakker1992; @Staring1993], which has made them attractive model systems for fundamental studies of quantum thermoelectric effects [@Humphrey2002; @Edwards1993; @ODwyer2006; @Esposito2009; @Nakpathomkun2010; @Jordan2013; @Zianni2009]. Nonlinear response to an applied thermal bias $\Delta T$, in particular, has been theoretically investigated in various mesoscopic systems, including resonant tunneling structures [@WANG2006; @Snchez2013], multi-terminal quantum conductors [@Snchez2013; @Meair2013; @Whitney2013] and Kondo-correlated devices [@Boese2001; @Azema2012]. For QDs, one can expect that the quasi-discrete resonance energy spectrum of a QD alone should lead to nonlinear thermoelectric response [@Nakpath2010; @Svensson2013]. This behavior was explored in detail by Sierra and Sanchez who predicted a strongly nonlinear regime behavior in QDs when $\Delta T$ is about an order of magnitude larger than the background temperature $T_0$ [@Sierra2014]. In experiments, a nonlinear thermovoltage as a function of thermal bias $\Delta T$ has been observed in semiconductor QDs [@Staring1993; @Svensson2013; @Pogosov2006; @Hoffmann2009] and in molecular junctions [@Reddy2007]. Most recent studies using a tunable thermal bias have shown a strongly nonlinear thermovoltage and thermocurrent in semiconductor nanowire QDs that could not be fully explained by the energy-dependence of the QD resonance energy spectrum alone, and was attributed to a renormalization of resonance energies as a function of heating [@Svensson2013]. 
The key experimental challenge in the observation of nonlinear thermoelectric behavior in QDs is the ability to apply a tunable and large enough thermal bias $\Delta T$ across a nanoscale object without significant overall heating of the device. The latter can prevent the ability to perform low-temperature experiments, and makes it difficult to distinguish temperature-dependent transport effects from the true nonlinear response to the thermal bias $\Delta T$. Here, we report measurements of a strongly nonlinear thermocurrent as a function of $\Delta T$ across a QD that is defined by two InP segments within an InAs nanowire. To a large extent the measurements presented here were enabled by a recently developed heater architecture that allows local and electrically non-invasive thermal biasing of a nanowire [@Gluschke2014]. This architecture enables tuning of $\Delta T$ over a wide range by applying a relatively small heating power, thus minimizing the parasitic heating effects. We also use theoretical calculations based on Master equations to demonstrate that the experimentally measured thermocurrent can be fully understood from the QD resonance energy spectrum, and is consistent with the previously presented theory in Ref.[@Sierra2014]. Experiment ========== Device Fabrication ------------------ The device consists of a heterostructured InAs/InP nanowire with a 60 nm diameter (see Fig.\[fig:1\]a) that was grown by chemical beam epitaxy seeded by a gold particle [@Froberg2008; @Persson2007]. Based on transmission electron microscopy (TEM) analyses of $11$ nanowires from the same growth, the InAs/InP nanowire (starting from the seed particle) consists of a $350{\raisebox{.3\height}{\scalebox{.7}{ $\pm$ }}}70$ nm InAs segment, followed by a $17{\raisebox{.3\height}{\scalebox{.7}{ $\pm$ }}}1.5$ nm long InAs QD defined by two, $4{\raisebox{.3\height}{\scalebox{.7}{ $\pm$ }}}3$ nm thick, InP segments, and a second InAs segment of $265{\raisebox{.3\height}{\scalebox{.7}{ $\pm$ }}}60$ nm in length. The remaining nanowire, which is not used in the device, consists of a $25$ nm InP plug incorporated for growth reasons and another InAs segment. ![ (a) Transmission electron microscope image of a nanowire nominally identical to the one used in our thermoelectric device. (b) Device schematic with circuitry diagram for the $I_{th}$ measurement setup. The source and drain contacts in yellow, top-heaters in orange, InAs/InP nanowire in green, quantum dot in light green. The heater over the drain lead is unused. (c) Stability diagram of the InAs quantum dot. Magnitude of differential conductivity, $g=dI/dV_{SD}$, in log$_{10}$-scale as a function of back-gate bias, $V_G$, and source drain bias, $V_{SD}$. []{data-label="fig:1"}](Fig1.pdf){width="\columnwidth"} The nanowire is contacted to metallic source and drain contacts, as illustrated in Fig.\[fig:1\]b. Electrically isolated metallic top-heaters pass over the source and drain contacts enabling local dissipation of Joule heat directly on top of the contacts; ensuring heat transfer to the nanowire. Only the heater on top of the source contact was used in the experiments presented here. The device fabrication followed the process developed by Gluschke et al [@Gluschke2014]. In brief, electron-beam lithography (EBL) was used to define a pair of source and drain contacts centered around the QD and separated by $300$ nm. A dilute sulfur passivation is performed before source and drain contacts are deposited on the nanowire [@Suyatin2007]. 
A $10$ nm thick layer of HfO$_2$ was deposited via atomic layer deposition to insulate the metallic contacts from the overlying heaters, which were aligned and exposed in a second EBL step. Both the contacts and the heaters were deposited thermally with a metal stack of $25$ nm Ni and $75$ nm Au for the contacts and $25$ nm Ni and 125 nm Au for the heaters. The heater layer was thicker to ensure continuity as the heater steps onto the contact region. The entire device rests on $100$ nm of thermally grown SiO$_2$, allowing the underlying doped Si substrate to be used as a global back gate. Electrical Characterization --------------------------- Measurements were conducted in a cryostat in which the estimated electron temperature in the device, $T_0$, was below $1$ K without heating. Bias spectroscopy of the device was carried out using a Stanford Research SRS-830 lock-in amplifier. The voltage from the oscillation output was reduced using a $1:20000$ voltage divider circuit to provide a stable AC source-drain bias amplitude $dV_{SD}=25$ $\mu$V $\ll k_B T_0/e$ ($k_B$ - Boltzmann constant, $e$ - elementary charge). To measure the differential conductance $g=dI/dV_{SD}$ as a function of a DC source-drain bias $V_{SD}$, the differential current amplitude, $dI$, was measured in response to $dV_{SD}$, while adding the AC and DC source-drain bias components in a summing box. To measure Coulomb oscillations (Fig.\[fig:2\]a), a source-drain current, $I_{SD}$, was measured in DC mode using a Yokogawa 7651 voltage source to bias the source lead at $100$ $\mu$V and an SR570 current preamplifier with $1$ M$\Omega$ input impedance. The set-up used for thermoelectric characterization of the QD nanowire device is shown in Fig.\[fig:1\]b. A thermal bias, $\Delta T$, was applied by running a current $I_H$ through the heater on top of the source contact using a Yokogawa 7651 DC voltage source. The dissipated Joule heat mostly heats the underlying source contact, but is expected to also create a fractional temperature rise in the drain contact [@Gluschke2014]. The resulting thermocurrent through the QD nanowire device, $I_{th}$, was amplified via the SR570 current preamplifier. ![image](Fig2.pdf){width="\textwidth"} Experimental Results and Discussion ----------------------------------- The QD’s stability diagram, measured as a function of the source-drain voltage, $V_{SD}$, and a back-gate voltage, $V_G$, is shown in Fig.\[fig:1\]c. The dark diamond-like regions represent bias conditions at which the conductivity is suppressed due to Coulomb blockade. From the bias spectroscopy data we estimate a charging energy $E_C$ of $4.0{\raisebox{.3\height}{\scalebox{.7}{ $\pm$ }}}0.2$ meV, which is a measure of electron-electron interaction strength in the QD. We also determine the value of the coupling constant $\alpha_G=0.042{\raisebox{.3\height}{\scalebox{.7}{ $\pm$ }}}0.04$, which characterizes the capacitive coupling strength between the QD and the back-gate electrode. Figure \[fig:2\]b shows $I_{th}$ as a function of $V_G$. The data confirms that our device’s thermoelectric response is typical for QDs [@Beenakker1992; @Svensson2013; @Svensson2012] where $I_{th}$ goes to zero and changes direction at those $V_G$ values where the Coulomb peaks in Fig.\[fig:2\]a are centered. The locations of these thermocurrent zeros do not depend on the heating current, as can be seen in Fig.\[fig:2\]c, which shows $I_{th}$ as a function of $V_G$ and $I_H$.
This independence of the $I_{th}$ zeros from $I_H$ is in contrast to previous studies [@Svensson2013], where the nonlinear behavior of $I_{th}$ was strongly influenced by a heating dependent renormalization (shift) of the resonance energies of the QD. The stability of the resonances in the present study is attributed to the benefits of the top-heater architecture where a higher $\Delta T$ can be applied with much less overall background heating of the device [@Gluschke2014]. The core observation of our experiments is the strongly nonlinear behavior of the thermocurrent as a function of $\Delta T$. This nonlinearity is clearly apparent in Fig.\[fig:2\]d where several back-gate voltage traces, taken from the data in the Fig.\[fig:2\]c, are plotted as a function of $I_H$. Several key features can be identified in the observed nonlinear behavior of $I_{th}$, all of which can be understood in terms of the QD’s resonance energy spectrum at different thermal biases. In the following we base our discussion on Ref.[@Sierra2014] and use phenomenological sketches of a QD resonance spectrum and Fermi-Dirac distributions in the leads to illustrate how the increase in $\Delta T$ can lead to nonlinear effects (Fig.\[fig:3\]). The currents $I_{\varepsilon 1}$ and $I_{\varepsilon 2}$ in Fig.\[fig:3\]b combine to give the overall thermocurrent $I_{th}$ through the QD. First, we observe that the $I_H$ at which $I_{th}$ starts to rapidly increase depends on $V_G$ (Fig.\[fig:2\]d). As shown in sketch A in Fig.\[fig:3\]a, this behavior can be understood based on the energy of the QD resonances, $\varepsilon_1$ and $\varepsilon_2$. Until the temperature on the hot side reaches a certain value, there is no net current because the electronic states at energies $\varepsilon_1$ and $\varepsilon_2$ in both leads are equally occupied - either completely full or completely empty. This is reflected in point A in Fig.\[fig:3\]b. The second interesting experimental feature in Fig.\[fig:2\]d is the nonlinear increase of $I_{th}$, as a function of thermal bias. Sketch B in Fig.\[fig:3\]a illustrates how increased heating on the source side leads to a misbalance of the electronic state occupancy in the leads at $\varepsilon_1$. This misbalance leads to a net current as indicated by an arrow in the sketch and by point B in Fig.\[fig:3\]b. Thus, the origin of the nonlinear increase in $I_{th}$ is the nonlinear change of the electronic state occupancy in the leads due to heating. ![ (a) Schematic representation of electron distribution in source (red) and drain (blue) leads when the thermal bias is (A) $k_B \Delta T_H/E_C = 0.02$, (B) $0.1$ and (C) $0.3$. Current direction through resonances of a quantum dot is indicated with arrows. Electron energy increases up the vertical axis. (b) Simulated thermocurrent as a function of thermal bias for the back-gate voltage $e\alpha_G V_G/E_C = 0.24$ (black). Brown curves are thermocurrent contributions through each resonance of the quantum dot. See Fig.\[fig:4\] for simulation parameters and Sec.\[sec:3\] for a detailed description. []{data-label="fig:3"}](Fig3.pdf){width="\columnwidth"} Finally, $I_{th}$ tends to decrease at higher $I_H$. Ref. [@Sierra2014] predicts such behavior due to an increasing backflow of electrons at large thermal bias values $(\Delta T/T\geq 10)$. We believe that the same is true for $I_{th}$ in our experiment, except we expect that we also parasitically heat the drain lead when aiming for high $\Delta T$. 
Sketch C in Fig.\[fig:3\]a illustrates that the major current contribution, $I_{\varepsilon 1}$, is still provided by the electron transport through $\varepsilon_1$, however, the thermally excited electrons on the source side also leak back through $\varepsilon_2$, thus contributing to the decrease in $I_{th}$. We note that any decrease of the current through $\varepsilon_1$ in the sketch is, in fact, caused by the overall increase in temperature; e.g. slight heating of the drain. However, the backflow of electrons through $\varepsilon_2$ is caused purely by the thermal bias. Theory {#sec:3} ====== Model Description ----------------- We model electron transport through the InAs/InP nanowire by considering a QD which is tunnel-coupled to two electron reservoirs (source and drain leads). Following the experimental setup showed in Fig.\[fig:1\]b the QD is considered in series with a resistive load $R$ to model the input impedance of the current preamplifier. The source and drain leads are characterized by their electrochemical potentials, $u_S = E_F - eV_S$ and $u_D = E_F - eV_D$, where $E_F$ is Fermi energy, and their temperatures, $T_S$ and $T_D$. Electrons in the leads are assumed to occupy states according to the Fermi-Dirac distribution $f_r (E) = \{ 1 + \exp \left[ (E-u_r) / (k_B T_r) \right] \}^{-1}$ and the density of states in the leads is assumed to be a constant. The QD is capacitively coupled to the leads with capacitances $C_S$ and $C_D$, and to the global back-gate with a capacitance $C_G$, giving rise to a charging energy $E_C = e^2 / ( C_S + C_D + C_G )$. In order to model resonance energies we consider a QD in which adding the $N^{th}$ electron changes its state from $i$ to $f$ and that has an electrochemical potential of the form $$\mu_{fi}=\epsilon_{fi}+(N-1)E_C- \!\!\!\!\sum\limits_{r=G,S,D} \!\!\!\!\alpha_r V_r.$$ Here $\epsilon_{fi}$ is energy of the single-electron orbital in which the electron is added and $\alpha_r = C_r/( C_S + C_D + C_G )$ are dimensionless coupling constants. We label the probability of the $f^{th}$ state to be occupied $p_f$. Steady-state probabilities for each state occupancy can be represented by a vector $\mathbf{P}$ and are found using the Master equation for a stationary case $$\mathbf{W P}=\mathbf{0}.$$ Here $\mathbf{W}$ is a matrix with elements $W_{fi}$ given by $$W_{fi} = \begin{cases} \sum\limits_{r=S,D}\!\!\left\{\Gamma_{fi}^{r,in}f_r(\mu_{fi})+ \Gamma_{fi}^{r,out}\left[1-f_r(\mu_{fi})\right]\right\} \text{,\hspace{2.5mm}if $i\neq f$}\\ -\sum\limits_m W_{mf}\text{,\hspace{4.65cm}if $i = f$} \end{cases}$$ where $\mathbf{\Gamma^{S,in}}$, $\mathbf{\Gamma^{D,in}}$, $\mathbf{\Gamma^{S,out}}$ and $\mathbf{\Gamma^{D,out}}$ are matrices containing tunnel rates for single electron tunneling in or out of the QD, involving source or drain leads. Here non-diagonal matrix elements $W_{fi}$ express physical rates at which the QD changes its state from $i$ to $f$. Probability normalization requires that the sum of all occupancy probabilities pf must be $1$. The current $I_{SD}$ through the QD is then found by adding up current contributions from all possible QD states given the calculated steady state occupancies $p_f$ $$I_{SD}=-e\sum\limits_{i,f}p_f \{\Gamma_{fi}^{S,in}f_S(\mu_{fi})-\Gamma_{fi}^{S,out}\left[1-f_S(\mu_{fi})\right]\}.$$ In order to calculate the current $I_{SD}$ through the circuit with the QD and the load $R$ in series, a bias value on the drain side $V_D$ is calculated self-consistently using the Ohms law $V_D=I_{SD}R$. 
For the purpose of comparing with our experimental results it is sufficient to consider a QD with only one single electron orbital, in which $N$ can take values 0, 1 or 2. Including electron spin this gives four possible QD states $i,f=\{0, \uparrow, \downarrow, \uparrow\downarrow\}$. In this case, the phenomenological resonance energies $\varepsilon_1$ and $\varepsilon_2$ discussed in the experimental section (Fig.\[fig:3\]) thus correspond to the electrochemical potentials $\mu_{\sigma 0}=\varepsilon_1$ and $\mu_{\uparrow\downarrow\sigma}=\varepsilon_2$, with $\sigma=$ $\uparrow,\downarrow$. For qualitative comparison with experiment we consider the tunnel-barriers to be identical and characterized by a constant tunnel rate $\Gamma$. Simulation Results {#sec:3.2} ------------------ We now calculate the thermocurrent as a function of temperature in source and drain leads. Since in our experiment the source lead is heated, we label the source temperature $T_S=T_H=T_0+\Delta T_H$ and the drain temperature $T_D=T_C=T_0+\Delta T_C$. In simulations the base temperature $T_0$ is chosen such that $k_B T_0/E_C=0.01$, which is close to the experimental value. Because in the experiments the drain lead is also expected to be somewhat heated we assume $\Delta T_C=\Delta T_H/3$. The ratio between $\Delta T_H$ and $\Delta T_C$ is chosen to obtain a qualitative agreement with the experimental data, but the precise value is not important for the discussed physics. ![ (a) Simulated thermocurrent as a function of back-gate voltage for different thermal biases $k_B \Delta T_H/E_C\!\!=\!\!(0, 0.04, 0.08, 0.12, 0.16, 0.32)$. (b) Simulated thermocurrent as a function of the thermal bias for several back-gate voltage values $e\alpha_G V_G/E_C\!\!=\!\!(0.11,0.24,0.37,0.50,0.63,0.76,0.89)$. (c) Simulated thermocurrent (color) as a function of both, back-gate voltage and thermal bias. Other parameters: $\Gamma=5$ GHz, $R=1$ M, $T_0=0.01E_C$. []{data-label="fig:4"}](Fig4.pdf){width="\columnwidth"} In Fig.\[fig:4\] we sum up our thermocurrent simulation results. Thermocurrent as a function of the back-gate voltage for different thermal bias values is shown in Fig.\[fig:4\]a (compare with the corresponding experimental data in Fig. \[fig:2\]b). Similarly, we plot the simulated thermocurrent as a function of the thermal bias for different back-gate voltage values in Fig. \[fig:4\]b. The dimensionless range of thermal bias shown is chosen based on the similarity to Fig.\[fig:2\]d. Finally, the color plot in Fig.\[fig:4\]c is produced using the ranges of the electrochemical potential and the thermal bias used in Figs.\[fig:4\]a and b, and closely matches the experimental result shown in Fig.\[fig:2\]c. According to our simulations, the source-drain bias $V_{SD}$ that develops across the QD due to the series load at peak currents is estimated to be below ${\raisebox{.3\height}{\scalebox{.7}{ $\pm$ }}}0.04$ $E_C/e$ and therefore does not significantly influence the behavior of the thermocurrent. Note that it is very challenging to measure the temperature in the leads leading up to the QD directly and this was not attempted in the experiment. However, given the qualitative agreement between the experimental thermocurrent data in Fig.\[fig:2\] and the simulated thermocurrent in Fig.\[fig:4\], one can conclude that the relation between $I_H$ and $\Delta T$ must be close to linear. 
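For readers who wish to experiment with the model, the sketch below is a minimal Python implementation of the single-orbital, four-state ($0,\uparrow,\downarrow,\uparrow\downarrow$) rate equations with equal tunnel couplings, the series load $R$, and the $\Delta T_C=\Delta T_H/3$ assumption used above. The parameter values follow the text and the caption of Fig. \[fig:4\] ($E_C\approx4$ meV, $\Gamma=5$ GHz, $R=1$ M$\Omega$, $k_BT_0=0.01E_C$), but the choice of energy zero and gate working point is only illustrative, so the output is meant to reproduce the qualitative rise and fall of $I_{th}$ with thermal bias rather than the published curves; it is not the code used to produce Fig. \[fig:4\].

```python
import numpy as np

e, kB = 1.602176634e-19, 1.380649e-23   # elementary charge (C), Boltzmann constant (J/K)
EC = 4.0e-3 * e                          # charging energy, ~4 meV
Gamma = 5e9                              # tunnel rate (1/s)
R_load = 1e6                             # series load, 1 MOhm preamplifier input
T0 = 0.01 * EC / kB                      # background temperature, k_B T0 = 0.01 E_C

def fermi(E, mu, T):
    """Fermi-Dirac occupation at energy E for chemical potential mu and temperature T."""
    return 1.0 / (1.0 + np.exp(np.clip((E - mu) / (kB * T), -60, 60)))

def thermocurrent(gate, dTH, n_iter=20):
    """Steady-state current for a source heated by dTH (drain by dTH/3).
    'gate' is the term e*alpha_G*V_G in units of E_C; the first addition energy is
    placed at the common Fermi level for gate = 0 (an illustrative choice)."""
    TS, TD = T0 + dTH, T0 + dTH / 3.0
    VD = 0.0
    for _ in range(n_iter):
        mu1 = -gate * EC                 # first addition energy (0 <-> up/down)
        mu2 = mu1 + EC                   # second addition energy (up/down <-> updown)
        fS1, fD1 = fermi(mu1, 0.0, TS), fermi(mu1, -e * VD, TD)
        fS2, fD2 = fermi(mu2, 0.0, TS), fermi(mu2, -e * VD, TD)
        fin1, fout1 = fS1 + fD1, 2.0 - (fS1 + fD1)
        fin2, fout2 = fS2 + fD2, 2.0 - (fS2 + fD2)
        # Rate matrix W[f, i] for the states [0, up, down, updown]
        W = Gamma * np.array([[0.0,  fout1, fout1, 0.0  ],
                              [fin1, 0.0,   0.0,   fout2],
                              [fin1, 0.0,   0.0,   fout2],
                              [0.0,  fin2,  fin2,  0.0  ]])
        W -= np.diag(W.sum(axis=0))      # diagonal: minus the total escape rate
        A = np.vstack([W, np.ones(4)])   # solve W p = 0 together with sum(p) = 1
        p = np.linalg.lstsq(A, np.array([0, 0, 0, 0, 1.0]), rcond=None)[0]
        # Net current through the source barrier (electrons in minus electrons out)
        I = -e * Gamma * (2 * p[0] * fS1 - (p[1] + p[2]) * (1 - fS1)
                          + (p[1] + p[2]) * fS2 - 2 * p[3] * (1 - fS2))
        VD = I * R_load                  # self-consistent bias over the load
    return I

# Sweep the thermal bias at the gate point used in Fig. 3(b), e*alpha_G*V_G = 0.24 E_C
for x in (0.05, 0.1, 0.2, 0.3):
    I = thermocurrent(gate=0.24, dTH=x * EC / kB)
    print(f"kB*dTH/EC = {x:.2f}:  I_th = {I * 1e12:+.2f} pA")
```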
Moreover, the agreement also suggests that $1$ mA of $I_H$ gives rise to a thermal bias $\Delta T$ of several Kelvin between the source and drain leads. Conclusions =========== In summary, we have reported measurements of a strongly nonlinear thermocurrent in a QD. By comparing our measurements to simulation results, we show that the nonlinear behavior can be fully explained in terms of the QD’s energy-dependent transport properties [@Sierra2014]. This is in contrast to earlier experiments [@Svensson2013] where this behavior was masked by effects that can also be explained by the overall heating of the device. Our results were enabled by use of a novel heating technique [@Gluschke2014] that allows the application of very large $\Delta T$ across a nanoscale device with minimal overall heating of the sample space, even at low temperatures. The ability demonstrated here opens a wide range of quantum thermoelectric experiments in mesoscopic systems. We thank Sebastian Lehmann for the TEM image in Fig.\[fig:1\]a. This work was supported by the People Programme (Marie Curie Actions) of the European Union’s Seventh Framework Programme (FP7-People-2013-ITN) under REA Grant agreement no. 608153, by the Swedish Energy Agency (Project P38331-1), by the Swedish Research Council (Project 621-2012-5122) and by NanoLund.
{ "pile_set_name": "ArXiv" }
--- author: - Hampus Wikmark - Chen Guo - Jan Vogelsang - 'Peter W. Smorenburg' - 'Hélène Coudert-Alteirac' - Jan Lahl - Jasper Peschel - Piotr Rudawski - Hugo Dacasa - Stefanos Carlström - Sylvain Maclot - 'Mette B. Gaarde' - Per Johnsson - 'Cord L. Arnold' - 'Anne L’Huillier' title: 'Spatio–temporal coupling of attosecond pulses' --- Electromagnetic waves are usually mathematically described by a product of purely spatial and purely temporal terms. This approximation often fails for broadband femtosecond laser pulses \[see [@AkturkJO2010] and references therein\] and spatio-temporal couplings need to be considered. Spatio–temporal couplings for visible or infrared light may be introduced by refractive and dispersive elements, such as lenses, gratings or prisms. The noncollinear amplification in optical parametric crystals may also potentially lead to spatio–temporal couplings, and it is important to develop characterization methods to measure and reduce their effects [@MirandaOL2014; @ParienteNP2016; @HarthJO2017]. In some cases, these couplings may be advantageously used, as, for example, demonstrated by Vincenti and Quéré for the so–called “lighthouse” effect [@VincentiPRL2012; @KimNP2013; @LouisyO2015]. The shortest light pulses, generated by high-order harmonic generation (HHG) in gases, are in the extreme ultraviolet (XUV)/ soft X-ray region and in the range of 100 as [@SansoneScience2006; @GoulielmakisScience2008; @LiNC2017; @GaumnitzOE2017], with bandwidths of a few tens or even hundreds of eVs [@PopmintchevScience2012; @CousinPRX2017]. These pulses are generated in a three-step process which was proposed at the beginning of the 1990’s [@SchaferPRL1993; @CorkumPRL1993]. When an atom is exposed to a strong laser field, an electron in the ground state can tunnel through the atomic potential bent by the laser field, propagate in the continuum, and recombine back to the ground state when (and if) returning close to the ionic core. In this process, an XUV photon is emitted, with energy equal to the ionization energy plus the electron kinetic energy at return. Two main families of trajectories leading to the same photon energy can be identified. They are characterized by the “short” or “long” time of travel of the electron in the continuum [@LewensteinPRA1995; @BelliniPRL1998]. Interferences of attosecond pulses emitted at each laser half-cycle leads to a spectrum of odd-order harmonics. The investigation of spatio–temporal coupling of attosecond pulses requires measurements of their spatial properties, as a function of time or, equivalently, frequency. Wavefronts of high-order harmonics have been measured by several groups, using different techniques such as Spectral Wavefront Optical Reconstruction by Diffraction (SWORD) [@FrumkerOL2009; @LloydSR2016; @JohnsonScA2018], lateral shearing interferometry [@AustinOL2011], point-diffraction interferometry [@LeeOL2003] and Hartmann diffraction masks [@ValentinJOSA2008; @FreisemOE2018]. In particular, Frumker et al. [@FrumkerOE2012] pointed out that the variation of wavefront and intensity profile with harmonic order leads to spatio–temporal coupling of the attosecond pulses, with temporal properties depending on where they are measured. The spatial and spectral properties of high-order harmonics strongly depend on the geometry of the interaction, and in particular on whether the gas medium in which the harmonics are generated, is located before or after the focus of the driving laser beam [@SalieresPRL1995]. 
The asymmetry between “before” and “after” can be traced back to the phase of the emitted radiation, which is equal to that of the incident laser field multiplied by the process order, as in any upconversion process, plus the dipole phase which is accumulated during the generation and mostly originates from electron propagation in the continuum. While the former is usually antisymmetric relative to the laser focus, the latter depends on the laser intensity and is therefore symmetric [@BalcouPRA1997; @AustinOL2011]. The total phase and thus the divergence properties are different before and after the laser focus, leading to a strong dependence of the spatio–temporal properties of the harmonic radiation on the generation conditions. In some conditions, harmonics can be emitted with a flat wavefront [@AustinOL2011] or even as a converging beam [@Quintard2017; @Quintard2018]. Another phenomenon leading to an asymmetry of HHG with respect to the generation conditions is ionization-induced reshaping of the fundamental field, which depends on whether the field is converging or diverging when entering the gas medium [@MiyazakiPRA1995; @TamakiPRL1999; @LaiOE2011; @JohnsonScA2018]. In the present work, we show that the frequency components of attosecond pulses generated by HHG in gases have different divergence properties, which depend on the geometry of the interaction and in particular on where the generating medium is located relative to the laser focus. In some conditions, the position of the focus and divergence strongly vary with frequency, leading to chromatic aberrations, as sketched in the inset in Fig. \[fig:coupling\], similar to the effect that a chromatic lens has on broadband radiation[@BorOL1989; @BorOC1992]. Any imaging optical component \[see Fig. \[fig:coupling\]\] will focus the frequency components of the attosecond pulses at different locations, resulting in spatio-temporal couplings. Depending on the position where the pulses are characterized or utilized, they will have different central frequencies, pulse durations and spatial widths. We develop an analytical model based on an analytical expression for the dipole phase [@GuoJPBAMOP2018] combined with traditional Gaussian optics to predict the radius of curvature, position of focus and divergence of the two trajectory contributions to HHG. This model is validated using numerical simulations of HHG [@LhuillierPRA1992] for both thin and thick generating media. We also present experimental measurements of the harmonic divergence as a function of position of generation relative to the laser focus. Finally, we discuss the implications of our results for the focusing of broadband attosecond pulses. Analytical expression of the dipole phase {#sec:phase .unnumbered} ========================================= The single atom response of HHG is well described by an approximate solution of the time-dependent Schrödinger equation for an atom in a strong laser field, called the Strong-Field Approximation (SFA) [@LewensteinPRA1994]. This theory leads to a simple analytical expression of the dipole phase, equal to $\alpha I$, where $\alpha$ depends on the harmonic order and on the trajectory contributing to HHG [@SalieresPRL1995; @LewensteinPRA1995; @VarjuJMO2005; @CarlstromNJP2016] and where $I$ is the laser intensity. This expression has been used in numerous investigations of the harmonic properties [@BelliniPRL1998; @ZairPRL2008; @CarlstromNJP2016; @Quintard2018]. 
Here, we utilize a more general analytical expression for the phase [@GuoJPBAMOP2018], based on the semi-classical description of attosecond pulse generation [@SchaferPRL1993; @CorkumPRL1993].

  ---------------------- ------------------------- --------- --------
  Symbol                 Brief description          (Cycle)   (fs)
  ---------------------- ------------------------- --------- --------
  $t_\mathrm{s}$         short, threshold           $0$       $0$
  $t_\mathrm{ps}$        short, threshold, model    $0.18$    $0.48$
  $t_\mathrm{cs}$        short, cutoff, model       $0.40$    $1.07$
  $t_\mathrm{c}$         cutoff                     $0.45$    $1.20$
  $t_{\mathrm{c}\ell}$   long, cutoff, model        $0.50$    $1.35$
  $t_{\mathrm{p}\ell}$   long, threshold, model     $0.69$    $1.85$
  $t_\ell$               long, threshold            $0.75$    $2.00$
  ---------------------- ------------------------- --------- --------

  : Return times for the short and long trajectories relative to the zero of the electric field. For the last column, a laser wavelength of 800 nm is used.\
  \[tab:parameters\_x\]

In this approximation, the second step of the process is described by solving Newton’s equation of motion for a free particle in the laser field. Figure \[fig:energy\] shows the frequency ($\Omega$) of the emitted XUV radiation as a function of electron return time for two different fundamental field intensities, indicated by the bright or faint colors. The frequency varies from $\Omega_\mathrm{p}$, corresponding to the ionization threshold ($\hbar\Omega_\mathrm{p}=I_\mathrm{p}$, $I_\mathrm{p}$ denoting the ionization energy and $\hbar$ the reduced Planck constant) to the cutoff frequency $\Omega_\mathrm{c}$ ($\hbar\Omega_\mathrm{c}= 3.17 U_\mathrm{p}+I_\mathrm{p}$). $U_\mathrm{p}$ denotes the ponderomotive energy, equal to $$U_\mathrm{p}= \frac{\alpha_{\scalebox{.5}{\textsc{FS}}} \hbar I \lambda^2}{2 \pi c^2 m},$$ where $\alpha_{\scalebox{.5}{\textsc{FS}}}$ is the fine structure constant, $m$ the electron mass, $c$ the speed of light and $\lambda$ the laser wavelength. The frequency variation can be approximated by piecewise straight lines as indicated by the black solid lines. After inversion from $\Omega(t)$ to $t(\Omega)$, for each straight line, we have $$t_i(\Omega) = t_{\mathrm{p}i}+ \frac{t_{\mathrm{c}i}-t_{\mathrm{p}i}}{\Omega_\mathrm{c}-\Omega_\mathrm{p}}(\Omega-\Omega_\mathrm{p}),$$ where $i=\mathrm{s},\ell$ refers to the electron trajectory (short or long) and $t_{\mathrm{p}i}$, $t_{\mathrm{c}i}$ are defined as indicated by the dashed lines in Fig. \[fig:energy\]. The values of $t_{\mathrm{p}i}$ and $t_{\mathrm{c}i}$, in both laser cycles and femtoseconds (at $\lambda=800$ nm), are summarized in Table \[tab:parameters\_x\]. We also indicate the return times for the short and long electron trajectories leading to the threshold frequency ($t_\mathrm{s}$, $t_\ell$) and the return time for the trajectory leading to the cutoff frequency ($t_\mathrm{c}$). Neglecting the frequency dependence of the time for tunneling and recombination, $t_i(\Omega)$ can be interpreted as the group delay of the emitted radiation. Its integral is the spectral phase $$\Phi_i(\Omega)=\Phi_i(\Omega_\mathrm{p})+t_{\mathrm{p}i}(\Omega-\Omega_\mathrm{p})+\frac{t_{\mathrm{c}i}-t_{\mathrm{p}i}}{\Omega_\mathrm{c}-\Omega_\mathrm{p}}\frac{(\Omega-\Omega_\mathrm{p})^2}{2}. \label{eq:phase}$$ As shown in Fig. \[fig:energy\], the return times $t_{\mathrm{p}i}$, $t_{\mathrm{c}i}$, and therefore the second term in Eq. \[\[eq:phase\]\] do not depend on laser intensity.
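The return times of Table \[tab:parameters\_x\] follow from this classical calculation. As a sanity check, the short script below (a minimal sketch in Python, not part of the original analysis; the scaled units and grid resolution are our own choices) integrates the free-electron motion in a monochromatic field, scans the birth phase, and recovers the $3.17\,U_\mathrm{p}$ cutoff together with a cutoff return time of about $0.45$ cycles after the field zero:

```python
import numpy as np

# Electron of charge -e born at rest at phase phi0 in E(t) = E0 cos(w t).
# In scaled units (lengths in eE0/(m w^2), velocities in eE0/(m w)):
#   v(phi) = sin(phi0) - sin(phi),
#   x(phi) = cos(phi) - cos(phi0) + sin(phi0) * (phi - phi0).
# Kinetic energy at return, in units of Up: 2 * (sin(phi_r) - sin(phi0))**2.

def first_return_phase(phi0, phi_span=4.0 * np.pi, n=20001):
    """First phase phi_r > phi0 at which the electron returns to x = 0 (None if it never does)."""
    phi = np.linspace(phi0 + 1e-6, phi0 + phi_span, n)
    x = np.cos(phi) - np.cos(phi0) + np.sin(phi0) * (phi - phi0)
    crossings = np.where(np.diff(np.sign(x)) != 0)[0]
    if len(crossings) == 0:
        return None
    i = crossings[0]   # refine the zero crossing by linear interpolation
    return phi[i] - x[i] * (phi[i + 1] - phi[i]) / (x[i + 1] - x[i])

birth = np.linspace(1e-3, np.pi / 2 - 1e-3, 400)   # birth phases between field crest and field zero
returns, energies = [], []
for phi0 in birth:
    phi_r = first_return_phase(phi0)
    if phi_r is not None:
        returns.append(phi_r)
        energies.append(2.0 * (np.sin(phi_r) - np.sin(phi0)) ** 2)  # in units of Up

returns, energies = np.array(returns), np.array(energies)
i_max = np.argmax(energies)
# Return times counted, as in the table, from the zero of the electric field (phase pi/2):
t_cut = (returns[i_max] - np.pi / 2) / (2 * np.pi)
print(f"cutoff kinetic energy ~ {energies[i_max]:.2f} Up   (3.17 Up expected)")
print(f"cutoff return time    ~ {t_cut:.2f} cycles after the field zero (table: 0.45)")
```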
Using $\Omega_\mathrm{c}-\Omega_\mathrm{p}=3.17U_\mathrm{p}/\hbar$, the coefficient in the third term can be written as $$\frac{t_{\mathrm{c}i}-t_{\mathrm{p}i}}{\Omega_\mathrm{c}-\Omega_\mathrm{p}}= \frac{2\gamma_i}{I},$$ where $$\gamma_i=\frac{(t_{\mathrm{c}i}-t_{\mathrm{p}i})\pi c^2 m}{3.17 \alpha_{\scalebox{.5}{\textsc{FS}}}\lambda^2}.$$ In this classical calculation, $\Phi_i(\Omega_\mathrm{p})$ is equal to zero for the short trajectory, while it is proportional to the laser intensity for the long one: $\Phi_{\ell}(\Omega_\mathrm{p})=\alpha_{\ell} I$. The value of $\alpha_{\ell}$ can be obtained numerically within the classical approach used in this work [@Guo2018], and is found to be close to that given within the SFA, equal to $4\pi^2\alpha_{\scalebox{.5}{\textsc{FS}}}/m\omega^3$ [@LewensteinPRA1995; @CarlstromNJP2016]. Table \[tab:parameters\] indicates the parameters needed to describe $\Phi_i(\Omega)$ for 800 nm radiation.

  --------------------- ---------------------------------------------------------
  $\gamma_\mathrm{s}$   $1.03\times 10^{-18} \,\mathrm{s}^2\mathrm{W cm}^{-2}$
  $\gamma_\ell$         $-0.874\times 10^{-18}\,\mathrm{s}^2\mathrm{W cm}^{-2}$
  $\alpha_\mathrm{s}$   $0$
  $\alpha_\ell$         $-2.38\times 10^{-13} \,\mathrm{W}^{-1}\mathrm{cm}^{2}$
  --------------------- ---------------------------------------------------------

  : **Parameters for the short and long trajectories at 800 nm**
  \[tab:parameters\]

The dipole phase can be approximated for the two families of trajectories by the expansion: $$\Phi_i(\Omega)=\alpha_iI+t_{\mathrm{p}i}(\Omega-\Omega_\mathrm{p})+ \frac{\gamma_i}{I}(\Omega-\Omega_\mathrm{p})^2. \label{eq:phasef}$$ The present expression gives very similar results to, e.g., the numerical results presented in [@VarjuJMO2005], obtained by solving saddle point equations within the SFA, with the advantage of being analytical.

Wavefront and spatial width of XUV radiation {#wavefront-and-spatial-width-of-xuv-radiation .unnumbered}
============================================

We now use this analytical expression for the dipole phase together with traditional Gaussian optics to predict the radius of curvature, position of focus and divergence of the two trajectory contributions to HHG. A similar derivation has been proposed, independently, by Quintard et al. [@Quintard2017; @Quintard2018] with, however, a different analytical formulation of the dipole phase. We neglect the influence of propagation, considering an infinitely thin homogeneous gas medium, and assume that the fundamental field is Gaussian, with intensity $I(r,z)$, width $w(z)$, radius of curvature $R(z)$ and peak intensity $I_0$, $z$ denoting the coordinate along the propagation axis and $r$ the radial coordinate. The focus position is $z=0$ and the waist $w_0=w(0)$. Considering only the contribution of one trajectory $i$, the phase of the $q^{\mathrm{th}}$ harmonic field can be approximated by $$\Phi_q(r,z)= q\phi(r,z)+ \Phi_i(r,z).$$ The phase of the fundamental Gaussian beam is $\phi(r,z)=kz-\zeta(z)+kr^2/2R(z)$, where $k$ is the wavevector equal to $\omega/c$ and $\zeta(z)$ the Gouy phase [@SalehPhotonics2007]. This article is mainly concerned with the third term, giving the curvature of the beam. The dipole phase $\Phi_i(r,z)$ is given by Eq. (\[eq:phasef\]), for $I=I(r,z)$ and $\Omega=q\omega$, $\omega$ being the laser frequency. Omitting the second term in Eq.
(\[eq:phasef\]), which does not depend on intensity and therefore on space, $\Phi_i(r,z)$ can be expressed as $$\Phi_i(r,z)= \frac{\alpha_iI_0 w_0^2}{w^2(z)}e^{-\frac{2r^2}{w^2(z)}}+\frac{\gamma_i (\Omega-\Omega_p)^2w^2(z)}{I_0 w_0^2}e^{\frac{2r^2}{w^2(z)}}. \label{eq:phiq}$$ We use a Taylor expansion close to the center of the beam to approximate $\Phi_i(r,z)$ \[Eq. (\[eq:phiq\])\]. To determine the harmonic wavefront, we only keep the terms proportional to $r^2$ in Eq. (\[eq:phiq\]), to which we add the $r^2$-dependent contribution from the fundamental, equal to $qkr^2/2R(z)$. The resulting $r^2$-dependent contribution to the phase of the harmonic field can be written as $qkr^2/2R_i$, with $$\frac{1}{R_i}=\frac{1}{R(z)}-\frac{4\alpha_i I_0 w_0^2 c}{ w^4(z)\Omega}+\frac{4\gamma_i(\Omega-\Omega_p)^2c}{I_0 w_0^2\Omega}. \label{eq:radius}$$ For simplicity of the notations, we omit to explicitly indicate the $z$ dependence of $R_i$. The curvature of the harmonic field is equal to that of the fundamental (first term) plus that induced by the dipole phase. The second term is only present for the long trajectory. This equation outlines the dependence of the XUV radiation wavefront on frequency ($\Omega$), electron trajectory (i), intensity at focus ($I_0$) and generation position ($z$). Eq. (\[eq:radius\]) is illustrated in Fig. \[fig:radius\](a), representing the wavefronts induced to the harmonic by the fundamental (black) and due to the dipole phase for the short trajectory (green) as a function of the generation position. The fundamental wavefront changes from convergent to divergent through the focus, while that induced by the dipole phase is always divergent and independent of the generation position ($z$). Using the reduced coordinate $Z=z/z_0$, where $z_0=\pi w_0^2/\lambda$ is the fundamental Rayleigh length, Eq. (\[eq:radius\]) can be written as $$\frac{z_0}{R_i}=\frac{1}{Z+1/Z}-\frac{\eta_i}{(1+Z^2)^2} +\mu_i, \label{eq:redrad}$$ where $\eta_i=2\alpha_i I_0 /q$ and $\mu_i=2 \gamma_i \omega^2 (q-q_\mathrm{p})^2/qI_0$ are dimensionless quantities ($q_\mathrm{p}=\Omega_\mathrm{p}/\omega$). For the short trajectory, since $\alpha_\mathrm{s}=0$, the positions where the radius of curvature diverges, corresponding to a flat phase front, can be calculated analytically by solving a second-order equation in $Z$, $$Z^2+\frac{Z}{\mu_\mathrm{s}}+1=0. \label{z2}$$ For $\mu_\mathrm{s}\leq 0.5$, the solutions to this equation are real and the radius of curvature diverges at $$Z_\pm=-\frac{1}{2\mu_\mathrm{s}}\pm\sqrt{\frac{1}{4\mu^2_\mathrm{s}}-1}.$$ This discussion is illustrated graphically in Fig. \[fig:radius\](b) for the 23^rd^ harmonic of 800 nm radiation generated in argon, with $I_0=3 \times 10^{14}\mathrm{W}\,\mathrm{cm}^{-2}$. In these conditions, we have $\eta_\mathrm{s}=0$, $\mu_\mathrm{s}=0.253$, $\eta_\ell=-6.38$ and $\mu_\ell=-0.215$. Fig. \[fig:radius\](b) presents the radius of curvature in reduced units $R_i/z_0$ for the short (blue) and long (red) trajectory contributions. Over the range shown in the figure, between $-2z_0$ and $z_0$, $R_\mathrm{s}/z_0$, represented by the blue curve, diverges at $Z_+=-0.272$. The other solution of Eq. (\[z2\]) is $Z_-=-3.68$ which is outside the scale of the figure. For the long trajectory, the radius of curvature, represented by the red solid line, diverges at $Z \simeq -1.4$. This behavior is quite general for all harmonics, as discussed in the last section of this article. 
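Both the entries of Table \[tab:parameters\] and the flat-wavefront positions quoted above can be verified numerically. The short sketch below (Python; the physical constants are standard values and the unit conversions are ours, so residual differences at the percent level only reflect the rounding of the tabulated return times):

```python
import numpy as np

alpha_fs, m_e, c = 1 / 137.036, 9.109e-31, 2.998e8   # fine structure constant, electron mass, c
lam   = 800e-9
omega = 2 * np.pi * c / lam

# gamma_i = (t_ci - t_pi) pi c^2 m / (3.17 alpha_FS lambda^2), with return times in fs (Table 1);
# the factor 1e-4 converts W m^-2 into W cm^-2.
def gamma(t_c_fs, t_p_fs):
    return (t_c_fs - t_p_fs) * 1e-15 * np.pi * c**2 * m_e / (3.17 * alpha_fs * lam**2) * 1e-4

print(f"gamma_s  ~ {gamma(1.07, 0.48):+.3e} s^2 W cm^-2   (Table 2:  1.03e-18)")
print(f"gamma_l  ~ {gamma(1.35, 1.85):+.3e} s^2 W cm^-2   (Table 2: -0.874e-18)")

# |alpha_l| ~ 4 pi^2 alpha_FS / (m omega^3) within the SFA, converted to cm^2 W^-1.
alpha_l = 4 * np.pi**2 * alpha_fs / (m_e * omega**3) * 1e4
print(f"|alpha_l| ~ {alpha_l:.2e} cm^2 W^-1               (Table 2:  2.38e-13)")

# Flat-wavefront positions of the 23rd harmonic (short trajectory), Eq. (z2),
# with mu_s = 0.253 as quoted in the text:
mu_s = 0.253
Z_plus  = -1 / (2 * mu_s) + np.sqrt(1 / (4 * mu_s**2) - 1)
Z_minus = -1 / (2 * mu_s) - np.sqrt(1 / (4 * mu_s**2) - 1)
print(f"Z_+ ~ {Z_plus:.3f}, Z_- ~ {Z_minus:.2f}   (text: -0.272 and -3.68)")
```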
To estimate in a simple way the spatial width of the harmonic field at the generation position, we assume that its amplitude is proportional to the fundamental amplitude to a power $p$. This exponent is quite constant in the plateau region (typically of the order of 4) [@Quintard2018; @DurfeePRL1999] and increases in the cutoff region. The harmonic width is then simply equal to $W=w(z)/\sqrt{p}$. (Here as well, we do not write the $z$-dependence of $W$ explicitly.)

Focus position and beam waist {#focus-position-and-beam-waist .unnumbered}
=============================

Knowing the beam radius of curvature and width at a given position $z$, it is a simple exercise within Gaussian optics to determine the position of the focus and the corresponding waist \[see e.g. [@SalehPhotonics2007]\]. The position of focus relative to the generation position $z$ is given by $$z_i=-\frac{R_i}{1+(\lambda_q R_i /\pi W^2)^2}, \label{zi}$$ with $\lambda_q=\lambda/q$. Using reduced coordinates relative to the fundamental Rayleigh length, Eq. \[\[zi\]\] can be written as $$\frac{z_i}{z_0}=-\frac{R_i}{z_0}\left(1+\left[\frac{p R_i}{ q z_0(1+Z^2)}\right]^2\right)^{-1}.$$ The corresponding waist at focus is $$w_i=\frac{W}{\sqrt{1+(\pi W^2/\lambda_q R_i)^2}},$$ or, relative to the fundamental waist, $$\frac{w_i}{w_0}=\left(\frac{1+Z^2}{p}\right)^{\frac{1}{2}}\left(1+\left[\frac{qz_0 (1+Z^2)}{ pR_i}\right]^2\right)^{-\frac{1}{2}}. \label{wi}$$

![Position of the focus of the 23^rd^ harmonic relative to the generation position (a) and far-field divergence (b) as a function of the generation position relative to the laser focus. The results for the short and long trajectory are indicated by the blue and red curves respectively. The dashed line corresponds to the position $Z_+$ where the radius of curvature for the short trajectory diverges. The color plots indicate results of a calculation based on the solution of the TDSE, where HHG is assumed to occur in an infinitely thin plane. In (a), the on-axis intensity at a certain position along the propagation axis is plotted as a function of generation position on a logarithmic scale. Three different focal regions, labeled I, II and III, can be identified. In (b), the radial intensity calculated at a long distance from the generation position, and normalized to the fundamental radial intensity at the same distance, is indicated.[]{data-label="fig:focus_tdse"}](TDSE.pdf){width="0.9\linewidth"}

Fig. \[fig:focus\_tdse\] shows (a) the position of the harmonic focus ($z_i/z_0$) relative to that of the generation position ($z/z_0$), and (b) the normalized far-field divergence $\theta_i/\theta_0=w_0/w_i$ for the two trajectories, short (blue solid line) and long (red solid line). The color plots will be discussed in the next Section. The divergence of the fundamental $\theta_0$ is defined as $\lambda/\pi w_0$. Let us emphasize that the zero of the horizontal scale is the laser focus, while in (a), zero on the vertical scale means that the focus of the harmonic field coincides with the generation position. The focus position and divergence strongly vary with $z$, and quite differently for the two trajectories. In both cases, the focus position changes sign and the divergence goes through a minimum when the radius of curvature goes to infinity (see Fig. \[fig:radius\]). For the short trajectory and $Z\leq Z_+$, the focus is real and it is located after the generation position ($z_i \geq 0$) along the propagation direction.
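These curves, and the rest of the behaviour of Fig. \[fig:focus\_tdse\] discussed below, follow directly from Eqs. (\[eq:redrad\]), (\[zi\]) and (\[wi\]). A minimal numerical sketch (Python; the values of $\eta_i$, $\mu_i$ and $p=4$ are those quoted above for the 23$^\textrm{rd}$ harmonic, and the sample positions are arbitrary):

```python
import numpy as np

q, p = 23, 4                              # harmonic order and intensity exponent
eta = {"short": 0.0,   "long": -6.38}     # 2 alpha_i I0 / q        (values quoted in the text)
mu  = {"short": 0.253, "long": -0.215}    # 2 gamma_i w^2 (q-q_p)^2 / (q I0)

def harmonic_focus(Z, traj):
    """Focus position z_i/z0 and normalized divergence theta_i/theta_0 at generation position Z = z/z0."""
    inv_R = 1.0 / (Z + 1.0 / Z) - eta[traj] / (1 + Z**2) ** 2 + mu[traj]          # z0/R_i
    R = 1.0 / inv_R                                                               # R_i/z0
    zi = -R / (1 + (p * R / (q * (1 + Z**2))) ** 2)                               # z_i/z0
    wi = np.sqrt((1 + Z**2) / p) / np.sqrt(1 + (q * (1 + Z**2) / (p * R)) ** 2)   # w_i/w0
    return zi, 1.0 / wi                                                           # theta_i/theta_0 = w0/w_i

for Z in (-1.0, -0.272, 0.5):
    zs, ts = harmonic_focus(Z, "short")
    zl, tl = harmonic_focus(Z, "long")
    print(f"Z = {Z:+.3f}: short z_i/z0 = {zs:+.2f}, theta/theta0 = {ts:.1f} | "
          f"long z_i/z0 = {zl:+.2f}, theta/theta0 = {tl:.1f}")
```

For example, at $Z=-1$ the short-trajectory focus is real and lies a few Rayleigh lengths downstream of the generation position, while for $Z>Z_+$ both foci are virtual and the long-trajectory divergence is several times larger than the short-trajectory one, in line with the discussion that follows.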
The negative curvature of the convergent fundamental beam is larger in magnitude than the positive curvature induced by the dipole phase and the harmonics are generated as a convergent beam [@Quintard2018]. When $Z > Z_+$, the focus is virtual and located before the generation position. Two cases can be considered: When $0>Z>Z_+$, i.e. when the generation position is before the IR focus, the negative curvature of the fundamental beam is smaller in magnitude than the positive curvature induced by the dipole phase: The harmonics are generated as a divergent beam. When $Z \geq 0$, both curvatures are positive and the harmonics are generated as a divergent beam. The divergence is smallest in the region close to $Z_+$. The same reasoning applies for the long trajectory contribution, except that $Z_+$ is now replaced by $Z\approx -1.4$ (see Fig. \[fig:radius\]). In this case, in the region with enough intensity for HHG, i.e. $|Z| \leq 1.5$, corresponding to $I=9 \times 10^{13}$ W$\,$cm$^{-2}$, the harmonic focus is located just before the generation position and the divergence is much larger than that of the short trajectory contribution. At the positions where the radius of curvature diverges (indicated by the dashed line in Fig. \[fig:focus\_tdse\] for the short trajectory), the harmonics are generated with a flat wavefront and with a large focus (low divergence). In contrast, harmonics generated far away from the divergence minima will inherit a curvature from the fundamental and the dipole contribution which corresponds to a significantly smaller beam waist in the real or virtual focus and thus in a significantly larger divergence. The variation of the divergence with generation position is due partly to the dipole phase contribution, but also to the mismatch between the harmonic order $q$ and the amplitude variation here described by a power law with exponent $p=4$ \[See Eq. (\[wi\])\]. Numerical calculations {#numerical-calculations .unnumbered} ====================== To validate the Gaussian model presented in this work, we performed calculations based on tabulated single-atom data obtained by solving the TDSE for a single active electron in argon. The time-dependent dipole response was calculated for $\simeq$ 5000 intensity points. This allows us, for each harmonic frequency, to precisely unwrap the amplitude and phase variation as a function of intensity, and thus to accurately describe the interferences of the trajectories. The complex electric field distribution at a given harmonic frequency is obtained by integrating in time the polarization induced by the fundamental field in an arbitrarily thin sheet of homogeneous argon gas. The field is then propagated to different positions relative to the generation position by calculating the diffraction integral in Fresnel approximation using Hankel transforms. The influence of ionization is not taken into account. This procedure is repeated for different gas target positions relative to the laser focus. We use a fundamental wavelength of 800nm, a pulse duration of 45fs, a peak intensity of $3\times 10^{14}$W cm$^{-2}$ and a fundamental waist size $w_0=350\,\mu$m. The corresponding Rayleigh length is equal to 0.48 m. ![Results of propagation calculations for a (a) 5.4 mm, (b) 30 mm and (c) 60 mm-long gas cell. The on-axis intensity at a certain position along the propagation axis axis is plotted as a function of generation position on a logarithmic scale. 
The results of the Gaussian model are indicated by the blue and red solid lines for the short and long trajectories and are identical to those of Fig. \[fig:focus\_tdse\](a).[]{data-label="fig:prop_tdse"}](TDSE_prop.pdf){width="0.9\linewidth"} Figure\[fig:focus\_tdse\] (a) presents a color plot of the 23$^\mathrm{rd}$ harmonic on-axis intensity for different generation positions (horizontal axis). The regions with the warmest colors (i.e. towards red) represent the focal regions. The small regions with high peak intensity (dark red, like that labeled II) correspond to the smallest focus. The agreement between the numerical predictions and those of the Gaussian model is striking. When $Z \leq Z_+$, the 23$^\textrm{rd}$ harmonic is focused after the generation position (region I). When $Z \geq Z_+$, two focal regions can be identified, a very thin one close to the generation position \[region II\], and a larger one at larger negative $z_i$ \[region III\]. The agreement with the results of the Gaussian model allows us to interpret the main contribution to these regions: short trajectory for I and III and long trajectory for II. The horizontal interference structures observed between I and II are a manifestation of quantum path interferences [@ZairPRL2008]. The harmonic radiation often exhibits two foci, due to the two trajectories. While the focus position for the long trajectory contribution remains close to (just before) the generation plane, the focus position of the short trajectory contribution strongly depends on the generation position. The color plot in Fig. \[fig:focus\_tdse\](b) is the 23$^\textrm{rd}$ harmonic radial intensity at a distance of $50z_0$, as a function $r/50z_0\theta_0$ (vertical scale) for different generation positions. This distance is long enough to reach the far field region, so that the radial intensity is proportional to the far field divergence. As for the focus position, the comparison with the prediction of the Gaussian model allows us to distinguish the contribution of the two trajectories, with quite different divergence, especially for $|Z| \leq 1$. The red (blue) curves represent the $1/e^2$ divergence within the Gaussian model for the long (short) trajectories. The blue-green colored regions in (b) can be attributed to the long trajectory while the red-yellow-bright green regions to the short trajectory. An important question is whether these results are still valid after propagation in a finite medium. We used the single atom data described previously as input in a propagation code based on the slowly-varying envelope and paraxial approximations [@LhuillierPRA1992]. We present in Fig. \[fig:prop\_tdse\] results obtained for a 5.4 mm (a), 30 mm (b) and 60 mm (c)-long homogeneous medium, using a 2 mbar gas pressure. While Fig. \[fig:prop\_tdse\](a) compares very well with the results shown in Fig. \[fig:focus\_tdse\](a), as expected, Fig. \[fig:prop\_tdse\](b) and (c) shows clear effects of propagation, related to ionization-induced defocusing of the fundamental laser beam. In fact, two different phase matching regimes appear: one similar to what is present in absence of propagation, and which agrees well with the predictions of the Gaussian model \[Compare regions I,III in Fig. \[fig:prop\_tdse\](a,b)\] and a second one, which also follows a similar model but for a fundamental focus moved to the left \[See regions I’,III’ in Fig. 
\[fig:prop\_tdse\](b)\], as expected for a fundamental beam that is defocused due to partial ionization of the medium [@MiyazakiPRA1995; @TamakiPRL1999; @LaiOE2011; @JohnsonScA2018]. To examine in more details the effect of propagation goes beyond the scope of this article and will be discussed in future work. Experimental divergence measurements {#experimental-divergence-measurements .unnumbered} ==================================== Experiments were performed at the intense XUV beamline of the the Lund Laser Centre [@ManschwetusPRA2016; @CoudertAS2017], using a multi-terawatt 45-fs Titanium-Sapphire laser operating at 10 Hz repetition rate. The beam was (slightly) apertured to 27 mm and focused using a spherical mirror with focal length $f=8$ m. In addition, a deformable mirror was used in order to correct for the laser wavefront aberrations and adjust the focal length. The harmonics were generated in a 60 mm gas cell filled with argon by a pulsed valve. We measured the divergence of the emitted harmonics using a flat field XUV spectrometer with an entrance slit located approximately 6 m after the generation. For each harmonic, the width was estimated by fitting a Gaussian function onto the transverse (spatial) direction of the spectrometer. The IR focus was moved relative to the gas cell along the direction of propagation by changing the voltage of the actuators controlling the curvature of the deformable mirror. The limits of the scan were imposed by the decrease of the harmonic yield, which is slightly asymmetric relative to the laser focus [@SalieresPRL1995]. \[fig:divergenceExperiment\] The widths of the 13$^\textrm{th}$ to 19$^\textrm{th}$ harmonics are shown in Fig. \[fig:divergenceExperiment\] (a), and compared with the predictions of the Gaussian model in (b), on the same vertical and horizontal scales. The harmonic widths in (b) were calculated as $(z_i+L)\theta_i$, where $L=6$ m is the distance from the gas cell to the measurement point. A laser waist of 450 $\mu$m and an intensity of $3.5 \times 10^{14}$ W cm$^{-2}$ were assumed. The general trends observed in the experiment are reproduced by the calculations. The harmonic width generally increases as the generation position moves towards (and beyond) the laser focus along the propagation direction, and this increase generally becomes steeper with harmonic order. Differences between the experiments and the predictions of the Gaussian model could be attributed to propagation/ionization effects \[see Fig. \[fig:prop\_tdse\] (b)\], non-Gaussian fundamental beam, etc. Chromatic aberrations of attosecond pulses {#chromatic-aberrations-of-attosecond-pulses .unnumbered} ========================================== Finally, we study the variation of the focus position and beam waist over a large spectral bandwidth. To obtain a broad spectral region, we consider generation of high-order harmonics in neon atoms. HHG spectra obtained in Ne [@MacklinPRL1993] are broader and flatter than those in Ar, where a strong variation due to a Cooper minimum is observed around 45 eV. Fig. \[fig:focus\_harm\] shows the predictions of the Gaussian model for the 31^st^ to the 71^st^ harmonics of 800 nm radiation, at an intensity of 5 $\times 10^{14}$ W$\,$cm$^{-2}$. We only consider here the contribution from the short trajectory. ![Position of harmonic focus $z_i$ (a) and waist (b) as a function of generation position for harmonics 31 to 71. The different harmonic orders are indicated by different rainbow color codes, from brown (31) to dark blue (71). 
The insets show harmonic spectra at four different positions along $z_i$, indicated from the top to the bottom by the numbered circles, for the generation position marked by the dashed line in (a).[]{data-label="fig:focus_harm"}](focus_waist_with_spectra_inset_2.pdf){width="0.9\linewidth"}

The variation of the focus position as a function of generation position strongly depends on the process order. This is due to the frequency dependence of Eq. \[\[eq:radius\]\], and in particular depends on whether the radius of curvature diverges. Since $\mu_\mathrm{s}$ increases with frequency, the two zeros $Z_\pm$ of Eq. \[\[eq:radius\]\] move closer to each other, as is clear in Fig. \[fig:focus\_harm\] (a) by comparing, e.g. harmonics 41 and 43 ($Z_\pm$ correspond to the maxima in the figure). At a certain frequency, corresponding to harmonic 45 in Fig. \[fig:focus\_harm\] (a), $-1/\mu_\mathrm{s}$ becomes tangent to $R(z)$ at $z=-z_0$ (see also Fig. \[fig:radius\]). Above this frequency, the radius of curvature does not diverge and remains negative. The harmonic focus position is then always located before the generation position. As $-1/\mu_\mathrm{s}\to 0$ when the frequency increases, the focus position becomes largely independent of the generation position. In this region, the harmonics are much more focused, as shown by the blue lines in Fig. \[fig:focus\_harm\] (b). To estimate the consequence of these spatial properties on the spectral characteristics of the attosecond pulses [@FrumkerOE2012], we examine the variation of the on-axis spectrum at different positions (on the vertical axis), for the generation position indicated by the dashed line. This is equivalent to examining the properties of the generated radiation after refocusing as illustrated in Fig. \[fig:coupling\], as a function of a “detection position” in the focal region. We here assume equal strength of the generated harmonics, but account for the frequency variation in beam waist size and position \[Fig. \[fig:focus\_harm\]\]. The harmonic spectra shown in the insets are found to be strongly dependent on the “detection position”, with, in some cases, strong bandwidth reduction and displacement of the central frequency.

Spatio–temporal coupling of attosecond pulses {#spatiotemporal-coupling-of-attosecond-pulses .unnumbered}
=============================================

Finally, we estimate the influence of the chromatic aberrations on the temporal properties of the attosecond pulses. We consider a flat spectrum between harmonics 31 and 71 at the generation position indicated by the dashed line in Fig. \[fig:focus\_harm\]. We propagate the harmonics and coherently add them to obtain the resulting attosecond pulse train in space and time at different “observation” positions. We take into account the different focus positions and divergences of the frequency components of the attosecond pulses, as well as the so-called “attosecond” positive chirp according to the blue curve in Fig. \[fig:energy\]. Fig. \[fig:duration\] shows the spectral (a) and temporal (b) intensity (in color) of the generated attosecond pulse on-axis as a function of the axial (observation) position relative to the generation position, here equal to $z=-0.75$ (dashed line in Fig. \[fig:focus\_harm\]). In these conditions, the central frequency and pulse duration of the attosecond pulse vary distinctly, indicating strong spatio–temporal couplings.
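Before turning to the temporal structure, the chromatic behaviour just described can be illustrated with the Gaussian model itself. In the sketch below (Python), the short-trajectory focus position is evaluated for a few harmonic orders at the generation position $Z=-0.75$ used in the following (interpreted here in units of $z_0$); $\gamma_\mathrm{s}$ is taken from Table \[tab:parameters\], while the neon ionization energy (21.56 eV) and the exponent $p=4$ are assumptions on our part:

```python
import numpy as np

# Chromatic spread of the short-trajectory harmonic foci for generation in neon
# at 5e14 W/cm^2, at a fixed generation position Z = z/z0 = -0.75.
c, lam  = 2.998e8, 800e-9
omega   = 2 * np.pi * c / lam
I0      = 5e14                    # peak intensity, W/cm^2
gamma_s = 1.03e-18                # s^2 W cm^-2, from Table 2
q_p     = 21.56 / 1.55            # assumed Ne ionization threshold in units of the photon energy
p, Z    = 4, -0.75

for q in range(31, 72, 10):
    mu = 2 * gamma_s * omega**2 * (q - q_p) ** 2 / (q * I0)
    inv_R = 1 / (Z + 1 / Z) + mu                       # eta_s = 0 for the short trajectory
    R = 1 / inv_R
    zi = -R / (1 + (p * R / (q * (1 + Z**2))) ** 2)    # focus position in units of z0
    print(f"H{q}: mu_s = {mu:.2f}, z_i/z0 = {zi:+.2f}")
```

The low orders come to a real focus well after the generation position, whereas the high orders form a virtual focus before it, which is the chromatic aberration discussed below.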
In particular, the high frequency components (high harmonic orders) form a tight virtual focus before the generation position, while the low frequency components have a more loose and real focus behind \[see Fig. \[fig:duration\](a)\]. The highest intensity is obtained before the generation position, while the shortest pulse is obtained afterwards, as follows from Fig. \[fig:duration\](b). The attosecond pulse is not the shortest at the generation position, where the spectral bandwidth is the largest, because the attosecond chirp stretches the pulse in time. Fig.\[fig:duration\](a,b) strikingly show that the shortest pulse and the highest intensity of the attosecond pulse are obtained in different positions, illustrating the difficulty of re-focusing high-order harmonics, particularly for applications requiring high intensity. Finally, Fig. \[fig:duration\](c,d) shows the spatio-temporal intensity profiles of the attosecond pulse at the positions where is it spectrally broadest (c, ) and where it is most intense (d, ). The difference between the two quantities is a signature of the strong spatio-temporal couplings of the generated attosecond pulses. These couplings, here studied at $z=-0.75$ (dashed line in Fig. \[fig:focus\_harm\]), strongly depend on the position of generation. Conclusion {#conclusion .unnumbered} ========== In this work, we examine the focusing properties of high-order harmonics generated in gases. We develop a simple Gaussian optics model based on an analytical expression of the frequency- and intensity-dependent dipole phase. This model allows us to predict the focus and divergence of the two trajectory contributions to HHG. We validate the predictions of the model by numerical calculations based on solving the time-dependent Schrödinger equation for the single atom response and propagation equations for the response from the macroscopic medium. Experimental divergence measurements performed at the intense XUV beamline of the Lund Laser Centre show similar trends as those predicted by the model. We also discuss the consequences of the fact that the harmonics have different focus positions and beam waists on the resulting spectra and pulse durations. The effects investigated in the present work have a strong impact on applications of attosecond pulses, requiring a small focal spot (e.g. in order to reach a high XUV intensity) over a broad bandwidth or during a short (attosecond) duration. These spatio-temporal couplings may be reduced by locating the generation medium after the laser focus and/or by minimizing the influence of the dipole phase, using a shaped fundamental beam [@BoutuPRA2011] or generating in waveguides (capillaries) [@DurfeePRL1999; @PopmintchevNP2010]. Funding Information {#funding-information .unnumbered} =================== This research was supported by the Swedish Research Council, the Swedish Foundation for Strategic Research and the European Research Council (grant 339253 PALP), the Knut and Alice Wallenberg Foundation, and the National Science Foundation (Grant No. PHY-1713761). This project received funding from the European Union’s Horizon 2020 research and innovation program under Marie Skłodowska-Curie Grant Agreements no. 641789 MEDEA and 793604 ATTOPIE.
{ "pile_set_name": "ArXiv" }
--- abstract: | Universal enveloping algebras of braided m-Lie algebras and the PBW theorem are obtained by means of combinatorics on words. 2000 Mathematics Subject Classification: 16W30, 16G10 keywords: Braided Lie algebra, Universal enveloping algebras. author: - | Shouchuan Zhang, Jieqiong He\ Department of Mathematics, Hunan University\ Changsha 410082,  P.R. China date: - - title: 'Universal Enveloping Algebras of Braided m-Lie Algebras' --- \[section\] \[Proposition\][Theorem]{} \[Proposition\][Definition]{} \[Proposition\][Corollary]{} \[Proposition\][Lemma]{} \[Proposition\][Example]{} \[Proposition\][Remark]{}

Introduction {#s0}
============

The theory of Lie superalgebras has been developed systematically; it includes the representation theory and the classification of simple Lie superalgebras and their varieties [@Ka77] [@BMZP92]. In many physical applications, or out of pure mathematical interest, one has to consider not only ${\bf Z}_2$- or ${\bf Z}$-grading but also $G$-grading of Lie algebras, where $G$ is an abelian group equipped with a skew symmetric bilinear form given by a 2-cocycle. Lie algebras in symmetric and more general categories were discussed in [@Gu86] and [@GRR95]. A sophisticated multilinear version of the Lie bracket was considered in [@Kh99] [@Pa98]. Various generalized Lie algebras have already appeared under different names, e.g. Lie color algebras, $\epsilon$ Lie algebras [@Sc79], quantum and braided Lie algebras, generalized Lie algebras [@BFM96] and $H$-Lie algebras [@BFM01]. In [@Ma94c], Majid introduced braided Lie algebras from a geometrical point of view; they have attracted attention in mathematics and mathematical physics (see e.g. [@Ma95b] and references therein). In [@ZZ04], braided m-Lie algebras were introduced, which generalize Lie algebras, Lie color algebras and quantum Lie algebras. Two classes of braided m-Lie algebras are given there, namely generalized matrix braided m-Lie algebras and braided m-Lie subalgebras of $End _F M$, where $M$ is a Yetter-Drinfeld module over $B$ with dim $B< \infty$. In particular, generalized classical braided m-Lie algebras $sl_{q, f}( GM_G(A), F)$ and $osp_{q, t} (GM_G(A), M, F)$ of the generalized matrix algebra $GM_G(A)$ are constructed, and their connection with the special generalized matrix Lie superalgebra $sl_{s, f}( GM_{{\bf Z}_2}(A^s), F)$ and the orthosymplectic generalized matrix Lie superalgebra $osp_{s, t} (GM_{{\bf Z}_2}(A^s), M^s, F)$ is established. The relationship between representations of braided m-Lie algebras and their associated algebras is also established. In this paper we follow [@ZZ04] and obtain universal enveloping algebras of braided m-Lie algebras and the PBW theorem by means of combinatorics on words (see [@Lo83]). Throughout, $F$ is a field.

Braided m-Lie Algebras
======================

We recall two concepts.

\[4’.1.1\] (See [@ZZ04]) Let $(L, [\ \ ])$ be an object in the braided tensor category $ ({\cal C }, C)$ with a morphism $[\ \ ] : L \otimes L \rightarrow L$. If there exists an algebra $(A, m)$ in $ ({\cal C }, C)$ and a monomorphism $\phi : L \rightarrow A$ such that $\phi [\ \ ] = m (\phi \otimes \phi ) - m (\phi \otimes \phi ) C_{L, L},$ then $(L, [ \ \ ])$ is called a braided m-Lie algebra in $ ({\cal C }, C)$ induced by the multiplication of $A$ through $\phi$. The algebra $(A, m)$ is called an algebra associated to $(L, [ \ \ ])$.
A Lie algebra is a braided m-Lie algebra in the category of ordinary vector spaces, a Lie color algebra is a braided m-Lie algebra in symmetric braided tensor category $ ({\cal M} ^{FG}, C^r)$ since the canonical map $\sigma: L \rightarrow U(L)$ is injective (see [@Sc79 Proposition 4.1]), a quantum Lie algebra is a braided m-Lie algebra in the Yetter-Drinfeld category $ (^B_B{\cal YD}, C)$ by [@GM03 Definition 2.1 and Lemma 2.2]), and a “good" braided Lie algebra is a braided m-Lie algebra in the Yetter-Drinfeld category $ (^B_B{\cal YD}, C)$ by [@GM03 Definition 3.6 and Lemma 3.7]). For a cotriangular Hopf algebra $(H, r)$, the $(H,r)$-Lie algebra defined in is a braided m-Lie algebra in the braided tensor category $({}^H{\cal M}, C^r)$. Therefore, the braided m-Lie algebras generalize most known generalized Lie algebras. For an algebra $(A, m)$ in $({\cal C}, C)$, obviously $L = A$ is a braided m-Lie algebra under operation $ [\ \ ] = m - m C_{L, L}$, which is induced by $A$ through $id _A$. This braided m-Lie algebra is written as $A^-$. \[1\] (see [@Zh99]) Let H be a Hopf algebra, $(V, \alpha)$ and $(V, \delta )$ be a left $H$-module and a left $H$-comodule, respectively. If $$\begin{aligned} \label {EYD}\delta(\alpha(h\otimes v))=\delta(h\cdot v)=\sum h_{(1)}v_{(-1)}S(h_{(3)})\otimes h_{(2)}.v_{0}\end{aligned}$$ $\forall v\in V$, $h\in G$, then $(V, \alpha, \delta )$ is called a Yetter-Drinfeld module over $H$, or a $H$- [YD]{} module in short. All of $H$- [YD]{} module construct a braided tensor category, called the Yetter-Drinfeld module category, denoted as $(^H_H {\cal YD}, C)$, where $C$ is the braiding. If $H= FG$ is a group algebra and $(V, \alpha, \delta)$ is an $FG$- [YD]{} module, then $V$ becomes a $G$-graded space $V = \oplus _{g\in G}V_g$ and the condition (\[EYD\]) becomes $$\begin{aligned} \label {EYDG}\delta(\alpha(h\otimes v))=\delta(h \cdot v)=\sum h gh^{-1}\otimes h \cdot v \end{aligned}$$ for any $h, g \in G, $ $v\in V_g.$ Let $G$ be a group and $\chi $ a bicharacter of $G$, i.e. $\chi$ is a map from $G \times G$ to $F$ satisfying $\chi (ab, c) = \chi (a, c)\chi (b, c)$, $\chi (a, bc) = \chi (a, b)\chi (a, c) $ and $\chi (a, e) =1 = \chi (e, a)$ for any $a, b, c \in G$, where $e$ is the unit element of $G$. The braiding of $FG$-[YD]{} module $(V, \alpha, \delta)$ is determined by bicharacter $\chi$ if $h\cdot x = \chi (h, g)x$ for any $h, g\in G,$ $x\in V_g.$ In this case, $C( x \otimes y)=\chi(g, h)y\otimes x$ for any homogeneous elements $ x \in V _g, $ $y\in V_h$. Obviously, if the braiding of $FG$-[YD]{} module $(V, \alpha, \delta)$ is determined by bicharacter $\chi$, then the braiding of $(V, \alpha, \delta)$ is diagonal. Conversely, if the braiding of a braided vector space $V$ is diagonal, then $V$ can becomes an $F\mathbb Z[I]$-[YD]{} module, which braiding is determined by a bicharacter (see [@He06]). If $G$ is a finite abelian group and $V$ is a $kG$-[YD]{} module, then the braiding of $V$ is diagonal (see [@ZZC04]). 
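As a concrete illustration (our own example, not taken from the references), the map $\chi(m,n)=q^{mn}$ on the additive group ${\bf Z}$ is a bicharacter for any nonzero scalar $q$, and the corresponding braiding $C(x\otimes y)=\chi(m,n)\, y\otimes x$ for $x\in V_m$, $y\in V_n$ is diagonal. A short check of the bicharacter axioms in Python:

```python
from fractions import Fraction

q = Fraction(3, 2)          # any nonzero scalar works; a rational keeps the check exact

def chi(m, n):
    """Bicharacter on the additive group Z: chi(m, n) = q**(m*n)."""
    return q ** (m * n)

for a in range(-3, 4):
    for b in range(-3, 4):
        for c in range(-3, 4):
            assert chi(a + b, c) == chi(a, c) * chi(b, c)   # chi(ab, c) = chi(a, c) chi(b, c)
            assert chi(a, b + c) == chi(a, b) * chi(a, c)   # chi(a, bc) = chi(a, b) chi(a, c)
    assert chi(a, 0) == 1 == chi(0, a)                      # chi(a, e) = 1 = chi(e, a)
print("chi(m, n) = q^(m n) satisfies the bicharacter axioms on Z")
```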
In this paper we only consider the braiding determined by a bicharacter $\chi.$

Jacobi identity
===============

(See [@Kh99]) If $L$ is a braided m-Lie algebra, then the Jacobi identity holds: $$\begin{aligned} \label {Jacobi} [[a b] c]-[a [b c]]+\chi(a,b)b [a c]-\chi(b,c)[a c] b=0 \end{aligned}$$ for any homogeneous elements $a, b, c \in L$, where $\chi (a, b)$ denotes $\chi (g, h)$ for $a\in V_g$, $b \in V_h.$

[**Proof.**]{} $$\begin{aligned} \mbox {the left side }&=& abc-\chi(a,b)bac-\chi(ab,c)cab+\chi(ab,c)\chi(a,b)cba\\ & & -abc+\chi(b,c)acb+\chi(a,bc)bca-\chi(a,bc)\chi(b,c)cba \\ & & +\chi(a,b)bac-\chi(a,b)\chi(a,c)bca-\chi(b,c)acb+\chi(b,c)\chi(a,c)cab\\ &=& 0. \Box\end{aligned}$$

Universal enveloping algebras of braided m-Lie algebras and PBW theorem
=========================================================================

Let $E$ be a homogeneous basis of the braided m-Lie algebra $L$ and $B$ a set. Let $B^*$ denote the set of all words (see [@Lo83]) on $B$ and $\varphi$ a bijective map from $E$ to $B$. Define $[bc]=\varphi([ef])$ for any $b = \varphi(e)$, $c = \varphi(f)$, $e, f \in E$. Let $\prec$ be an order on $B$ and $P = : \{b_{1}b_{2}\cdots b_{n}\ | \ b_{i}\in B, b_n \prec b_{n-1} \prec \cdots \prec b_1, n \in \mathbb N \}$. For any $w\in B^*$, let $\nu (w)$ denote the number of elements in the set $\{ (r, s, t) \ | \ w=rasbt; a,b \in B, r, s, t \in B^*, a\prec b\}$; $\nu(w)$ is called the index of $w$. Obviously, we have $$\nu(ubav)=\nu(uabv)-1$$ for any $a,b \in B, u, v \in B^*, a\prec b$. We also have that $\nu (w)=0$ if and only if $w\in P$. For any set $X$, let $FX$ denote the vector space spanned by $X$ with basis $X.$ It is clear that $FB^*$ is the free algebra on $B$. Meanwhile, $FB^*$ is also the tensor algebra $T(FB)$ over $FB$.

\[3.1\] There exists $\lambda : FB^* \rightarrow FP$ such that [(i)]{} $\lambda(f)= f,$ $f\in P$; [(ii)]{} $\lambda(ubcv)=\chi(b,c)\lambda(ucbv)+\lambda(u[bc]v)$, $u,v\in B^{\ast}$, $b,c\in B$; [(iii)]{} $\lambda(uv)=\lambda(\lambda(u)v)=\lambda(u\lambda(v))$, $u,v\in FB^*$.

[**Proof.**]{} For $w \in B^*$, we define $\lambda (w)$ using an induction first on the length and second on the index. If $w \in B$, define $\lambda (w) = w$. Let the length of $w$ be larger than 1 and define $ \lambda (w) =: \chi(b,c)\lambda(ucbv)+\lambda(u[bc]v)$ for $w=ubcv$ with $b, c\in B$, $u, v \in B^*$. Now we show that the definition is well-defined. For $w=ubcv=u'b'c'v'$ with $b, c, b', c'\in B$, $u, v, u', v' \in B^*$, we only need to show that $$\begin{aligned} \label {e3.1}\chi(b,c)\lambda(ucbv)+\lambda(u[bc]v) = \chi(b',c')\lambda(u'c'b'v')+\lambda(u'[b'c']v'). \end{aligned}$$ We show this in the following two steps. ($1^\circ$) If $|u| \le |u'|-2$, then $u'=ubct,v=tb'c'v',t\in B^{*}$. By the induction hypothesis we have $$\begin{aligned} \mbox {the left side } &=& \chi(b,c)\chi(b',c')\lambda(ucbtc'b'v')+\chi(b,c)\lambda(ucbt[b'c']v')\\ & & +\chi(b',c')\lambda(u[bc]tc'b'v')+\lambda(u[bc]t[b'c']v')\end{aligned}$$ and $$\begin{aligned} \mbox {the right side } &=& \chi(b,c)\chi(b',c')\lambda(ucbtc'b'v')+\chi(b,c)\lambda(ucbt[b'c']v')\\ & & +\chi(b',c')\lambda(u[bc]tc'b'v')+\lambda(u[bc]t[b'c']v').\end{aligned}$$ Thus (\[e3.1\]) holds. ($2^\circ$) If $|u|=|u'|-1$, then $u'=ub,c=b',v=c'v'$. We only need to show $ \chi(a,b)\lambda(rbacs)+\lambda(r[ab]cs)=\chi(b,c)\lambda(racbs)+\lambda(ra[bc]s) $.
By the induction hypothesis we have $$\begin{aligned} \mbox {the left side } &=&\chi(a,b)\{\chi(a,c)\lambda(rbcas)+\lambda(rb[ac]s)\}+\lambda(r[ab]cs)\\ &=&\chi(a,b)\chi(a,c)\lambda(rbcas)+\chi(a,b)\lambda(rb[ac]s)+\lambda(r[ab]cs)\\ &=&\chi(a,b)\chi(a,c)\{\chi(b,c)\lambda(rcbas)+\lambda(r[bc]as)\}+\chi(a,b)\lambda(rb[ac]s)+\lambda(r[ab]cs)\\ &=&\chi(a,b)\chi(a,c)\chi(b,c)\lambda(rcbas)+\chi(a,b)\chi(a,c)\lambda(r[bc]as)+\chi(a,b)\lambda(rb[ac]s)\\ & &+\lambda(r[ab]cs)\end{aligned}$$ and $$\begin{aligned} \mbox {the right side } &=&\chi(b,c)\{\chi(a,c)\lambda(rcabs)+\lambda(r[ac]bs)\}+\lambda(ra[bc]s)\\ &=&\chi(b,c)\chi(a,c)\lambda(rcabs)+\chi(b,c)\lambda(r[ac]bs)+\lambda(ra[bc]s)\\ &=&\chi(b,c)\chi(a,c)\{\chi(a,b)\lambda(rcbas)+\lambda(rc[ab]s)\}+\chi(b,c)\lambda(r[ac]bs)+\lambda(ra[bc]s)\\ &=&\chi(b,c)\chi(a,c)\chi(a,b)\lambda(rcbas)+\chi(b,c)\chi(a,c)\lambda(rc[ab]s)+\chi(b,c)\lambda(r[ac]bs)\\ & &+\lambda(ra[bc]s).\end{aligned}$$ Thus $$\begin{aligned} &&\mbox {the left side } - \mbox {the right side }\\ &=&\{\chi(a,b)\chi(a,c)\lambda(r[bc]as)-\lambda(ra[bc]s)\}+\{\chi(a,b)\lambda(rb[ac]s)-\chi(b,c)\lambda(r[ac]bs)\}\\ & &+\{\lambda(r[ab]cs)-\chi(b,c)\chi(a,c)\lambda(rc[ab]s)\}\\ &=&-\lambda(r[a[bc]]s)+\lambda(\chi(a,b)rb[ac]s-\chi(b,c)r[ac]bs)+\lambda(r[[ab]c]s)\\ &=&0 \ \ \ \ { \mbox {(by the Jacobi identity)}}.\end{aligned}$$ For (iii), we use an induction first on the length and second on the index. Assume $|w_1|\neq |w|$ and $w=w_{1}w_{2}$. If $w_{1}=ubct$, $b, c\in B$, $u, t \in B^*$, then $$\begin{aligned} &\lambda(w)&=\chi(b,c)\lambda(ucbtw_{2})+\lambda(u[bc]tw_{2})\\ & &=\chi(b,c)\lambda(\lambda(ucbt)w_{2})+\lambda(\lambda(u[bc]t)w_{2})\\ & &=\lambda(\lambda(w_{1})w_{2}).\end{aligned}$$ If $w_{1}=b$, $w_2 = cv$, $b, c \in B$, $v \in B^*$, then $\lambda (w) = \lambda ( \lambda (w_1)w_2).$ $\Box$

Suppose that ${L}$ is a braided m-Lie algebra in $ ({\cal C }, C)$ and $U$ is an algebra with a Lie algebra homomorphism $i : L\rightarrow U^-$. $(U, i)$ is called the universal enveloping algebra of the braided m-Lie algebra $L$ if the following condition holds: for any algebra $W$ in $({\cal C }, C)$ with a Lie algebra homomorphism $\psi: L \rightarrow W^-$ in $({\cal C }, C)$, there exists a unique algebra homomorphism $\bar \psi : U \rightarrow W$ in $({\cal C }, C)$ such that the following diagram is commutative: $$\begin {array} {lcccr} {}&i & {}\\ L& \longrightarrow &U \\ & \psi \searrow & \downarrow \bar \psi\\ & & W & . \end {array}$$

Obviously, $\varphi$ in the section above is a Lie algebra monomorphism from $L$ to $FP$ in $ ^{FG}_{FG} {\mathcal YD}$. Let $U(L)=: FP$. Define the multiplication of $U(L)$ as follows: $u * v=\lambda(uv) $ for any $ u, v \in P.$ By Lemma \[3.1\] (iii), $U(L)$ is an associative algebra: $u\ast(v\ast w)=\lambda(u\lambda(vw))=\lambda(uvw)=\lambda(\lambda(uv)w)=(u\ast v)\ast w$ for any $u, v, w \in P.$ Obviously, $\lambda $ is an algebra homomorphism.

\[3.4''\] If $(V, \alpha, \delta )$ is an $FG$-[YD]{} module, then the tensor algebra $T(V)$ over $V$ is an $FG$-[YD]{} module.

[**Proof.**]{} By the universal property of the tensor algebra, we can construct the module operation $\alpha ^{(T(V))}$ and the comodule operation $\delta ^{(T(V))}$ of $T(V)$ as follows: i\) $$\begin {array} {lcccr} {}&\delta^{(T(V))} & {}\\ T(V)& \longrightarrow &FG\otimes T(V)\\ i \uparrow& \nearrow ({\rm id} \otimes i)\delta^{(V)} & \uparrow {\rm id} \otimes i \\ V& \longrightarrow & FG\otimes V \\ & \delta ^{(V )}& \ \ \ \ \ \ \ \ \ .
\end {array}$$ ii) $$\begin {array} {lcccr} {}&\alpha_{g} ^{(V )}& {}\\ V& \longrightarrow &V\\ i \downarrow & \searrow i\alpha_{g} ^{(V)} & i\downarrow \\ T(V)& \longrightarrow & T(V) \\ & \alpha _g ^{(T(V))} & \ \ \ \ \ \ \ \ \ , \end {array}$$ where $\alpha ^{(V)}_g (v) =: \alpha (g \otimes v) = g\cdot v$ for any $v \in V,$ $g\in G.$ iii\) For $\forall g\in G,$ $x_{j} \in V_{g_j}$, $1\le j \le r$, See that $$\begin{aligned} \delta(g\cdot(x_{1}\cdot\cdot\cdot x_{r}))&=&\delta (\alpha_{g}(x_{1}\cdot\cdot\cdot x_{r}))\\ &=&\delta((g\cdot x_{1})\cdot\cdot\cdot(g\cdot x_{r}))=\delta(g\cdot x_{1})\cdot\cdot\cdot\delta(g\cdot x_{r})\\ &=&(gg_{1}g^{-1}\otimes (g\cdot x_{1}))\cdot\cdot\cdot(gg_{r}g^{-1}\otimes (g\cdot x_{r}))\\ &=&(gg_{1}g^{-1})\cdot\cdot\cdot(gg_{r}g^{-1})\otimes x_{1}\cdot\cdot\cdot x_{r}\\ &=&g(g_{1}\cdot\cdot\cdot g_{r})g^{-1}\otimes x_{1}\cdot\cdot\cdot x_{r}.\end{aligned}$$ Thus $(T(V), \alpha, \delta)$ is an $FG$-[YD]{} module. Furthermore, considering [(i)]{} and [(ii)]{}, we have that $T(V)$ is an algebra in $^{FG} _{FG} {\mathcal YD}$. $\Box$ \[3.4’\] [(i)]{} $FB^*$ is an $FG$-[YD]{} module. [(ii)]{} $FP$ is an $FG$-[YD]{} sub-module of $FB^*.$ [(iii)]{} $FP$ is an algebra in $^{FG}_{FG}{\cal YD}$. [**Proof.**]{} [(i)]{} It follows from Lemma \[3.4”\]. [(ii) ]{} and [(iii)]{} are clear. $\Box$ \[3.4\](PBW). $(U(L), \varphi$) is the universal enveloping algebra of braided m-Lie algebra $L$. [**Proof.**]{} For any an algebra $W$ in $^{FG} _{FG} {\mathcal YD}$ with a Lie algebra homomorphism $\psi: L \rightarrow W^-$ in $^{FG} _{FG} {\mathcal YD}$, define $\bar \psi : FB^* \rightarrow FP$ such that $\bar \psi \varphi = \psi$ and $\theta = : \bar \psi \mid _{FP}$, the restriction of $\bar \psi$ on $FP.$ It is clear that the following is commutative. $$\begin {array} {lcccr} {}&\varphi & {} & \lambda & {}\\ L& \longrightarrow &FB^* &\longrightarrow & FP\\ & \psi \searrow & \bar \psi \downarrow & \swarrow \theta \\ & &W & \ \ \ \ \ \ . \end {array}$$ Now we show that $\theta$ is an algebra homomorphism, i.e. $$\begin{aligned} \theta(r\ast s)&=& \theta(r) \theta (s) \end{aligned}$$ for any $r, s \in B^*$. We show this using induction by following several steps. $(1^\circ)$ If $rs\in P,$ then $\theta(r\ast s)=\theta(\lambda(rs))=\theta(rs)=\theta(r)\theta(s)$. $(2^\circ)$ $r, s \in B$ and $r\prec s$. See that $$\begin{aligned} \theta(r\ast s)&=& \theta(\lambda (rs))\\ &=&\theta ( \lambda (sr \chi (r,s) + [rs]))\\ &=&\theta ( \lambda (sr ))\chi (r,s) + \theta (\lambda ([rs]))\\ &=&\theta ( \lambda (sr) )\chi (r,s) + \theta ([rs]) \ \ \ ( \mbox { since the length of } [rs] <2 \mbox { and } \nu (sr) < \nu (rs) )\\ &=&\theta (\lambda (sr))\chi (r,s) + \theta ([rs])\\ &=&\theta ( sr)\chi (r,s) + \theta ([rs])\ \ \ \ ( \mbox { since } sr \in P )\\ &=&\theta ( rs) = \theta (r)\theta (s).\end{aligned}$$ $(3^\circ)$ If $r=ub,$ $s=cv, u,v\in B^{\ast}$, $b, c\in B, b\prec c$, $uv\not=1$, then $$\begin{aligned} \theta(r\ast s)&&=\theta(\lambda (rs)) = \chi(b,c)\theta(\lambda(ucbv))+\theta(\lambda(u[bc]v))\\ & &=\chi(b,c)\theta((uc) *(bv))+\theta((u[bc]) *v)\\ & &=\chi(b,c)\theta(uc)\theta(bv)+\theta(u[bc])\theta(v) \ \ \ { \mbox {(by induction hypothesis)}}\\ & &=\chi(b,c)\theta(u)\theta(cb)\theta(v)+\theta(u)\theta([bc])\theta(v)\\ & &=\theta(u)\theta(bc)\theta(v)\\ & &=\theta(u)\theta(b)\theta(c)\theta(v)\\ & &=\theta(r)\theta(s). \ \ \Box\end{aligned}$$ [BD99]{} Y. Bahturin, D. Fischman and S. Montgomery, Bicharacter, twistings and Scheunert’s theorem for Hopf algebra, J. Alg. 
[**236**]{} (2001), 246-276. Y. Bahturin, D. Fischman and S. Montgomery. On the generalized Lie structure of associative algebras. Israel J. of Math., [**96**]{}(1996) , 27–48. Y. Bahturin, D. Mikhalev, M. Zaicev and V. Petrogradsky, Infinite dimensional Lie superalgebras, Walter de Gruyter Publ. Berlin, New York, 1992. X. Gomez and S. Majid, Braided Lie algebras and bicovariant differential calculi over coquasitriangular Hopf algebras, J. Alg. [**261**]{}(2003), 334–388. D. Gurevich, A. Radul and V. Rubtsov, Noncommutative differential geometry related to the Yang-Baxter equation, Zap. Nauchn. Sem. S.-Peterburg Otdel. Mat. Inst. Steklov. (POMI) [**199** ]{} (1992); translation in J. Math. Sci. [**77** ]{} (1995), 3051–3062. D. I. Gurevich, The Yang-Baxter equation and the generalization of formal Lie theory, Dokl. Akad. Nauk SSSR, [**288**]{} (1986), 797–801. I. Heckenberger, Classification of arithmetic root systems, preprint, arXiv:[math.QA/0605795]{}. V. G. Kac. Lie superalgebras. Adv. in Math., [**26**]{}(1977) , 8–96. V. K. Kharchenko, An existence condition for multilinear quantum operations, J. Alg. [**217**]{} (1999), 188–228. M. Lothaire, Combinatorics on words. London:Cambridge University Press, 1983. S. Majid, Free braided differential calculus, braided binomial theorem, and the braided exponential map. J. Math. Phys., [**34**]{}, 1993, 4843–4856. S. Majid, Quantum and braided Lie algebras, J. Geom. Phys. [**13**]{} (1994), 307–356. S. Majid, Foundations of Quantum Group Theory, Cambradge University Press, 1995. M. Scheunert. Generalized Lie algebras. J. Math. Phys., [**20**]{} (1979), 712–720. B. Pareigis, On Lie algebras in the category of Yetter-Drinfeld modules. Appl. Categ. Structures, [**6**]{} (1998), 151–175. S. L. Woronowicz, Differential calculus on compact matrix pseudogroups(quantum groups). Commun. Math. Phys, [**122**]{}(1989)1, 125-170. S. C. Zhang, Y. Z. Zhang, Braided m-Lie algebras. Letters in Mathematical Physics, [**70**]{} (2004), 155-167. Also in math.RA/0308095. Shouchuan Zhang, Y-Z Zhang, H.X. Chen, Classification of PM Quiver Hopf Algebras, Journal of Algebra and Its Applications, [**6**]{}(2007)4, 1-32. Also in arXiv, math.QA/0410150. Shouchuan Zhang, Braided Hopf Algebras, Hunan Normal University Press, 1999. Also in math.RA/0511251.
{ "pile_set_name": "ArXiv" }
--- author: - 'G. Handler' date: 'Received January 14, 2011; Accepted February 2, 2011' title: '$uvby\beta$ photometry of early type open cluster and field stars[^1]$^,$[^2]' --- [The $\beta$ Cephei stars and slowly pulsating B (SPB) stars are massive main sequence variables. The strength of their pulsational driving strongly depends on the opacity of iron-group elements. As many of those stars naturally occur in young open clusters, whose metallicities can be determined in several fundamental ways, it is logical to study the incidence of pulsation in several young open clusters.]{} [To provide the foundation for such an investigation, Strömgren-Crawford $uvby\beta$ photometry of open cluster target stars was carried out to determine effective temperatures, luminosities, and therefore cluster memberships.]{} [In the course of three observing runs, $uvby\beta$ photometry for 168 target stars was acquired and transformed into the standard system by measurements of 117 standard stars. The list of target stars also included some known cluster and field $\beta$ Cephei stars, as well as $\beta$ Cephei and SPB candidates that are targets of the asteroseismic part of the Kepler satellite mission.]{} [The $uvby\beta$ photometric results are presented. The data are shown to be on the standard system, and the properties of the target stars are discussed: 140 of these are indeed OB stars, a total of 101 targets lie within the $\beta$ Cephei and/or SPB star instability strips, and each investigated cluster contains such potential pulsators.]{} [These measurements will be taken advantage of in a number of subsequent publications.]{} Introduction ============ The $\beta$ Cephei stars are a group of pulsating main sequence variables with early B spectral types. They oscillate in radial and nonradial pressure modes with typical periods of several hours. Stankov & Handler ([@SH05]) provide an overview of those stars. Pigulski & Pojma[ń]{}ski ([@PP08]) doubled the number of class members to about 200. As these are young massive stars (and are thus progenitors of type II supernovae), they naturally occur in the galactic plane, in open clusters, and stellar associations. In general, this statement also holds for the less massive slowly pulsating B (SPB) stars. They neighbour the $\beta$ Cephei stars in the HR diagram, but they are cooler and less luminous, and they pulsate in gravity modes with periods of a few days (see, e.g., De Cat [@PDC07]). The physical origin of pulsation driving of the $\beta$ Cephei and SPB stars is well established (Moskalik & Dziembowski [@MD92]), and is caused by the huge number of transitions inside the thin structure of the electron shells in excited ions of the iron-group elements (Rogers & Iglesias [@RI94]): the $\kappa$ mechanism. Obviously, the power of pulsational driving will strongly depend on the abundance of iron-group elements and on their opacities in the driving zone. Credible pulsational models must reflect the conditions inside the real stars, reproducing all observables such as the extents of the $\beta$ Cephei and SPB instability strips and their metallicity dependence. These depend on the input data used in the models, which can therefore be tested. The metallicities of stellar aggregates can be determined in several fundamental ways. 
The incidence of core hydrogen-burning B-type pulsators among open cluster stars can then yield important constraints on what abundance of metals (and, by extrapolation, amount of iron group elements) is required to drive their oscillations. The aim of the present and subsequent works is to determine observationally the incidence of $\beta$ Cephei and SPB stars in a number of open clusters exactly for this purpose. Ground-based measurements of stellar variability are hampered by the presence of the Earth’s contaminated atmosphere. Scintillation and variable transparency of the night sky limit the precision of stellar brightness measurements. Therefore, the level at which the presence of oscillations in a given star can be detected is finite. Although there are techniques that optimize the precision of ground-based photometric measurements (again, often taking advantage of stellar clusters), observations from space are superior given a large enough telescope. The [*Kepler*]{} mission, the most powerful instrument for measuring stellar brightness variations to date (Koch et al. [@KBB10]), aims at detecting transits of extrasolar planets in the habitable zone around their host stars. As the only inhabited planet known so far revolves around a middle-aged main sequence G star, the sample of target stars of the [*Kepler*]{} mission was chosen to observe as many similar stars as possible to the highest precision. Stars at such an age do not dominate the population at low galactic latitudes, so the [*Kepler*]{} field was chosen to be some 10off the galactic plane (Batalha et al. [@BBK10] and references therein). $\beta$ Cephei and SPB stars with magnitudes of $V>7.5$ formed in the galactic plane would hardly reach these galactic latitudes within their main sequence life times and are thus expected to be unusual. Therefore, characterizing [*Kepler*]{} $\beta$ Cephei and SPB star candidates is important. The present paper reports the results of a study of bona fide and candidate field and open cluster $\beta$ Cephei stars in the Strömgren photometric system: 168 target stars were measured, 107 of them being open cluster stars, 17 known cluster and field $\beta$ Cephei stars, and 42 Kepler targets. To transform the data into the standard system, 117 Strömgren photometric standards were measured as well. The outcome of this study will be used in subsequent papers. Observations ============ Measurements and reductions --------------------------- The measurements were obtained with the 2.1-m telescope at McDonald Observatory in Texas. Three observing runs were carried out in October 2008, March 2009, and August/September 2010. The first two observing runs were dedicated to stars in open clusters and known field $\beta$ Cephei stars, whereas the third run focused on Kepler targets and on supplementary $H_\beta$ measurements of previous targets missing this information. In all runs, a two-channel photoelectric photometer was used, but only employed channel 1. The same filter set, the same photomultiplier tube, and the same operating voltage were used during all observations. The only variables in the observational setup were the reflectivities of the telescope’s mirrors: the primary mirror was not cleaned or aluminized within the time span of the observing runs, but dust on the secondary mirror is blown off on a monthly basis. Photometric apertures of 14.5 and $29\arcsec$ were used in most cases, depending on the brightness of the target and sky background as well as on crowding of the field. 
In a few cases of extreme crowding or of a close companion, an $11\arcsec$ aperture and extremely careful (offset) guiding had to be used. As the photometer’s filter wheel can only carry four filters at once, the $uvby$ measurements had to be taken separately from the H$_{\beta}$ data. No H$_{\beta}$ measurements were taken for open cluster targets that were immediately identified as non-OB stars from their Strömgren “bracket quantities” (see Sect. 4 for details). As the measurements aimed at obtaining estimates of the effective temperatures and luminosities of as many targets as possible rather than at establishing new standard stars, most stars were observed only once. A few exceptions were made for standard and target stars that were used for purposes of determining extinction coefficients, for target stars that were deemed the most interesting astrophysically, or where a previous measurement appeared suspicious.

Selection of standard stars
---------------------------

A set of standard stars was selected to span the whole parameter range of the targets in terms of $(b-y)$, $m_1$, $c_1$, $\beta$, and $E(b-y)$. It was observed in order to transform the measurements into the standard system. For reasons of homogeneity in the colour transformations, the majority of the adopted standard Strömgren indices were taken from the work of a single group of researchers. The standard stars were chosen from the papers on NGC 1502 (Crawford [@Cr94]), IC 4665 (Crawford & Barnes [@CB72]), NGC 2169 (Perry, Lee, & Barnes [@PLB78]), NGC 6910 and NGC 6913 (Crawford, Barnes, & Hill [@CBH77]), O-type stars (Crawford [@Cr75]), h and $\chi$ Per (Crawford, Glaspey, & Perry [@CGP70]), Cep OB3 (Crawford & Barnes [@CB70]), Lac OB1 (Crawford & Warren [@CW76]), and on three field stars (Crawford et al. [@CBG72], Knude [@JK77]).

Data reduction
--------------

The data were reduced in a standard way. The instrumental system’s deadtime of 33 ns was determined by measuring the twilight sky, and then was used to correct for coincidence losses. Sky background subtraction was done next, followed by nightly extinction corrections determined from measurements of extinction stars that also served as standards. The applied extinction coefficients varied between $0.14 - 0.18$ in $y$, $0.054 - 0.069$ in $(b-y)$, $0.050 - 0.064$ in $m_1$, and $0.126 - 0.157$ in $c_1$.

Transformation equations
========================

The equation for $(b-y)$ only has two parameters, so we only needed to calculate a linear fit to the data. However, as it turned out, the photometric zeropoints of the three individual observing runs and seasons were different (most likely as a consequence of the large temporal gaps between the observing runs) and had to be determined separately. After adjustment of the zeropoints, the slope of the transformation was re-determined, and the procedure repeated until convergence. The final transformation equation was $$(b-y)=1.0563 (b-y)_N + zpt(b-y),$$ where the subscript $N$ denotes the colour in the natural system, and $zpt(b-y)$ is the zeropoint of the transformation equation, listed in Table 1. The rms residual scatter of a single standard star measurement in $(b-y)$ is an unsatisfactory 13.4 mmag.
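The alternating scheme just described (adjust the run zeropoints, re-determine the common slope, repeat until convergence) is straightforward to reproduce. The following Python sketch is only an illustration of that scheme, not code used for this paper; the arrays `by_nat`, `by_std` and `run_id` are hypothetical placeholders for the instrumental colours, the adopted standard values and the observing-run labels of the standard-star measurements.

```python
import numpy as np

def fit_colour_transformation(by_nat, by_std, run_id, tol=1e-6, max_iter=100):
    """Fit by_std = slope * by_nat + zpt[run], with one shared slope and one
    zeropoint per observing run, iterating as described in the text."""
    runs = np.unique(run_id)
    # starting guess: unit slope, zeropoints from the mean offsets per run
    zpt = {r: np.mean(by_std[run_id == r] - by_nat[run_id == r]) for r in runs}
    slope = 1.0
    for _ in range(max_iter):
        # re-determine the slope with the current zeropoints removed
        z = np.array([zpt[r] for r in run_id])
        new_slope = np.sum(by_nat * (by_std - z)) / np.sum(by_nat ** 2)
        # re-determine one zeropoint per run with the new slope held fixed
        zpt = {r: np.mean(by_std[run_id == r] - new_slope * by_nat[run_id == r])
               for r in runs}
        converged = abs(new_slope - slope) < tol
        slope = new_slope
        if converged:
            break
    residuals = by_std - (slope * by_nat + np.array([zpt[r] for r in run_id]))
    return slope, zpt, np.std(residuals)  # slope, seasonal zeropoints, rms scatter
```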
Observing run Standard stars $zpt (b-y)$ --------------- ---------------------- --------------------- Autumn 2008 O stars 1.3497 $\pm$ 0.0026 Autumn 2008 Cep OB3 1.3514 $\pm$ 0.0034 Autumn 2008 h & $\chi$ Per 1.3413 $\pm$ 0.0025 Autumn 2008 NGC 6910/13 1.3566 $\pm$ 0.0021 Autumn 2008 above combined 1.3485 $\pm$ 0.0014 Spring 2009 NGC 1502, 2169, 2244 1.3916 $\pm$ 0.0037 Autumn 2010 Lac OB1, field 1.3302 $\pm$ 0.0023 : $(b-y)$ colour transformation zeropoints However, this high residual scatter does not mean that the present measurements are imprecise. Some standard stars were measured more than once, which indicates the precision of the data. The average rms scatter of the $(b-y)$ values of standard stars that were measured three times is only 2.0 mmag. It is worth noting that the $(b-y)$ transformation zeropoints are different by up to $6\sigma$ when standard stars from different publications are considered (upper part of Table 1). This comparison only uses data from the most fruitful observing run in Autumn 2008, where several of the different groups of standard stars were measured in the same nights. The total 13.4 mmag residual scatter in $(b-y)$ may therefore be due to a combination of underestimation of the precision of the data and of imperfections in the standard values adopted. Because accuracy is more important than precision (see, e.g., Bevington [@B69] for the distinction between these two terms) in the present case, the same transformation slope was used for all $(b-y)$ measurements, but seasonal (lower part of Table 1) zeropoints were applied. In other words, it is assumed that the changes in the seasonal zeropoints of the colour equations are dominated by variations in the instrumental system. The remaining transformation equations are to be determined by a (simultaneous) three-parameter fit to the measurements of the standard stars as a colour correction by means of the $(b-y)$ data is necessary. The equation for $m_1$ derived by simultaneously fitting three parameters appears biased from correlations in the $m_1$ and $b-y$ indices due to reddening: $E(m_1)=-0.32E(b-y)$. The range spanned by the $m_1$ values of the standard stars is 0.37 mag, the range in $(b-y)$ is 0.89 mag, 2.4 times larger. Therefore, the measured and standard $m_1$ values were linearly fitted first, and only then were the $(b-y)$ correction term and the zeropoint fixed. This resulted in the following transformation equation $$m_1=1.0195 m_{1,N}-0.0162 (b-y)_N-0.8469.$$ Statistically insignificant variations occurred in the zeropoint when different ensembles of comparison stars were considered. The rms residual of a single standard $m_1$ measurement is 12.1 mmag. Concerning $c_1$, correlations between the coefficients in the transformation equation due to reddening are also to be expected, but are less severe than in $m_1$ because the $c_1$ values have a much wider spread than $m_1$ and because $c_1$ is less affected by reddening than $m_1$. A simultaneous three-parameter linear fit yielded $$c_1=1.0025 c_{1,N}+0.1018 (b-y)_N-0.5484.$$ The seasonal zeropoints were roughly, but not fully satisfactorily, consistent. Again, as accuracy is more important than precision, a single zeropoint was adopted for all data sets. The residual scatter of the standard star measurements transformed in this way is 15.6 mmag per single point. No difficulties with varying zeropoints were encountered when determining the transformation equation for the $\beta$ value. 
This is no surprise as it is a differential measurement at the same effective wavelength. The transformation equation for $\beta$ is $$\beta=0.8302\beta_N-0.0439(b-y)_N+0.9532,$$ leaving a residual scatter of $11.4$ mmag per single measurement. Finally, the transformation equations for the $V$ magnitude require nightly zeropoints (Table 2) to take variable sky transparency into account. As some papers reporting standard Strömgren colour indices do not quote $V$ magnitudes, these values were supplemented by literature data as supplied by the SIMBAD data base and cross-checked with the original references. The final transformation was Civil date $zpt(y)$ ------------- -------------------- 07 Oct 2008 $20.022 \pm 0.004$ 08 Oct 2008 $20.036 \pm 0.006$ 09 Oct 2008 $20.001 \pm 0.007$ 16 Oct 2008 $20.003 \pm 0.004$ 04 Mar 2009 $20.102 \pm 0.005$ 01 Oct 2010 $19.739 \pm 0.005$ 02 Oct 2010 $19.703 \pm 0.008$ : Nightly $V$ magnitude transformation zeropoints $$V=0.9961y_N+0.0425(b-y)_N+zpt(y),$$ resulting in a residual scatter of $22.2$ mmag per single measurement. Observations yielding statistically significant outliers in each of the transformation equations were excluded from the determination of its parameters and are marked as such in the data tables that follow. It cannot be judged whether this indicates a problem with the present measurements or with the standard values used. Results ======= With the transformation equations in place, the colour indices in the standard system can be determined for all standard and target stars. The results are listed in Tables 3 - 6. Table 3 contains the present measurements of the standard stars themselves, transformed into the standard system. Table 4 reports the $uvby\beta$ photometry for open cluster target stars not previously known to pulsate. Table 5 lists the Strömgren-Crawford photometry for known $\beta$ Cephei stars plus a few other targets. Finally, Table 6 contains the results for stars in the [*Kepler*]{} field. In the following, stars in open clusters are always designated with the cluster name followed by their identification in the WEBDA[^3] data base. Measurements of standard stars that were rejected for computing the transformation equations (or where no $V$ magnitudes or $H_{\beta}$ values were available in the literature) were treated in the same way as target star observations and are marked with asterisks in Table 3. Some of the stars used as standards have been shown to be intrinsically variable in the literature. However, standard stars must have temperatures and luminosities similar to the targets that would ideally be pulsating variables. Therefore the use of variable standard stars of low amplitude cannot be avoided. Intrinsic variability of standard stars not exceeding the accuracy of the present data is therefore tolerable and measurements that are significantly off the limits would be rejected anyway. Comments on individual stars ---------------------------- BD+36 4867 was mistakenly observed when intending to measure the $uvby\beta$ standard star BD+36 4868. This error came from confusion of the coordinates of the two stars in the SIMBAD data base at the time of the measurements. The Strömgren indices of BD+36 4867 are listed for completeness in Table 6, indicating a mid G-type star. The published $V$ magnitudes of NGC 1893 196 vary between 12.30 and 12.79. This unusually wide range raises the suspicion of stellar variability. Table 4 lists $V=12.637$ and $\beta=2.441$. 
The latter value indicates strong hydrogen-line emission, as demonstrated spectroscopically (Marco et al. [@MBN01]). Analysis and discussion ======================= Validity of the transformation equations ---------------------------------------- The ranges in which the transformation equations are valid are examined in Fig. 1. It shows the distributions of the standard and target star measurements with respect to the different $uvby\beta$ colour indices and reddening. The routines by Napiwotzki, Schönberner, & Wenske ([@NSW93]) were used to derive the latter. The $(b-y)$ values of all but one target star (Roslund 2 13, a very red object) are contained within the range spanned by the standard stars. The same comment is true for the $c_1$ parameter and reddening $E(b-y)$. Sixteen (i.e. 10%) of the targets have more positive $m_1$ values than any standard star. These are stars of later spectral types than A0 which are not the prime interest of this work. As far as $H_\beta$ is concerned, five stars with values below 2.55 were observed, including two (supposed) standard and three target stars. Both standard stars were rejected after determining the transformation equations due to high residual deviations. It is suspected that all five of these stars are Be stars. The hydrogen line emission of such stars is often variable (e.g., McSwain, Huang, & Gies [@MHG09]) which explains the high residuals and makes the tabulated values unreliable. They are listed for completeness only. Considering the distribution in $E(b-y)$, about two thirds of the stars with the smallest reddening are among the [*Kepler*]{} targets: the satellite’s field of view deliberately excludes the central galactic plane. Two of the remaining targets are $\beta$ Cephei stars of rather high galactic latitude, and the remainder are cool main sequence stars in the foreground of some of the target open clusters. In Tables 4 - 6 the colour indices that are outside the range of those spanned by the standard stars are marked with colons and should be used with caution. Are the present data on the standard system? -------------------------------------------- Before inferring physical parameters of the targets, it must be made sure that the data are commensurate with the standard system. It is a subtle process to obtain accurate standard photometry of reddened early-type stars, see, e.g., Crawford ([@Cr99]) for a discussion. One test is to compare published $(U-B)$ colours with $(u-b)$ values from Strömgren indices (see Crawford [@Cr94]) and to compare the resulting relation with the one defined by standard stars. This is done in Fig. 2, using the results for target stars with existing UBV photometry. The $(U-B)$ values for the target stars were taken from the General Catalogue of Photometric Data (Mermilliod, Mermilliod, & Hauck [@MMH97]). For easier visual inspection, the slope of the $(U-B)$ vs. $(u-b)$ relation was removed by a linear fit. The residuals are compared with those of the standard values for reddened O-type stars (Crawford [@Cr75]) and for bright stars earlier than B5 (Crawford, Barnes, & Golson [@CBG71]), which are on the average considerably less reddened than the O stars. For better illustration, we only show a fit to the relations defined by the standard stars for comparison with the data of the target stars. ![Comparison of the present measurements and published Johnson photometry. 
Circles are open cluster targets not yet known to pulsate, diamonds are known $\beta$ Cephei stars with new Strömgren colour indices, and star symbols are early-type targets in the [*Kepler*]{} field. The dotted line is the relation defined by unreddened B stars, whereas the full line is the relation inferred for reddened OB stars. See text for more information.](16507fg2.ps){width="80mm"} The fits for the O and B-type stars in Fig. 2 are somewhat different. However, the relation for the more strongly reddened target stars are not systematically different from the one defined by the reddened O-type standards, and the relation for the less reddened targets shows no systematic offset from the one defined by the mildly reddened B-type standards. The present $uvby\beta$ photometry is therefore on the standard system. Distinguishing OB stars from cooler ones ---------------------------------------- OB stars can be separated from objects of later spectral type by using the reddening independent Strömgren “bracket quantities" $[m_1]=m_1+0.32(b-y)$ and $[c_1]=c_1-0.2(b-y)$. As a rule of thumb, stars with $[m_1]<0.14$ are B type stars and stars with $[m_1]>0.22$ are of spectral type A3 and later. Astrophysically, this separation is caused by the changing curvature of the stellar energy distribution depending on temperature. Figure 3 shows the distribution of the target and standard stars in an $[m_1],[c_1]$ diagram. ![Plot of the Strömgren “bracket quantities”. These reddening-free indices allow an easy separation between OB and cooler stars; all objects with $[m_1]\simgt0.14$ are non-OB stars. One very cool star lies outside the borders of this diagram. Filled circles are for standard stars, open circles for the target stars.](16507fg3.ps){width="80mm"} All but one standard star were chosen to be of no later type than early A: 83% of the targets are in the same domain. Of the 30 target stars that cannot be OB stars, twelve have been associated with the open cluster Berkeley 4, and should therefore be foreground stars. Seven non-OB stars are [*Kepler*]{} targets, and six were mentioned in connection with NGC 7380, therefore also not being cluster members. Effective temperatures and luminosities of the target stars ----------------------------------------------------------- The effective temperatures and absolute magnitudes of the target stars can be determined with the routines by Napiwotzki et al. ([@NSW93], see their paper for accurate descriptions of the calibrations employed). Bolometric corrections by Flower ([@F96]) and a bolometric magnitude of $M_{\rm bol}=4.74$ for the Sun (Livingston [@L00]) were used to derive stellar luminosities. Figure 4 shows the targets’ locations in a $\log T_{\rm eff} - \log L$ diagram, in comparison with theoretical pulsational instability strips (Zdravkov & Pamyatnykh [@ZP08]). All targets previously known as $\beta$ Cephei stars are located within the corresponding instability strip. The catalogue of Galactic $\beta$ Cephei stars (Stankov & Handler [@SH05]) only contains one object with a mass above 17 $M_{\sun}$, which could be a Be star, hence have overestimated luminosity from $H_{\beta}$ photometry. In contrast, the present, considerably smaller, sample contains three stars with $17.5<M/M_{\sun}<21$. There is one [*Kepler*]{} target in the high mass domain, which, however, appears to be a close binary with no pulsational light variation (Balona et al. [@BPD11]). 
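For reference, the OB/non-OB separation via the bracket quantities and the conversion from the calibrated absolute magnitudes to luminosities described above reduce to a few lines of arithmetic. The sketch below is only an illustration: the calibrations themselves (the routines of Napiwotzki et al. and the bolometric corrections of Flower) are not reproduced, and `abs_mag_v` and `bc` are assumed to come from those sources.

```python
def bracket_quantities(by, m1, c1):
    """Reddening-free Strömgren indices used to separate OB from cooler stars."""
    return m1 + 0.32 * by, c1 - 0.2 * by          # [m1], [c1]

def is_ob_star(by, m1, c1):
    m1_bracket, _ = bracket_quantities(by, m1, c1)
    # rule of thumb quoted in the text: [m1] < 0.14 for B stars,
    # [m1] > 0.22 for spectral type A3 and later
    return m1_bracket < 0.14

def log_luminosity(abs_mag_v, bc, m_bol_sun=4.74):
    """log(L/Lsun) from the absolute magnitude M_V given by the photometric
    calibration and the bolometric correction BC (Flower 1996)."""
    m_bol = abs_mag_v + bc
    return (m_bol_sun - m_bol) / 2.5
```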
Table 7 summarizes how many of our target stars lie within the $\beta$ Cephei and SPB star instability strips, respectively, and how many are located in either and may therefore show both types of oscillations. Each of the open clusters observed contains potential pulsators and is therefore worthy of a variability search. As expected, the [*Kepler*]{} field contains only a few high-mass stars. Field in $\beta$ Cep strip in SPB strip in both -------------- ---------------------- -------------- --------- ASCC 130 3/7 4/7 0/7 Berkeley 4 16/32 9/32 6/32 NGC 637 6/6 0/6 0/6 NGC 1893 10/12 2/12 1/12 NGC 2244 3/9 4/9 3/9 NGC 7380 12/27 9/27 3/27 Roslund 2 7/10 3/10 2/10 Kepler field 10/42 26/42 8/42 : Numbers of target stars within the $\beta$ Cephei or SPB star instability strip, or both Two stars in Fig. 4 appear to be post main sequence objects. Berkeley 4 513 is also known as LS I +63 98 and has been classified as an OBe star (Hardorp et al. [@HRS59]). The low $H_{\beta}$ value for the star supports this interpretation. A similar comment applies to NGC 7380 4 that has been spectrally classified as B6Vne (Hoag & Applequist [@HA65]). The post main sequence evolutionary status of these two stars may therefore just be apparent: the calibrations of $uvby\beta$ photometry are not applicable to emission line stars. Summary ======= New $uvby\beta$ photometry was acquired for 168 open cluster and field stars, and was transformed into the standard system by means of measurements of 117 standard stars. The data were demonstrated to be on the standard system, and the limits in which these photometric results are valid were determined. Most target stars are indeed OB stars, and each cluster contains several stars that are located in the pulsational instability strips of main sequence B stars. These measurements are required to determine the effective temperatures and luminosities of the targets. Published $uvby\beta$ photometry of the target clusters may now be tied into the standard system, allowing investigations of the clusters themselves, in terms of (differential) reddening, distance, etc. This is the foundation for several forthcoming papers devoted to individual clusters, including searches for stellar variability. Balona et al. ([@BPD11]) discuss the variability of the [*Kepler*]{} targets in detail. This research is supported by the Austrian Fonds zur Förderung der wissenschaftlichen Forschung under grant P20526-N16. This research has made use of the WEBDA database, operated at the Institute for Astronomy of the University of Vienna. Balona, L. A., Pigulski, A., De Cat, P., et al., 2011, MNRAS, in press Batalha, N. M., Borucki, W. J., Koch, D. G., et al., 2010, ApJ, 713, L109 Bevington, P. R., [*Data reduction and error analysis for the physical sciences*]{}, McGraw-Hill, New York, 1969, p. 3 Crawford, D. L., 1975, PASP, 87, 481 Crawford, D. L., 1994, PASP, 106, 397 Crawford, D. L., 1999, in [*CCD Precision Photometry Workshop*]{}, ed. R. Craine et al., ASP Conf. Ser., 189, 6 Crawford, D. L., & Barnes, J. V., 1970, AJ, 75, 952 Crawford, D. L., & Barnes, J. V., 1972, AJ, 77, 862 Crawford, D. L., & Warren, W. H., 1976, PASP, 88, 930 Crawford, D. L., Barnes, J. V., & Golson, J. C., 1971, AJ, 76, 1058 Crawford, D. L., Barnes, J. V., & Hill, G., 1977, AJ, 82, 606 Crawford, D. L., Glaspey, J. W., & Perry, C. L., 1970, AJ, 75, 822 Crawford, D. L., Barnes, J. V., Gibson, J., et al., 1972, A&AS, 5, 109 De Cat, P., 2007, CoAst, 150, 167 Flower, P. 
J., 1996, ApJ, 469, 355 Hardorp, J., Rohlfs, K., Slettebak, A., & Stock, J., 1959, Publ. Hamburger Sternw., Warner & Swasey Obs., 1 Hoag, A. A., & Applequist, N. L., 1965, ApJS, 12, 215 Knude, J. K., 1977, A&AS, 30, 297 Koch, D. G., Borucki, W. J., Basri, G., et al., 2010, ApJ, 713, L79 Livingston, W. C., 2000, in [*Allen’s Astrophysical Quantities*]{}, 4$^{\rm th}$ edition, ed. A. N. Cox, Springer Verlag, p. 341 Marco, A., Bernabeu, G., & Negueruela, I., 2001, AJ, 121, 2075 McSwain, M. V., Huang, W., & Gies, D. R., 2009, ApJ, 700, 1216 Mermilliod, J.-C., Mermilliod, M., & Hauck, B., 1997, A&AS, 124, 349 Moskalik, P., & Dziembowski, W. A., 1992, A&A, 256, L5 Napiwotzki, R., Schönberner, D., & Wenske, V., 1993, A&A, 268, 653 Perry, C. L., Lee, P. D., & Barnes, J. V., 1978, PASP, 90, 73 Pigulski, A., & Pojma[ń]{}ski, G., 2008, A&A 477, 917 Rogers, F. J., & Iglesias, C. A., 1994, Science, 263, 50 Stankov, A., & Handler, G., 2005, ApJS, 158, 193 Zdravkov, T., & Pamyatnykh, A. A., 2008, JPhCS 118, 012079 [lcccccc]{} Star & $N_{uvby}$ & $V$ & $(b-y)$ & $m_1$ & $c_1$ & $\beta$\ BD$-$10 4682 & $1$ & $9.608*$ & $0.452$ & $-0.098$ & $-0.089$ & $2.602$\ BD+38 4883 & $1$ & $9.471$ & $ 0.014$ & $ 0.108$ & $ 0.751$ & $2.788$\ BD+39 4890 & $1$ & $9.468$ & $ 0.021$ & $ 0.122$ & $ 0.829$ & $2.850$\ BD+39 4926 & $1$ & $9.269$ & $ 0.170$ & $ 0.044$ & $ 1.513$ & $2.312*$\ BD+60 498 & $1$ & $9.938$ & $ 0.484$ & $-0.148*$ & $ 0.060$ & $2.615*$\ BD+60 501 & $1$ & $9.597$ & $ 0.439$ & $-0.138$ & $-0.033$ & $2.598$\ BD+61 2380 & $1$ & $9.141$ & $ 0.142$ & $ 0.081$ & $ 0.931$ & $2.836$\ BD+62 2142 & $1$ & $9.033$ & $ 0.340$ & $-0.033$ & $ 0.259$ & $2.671$\ BD+62 2150 & $1$ & $9.769$ & $ 0.381$ & $-0.043$ & $ 0.377$ & $2.706$\ BD+62 2158 & $1$ & $10.092$& $ 0.194$ & $ 0.057$ & $ 0.937$ & $2.826$\ BD+63 1907 & $1$ & $9.105$ & $ 0.680$ & $-0.133$ & $ 0.033$ & $2.548$\ HD 13268 & $1$ & $8.155$ & $ 0.163$ & $-0.021$ & $-0.084$ & $2.573$\ HD 14633 & $2$ & $7.442$ & $-0.080$ & $ 0.044$ & $-0.138$ & $2.553$\ HD 15137 & $1$ & $7.852$ & $ 0.112$ & $-0.006$ & $-0.079$ & $2.563$\ HD 161923 & $1$ & $9.135$ & $ 0.262$ & $ 0.105$ & $ 1.108$ & $2.879$\ HD 175876 & $2$ & $6.939$ & $-0.018$ & $ 0.047$ & $-0.156$ & $2.572$\ HD 179589 & $1$ & $9.159$ & $ 0.253$ & $ 0.175$ & $ 0.612$ & ...\ HD 186980 & $2$ & $7.490$ & $ 0.131$ & $-0.002$ & $-0.108$ & $2.569$\ HD 201345 & $1$ & $7.775*$ & $-0.027$ & $0.032$ & $-0.121$ & $ 2.567$\ HD 207538 & $1$ & $7.310$ & $ 0.296$ & $-0.042$ & $-0.046$ & $2.597$\ HD 210809 & $2$ & $7.588$ & $ 0.092$ & $ 0.027$ & $-0.122$ & $2.551$\ HD 212883 & $3$ & $6.461$ & $-0.054$ & $ 0.081$ & $ 0.188$ & $2.651$\ HD 213421 & $1$ & $8.243$ & $ 0.069$ & $ 0.163$ & $ 0.994$ & $2.875$\ HD 213801 & $1$ & $8.164$ & $-0.011$ & $ 0.109$ & $ 0.625$ & $2.772$\ HD 213976 & $1$ & $7.018$ & $-0.034$ & $ 0.069$ & $ 0.110$ & $2.640$\ HD 214022 & $1$ & $8.518$ & $-0.006$ & $ 0.094$ & $ 0.466$ & $2.736$\ HD 214168 & $1$ & $6.474$ & $-0.061$ & $ 0.080$ & $0.113*$ & $2.646$\ HD 214180 & $1$ & $9.505$ & $ 0.065$ & $ 0.140$ & $ 1.051$ & $2.888$\ HD 214243 & $1$ & $8.302$ & $-0.037$ & $ 0.090$ & $ 0.328$ & $2.700$\ HD 214263 & $1$ & $6.826$ & $-0.041$ & $ 0.074$ & $ 0.148$ & $2.637$\ HD 214432 & $1$ & $7.572$ & $-0.031$ & $ 0.088$ & $ 0.255$ & $2.673$\ HD 214652 & $1$ & $6.871$ & $-0.028$ & $ 0.081$ & $ 0.183$ & $2.652$\ HD 214783 & $1$ & $8.683$ & $ 0.041$ & $ 0.118$ & $ 1.086$ & $2.781$\ HD 215191 & $1$ & $6.427$ & $-0.025$ & $ 0.067$ & $ 0.092$ & $2.626$\ HD 215211 & $1$ & $8.656$ & $-0.013$ & $ 0.105$ & $ 0.547$ & $2.749$\ HD 
215212 & $1$ & $9.251$ & $ 0.037$ & $ 0.088$ & $ 0.648$ & ...\ HD 216534 & $1$ & $8.524$ & $ 0.063$ & $ 0.047$ & $ 0.319$ & ...\ HD 216684 & $1$ & $7.783$ & $ 0.051$ & $ 0.053$ & $ 0.313$ & ...\ HD 216898 & $1$ & $8.018$ & $ 0.463$ & $-0.110$ & $ 0.010$ & $2.607$\ HD 216926 & $1$ & $8.876$ & $ 0.200$ & $ 0.066$ & $ 0.886$ & $2.817$\ HD 217086 & $1$ & $7.644$ & $ 0.536$ & $-0.138$ & $-0.004$ & $2.592$\ HD 217101 & $1$ & $6.168$ & $-0.061$ & $ 0.086$ & $ 0.128$ & $2.645$\ HD 218229 & $1$ & $8.163$ & $ 0.220$ & $-0.021$ & $ 0.860$ & $2.739$\ HD 218407 & $1$ & $6.664$ & $ 0.008$ & $ 0.066$ & $ 0.200$ & $2.654$\ HD 218450 & $1$ & $8.575$ & $ 0.072$ & $ 0.087$ & $ 0.915$ & $2.765$\ HD 218915 & $1$ & $7.239$ & $ 0.063$ & $ 0.024$ & $-0.116$ & $2.553$\ HD 227245 & $1$ & $9.754$ & $ 0.520$ & $-0.121$ & $-0.026$ & $2.590$\ HD 235673 & $1$ & $9.150$ & $ 0.208$ & $-0.006$ & $-0.119$ & $2.575$\ HR 6690 & $1$ & $6.287*$ & $0.019$ & $0.098$ & $0.799$ & ...\ IC 4665 22 & $1$ & $8.722$ & $ 0.081$ & $ 0.084$ & $ 0.807$ & ...\ NGC 869 3 & $1$ & $7.359$ & $ 0.234$ & $-0.058$ & $ 0.026$ & $2.575$\ NGC 869 146 & $1$ & $9.174$ & $ 0.186$ & $-0.039$ & $ 0.061$ & $2.602$\ NGC 869 339 & $1$ & $8.846$ & $ 0.288$ & $-0.087$ & $ 0.066$ & $2.602$\ NGC 869 612 & $1$ & $8.440$ & $ 0.244$ & $-0.064$ & $ 0.055$ & $2.595$\ NGC 869 662 & $1$ & $8.187$ & $ 0.293$ & $-0.084$ & $ 0.064$ & $2.588$\ NGC 869 717 & $1$ & $9.264$ & $ 0.275$ & $-0.066$ & $ 0.091$ & $2.604$\ NGC 869 782 & $1$ & $9.338*$ & $0.275$ & $-0.060$ & $0.170$ & $2.613$\ NGC 869 847 & $1$ & $9.109$ & $ 0.343$ & $-0.097$ & $ 0.159$ & $2.592$\ NGC 869 864 & $1$ & $ 9.969$ & $0.283$ & $-0.059$ & $0.202$ & $2.630$\ NGC 869 950 & $1$ & $ 11.281$ & $0.324$ & $-0.059$ & $0.218$ & $2.654$\ NGC 869 978 & $1$ & $ 10.653$ & $0.310$ & $-0.052$ & $0.184$ & $2.630$\ NGC 869 1004 & $1$ & $ 10.880$ & $0.301$ & $-0.046$ & $0.218$ & $2.640*$\ NGC 869 1162 & $1$ & $ 6.642$ & $0.473$ & $-0.117$ & $0.073$ & $2.555$\ NGC 869 1187 & $1$ & $ 10.822$ & $0.378$ & $-0.072$ & $0.210$ & $2.639$\ NGC 884 2139 & $1$ & $ 11.327$ & $0.302*$ & $-0.061$ & $0.194$ & $2.646$\ NGC 884 2172 & $1$ & $ 8.476$ & $0.211$ & $-0.026$ & $-0.107$ & $2.568$\ NGC 884 2185 & $1$ & $ 10.942$ & $0.298$ & $-0.038$ & $0.385$ & $2.701$\ NGC 884 2196 & $1$ & $ 11.544$ & $0.325*$ & $-0.067*$ & $0.216$ & ...\ NGC 884 2227 & $3$ & $ 8.045$ & $0.358$ & $-0.097$ & $0.110$ & $2.589$\ NGC 884 2232 & $1$ & $ 11.047$ & $0.266$ & $-0.060*$ & $0.172$ & $2.631$\ NGC 884 2246 & $1$ & $ 9.915$ & $0.315$ & $-0.072$ & $0.113$ & $2.616$\ NGC 884 2251 & $1$ & $ 11.549$ & $0.322$ & $-0.047$ & $0.361$ & $2.701$\ NGC 884 2262 & $1$ & $ 10.559$ & $0.366$ & $-0.110$ & $0.179$ & $2.622$\ NGC 884 2284 & $3$ & $ 9.676$ & $0.367$ & $-0.130$ & $-0.017$& $2.416*$\ NGC 884 2296 & $3$ & $ 8.515$ & $0.289$ & $-0.082$ & $0.113$ & $2.591$\ NGC 884 2299 & $1$ & $ 9.130$ & $0.291$ & $-0.076$ & $0.123$ & $2.611$\ NGC 884 2330 & $1$ & $ 11.442$ & $0.276$ & $-0.046$ & $0.242$ & ...\ NGC 884 2572 & $1$ & $ 9.998$ & $0.340$ & $-0.084$ & $0.174$ & $2.629$\ NGC 884 2621 & $3$ & $ 6.959$ & $0.523$ & $-0.122$ & $0.462$ & $2.590$\ NGC 1502 1 & $1$ & $ 6.944$ & $0.432$ & $-0.116$ & $0.027$ & $2.580$\ NGC 1502 2 & $2$ & $7.100*$ & $0.399$ & $-0.105$ & $0.036$ & $2.592$\ NGC 1502 16 & $1$ & $ 11.667$ & $0.493$ & $-0.048$ & $0.622$ & $2.749$\ NGC 1502 26 & $1$ & $ 9.651$ & $0.480$ & $-0.087$ & $0.161$ & $2.631$\ NGC 1502 30 & $1$ & $ 9.646$ & $0.481$ & $-0.083$ & $0.144$ & $2.629$\ NGC 1502 35 & $1$ & $ 10.451$ & $0.428$ & $-0.048$ & $0.271$ & $2.659$\ NGC 1502 36 
& $1$ & $ 9.790$ & $0.454$ & $-0.063$ & $0.196$ & $2.638$\ NGC 1502 42 & $1$ & $ 12.593$ & $0.479$ & $-0.007*$ & $0.868*$ & $2.826*$\ NGC 1502 43 & $1$ & $ 11.367$ & $0.455$ & $-0.051$ & $0.466$ & $2.707$\ NGC 1502 45 & $1$ & $ 11.433$ & $0.473$ & $-0.046$ & $0.525$ & $2.722$\ NGC 1502 52 & $1$ & $ 12.271$ & $0.530$ & $-0.047$ & $0.968$ & $2.827*$\ NGC 1893 14 & $0$ & ... & ... & ... & ... & $ 2.596$\ NGC 2169 17 & $1$ & $ 11.658$ & $0.154$ & $ 0.156$ & $1.051$ & $2.908$\ NGC 2169 18 & $1$ & $ 11.821$ & $0.133$ & $ 0.102$ & $0.922$ & $2.867$\ NGC 2244 114 & $2$ & $ 7.631$ & $0.205$ & $-0.029$ & $-0.090$& ...\ NGC 6871 6 & $1$ & $ 8.729$ & $0.354$ & $-0.099$ & $-0.133*$ & ...\ NGC 6871 7 & $1$ & $ 8.788$ & $0.207$ & $-0.014$ & $0.044$ & ...\ NGC 6910 1 & $3$ & $ 8.098$ & $0.090$ & $ 0.013$ & $0.082$ & $2.615$\ NGC 6910 4 & $1$ & $ 8.528$ & $0.704$ & $-0.149$ & $0.064$ & $2.583$\ NGC 6910 5 & $1$ & $ 9.664$ & $0.701$ & $-0.175$ & $0.127$ & $2.599$\ NGC 6910 13 & $1$ & $ 10.307$ & $0.719$ & $-0.168$ & $0.169$ & $2.626$\ NGC 6910 14 & $1$ & $ 10.349$ & $0.660$ & $-0.165$ & $0.115$ & $2.605$\ NGC 6910 18 & $2$ & $ 10.776$ & $0.585$ & $-0.122$ & $0.160$ & $2.625$\ NGC 6910 21 & $1$ & $ 11.756$ & $0.585$ & $-0.094$ & $0.208$ & $2.648$\ NGC 6910 24 & $1$ & $ 11.706$ & $0.626$ & $-0.126$ & $0.217$ & $2.640$\ NGC 6910 27 & $1$ & $ 11.680$ & $0.812$ & $-0.191$ & $0.198$ & $2.630$\ NGC 6910 28 & $1$ & $ 12.241$ & $0.578$ & $-0.044*$ & $0.353$ & $2.668$\ NGC 6913 1 & $1$ & $ 8.871$ & $0.719$ & $-0.162$ & $0.160$ & $2.594$\ NGC 6913 2 & $1$ & $ 8.904$ & $0.620$ & $-0.132$ & $0.097$ & $2.597$\ NGC 6913 3 & $2$ & $ 8.966$ & $0.672$ & $-0.153$ & $0.136$ & $2.600$\ NGC 6913 4 & $1$ & $ 10.190$ & $0.612$ & $-0.128$ & $0.158$ & $2.616$\ NGC 6913 5 & $1$ & $ 9.341$ & $0.627$ & $-0.131$ & $0.122$ & $2.595$\ NGC 6913 7 & $2$ & $ 12.100$ & $0.691$ & $-0.116$ & $0.396*$ & ...\ NGC 6913 9 & $1$ & $ 11.741$ & $0.601$ & $-0.099$ & $0.342$ & $2.652$\ NGC 6913 27 & $1$ & $ 11.389$ & $0.666$ & $-0.101$ & $0.365$ & $2.659$\ NGC 6913 63 & $1$ & $ 10.543$ & $0.176$ & $ 0.027$ & $0.485$ & $2.726$\ NGC 6913 64 & $1$ & $ 10.099$ & $0.157$ & $ 0.021$ & $0.477$ & $2.739$\ NGC 7380 2 & $2$ & $ 8.546$ & $0.329$ & $-0.059$ & $-0.067$& $2.585$\ [lccccccc]{} Star & $N_{uvby}$ & $V$ & $(b-y)$ & $m_1$ & $c_1$ & $\beta$ & $N_{\beta}$\ ASCC 130 1 & $1$ & $10.670$ & $0.230$ & $0.000$ & $0.639$ & $2.711$ & $1$\ ASCC 130 2 & $1$ & $11.618$ & $0.406$ & $-0.080$ & $0.078$ & $2.613$ & $1$\ ASCC 130 4 & $1$ & $11.116$ & $0.282$ & $-0.025$ & $0.446$ & $2.769$ & $1$\ ASCC 130 5 & $1$ & $10.160$ & $0.241$ & $0.001$ & $0.600$ & $2.707$ & $1$\ ASCC 130 6 & $1$ & $11.380$ & $0.267$ & $0.002$ & $0.345$ & $2.666$ & $1$\ ASCC 130 8 & $1$ & $11.243$ & $0.335$ & $-0.039$ & $0.203$ & $2.657$ & $1$\ ASCC 130 19 & $1$ & $ 9.690$ & $0.271$ & $-0.054$ & $-0.026$& $2.597$ & $1$\ Berkeley 4 9 & $1$ & $10.923$ & $0.393$ & $0.203:$ & $0.329$ & ... & $0$\ Berkeley 4 77 & $1$ & $11.162$ & $0.298$ & $0.139$ & $0.885$ & ... 
& $0$\ Berkeley 4 101 & $1$ & $12.300$ & $0.470$ & $-0.110$ & $0.263$ & $2.652$ & $1$\ Berkeley 4 115 & $1$ & $11.858$ & $0.323$ & $0.143$ & $0.864$ & $2.839$ & $1$\ Berkeley 4 210 & $1$ & $11.490$ & $0.446$ & $-0.075$ & $0.246$ & $2.612$ & $1$\ Berkeley 4 238 & $1$ & $12.294$ & $0.451$ & $-0.060$ & $0.357$ & $2.723$ & $1$\ Berkeley 4 513 & $1$ & $11.996$ & $0.489$ & $-0.124$ & $0.396$ & $2.577$ & $1$\ Berkeley 4 649 & $1$ & $12.165$ & $0.358$ & $-0.034$ & $0.290$ & $2.666$ & $1$\ Berkeley 4 703 & $1$ & $10.633$ & $0.413$ & $-0.084$ & $0.044$ & $2.610$ & $1$\ Berkeley 4 709 & $1$ & $11.929$ & $0.493$ & $0.152$ & $0.401$ & ... & $0$\ Berkeley 4 794 & $1$ & $11.633$ & $0.504$ & $-0.098$ & $0.065$ & $2.629$ & $1$\ Berkeley 4 877 & $1$ & $12.193$ & $0.407$ & $-0.061$ & $0.239$ & $2.654$ & $1$\ Berkeley 4 966 & $1$ & $10.528$ & $0.123$ & $0.113$ & $1.009$ & $2.901$ & $1$\ Berkeley 4 977 & $1$ & $10.771$ & $0.446$ & $0.247:$ & $0.370$ & ... & $0$\ Berkeley 4 1008 & $1$ & $10.798$ & $0.222$ & $0.019$ & $0.822$ & ... & $0$\ Berkeley 4 1084 & $1$ & $10.662$ & $0.458$ & $-0.112$ & $0.175$ & $2.622$ & $1$\ Berkeley 4 1101 & $1$ & $11.086$ & $0.254$ & $0.151$ & $0.806$ & ... & $0$\ Berkeley 4 1110 & $1$ & $10.800$ & $0.463$ & $0.228:$ & $0.342$ & ... & $0$\ Berkeley 4 1142 & $1$ & $11.386$ & $0.343$ & $0.119$ & $0.916$ & ... & $0$\ Berkeley 4 1204 & $1$ & $11.622$ & $0.443$ & $-0.086$ & $0.157$ & $2.635$ & $1$\ Berkeley 4 1253 & $1$ & $12.109$ & $0.329$ & $0.115$ & $0.938$ & ... & $0$\ Berkeley 4 1302 & $1$ & $12.038$ & $0.455$ & $-0.096$ & $0.019$ & $2.621$ & $1$\ Berkeley 4 1317 & $1$ & $10.483$ & $0.440$ & $-0.108$ & $0.183$ & $2.637$ & $1$\ Berkeley 4 1327 & $1$ & $12.262$ & $0.401$ & $0.190:$ & $0.327$ & $2.652$ & $1$\ Berkeley 4 1356 & $1$ & $11.809$ & $0.445$ & $0.115$ & $0.466$ & ... & $0$\ Berkeley 4 1386 & $1$ & $11.724$ & $0.547$ & $-0.132$ & $0.232$ & $2.653$ & $2$\ Berkeley 4 2000 & $1$ & $10.054$ & $0.329$ & $0.141$ & $0.768$ & $2.754$ & $1$\ Berkeley 4 2001 & $1$ & $ 9.845$ & $0.447$ & $-0.123$ & $0.018$ & $2.591$ & $1$\ Berkeley 4 2002 & $2$ & $11.444$ & $0.455$ & $-0.094$ & $0.106$ & $2.638$ & $1$\ Berkeley 4 2003 & $3$ & $ 9.486$ & $0.537$ & $-0.130$ & $0.042$ & $2.580$ & $1$\ Berkeley 4 2005 & $1$ & $11.047$ & $0.468$ & $-0.098$ & $0.291$ & $2.641$ & $1$\ Berkeley 4 2007 & $1$ & $12.128$ & $0.346$ & $0.037$ & $1.027$ & $2.913$ & $1$\ NGC 637 1 & $1$ & $ 9.979$ & $0.358$ & $-0.075$ & $0.090$ & $2.604$ & $1$\ NGC 637 3 & $1$ & $10.578$ & $0.366$ & $-0.072$ & $0.185$ & $2.640$ & $1$\ NGC 637 6 & $1$ & $10.351$ & $0.373$ & $-0.089$ & $0.092$ & $2.613$ & $1$\ NGC 637 7 & $1$ & $10.670$ & $0.360$ & $-0.076$ & $0.129$ & $2.623$ & $1$\ NGC 637 137 & $1$ & $10.787$ & $0.349$ & $-0.058$ & $0.190$ & $2.653$ & $1$\ NGC 637 138 & $1$ & $10.158$ & $0.327$ & $-0.075$ & $0.106$ & $2.619$ & $1$\ NGC 1893 13 & $1$ & $12.463$ & $0.287$ & $-0.004$ & $0.328$ & $2.684$ & $1$\ NGC 1893 33 & $1$ & $12.287$ & $0.277$ & $-0.026$ & $0.143$ & $2.640$ & $1$\ NGC 1893 59 & $1$ & $12.153$ & $0.225$ & $0.030$ & $0.125$ & $2.655$ & $1$\ NGC 1893 106 & $1$ & $12.395$ & $0.303$ & $-0.019$ & $0.123$ & $2.656$ & $1$\ NGC 1893 139 & $1$ & $12.023$ & $0.157$ & $0.055$ & $0.018$ & $2.622$ & $1$\ NGC 1893 140 & $1$ & $12.415$ & $0.191$ & $0.051$ & $0.176$ & $2.668$ & $1$\ NGC 1893 141 & $0$ & ... & ... & ... & ... 
& $2.620$ & $1$\ NGC 1893 168 & $1$ & $12.329$ & $0.301$ & $-0.022$ & $0.190$ & $2.663$ & $1$\ NGC 1893 196 & $1$ & $12.637$ & $0.420$ & $-0.050$ & $0.104$ & $2.441:$& $1$\ NGC 1893 228 & $1$ & $12.525$ & $0.207$ & $0.044$ & $0.171$ & $2.659$ & $1$\ NGC 1893 256 & $1$ & $11.870$ & $0.228$ & $0.032$ & $0.283$ & $2.629$ & $1$\ NGC 1893 290 & $0$ & ... & ... & ... & ... & $2.612$ & $1$\ NGC 1893 343 & $1$ & $10.916$ & $0.271$ & $-0.011$ & $0.008$ & $2.617$ & $1$\ NGC 1893 345 & $1$ & $10.831$ & $0.282$ & $-0.015$ & $-0.004$ & $2.606$ & $1$\ NGC 2244 172 & $1$ & $11.233$ & $0.274$ & $0.006$ & $0.245$ & $2.671$ & $1$\ NGC 2244 190 & $1$ & $11.274$ & $0.245$ & $0.015$ & $0.278$ & $2.679$ & $1$\ NGC 2244 193 & $0$ & ... & ... & ... & ... & $2.672$ & $1$\ NGC 2244 239 & $1$ & $11.120$ & $0.254$ & $0.035$ & $0.474$ & $2.737$ & $1$\ NGC 2244 241 & $1$ & $11.102$ & $0.236$ & $0.051$ & $0.491$ & $2.751$ & $1$\ NGC 2244 279 & $1$ & $11.305$ & $0.308$ & $-0.020$ & $0.145$ & $2.481:$& $1$\ NGC 2244 280 & $1$ & $10.881$ & $0.289$ & $0.000$ & $0.312$ & $2.674$ & $1$\ NGC 2244 392 & $0$ & ... & ... & ... & ... & $2.694$ & $1$\ NGC 2244 1034 & $1$ & $11.304$ & $0.320$ & $0.030$ & $0.540$ & $2.748$ & $1$\ NGC 2244 1618 & $1$ & $10.972$ & $0.351$ & $0.186:$ & $0.333$ & $2.663$ & $1$\ NGC 2244 3010 & $1$ & $10.925$ & $0.179$ & $0.052$ & $0.478$ & $2.731$ & $1$\ NGC 7380 4 & $1$ & $10.194$ & $0.358$ & $-0.027$ & $0.571$ & $2.602$ & $1$\ NGC 7380 7 & $1$ & $10.665$ & $0.373$ & $0.171:$ & $0.416$ & ... & $0$\ NGC 7380 8 & $1$ & $10.641$ & $0.288$ & $-0.047$ & $0.052$ & $2.620$ & $1$\ NGC 7380 9 & $1$ & $10.688$ & $0.342$ & $-0.044$ & $0.053$ & $2.625$ & $1$\ NGC 7380 31 & $1$ & $10.617$ & $0.387$ & $-0.081$ & $-0.062$& $2.582$ & $1$\ NGC 7380 34 & $1$ & $11.832$ & $0.333$ & $-0.024$ & $0.138$ & $2.642$ & $1$\ NGC 7380 35 & $1$ & $11.869$ & $0.308$ & $-0.033$ & $0.146$ & $2.619$ & $1$\ NGC 7380 36 & $1$ & $11.827$ & $0.296$ & $0.020$ & $0.960$ & $2.834$ & $1$\ NGC 7380 37 & $1$ & $11.963$ & $0.319$ & $-0.046$ & $0.260$ & $2.658$ & $1$\ NGC 7380 40 & $1$ & $12.158$ & $0.340$ & $-0.016$ & $0.374$ & $2.679$ & $1$\ NGC 7380 41 & $1$ & $12.198$ & $0.335$ & $-0.023$ & $0.153$ & $2.642$ & $1$\ NGC 7380 42 & $1$ & $12.290$ & $0.529$ & $-0.096$ & $0.135$ & $2.595$ & $1$\ NGC 7380 134 & $1$ & $ 9.172$ & $0.401$ & $-0.089$ & $-0.042$& $2.581$ & $1$\ NGC 7380 135 & $1$ & $10.347$ & $0.336$ & $-0.075$ & $0.016$ & $2.609$ & $1$\ NGC 7380 136 & $1$ & $10.409$ & $0.447$ & $-0.079$ & $0.081$ & $2.615$ & $1$\ NGC 7380 138 & $1$ & $11.216$ & $0.332$ & $-0.038$ & $0.083$ & $2.644$ & $1$\ NGC 7380 184 & $1$ & $11.020$ & $0.168$ & $0.178:$ & $0.957$ & ... & $0$\ NGC 7380 5476 & $1$ & $11.153$ & $0.377$ & $0.186:$ & $0.261$ & ... & $0$\ NGC 7380 5593 & $1$ & $10.807$ & $0.226$ & $0.175:$ & $0.773$ & $2.815$ & $1$\ NGC 7380 5596 & $1$ & $10.092$ & $0.302$ & $0.135$ & $0.441$ & ... & $0$\ NGC 7380 5666 & $1$ & $11.260$ & $0.277$ & $-0.007$ & $0.649$ & $2.724$ & $1$\ NGC 7380 5678 & $1$ & $11.344$ & $0.265$ & $0.049$ & $1.185$ & $2.860$ & $1$\ NGC 7380 5681 & $1$ & $ 9.825$ & $0.234$ & $0.004$ & $0.731$ & $2.747$ & $1$\ NGC 7380 5755 & $1$ & $10.616$ & $0.244$ & $-0.047$ & $0.064$ & $2.621$ & $1$\ NGC 7380 5759 & $1$ & $10.489$ & $0.229$ & $0.021$ & $0.506$ & $2.715$ & $1$\ NGC 7380 5761 & $1$ & $10.980$ & $0.237$ & $0.019$ & $0.602$ & $2.761$ & $1$\ NGC 7380 5804 & $1$ & $10.614$ & $0.270$ & $0.146$ & $0.718$ & ... 
& $0$\ Roslund 2 2 & $1$ & $10.495$ & $0.648$ & $-0.155$ & $0.084$ & $2.611$ & $1$\ Roslund 2 6 & $1$ & $10.773$ & $0.239$ & $0.095$ & $1.074$ & $2.868$ & $1$\ Roslund 2 7 & $1$ & $10.671$ & $0.566$ & $-0.102$ & $0.097$ & $2.587$ & $1$\ Roslund 2 11 & $1$ & $ 8.749$ & $0.624$ & $-0.153$ & $-0.021$& $2.578$ & $1$\ Roslund 2 13 & $1$ & $11.383$ & $1.201:$& $ 0.393:$& $0.536:$& ... & $0$\ Roslund 2 14 & $1$ & $11.428$ & $0.448$ & $-0.018$ & $0.903$ & $2.843$ & $1$\ Roslund 2 16 & $1$ & $ 9.274$ & $0.626$ & $-0.169$ & $0.082$ & $2.598$ & $1$\ Roslund 2 17 & $1$ & $11.109$ & $0.570$ & $-0.134$ & $0.171$ & $2.639$ & $1$\ Roslund 2 18 & $1$ & $ 7.843$ & $0.645$ & $-0.169$ & $0.034$ & $2.566$ & $1$\ Roslund 2 21 & $1$ & $12.039$ & $0.778$ & $-0.150$ & $0.187$ & ... & $0$\ [lccccccc]{} Star & $N_{uvby}$ & $V$ & $(b-y)$ & $m_1$ & $c_1$ & $\beta$ & $N_{\beta}$\ BD+36 4867 & $1$ & $10.369$ & $ 0.579$ & $0.281:$ & $0.429$ & $2.609$ & $1$\ GSC 03142-00038 & $1$ & $12.546$ & $ 0.265$ & $0.177:$ & $0.547$ & $2.719$ & $1$\ GSC 06272-01557 & $1$ & $10.739$ & $ 0.656$ & $-0.142$ & $0.176$ & $2.621$ & $1$\ HD 166540 & $1$ & $ 8.126$ & $ 0.197$ & $-0.013$ & $-0.029$ & $2.603$ & $2$\ HD 167743 & $1$ & $ 9.650$ & $ 0.329$ & $-0.034$ & $0.095$ & $2.634$ & $1$\ HD 180642 & $1$ & $ 8.221$ & $ 0.238$ & $-0.043$ & $0.009$ & $2.601$ & $1$\ HD 203664 & $1$ & $ 8.512$ & $-0.086$ & $ 0.040$ & $-0.087$ & $2.572$ & $1$\ HN Aqr & $1$ & $11.408$ & $-0.085$ & $ 0.068$ & $0.043$ & $2.606$ & $1$\ NGC 637 4 & $1$ & $10.782$ & $ 0.393$ & $-0.075$ & $0.150$ & $2.620$ & $1$\ NGC 869 692 & $1$ & $ 9.369$ & $ 0.265$ & $-0.094$ & $0.044$ & $2.600$ & $1$\ NGC 869 839 & $1$ & $ 9.371$ & $ 0.328$ & $-0.078$ & $0.109$ & $2.610$ & $1$\ NGC 869 992 & $1$ & $10.016$ & $ 0.293$ & $-0.067$ & $0.241$ & $2.623$ & $1$\ NGC 884 2085 & $1$ & $11.321$ & $ 0.290$ & $-0.044$ & $0.230$ & $2.624$ & $1$\ NGC 884 2444 & $1$ & $ 9.503$ & $ 0.344$ & $-0.104$ & $0.140$ & $2.617$ & $1$\ NGC 884 2566 & $1$ & $10.548$ & $ 0.439$ & $-0.106$ & $0.051$ & $2.528:$& $1$\ NGC 6910 16 & $1$ & $10.439$ & $ 0.673$ & $-0.151$ & $0.173$ & $2.624$ & $1$\ NGC 6910 25 & $1$ & $11.459$ & $ 0.763$ & $-0.180$ & $0.280$ & $2.637$ & $1$\ NGC 7235 8 & $1$ & $11.906$ & $ 0.526$ & $-0.126$ & $0.139$ & $2.614$ & $1$\ V909 Cas & $1$ & $10.623$ & $ 0.370$ & $-0.072$ & $0.101$ & $2.619$ & $1$\ [lccccccc]{} Star & $N_{uvby}$ & $V$ & $(b-y)$ & $m_1$ & $c_1$ & $\beta$& $N_{\beta}$\ KIC 3240411 & $1$ & $10.271$ & $-0.041$ & $0.079$ & $0.161$ & $2.643$ & $1$\ KIC 3756031 & $1$ & $10.012$ & $-0.005$ & $0.084$ & $0.372$ & $2.696$ & $1$\ KIC 3839930 & $1$ & $10.702$ & $-0.013$ & $0.087$ & $0.321$ & $2.709$ & $1$\ KIC 3848385 & $1$ & $ 8.909$ & $0.043$ & $0.082$ & $0.785$ & $2.739$ & $1$\ KIC 3865742 & $1$ & $11.120$ & $0.028$ & $0.072$ & $0.201$ & $2.662$ & $1$\ KIC 4276892 & $1$ & $ 9.168$ & $0.026$ & $0.124$ & $1.062$ & $2.855$ & $1$\ KIC 4581434 & $1$ & $ 9.111$ & $0.050$ & $0.159$ & $1.095$ & $2.896$ & $1$\ KIC 4909697 & $1$ & $10.703$ & $0.230$ & $0.179:$& $0.987$ & $2.852$ & $1$\ KIC 5130305 & $1$ & $10.143$ & $0.040$ & $0.112$ & $0.931$ & $2.843$ & $1$\ KIC 5217845 & $1$ & $ 9.420$ & $0.081$ & $0.081$ & $0.784$ & $2.738$ & $1$\ KIC 5304891 & $1$ & $ 9.163$ & $0.051$ & $0.085$ & $0.804$ & $2.747$ & $1$\ KIC 5458880 & $4$ & $ 7.762$ & $0.010$ & $0.031$ & $-0.039$& $2.582$ & $1$\ KIC 5479821 & $1$ & $ 9.803$ & $0.041$ & $0.083$ & $0.313$ & $2.699$ & $2$\ KIC 5786771 & $1$ & $ 9.075$ & $-0.007$ & $0.152$ & $1.006$ & $2.867$ & $1$\ KIC 6848529 & $1$ & $10.628$ & $-0.100$ & $0.096$ & $-0.029$& 
$2.645$ & $1$\ KIC 7548479 & $1$ & $ 8.387$ & $0.141$ & $0.219:$& $0.774$ & $2.824$ & $1$\ KIC 7599132 & $1$ & $ 9.333$ & $0.001$ & $0.123$ & $0.870$ & $2.830$ & $1$\ KIC 7974841 & $3$ & $ 8.167$ & $0.026$ & $0.131$ & $0.838$ & $2.823$ & $2$\ KIC 8018827 & $1$ & $ 8.020$ & $0.004$ & $0.136$ & $0.895$ & $2.833$ & $1$\ KIC 8057661 & $1$ & $11.613$ & $0.207$ & $0.002$ & $0.196$ & $2.656$ & $1$\ KIC 8161798 & $1$ & $10.396$ & $-0.002$ & $0.191:$& $0.630$ & $2.793$ & $1$\ KIC 8177087 & $1$ & $ 8.108$ & $0.008$ & $0.080$ & $0.589$ & $2.707$ & $1$\ KIC 8324268 & $1$ & $ 7.922$ & $-0.019$ & $0.144$ & $0.488$ & $2.744$ & $1$\ KIC 8351193 & $1$ & $ 7.580$ & $-0.029$ & $0.145$ & $0.880$ & $2.863$ & $1$\ KIC 8381949 & $1$ & $11.010$ & $0.073$ & $0.041$ & $0.157$ & $2.633$ & $1$\ KIC 8389948 & $1$ & $ 9.206$ & $0.073$ & $0.121$ & $1.023$ & $2.858$ & $1$\ KIC 8415752 & $1$ & $10.598$ & $0.103$ & $0.219:$& $0.867$ & $2.845$ & $1$\ KIC 8459899 & $1$ & $ 8.674$ & $0.021$ & $0.075$ & $0.398$ & $2.693$ & $1$\ KIC 8488717 & $1$ & $11.658$ & $0.013$ & $0.146$ & $0.921$ & $2.849$ & $1$\ KIC 8692626 & $1$ & $ 8.308$ & $0.058$ & $0.241:$& $0.933$ & $2.889$ & $1$\ KIC 8714886 & $1$ & $10.866$ & $0.118$ & $0.060$ & $0.286$ & $2.694$ & $1$\ KIC 8766405 & $1$ & $ 8.825$ & $0.001$ & $0.081$ & $0.525$ & $2.692$ & $1$\ KIC 9964614 & $1$ & $10.683$ & $-0.013$ & $0.063$ & $0.193$ & $2.638$ & $1$\ KIC 10130954 & $1$ & $11.015$ & $-0.047$ & $0.076$ & $0.238$ & $2.658$ & $1$\ KIC 10285114 & $1$ & $11.121$ & $-0.035$ & $0.088$ & $0.371$ & $2.695$ & $1$\ KIC 10797526 & $1$ & $ 8.300$ & $-0.028$ & $0.060$ & $0.061$ & $2.598$ & $1$\ KIC 10960750 & $1$ & $ 9.833$ & $-0.065$ & $0.080$ & $0.165$ & $2.640$ & $1$\ KIC 11360704 & $1$ & $10.650$ & $-0.032$ & $0.081$ & $0.286$ & $2.662$ & $1$\ KIC 11817929 & $1$ & $10.301$ & $-0.058$ & $0.119$ & $0.637$ & $2.738$ & $1$\ KIC 11973705 & $1$ & $ 9.074$ & $0.148$ & $0.152$ & $0.750$ & $2.777$ & $1$\ KIC 12217324 & $2$ & $ 8.267$ & $-0.038$ & $0.149$ & $0.937$ & $2.828$ & $2$\ KIC 12258330 & $1$ & $ 9.402$ & $-0.050$ & $0.099$ & $0.355$ & $2.706$ & $1$\ [^1]: Based on measurements obtained at McDonald Observatory of the University of Texas at Austin [^2]: Tables 3 - 6 are only electronically available via the CDS [^3]: http://www.univie.ac.at/webda/
--- abstract: | I show how an $SU(N)^{M}$ quiver gauge theory can accommodate the standard model with three chiral families and unify all of $SU(3)_C$, $SU(2)_L$ and $U(1)_Y$ couplings with high accuracy at one unique scale estimated as $M \simeq 4$ TeV. address: - ' $^{(1)}$ TH Division, CERN, CH1211 Geneva 23, Switzerland.' - '$^{(2)}$ University of North Carolina, Chapel Hill, NC 27599, USA. ' author: - 'Paul H. Frampton $^{(1,2)}$' title: 'Strong-Electroweak Unification at About 4 TeV' --- Conformal invariance in two dimensions has had great success in comparison to several condensed matter systems. It is an interesting question whether conformal symmetry can have comparable success in a four-dimensional description of high-energy physics. Even before the standard model (SM) $SU(2) \times U(1)$ electroweak theory was firmly established by experimental data, proposals were made [@PS; @GG] of models which would subsume it into a grand unified theory (GUT) including also the dynamics[@GQW] of QCD. Although the prediction of SU(5) in its minimal form for the proton lifetime has long ago been excluded, [*ad hoc*]{} variants thereof [@FG] remain viable. Low-energy supersymmetry improves the accuracy of unification of the three 321 couplings[@ADF; @ADFFL] and such theories encompass a “desert” between the weak scale $\sim 250$ GeV and the much-higher GUT scale $\sim 2 \times 10^{16}$ GeV, although minimal supersymmetric $SU(5)$ is by now ruled out[@Murayama]. Recent developments in string theory are suggestive of a different strategy for unification of electroweak theory with QCD. Both the desert and low-energy supersymmetry are abandoned. Instead, the standard $SU(3)_C \times SU(2)_L \times U(1)_Y$ gauge group is embedded in a semi-simple gauge group such as $SU(3)^N$ as suggested by gauge theories arising from compactification of the IIB superstring on an orbifold $AdS_5 \times S^5/\Gamma$ where $\Gamma$ is the abelian finite group $Z_N$[@F1]. In such nonsupersymmetric quiver gauge theories the unification of couplings happens not by logarithmic evolution[@GQW] over an enormous desert covering, say, a dozen orders of magnitude in energy scale. Instead the unification occurs abruptly at $\mu = M$ through the diagonal embeddings of 321 in $SU(3)^N$[@F2]. The key prediction of such unification shifts from proton decay to additional particle content, in the present model at $\simeq 4$ TeV. Let me consider first the electroweak group which in the standard model is still un-unified as $SU(2) \times U(1)$. In the 331-model[@PP; @PF] where this is extended to $SU(3) \times U(1)$ there appears a Landau pole at $M \simeq 4$ TeV because that is the scale at which ${\rm sin}^2 \theta (\mu)$ slides to the value ${\rm sin}^2 \theta (M) = 1/4$. It is also the scale at which the custodial gauged $SU(3)$ is broken in the framework of [@DK]. Such theories involve only electroweak unification so to include QCD I examine the running of all three of the SM couplings with $\mu$ as explicated in [*e.g.*]{} [@ADFFL].
Taking the values at the Z-pole $\alpha_Y(M_Z) = 0.0101, \alpha_2(M_Z) = 0.0338, \alpha_3(M_Z) = 0.118\pm0.003$ (the errors in $\alpha_Y(M_Z)$ and $\alpha_2(M_Z)$ are less than 1%) they are taken to run between $M_Z$ and $M$ according to the SM equations $$\begin{aligned} \alpha^{-1}_Y(M) & = & (0.01014)^{-1} - (41/12 \pi) {\rm ln} (M/M_Z) \nonumber \\ & = & 98.619 - 1.0876 y \label{Yrun}\end{aligned}$$ $$\begin{aligned} \alpha^{-1}_2(M) & = & (0.0338)^{-1} + (19/12 \pi) {\rm ln} (M/M_Z) \nonumber \\ & = & 29.586 + 0.504 y \label{2run}\end{aligned}$$ $$\begin{aligned} \alpha^{-1}_3(M) & = & (0.118)^{-1} + (7/2 \pi) {\rm ln} (M/M_Z) \nonumber \\ & = & 8.474 + 1.114 y \label{3run}\end{aligned}$$ where $y = {\rm log}(M/M_Z)$. The scale at which ${\rm sin}^2 \theta(M) = \alpha_Y(M)/ (\alpha_2(M) + \alpha_Y(M))$ satisfies ${\rm sin}^2 \theta (M) = 1/4$ is found from Eqs.(\[Yrun\],\[2run\]) to be $M \simeq 4$ TeV as stated in the introduction above. I now focus on the ratio $R(M) \equiv \alpha_3(M)/\alpha_2(M)$ using Eqs.(\[2run\],\[3run\]). I find that $R(M_Z) \simeq 3.5$ while $R(M_{3}) = 3$, $R(M_{5/2}) = 5/2$ and $R(M_2)=2$ correspond to $M_3, M_{5/2}, M_2 \simeq 400 {\rm GeV}, ~~ 4 {\rm TeV}, {\rm and}~~ 140 {\rm TeV}$ respectively. The proximity of $M_{5/2}$ and $M$, accurate to a few percent, suggests strong-electroweak unification at $\simeq 4$ TeV. There remains the question of embedding such unification in an $SU(3)^N$ of the type described in [@F1; @F2]. Since the required embedding of $SU(2)_L \times U(1)_Y$ into an $SU(3)$ necessitates $3\alpha_Y=\alpha_H$ the ratios of couplings at $\simeq 4$ TeV is: $\alpha_{3C} : \alpha_{3W} : \alpha_{3H} :: 5 : 2 : 2$ and it is natural to examine $N=12$ with diagonal embeddings of Color (C), Weak (W) and Hypercharge (H) in $SU(3)^2, SU(3)^5, SU(3)^5$ respectively. To accomplish this I specify the embedding of $\Gamma = Z_{12}$ in the global $SU(4)$ R-parity of the ${\cal N} = 4$ supersymmetry of the underlying theory. Defining $\alpha = {\rm exp} ( 2\pi i / 12)$ this specification can be made by ${\bf 4} \equiv (\alpha^{A_1}, \alpha^{A_2}, \alpha^{A_3}, \alpha^{A_4})$ with $\Sigma A_{\mu} = 0 ({\rm mod} 12)$ and all $A_{\mu} \not= 0$ so that all four supersymmetries are broken from ${\cal N} = 4$ to ${\cal N} = 0$. Having specified $A_{\mu}$ I calculate the content of complex scalars by investigating in $SU(4)$ the ${\bf 6} \equiv (\alpha^{a_1}, \alpha^{a_2}, \alpha^{a_3}, \alpha^{-a_3}, \alpha^{-a_2},\alpha^{-a_1})$ with $a_1 = A_1 + A_2, a_2 = A_2 + A_3, a_3 = A_3 + A_1$ where all quantities are defined (mod 12). Finally I identify the nodes (as C, W or H) on the dodecahedral quiver such that the complex scalars $$\Sigma_{i=1}^{i=3} \Sigma_{\alpha=1}^{\alpha=12} \left( N_{\alpha}, \bar{N}_{\alpha \pm a_i} \right) \label{scalars}$$ are adequate to allow the required symmetry breaking to the $SU(3)^3$ diagonal subgroup, and the chiral fermions $$\Sigma_{\mu=1}^{\mu=4} \Sigma_{\alpha=1}^{\alpha=12} \left( N_{\alpha}, \bar{N}_{\alpha + A_{\mu}} \right) \label{fermions}$$ can accommodate the three generations of quarks and leptons. It is not trivial to accomplish all of these requirements so let me demonstrate by an explicit example. For the embedding I take $A_{\mu} = (1, 2, 3, 6)$ and for the quiver nodes take the ordering: $$- C - W - H - C - W^4 - H^4 - \label{quiver}$$ with the two ends of (\[quiver\]) identified. 
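Given this explicit choice of $A_{\mu}$ and node ordering, the bifundamental content of Eq.(\[fermions\]) can be enumerated mechanically. The short Python sketch below is my own cross-check, not part of the original construction; it labels the twelve nodes of (\[quiver\]) around the clock face and counts the net number of chiral bifundamentals in each sector.

```python
from itertools import product

# node types around the Z_12 "clock face", following Eq. (quiver):
# - C - W - H - C - W^4 - H^4 -   (ends identified)
nodes = ['C', 'W', 'H', 'C'] + ['W'] * 4 + ['H'] * 4
A_mu = (1, 2, 3, 6)          # embedding of Z_12 in the SU(4) R-symmetry

def net_families(src, dst):
    """Net number of (3_src, 3bar_dst) minus (3_dst, 3bar_src) bifundamentals."""
    count = 0
    for alpha, A in product(range(12), A_mu):
        pair = (nodes[alpha], nodes[(alpha + A) % 12])
        if pair == (src, dst):
            count += 1
        elif pair == (dst, src):
            count -= 1
    return count

for src, dst in [('C', 'W'), ('W', 'H'), ('H', 'C')]:
    print(src, '->', dst, ':', net_families(src, dst))
# prints 3 for each sector, i.e. three chiral families; for the (C, W) sector
# this arises as the five (3, 3bar, 1)'s minus two (3bar, 3, 1)'s found below
```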
The scalars follow from $a_i = (3, 4, 5)$ and the scalars in Eq.(\[scalars\]) $$\Sigma_{i=1}^{i=3} \Sigma_{\alpha=1}^{\alpha=12} \left( 3_{\alpha}, \bar{3}_{\alpha \pm a_i} \right) \label{modelscalars}$$ are sufficient to break to all diagonal subgroups as $$SU(3)_C \times SU(3)_{W} \times SU(3)_{H} \label{gaugegroup}$$ The fermions follow from $A_{\mu}$ in Eq.(\[fermions\]) as $$\Sigma_{\mu=1}^{\mu=4} \Sigma_{\alpha=1}^{\alpha=12} \left( 3_{\alpha}, \bar{3}_{\alpha + A_{\mu}} \right) \label{modelfermions}$$ and the particular dodecahedral quiver in (\[quiver\]) gives rise to exactly [*three*]{} chiral generations which transform under (\[gaugegroup\]) as $$3[ (3, \bar{3}, 1) + (\bar{3}, 1, 3) + (1, 3, \bar{3}) ] \label{generations}$$ I note that anomaly freedom of the underlying superstring dictates that only the combination of states in Eq.(\[generations\]) can survive. Thus, it is sufficient to examine one of the terms, say $( 3, \bar{3}, 1)$. By drawing the quiver diagram indicated by Eq.(\[quiver\]) with the twelve nodes on a “clock-face” and using $A_{\mu} = (1, 2, 3, 6)$ in Eq.(\[fermions\]) I find five $(3, \bar{3}, 1)$’s and two $(\bar{3}, 3, 1)$’s implying three chiral families as stated in Eq.(\[generations\]). After further symmetry breaking at scale $M$ to $SU(3)_C \times SU(2)_L \times U(1)_Y$ the surviving chiral fermions are the quarks and leptons of the SM. The appearance of three families depends on both the identification of nodes in (\[quiver\]) and on the embedding of $\Gamma \subset SU(4)$. The embedding must simultaneously give adequate scalars whose VEVs can break the symmetry spontaneously to (\[gaugegroup\]). All of this is achieved successfully by the choices made. The three gauge couplings evolve according to Eqs.(\[Yrun\],\[2run\],\[3run\]) for $M_Z \leq \mu \leq M$. For $\mu \geq M$ the (equal) gauge couplings of $SU(3)^{12}$ do not run if, as conjectured in [@F1; @F2], there is a conformal fixed point at $\mu = M$. The basis of the conjecture in [@F1; @F2] is the proposed duality of Maldacena[@Maldacena] which shows that in the $N \rightarrow \infty$ limit ${\cal N} = 4$ supersymmetric $SU(N)$ gauge theory, as well as orbifolded versions with ${\cal N} = 2,1$ and $0$[@bershadsky1; @bershadsky2], becomes conformally invariant. It was known long ago [@Mandelstam] that the ${\cal N} = 4$ theory is conformally invariant for all finite $N \geq 2$. This led to the conjecture in [@F1] that the ${\cal N} = 0$ theories might be conformally invariant, at least in some case(s), for finite $N$. It should be emphasized that this conjecture cannot be checked purely within a perturbative framework[@FMink]. I assume that the local $U(1)$’s which arise in this scenario and which would lead to $U(N)$ gauge groups are non-dynamical, as suggested by Witten[@Witten], leaving $SU(N)$’s. This is a non-gravitational theory with conformal invariance when $\mu > M$ and where the Planck mass is taken to be infinitely large. The ubiquitous question is: What about gravity, which breaks conformal symmetry in the ultraviolet (UV)? This is a question about the holographic principle for flat spacetime. From the phenomenological viewpoint the equal couplings of $SU(3)^{12}$ can, instead of remaining constant at energies $\mu > M$, decrease smoothly by asymptotic freedom to a conformal fixed point as $\mu \rightarrow \infty$. This possibility is less restrictive and may fit in better with the AdS/CFT correspondence.
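Returning to the one-loop running, the scales quoted above for ${\rm sin}^2 \theta(M) = 1/4$ and for the ratios $R(M)$ follow directly from Eqs.(\[Yrun\],\[2run\],\[3run\]). The sketch below is only an arithmetic cross-check, not code from the paper; it reads $y$ as the natural logarithm ${\rm ln}(M/M_Z)$, which is what the quoted coefficients $41/12\pi$, $19/12\pi$ and $7/2\pi$ imply.

```python
import math

MZ = 91.19  # GeV; y = ln(M / M_Z)

# one-loop inverse couplings from Eqs. (Yrun), (2run), (3run)
inv_Y = lambda y: 98.619 - 1.0876 * y
inv_2 = lambda y: 29.586 + 0.504 * y
inv_3 = lambda y: 8.474 + 1.114 * y

# sin^2(theta) = alpha_Y / (alpha_2 + alpha_Y) = 1/4  <=>  inv_Y(y) = 3 * inv_2(y)
y_star = (98.619 - 3 * 29.586) / (1.0876 + 3 * 0.504)
print("sin^2 theta = 1/4 at M = %.2f TeV" % (MZ * math.exp(y_star) / 1e3))  # ~4 TeV

# R(M) = alpha_3 / alpha_2 = inv_2(y) / inv_3(y) = r  is linear in y
for r, label in [(3.0, "M_3"), (2.5, "M_5/2"), (2.0, "M_2")]:
    y_r = (29.586 - 8.474 * r) / (1.114 * r - 0.504)
    print("%s (R = %.1f): M = %.0f GeV" % (label, r, MZ * math.exp(y_r)))
# the three scales come out near 400 GeV, 4 TeV and 140 TeV, as quoted above
```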
The desert resides in the unexplored domain of the orders of magnitude in energy scale between 4 TeV and the gravitational scale, $M_{Planck}$. As for experimental tests of such a TeV GUT, the situation at energies below 4 TeV is predicted to be the standard model with a Higgs boson still to be discovered at a mass predicted by radiative corrections [@PDG] to be below 267 GeV at 99% confidence level. There are many particles predicted at $\simeq 4$ TeV beyond those of the minimal standard model. They include as spin-0 scalars the states of Eq.(\[modelscalars\]). and as spin-1/2 fermions the states of Eq.(\[modelfermions\]), Also predicted are gauge bosons to fill out the gauge groups of (\[gaugegroup\]), and in the same energy region the gauge bosons to fill out all of $SU(3)^{12}$. All these extra particles are necessitated by the conformality constraints of [@F1; @F2] to lie close to the conformal fixed point. One important issue is whether this proliferation of states at $\sim 4$ TeV is compatible with precision electroweak data in hand. This has been studied in the related model of [@DK] in a recent article[@Csaki]. Those results are not easily translated to the present model but it is possible that such an analysis including limits on flavor-changing neutral currents could rule out the entire framework. As alternative to $SU(3)^{12}$ another approach to TeV unification has as its group at $\sim 4$ TeV $SU(6)^3$ where one $SU(6)$ breaks diagonally to color while the other two $SU(6)$’s each break to $SU(3)_{k=5}$ where level $k=5$ characterizes irregular embedding[@DM]. The triangular quiver $-C - W - H - $ with ends identified and $A_{\mu} = (\alpha, \alpha, \alpha, 1)$, $\alpha = {\rm exp} (2 \pi i / 3)$, preserves ${\cal N} = 1$ supersymmetry. I have chosen to describe the ${\cal N} = 0$ $SU(3)^{12}$ model in the text mainly because the symmetry breaking to the standard model is more transparent. The TeV unification fits ${\rm sin}^2\theta$ and $\alpha_3$, predicts three families, and partially resolves the GUT hierarchy. If such unification holds in Nature there is a very rich level of physics one order of magnitude above presently accessible energy. Is a hierarchy problem resolved in the present theory? In the non-gravitational limit $M_{Planck} \rightarrow \infty$ I have, above the weak scale, the new unification scale $\sim 4$ TeV. Thus, although not totally resolved, the GUT hierarchy is ameliorated. The gravitational hierarchy problem is not addressed. My final remark is on the non-appearance of relevant deformations which break conformal invariance above 4 TeV. This is an assumption I make by analogy to several other systems in Nature with a large scaling region, [*e.g.*]{} superfluid helium where there is a comparable non-appearance of relevant operators over many orders of magnitude in scale size. This work was supported in part by the Office of High Energy, US Department of Energy under Grant No. DE-FG02-97ER41036. [99]{} J.C. Pati and A. Salam, Phys. Rev. [**D8,**]{} 1240 (1973); [*ibid*]{} [**D10,**]{} 275 (1974). H. Georgi and S.L. Glashow, Phys. Rev. Lett. [**32,**]{} 438 (1974). H. Georgi, H.R. Quinn and S. Weinberg, Phys. Rev. Lett. [**33,**]{} 451 (1974). P.H. Frampton and S.L. Glashow, Phys. Lett. [**B131,**]{} 340 (1983). U. Amaldi, W. De Boer and H. Furstenau, Phys. Lett. [**B260,**]{} 447 (1991).\ J.R. Ellis, S. Kelley and D.V. Nanopoulos, Phys. Lett. [**B260,**]{} 131 (1991).\ S. Dimopoulos, F. Wilczek and S. Raby, Phys. Rev. [**D24,**]{} 1681 (1981). U. 
Amaldi, W. De Boer , P.H. Frampton, H. Furstenau and J.T. Liu, Phys. Lett. [**B281,**]{} 374 (1992). H. Murayama and A. Pierce, Phys. Rev. [**D65,**]{} 055009 (2002). P.H. Frampton, Phys. Rev. [**D60,**]{} 041901 (1999). P.H. Frampton and W.F. Shively, Phys. Lett. [**B454,**]{} 49 (1999). P.H. Frampton and C. Vafa, [hep-th/9903226]{}. P.H. Frampton, Phys. Rev. [**D60,**]{} 085004 (1999). F. Pisano and V. Pleitez, Phys. Rev. [**D46,**]{} 410 (1992). P.H. Frampton, Phys. Rev. Lett. [**69,**]{} 2889 (1992). S. Dimopoulos and D. E. Kaplan, Phys. Lett. [**B531,**]{} 127 (2002). J.M. Maldacena, Adv. Theor. Math. Phys. [**2,**]{} 231 (1998). M. Bershadsky, Z. Kakushadze and C. Vafa, Nucl. Phys. [**B523,**]{} 59 (1998). M. Bershadsky and A. Johansen, Nucl. Phys. [**B536,**]{} 141 (1998). S. Mandelstam, Nucl. Phys. [**B213,**]{} 149 (1983). P.H. Frampton and P. Minkowski, [hep-th/0208024]{}. E. Witten, JHEP [**9812:012**]{} (1998). Particle Data Group (K. Hagiwara [*et al.*]{}), Phys. Rev. [**D66,**]{} 010001 (2002). C. Csaki, J. Erlich, G.D. Kribs and J. Terning, Phys. Rev. [**D66,**]{} 075008 (2002). K.R. Dienes and J. March-Russell, Nucl. Phys. [**B479,**]{} 113 (1996).
{ "pile_set_name": "ArXiv" }
--- author: - Elena Agliari - Francesco Alemanno - Adriano Barra - Alberto Fachechi title: 'Dreaming neural networks: rigorous results' --- Introduction ============ Statistical mechanics of spin glasses [@MPV] has been playing a primary role in the investigation of neural networks, as for the description of both their learning phase [@angel-learning; @sompo-learning] and their retrieval properties [@Amit; @Coolen]. Along the past decades, beyond the bulk of results achieved via the so-called replica-trick [@MPV], a considerable amount of rigorous results exploiting alternative routes (possibly mathematically more transparent) were also developed (see e.g. [@Agliari-Barattolo; @ABT; @Bovier1; @Bovier2; @Bovier3; @Albert1; @Barra-JSP2010; @bipartiti; @Dotsenko1; @Dotsenko2; @Tala1; @Tala2; @Tirozzi; @Pastur] and references therein). This paper goes in the latter direction and focuses on a generalization of the Hopfield model [@Albert2] that is able to saturate the optimal storage capacity and whose main characteristics are summarized hereafter. In [@Albert2] the Hebbian kernel underlying the Hopfield model was revised to account also for [*reinforcement*]{} and [*removal*]{} processes. The resulting kernel can be interpreted as the effect of a [*daily routine*]{}: during the [*awake*]{} state, the network is fed with inputs (i.e. [*patterns*]{} of information) that are stored in an Hebbian fashion[^1], then, during the [*asleep*]{} state, it weeds out the (combinatorial[^2]) proliferation of the spurious mixtures (unavoidably created as metastable states in the free-energy landscape of the system during the learning stage) and it consolidates the pure states (making their free-energy minima deeper in this landscape picture). Remarkably, after these procedures, the network is able to saturate the storage capacity $\alpha$ (that is the amount of stored patterns $P$ over the amount of available neurons $N$, in the thermodynamic limit, i.e. $\alpha = \lim_{N \to \infty}P/N$) to its upper bound[^3] which, for symmetric networks, is $\alpha_c=1$ [@Gardner]. Further, in the retrieval phase of its parameter space, pure states are global minima up to $\alpha \sim 0.85$ (see Figure \[fig:criticallines\]), that is a much broader range with respect to the classical Hopfield counterpart, where they remain global minima solely for $\alpha < 0.05$. In this work, we first show the equivalence between the aforementioned generalized neural network and a tripartite (or “three-layers” in a machine-learning jargon) spin-glass, where couplings between neurons of different layers exhibit correlations and the third layer is a [*spectral layer*]{} equipped with imaginary numbers (see Fig. \[fig:GeneralizedBoltmannMachine\] and Remark \[remark:quarto\]). Then, we generalize the stochastic stability technique, introduced in [@AC1; @CG1] to address Sherrington-Kirkpatrick spin-glass and later developed in [@Barra-JSP2010] to account also for bipartite spin-glasses (namely restricted Boltzmann machines or Hopfield networks [@BarraEquivalenceRBMeAHN] in a machine learning jargon [@DLbook; @Hugo]), so that it can as well deal with the present tripartite and correlated spin-glass. 
Next, by using this novel approach -that is mathematically well controllable at any stage of the calculations- we obtain the expression of the quenched replica-symmetric free energy related to the model (as well as the set of self-consistent equations for the order parameters) and we show that the resulting picture sharply coincides with that obtained via the replica-trick analysis [@Albert2]. This implies, in a cascade fashion, that all the results previously heuristically derived are actually proved (the most remarkable one being the saturation of the critical capacity). Finally, we extend our analysis to order-parameter fluctuations in order to investigate ergodicity breaking: interestingly, as suggested also by the self-consistencies, we find that -without sleeping- ergodicity breaks as predicted by Amit-Gutfreund-Sompolinsky [@Amit] (as it should), but -as sleeping takes place- the spin-glass region shrinks and ultimately the network phase-diagram exhibits only retrieval and ergodic phases (see Fig.s \[fig:erglines\],\[fig:phasediag\]). This paper is structured as follows: in Sec. $2$, once the model is introduced and embedded in its statistical mechanical framework, we calculate its quenched free energy by introducing a novel interpolating structure à la Guerra and this provides a first picture of the phase diagram of the model (as we can identify the transition between the retrieval and the spin-glass regions). Next, in Sec. $3$, we study the fluctuations of the order parameters to inspect where ergodicity is spontaneously broken as this is a signature of the critical line, namely the transition between the ergodic and the spin-glass regions): by combining the two results a full picture of the phase diagram of the model can be finally deduced. Sec. $4$ is left for conclusions. Technical details and further remarks on the interpolation approach are provided in the appendices. ![Critical line for the transition between retrieval and spin-glass phases for various values of the unlearning time. From the left to the right: $t=0$ (Hopfield, black dashed line), $0.1$, $1$ and $1000$. The inset shows two curves tracing the boundary of the maximal retrieval regions where patterns are global free energy minima (inner boundary) or local free energy minima (outer boundary) in the long sleep limit.[]{data-label="fig:criticallines"}](retrieval_lines.pdf){width="\textwidth"} Replica symmetric free energy analysis ====================================== Definition of the Model {#replica-simmetric-theory} ----------------------- Driven by the works of Personnaz, Guyon, Dreyfus [@Personnaz] and of Dotsenko et al. 
[@Dotsenko1; @Dotsenko2], in [@Albert2] we introduced the following generalization of the standard Hopfield paradigma [@Hopfield], referred to as “reinforcement&removal” (RR) algorithm: consider a network composed by $N$ Ising neurons $\{ \sigma_i \}_{i=1,...,N}$ and $P$ patterns $\{\xi^{\mu}\}_{\mu=1,...,P}$ (namely quenched random vectors of the same length $N$), and denote with $t \in \mathbb{R}^+$ the sleep extent (such that for $t=0$ the network has never slept, while for $t\to \infty$ an entire sleeping session has occurred), we can then introduce the following The Hamiltonian of the reinforcement$\&$removal model reads as:[^4] \[new-model\] H\_[N,P]{}\^(|,t):= - \_[i=1]{}\^[N]{}\_[j=1]{}\^[N]{}\_[=1]{}\^[P]{}\_[=1]{}\^[P]{}\_i\^\_j\^( )\_[,]{} \_i \_j, where $\sigma_i = \pm 1 \ \forall i \in (1,...,N)$, $\xi^1$ -that is the pattern candidate to be retrieved- has binary entries $\xi_i^{1} \in \{-1,+1\}$ drawn from $P(\xi_i^{\mu}=+1) = P(\xi_i^{\mu}=-1) = \frac12$, while the remaining $P-1$ patterns $\{\xi^{\mu}\}_{\mu=2,...,P}$, have i.i.d. standard Gaussian entries $\xi_i^{\mu} \sim \mathcal{N}[0,1]$, and the correlation matrix $\boldsymbol{C}$ is defined as $$C_{\mu,\nu} := \frac{1}{N}\sum_{i=1}^{N}\xi_i^{\mu}\xi_i^{\nu}.$$ \[Marco2\] We stress that, for the sake of mathematical convenience, as deepened in [@Agliari-Barattolo], we take solely the pattern candidate for retrieval (i.e. the [*signal*]{}) to be Boolean, while all the remaining ones (acting as [*slow noise*]{} on the retrieval) are chosen as Gaussian: although neural networks, in general, do not exhibit the universality properties of spin glasses [@Genovese], this is no longer true if we confine our focus solely to the structure of the slow noise generated by patterns[^5]. Note that the matrix ${\xi}^T \left( \frac{1+t}{\mathbb{I}+ t {C}} \right) {\xi}$, encoding the neuronal coupling, recovers the Hebbian kernel for $t=0$ , while it approaches the pseudo-inverse matrix for $t \rightarrow \infty$ (see [@Albert2] for the proof). Accordingly, the model described by the Hamiltonian (\[new-model\]) spans, respectively, from the standard Hopfield model $(t \to 0)$ to the Kanter-Sompolinksy model [@KanterSompo] $(t \to \infty)$. During the sleeping session, both reinforcement and remotion take place: oversimplifying, in the generalized synaptic coupling appearing in (\[new-model\]), the denominator ([*i.e.*]{}, the term $\propto (1+tC)^{-1}$) yields to the remotion of unwanted mixture states, while the numerator ([*i.e.*]{}, the term $\propto 1+t$) reinforces the pure memories. We are interested in obtaining the phase diagram of the model coded by the cost function (\[new-model\]), solely in the thermodynamic limit and under the replica symmetric assumption. To achieve this goal the following definitions are in order. Using $\beta \in \mathbb{R}^+$ as a parameter tuning the level of [*fast noise*]{} in the network (with the physical meaning of inverse temperature, i.e. calling $T$ the temperature, $\beta \equiv T^{-1}$ in proper units,), the partition function of the model (\[new-model\]) is introduced as \[lazoccola\] Z\_[N,P]{}(|,t) := \_[{}]{} e\^[-H\_[N,P]{}\^[(RR)]{}(|,t)]{} = \_[{ }]{}. 
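A minimal numerical sketch of the interpolating character of this kernel (the sizes $N$, $P$ and the value of $t$ are illustrative, and all patterns are taken binary here, whereas in the text only $\xi^1$ is binary): at $t=0$ the coupling matrix reduces to the Hebbian one, while for large $t$ it approaches the pseudo-inverse (projector) kernel of the Kanter-Sompolinsky model.

```python
import numpy as np

rng = np.random.default_rng(0)
N, P, t_long = 400, 40, 1.0e6                 # illustrative sizes and a long "sleep" time
xi = rng.choice([-1.0, 1.0], size=(P, N))     # patterns (all binary here, for simplicity)

C = xi @ xi.T / N                             # correlation matrix C_{mu,nu}
I = np.eye(P)

def rr_kernel(t):
    """Coupling matrix  xi^T (1+t)(I + t C)^{-1} xi / N  of Eq.(new-model)."""
    return xi.T @ ((1.0 + t) * np.linalg.inv(I + t * C)) @ xi / N

hebb = xi.T @ xi / N                          # Hebbian kernel
proj = xi.T @ np.linalg.inv(C) @ xi / N       # pseudo-inverse (projector) kernel

print(np.allclose(rr_kernel(0.0), hebb))      # True: t = 0 is the Hopfield coupling
print(np.max(np.abs(rr_kernel(t_long) - proj)))   # small: large t -> Kanter-Sompolinsky
```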
Denoting with $\mathbb{E}_{\xi}$ the average over the quenched patterns, for a generic function $O(\sigma,\xi)$ of the neurons and the couplings, we can define the Boltzmann $\langle O(\sigma,\xi)\rangle$ as $$\begin{aligned} \langle O(\sigma,\xi) \rangle &:=& \frac{\sum_{\{\sigma\}} O(\sigma,\xi) e^{-\beta H^{(RR)}_{N,P}(\sigma|\xi,t)}}{Z_{N,P}(\sigma|\xi,t)},\\ $$ such that its quenched average reads as $\mathbb{E}_{\xi} \langle O(\sigma,\xi) \rangle$. Once introduced the partition function $Z_{N,P}(\sigma|\xi,t)$, we can define the infinite volume limit of the intensive quenched free-energy $F_N(\alpha, \beta, t)$ and of the intensive quenched pressure $A(\alpha,\beta,t)$ associated to the model (\[new-model\]) as \[freeEnergy\] - F(, , t) A(,,t) := \_[N ]{} 1[N]{} Z\_[N,P]{}(|,t). As anticipated, the pressure of the model (\[new-model\]) was analyzed in [@Albert2] via replica-trick [@Coolen] (corroborated by extensive numerical simulations), showing that (at the replica symmetric level of description) the maximal critical capacity of this neural network saturates the Gardner’s bound [@Gardner] (i.e. $\alpha_c =1$, for symmetric noiseless networks). \[remark:quarto\] The partition function defined in (\[lazoccola\]) can be represented in Gaussian integral form as \[eq:boltzmann\] Z\_[N,P]{}(|,t) &= \_[{ } ]{} (\_[=1]{}\^[P]{} d (z\_) )(\_[i=1]{}\^[N]{} d (\_i) )\ &(\_[, i]{}\^[P,N]{}z\_\^\_i \_i +i \_[, i]{}\^[P,N]{}z\_\^\_i \_i ), where $d \mu(z_\mu)$ and $d \mu(\phi_i)$ are the standard Gaussian measures. This relation shows that the partition function of the reinforcement$\&$removal model is equivalent to the partition function of a tripartite spin-glass where the intermediate party (or [*hidden layer*]{} to keep a machine learning jargon) is made of real neurons $\{ z_{\mu}\}_{\mu=1,...,P}$ with $z_{\mu} \sim \mathcal{N}[0,1], \forall \mu$, while the external layers are made, respectively, of a set of Boolean neurons $\{\sigma_i \}_{i=1,...,N}$ (the [*visible layer*]{}) and of a set of imaginary neurons with magnitude $\{ \phi \}_{i=1,...,N}$, being $\phi_{i} \sim \mathcal{N}[0,1], \forall i$ (the [*spectral layer*]{}), see Fig. \[fig:GeneralizedBoltmannMachine\]. ![Stylized representation of the generalized Hopfield network (left) and its dual generalized (restricted) Boltzmann machine (right), namely the three-partite spin-glass under study: in machine learning jargon these parties are called [*layers*]{} and, here, they are respectively the visible, hidden and spectral layers. Note further that, as it should, when $t \to 0$ the duality above reduces to the standard picture of Hopfield networks and restricted Boltzmann machines [@Agliari-Barattolo; @BarraEquivalenceRBMeAHN].[]{data-label="fig:GeneralizedBoltmannMachine"}](MapUnlearning.pdf){width="\textwidth"} Guerra’s interpolating framework for the free energy ---------------------------------------------------- Once expressed the partition function (\[lazoccola\]) in its integral representation (\[eq:boltzmann\]), we can introduce the related tripartite spin glass Hamiltonian as $$H_{N,P}=\frac{a}{\sqrt{N}}\sum_{i=1}^N \sum_{\mu=1}^P z_\mu \xi_i^\mu k_i, \label{eq:energycostf}$$ where we introduced the “multi-spin” $k_i=\sigma_i+b \phi_i$ and where a=,b=i. \[eq:defspins\] Note that the cost function (\[eq:energycostf\]) and the one associated to the original model (\[new-model\]) share the same partition function and therefore exhibit the same Thermodynamics. 
By a practical perspective, the latter is more suitable for understanding the retrieval capabilities of the network, the former for dealing with its learning skills [@BarraEquivalenceRBMeAHN; @Barra-RBMsPriors1]. In the following we consider the challenging case with $P=\alpha N$ for large $N$ and we aim to obtain an expression for the quenched pressure (\[freeEnergy\]) in terms of the order parameters introduced in the next The natural order parameters for the neural network model (\[new-model\]) -as suggested by its integral representation (\[eq:energycostf\])- are the overlaps $q_{ab}$ and $p_{ab}$ between the $k$’s and the $z$’s variables, respectively, as functions of two replicas (a,b) of the system, and the generalized Mattis overlap[^6] $m_1$, namely q\_[ab]{}&:=\_[i=1]{}\^N k\_i\^[(a)]{} k\_i\^[(b)]{},\ p\_[ab]{}&:=\_[2]{} z\_\^[(a)]{} z\_\^[(b)]{},\ m\_1&:=\_[i=1]{}\^N \_i\^1 k\_i. \[eq:orderparam\] The replica symmetric approximation (RS) is imposed by requiring that the order-parameters of the theory do not fluctuate in the thermodynamic limit[^7], i.e. q\_[ab]{}& W \_[ab]{}+ q (1-\_[ab]{}),\ p\_[ab]{}& X \_[ab]{}+ p (1-\_[ab]{}),\ m\_1& m, \[eq:rsorderparam\] where we called, respectively, $W,q,X,p,m$ the replica symmetric values of the diagonal and off-diagonal overlap $q$, the diagonal and off-diagonal overlap $p$ and the Mattis magnetization $m_1$. Now the plan is to get an explicit expression for the pressure (\[freeEnergy\]) in terms of these order parameters, to extremize the former over the latter and get a phase diagram for the network. To reach this goal we generalize a Guerra’s interpolation scheme [@Barra-JSP2010]: the idea is to compare the original system, as represented in eq. (\[eq:energycostf\]) (namely a three-layer correlated spin glass), with three random single-layers, where each layer experiences, statistically, the same mean-field that would have been produced by the other layers over it. To this aim we introduce the following Being $s \in [0,1]$ an interpolating parameter, $\{ \eta_i \}_{i \in (1,...,N)}$ a set of $N$ i.i.d. Gaussian variables, $\{ \lambda_{\mu} \}_{\mu \in (2,...,P)}$ a set of $P-1$ i.i.d. Gaussian variables, and the scalars $C_1,C_2,C_3,C_4,C_5$ to be set a posteriori, we use as interpolating pressure the following quantity $$\begin{aligned} \label{eq:interpfunc} \mathcal{A}(s)&:=&\frac{1}{N}{\operatorname{\mathbb{E}_{\xi,\eta,\lambda}}} \ln\sum_{\sigma}\int\dm[z,\phi] \exp \Big[ \sqrt{s}\frac{a}{\sqrt{N}}\sum_{i,\mu\ge 2} z_\mu \xi_i^\mu k_i+\sqrt{s}\frac{a}{\sqrt{N}}\sum_{i} z_1 \xi_i^1 k_i\\ & + &\sqrt{1-s}\Big( C_1 \sum_i^N \eta_i k_i + C_2 \sum_{\mu\ge2} \lambda_\mu z_\mu \Big)+\frac{1-s}{2}\Big( C_3 \sum_{\mu\ge 2} z_\mu^2 + C_4 \sum_i k_i^2 + C_5 a \sum_i \xi_i^1 k_i \Big)\Big]. \nonumber\end{aligned}$$ When $s=1$ we recover the original model, namely $A(\alpha,\beta,t)=\lim_{N \to \infty}\mathcal{A}(s=1)$, while for $s \to 0$ we are left with a one-body problem, and, consequently, the probabilistic structure of $\mathcal{A}(s=0)$ is more tractable. We note the importance of splitting the sum on the $\xi$’s into $\xi^1$ (i.e. the [*signal*]{}) and the $\xi^2\cdots\xi^P$ (i.e. the [*quenched noise*]{}) since the quenched average treats them differently, and so we will need to address them separately. 
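The Mattis overlap defined above can be estimated directly by relaxing the $\sigma$ variables of the original cost function (\[new-model\]) under zero-temperature asynchronous dynamics; the sketch below does this for a few values of the sleep extent $t$. The sizes, the corruption level of the initial state and the use of binary patterns for all $\mu$ are illustrative choices, and the auxiliary $z$, $\phi$ variables of the tripartite representation are not simulated.

```python
import numpy as np

rng = np.random.default_rng(1)
N, P = 500, 100                                   # alpha = P/N = 0.2 (illustrative)
xi = rng.choice([-1.0, 1.0], size=(P, N))
C = xi @ xi.T / N

def mattis_after_relaxation(t, corruption=0.15, sweeps=30):
    """Zero-T asynchronous dynamics with the reinforcement&removal kernel; returns m_1."""
    J = xi.T @ ((1 + t) * np.linalg.inv(np.eye(P) + t * C)) @ xi / N
    np.fill_diagonal(J, 0.0)
    sigma = xi[0].copy()                          # start near pattern 1 (the "signal")
    sigma[rng.random(N) < corruption] *= -1
    for _ in range(sweeps):
        for i in rng.permutation(N):
            sigma[i] = 1.0 if J[i] @ sigma >= 0 else -1.0
    return float(np.mean(xi[0] * sigma))          # Mattis overlap m_1

for t in (0.0, 1.0, 1000.0):
    print(f"t = {t:6.1f}   m_1 = {mattis_after_relaxation(t):.3f}")
```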
The infinite volume limit of the quenched pressure related to the model (\[new-model\]) can be obtained by using the Fundamental Theorem of Calculus as $$A(\alpha,\beta,t)\equiv \lim_{N \to \infty} \mathcal{A}(s=1)= \lim_{N \to \infty} \left( \mathcal{A}(s=0)+\int_0^1 \frac{d \mathcal{A}(s)}{ds}\, ds\right). \label{eq:sumrule}$$ To follow this approach, two calculations are in order: the streaming $d_s \mathcal{A}(s)$ (and its successive back-integration) and the evaluation of the Cauchy condition $\mathcal{A}(s=0)$. Let us start with $d_s \mathcal{A}(s)$: =&. We can proceed further by using Wick’s Theorem \[$\mathbb{E}_{x}xF(x)=\mathbb{E}_{x} (x^2) \cdot \mathbb{E}_{x} \partial_xF(x)$\] on the fields $z^1, \,\,\xi^{2\cdots P},\,\,\lambda_\mu,\,\, \eta_i$, obtaining =&. Using the definition of the order parameters we can write $d_s \mathcal{A}(s)$ as =&. It is now convenient to fix the free scalars $C_{1,..,5}$ as C\_1\^2=a\^2p, C\_2\^2=a\^2 q, C\_3=a\^2( W- q), C\_4=a\^2( X - p), C\_5=2 m a, \[eq:interpcoeff\] such that we can recast the streaming $d_s\mathcal{A}(s)$ as =&+\ &+( q p- W X)- m\^2. \[eq:streamfunc\] \[Marco6\] When requiring replica symmetry, we have that $\langle q_{11} \rangle \to W$, $\langle p_{11} \rangle \to X$, $\langle m_1 \rangle \to m$, $\langle q_{12} \rangle \to q$ and $\langle p_{12} \rangle \to p$, hence the evaluation of the integral in eq. (\[eq:sumrule\]) becomes trivial as the r.h.s. of eq. (\[eq:streamfunc\]) reduces to \[famolafinita\] d\_s (s)=( q p- W X)- m\^2 that does not depend on $s$ any longer. We must now evaluate the one-body contribution $\mathcal{A}(s=0)$: this can be done by directly setting $s=0$ in (s=0)=& \_. Performing standard Gaussian integrations we obtain (s=0)=&-(1-C\_3)-(1-C\_4 b\^2)++++\ &+b\^2+2. \[eq:onebody\] Keeping in mind the expressions for the parameters $C_1,...,C_5$ as prescribed in the relations \[eq:interpcoeff\], by plugging eq. (\[famolafinita\]) and eq.  into the sum rule we finally get an expression for the quenched pressure of the model (\[new-model\]) in terms of the replica-symmetric order parameters A\_[[[RS]{}]{}]{}(,,t)=&( q p- W X)- m\^2--+\ &++( X- p)++\ &+2+. To match exactly the notation in [@Albert2] there is still a short way to go: it is convenient to re-scale $ m$, $ p$ and $ X$ as X X,p p,m m, as this allows us to introduce the composite order parameter $ \Delta=1-\alpha \beta^2 b^2( X- p)$ used in [@Albert2]. After these transformations, remembering the definition of the free energy (see (\[freeEnergy\])) and the definition of $(a,b)$ (see ), we obtain exactly the same expression for the quenched free energy as that achieved in [@Albert2] via the replica trick, as stated by the next main In the infinite volume limit, the replica symmetric free energy related to the neural network defined by eq. (\[new-model\]) can be expressed in terms of the natural order parameters of the theory (see def.s (\[eq:orderparam\])) as F\_[[[RS]{}]{}]{}(,,t)=&- (1+)-W - p ( W- q)\ &-(+)-\ &--+ +2. \[eq:qRSfreeenergy\] Using the standard variational principle $\vec{\nabla}F_{{{\rm \scriptscriptstyle RS}}}=0$ on the free energy (\[eq:qRSfreeenergy\]), namely by extremizing the latter over the order parameters, we obtain the following set of self-consistent equations for these parameters, whose behavior is outlined in the plots of Fig. \[fig:jumpVST\]. m &= ,\ p &=,\ &=1+,\ q &=W+-\^[-2]{},\ W \^2 &=1-+ -\^[-2]{}. 
\[eq:sceqs\] We stress that we obtained exactly the same self-consistencies previously appeared in [@Albert2], thus all the consequences stemming by them, as reported in that paper, are here entirely confirmed. ![[**Retrieval state solution for the order parameters and free energy at $t=1000$.**]{} First row: on the left, the plot shows the Mattis magnetization $m$ as a function of the temperature for various storage capacity values ($\alpha=0$, $0.05$, $0.2$ and $0.5$, going from the right to the left). The vertical dotted lines indicates the jump discontinuity identifying the critical temperature $T_c(\alpha)$ which separates the retrieval region from the spin-glass phase; on the right, the plot shows the solutions of the non-diagonal overlap $q$ (normalized to the zero-temperature value $q_0=q(T=0)$), for the same capacity values. The solution is computed in the retrieval region ([*i.e.*]{} $T<T_c(\alpha)$). Second row: on the left, the plot shows the solution for the diagonal overlap $-W$ in the retrieval region for $\alpha=0$, $0.05$, $0.2$ and $0.5$, finally, on the right the plot shows the free-energy as a function of the temperature for various storage capacity values ($\alpha=0.05$, $0.2$ and $0.5$, going from the bottom to the top) for both the retrieval (red solid lines) and spin-glass (black dashed lines) states.[]{data-label="fig:jumpVST"}](all_plots.pdf) Study of the overlap fluctuations ================================= As proved in the previous section, the reinforcement$\&$removal algorithm makes the retrieval region in the $(\alpha,\beta)$ plane wider and wider as $t$ is increased (see Fig. \[fig:criticallines\]). As the retrieval region pervades the spin-glass region, one therefore naturally wonders whether the opposite boundary of the spin-glass region (namely the critical line depicting the transition where ergodicity breakdowns) is as well deformed. To address this point, we now study the behavior of the overlap fluctuations, suitably centered around the thermodynamic values of the overlaps and properly rescaled in order to allow them to diverge when the system approaches the critical line. In fact, they are meromorphic functions and their poles identify the evolution of the critical surface $\beta_c(\alpha,t)$ (if any). It is worth recalling that the critical line for the standard Hopfield model [@Hopfield] as predicted by the AGS theory [@Amit] is $\beta_c (\alpha, t=0) = (1+\sqrt{\alpha})^{-1}$. Guerra’s interpolating framework for the overlap fluctuations ------------------------------------------------------------- The idea is the same exploited in the previous section, namely to use the generalized Guerra’s interpolation scheme (see eq. (\[eq:interpfunc\])) to evaluate the evolution of the order parameter’s correlation functions from $s=0$ (where they do not represent the real fluctuations in the system, but their evaluation should be possible) up to $s=1$ (where they reproduce the true fluctuations). To achieve this goal for the generic correlation function $O$, we need to evaluate the Cauchy condition $\langle O(s=0) \rangle$ and the derivative $\partial_s \langle O(s) \rangle$. However, in contrast with the previous section where we imposed replica symmetry, here -as we just want to infer the critical line- we impose ergodic behavior, namely, we assume that the system is approaching this boundary from the high fast-noise limit. This allows us to set all the mean values of the overlaps to zero and to achieve explicit solutions. 
The centered and rescaled overlap fluctuations $\theta_{lm}$ and $\rho_{lm}$ are introduced as \_[lm]{}&=\ \_[lm]{}&=. As we will address the problem of the overlap fluctuations in the ergodic region, the signal is absent, thus there is no need to introduce a rescaled Mattis order parameter: only the boundary between the ergodic region and the spin-glass region is under study here. It is convenient to introduce the $r-$replicated interpolating pressure $\calA_J^r(s)$, where we further added a source field $J$, coupled to an observable $O$ (that is a smooth function of the neurons of the $r$-replicas) as \[eq:repinterp\] \^r\_J(s)=& \_[\_R]{}. where $k_i$ is the same as in Definition $5$ and the interpolation constants $C_{1,2,3,4}$ are the same given in the previous section (see eq. ()). By definition [O(s)]{}=[. \_[J=0]{}]{}, \_s[O(s)]{}=[. \_[J=0]{}]{}. \[eq:repgenfunc\] Therefore, in order to evaluate the fluctuations of $O$ we need to evaluate first $\partial_s \calA^r_J$ and, by a routine calculation, we get \_s \^r\_J=(1+t)\_[l,m=1]{}\^r,g\_[l,m]{}=\_[l,m]{}\_[l,m]{}. To evaluate the fluctuations of a general operator $O$, function of $r-$replicas, we must use the results and perform the same rescaling that we did in the previous section, namely (X,p)(X,p). Overall this brings to the next Given $O$ as a smooth function of $r$ replica overlaps $\left( q _ { 1 } , \ldots , q _ { r } \right)$ and $\left( p _ { 1 } , \ldots , p _ { r } \right) ,$ the following streaming equation holds: d\_ [O]{}=\_ [ a , b ]{} \^ [ r ]{} [O g\_ [ a , b ]{}]{}- r \_ [ a = 1 ]{} \^ [ r ]{} [O g \_ [ a , r + 1 ]{} ]{} + [O g\_ [ r + 1 , r + 2 ]{} ]{} - [O g\_ [ r + 1 , r + 1 ]{} ]{}, \[eq:fluctstream\] where we used the operator $d_{\tau}$ defined as d\_ = , in order to simplify calculations and presentation. Criticality and ergodicity breaking ----------------------------------- To study the overlap fluctuations we must consider the following correlation functions (it is useful to introduce and link them to capital letters in order to simplify their visualization): [\_[12]{}\^2 ]{}\_s &= A(s), &[\_[12]{}\_[13]{}]{}\_s &= B(s), &[\_[12]{}\_[34]{}]{}\_s &= C(s),\ [\_[12]{}\_[12]{} ]{}\_s &= D(s), &[\_[12]{}\_[13]{} ]{}\_s &= E(s), &[\_[12]{}\_[34]{}]{}\_s &= F(s),\ [\_[12]{}\^2]{}\_s &= G(s), &[\_[12]{}\_[13]{} ]{}\_s &= H(s), &[\_[12]{}\_[34]{}]{}\_s &= I(s),\ [\_[11]{}\^2 ]{}\_s &= J(s), &[\_[11]{}\_[11]{} ]{}\_s &= K(s), &[\_[11]{}\^2]{}\_s &= L(s),\ [\_[11]{}\_[12]{} ]{}\_s &= M(s), &[\_[11]{}\_[12]{} ]{}\_s &= N(s), &[\_[11]{}\_[12]{}]{}\_s &= O(s),\ [\_[11]{}\_[12]{}]{}\_s &= P(s),&[\_[11]{}\_[22]{} ]{}\_s &= Q(s),&[\_[11]{}\_[22]{} ]{}\_s &= R(s).\ [\_[11]{}\_[22]{}]{}\_s &= S(s), Since we are interested in finding the critical line for ergodicity breaking [*from above*]{} we can treat $\theta_{a,b},\rho_{a,b}$ as Gaussian variables with zero mean (this allows us to apply Wick-Isserlis theorem inside averages) as we can also treat both the $k_i$ and $z_\mu$ as zero mean random variables in the ergodic region (thus all averages involving uncoupled fields are vanishing): this considerably simplifies the evaluation of the critical line (as expected since we are approaching criticality from the [*trivial*]{} ergodic region [@BarraGuerra-JMP2008]). 
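The Wick-Isserlis step used here, quoted earlier in the form $\mathbb{E}_{x}[xF(x)]=\mathbb{E}_{x}(x^2)\,\mathbb{E}_{x}[\partial_xF(x)]$ for a centred Gaussian $x$, can be sanity-checked numerically; the test function and the variance below are arbitrary choices.

```python
import numpy as np

rng = np.random.default_rng(2)
sigma2 = 1.7                                  # variance of the centred Gaussian field
x = rng.normal(0.0, np.sqrt(sigma2), size=2_000_000)

F  = lambda u: np.tanh(u) + 0.3 * u**3        # arbitrary smooth test function
dF = lambda u: 1.0 / np.cosh(u)**2 + 0.9 * u**2

lhs = np.mean(x * F(x))                       # E[x F(x)]
rhs = np.mean(x**2) * np.mean(dF(x))          # E[x^2] E[F'(x)]
print(lhs, rhs)                               # agree within Monte Carlo error
```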
We can thus reduce the analysis to [\_[12]{}\^2 ]{}\_s &= A(s), &[\_[12]{}\_[12]{} ]{}\_s &= D(s), &[\_[12]{}\^2 ]{}\_s &= G(s),\ [\_[11]{}\^2 ]{}\_s &= J(s), &[\_[11]{}\_[11]{} ]{}\_s &= K(s), &[\_[11]{}\^2]{}\_s &= L(s),\ [\_[11]{}\_[22]{} ]{}\_s &= Q(s),&[\_[11]{}\_[22]{} ]{}\_s &= R(s), &[\_[11]{}\_[22]{}]{}\_s &= S(s). According to and to the previous reasoning we obtain: d\_&=2AD,\ d\_&=D\^2+AG,\ d\_&=2GD. \[eq:pde\] Suitably combining $A$ and $G$ in we can write d\_=0A()=r\^2G(), r\^2=. Now we are left with d\_&=D\^2+r\^2G\^2,\ d\_&=2GD. \[eq:pde2\] The trick here is to complete the square by summing $d_\tau D + r d_\tau G$ thus obtaining d\_Y&=Y\^2,\ Y&=D+rG,\ d\_&=2G(Y-rG). \[eq:pde3\] The solution is trivial and it is given by Y()=,Y\_0=D(0)+. \[eq:fluctY\] ![[**Ergodicity breaking critical line.**]{} The plot shows a comparison between the theoretical predictions (black dashed lines) for the ergodicity breaking critical line according to Eq. and numerical solutions for spin glass states (red markers). The latter are evaluated by solving the self-consistency equations with $m=0$ with $\alpha$ fixed and searching for the temperature $T$ above which the solution has $q=0$. Going from top to bottom of the plot, the sleep extent is $t=0.1$, $1$ and $2$.[]{data-label="fig:ergo"}](ergodicity.pdf) So we are left with the evaluation of the correlations at $s=0$: namely the Cauchy conditions related to the solution coded in eq. (\[eq:fluctY\]). To this task we introduce a one-body generating function for the momenta of $z,k$: this can be done by setting inside $s=0,r=1$ and adding source fields $(j_i, J_{\mu})$ coupled respectively to $(k_i,z_\mu)$, with $i \in (1,...,N),\,\mu \in (1,...,P)$. Since we are approaching the critical line from the high fast noise limit we can set $m,p,q=0$ (when we explicitly make use of the coefficients ), overall writing \[eq:onebodygenf\] F(j,J)=&\_. Clearly, we took great advantage in approaching the ergodic region from above, since even the one-body problem (for the Cauchy condition) has been drastically simplified: showing only the relevant terms in $j,J$ we have F(j,J)=\_i j\_i\^2+\_J\_\^2 +O(j\^3). As anticipated, all the observable averages needed at $s=0$ can now be calculated simply as derivatives of $F(j,J)$, thus the $s=0$ correlation functions are finally given by D(0)&=[. (\_j F)\^2(\_J F)\^2 \_[j,J=0]{}]{}=0,\ A(0)&=[. (\^2\_j F)\^2 \_[j,J=0]{}]{}=\^2=W\^2,\ G(0)&=[. (\^2\_J F)\^2 \_[j,J=0]{}]{}=(1-(1+t)W)\^[-2]{}. Inserting this result in , we get Y()=. Upon evaluating $Y(\tau)$ for $\tau=\beta(1+t)\sqrt{\alpha} s,\, s=1$ and reporting the relevant ergodic self-consistent equations we obtain the following system: \[polo\] Y(s=1)&=,\ W\^2 &= 1-,\ &=1+. Since we are interested in obtaining the critical temperature for ergodicity breaking, where fluctuations (in this case $Y$) grow arbitrarily large we can check where the denominator at the r.h.s. of the first eq. 
(\[polo\]) becomes zero and recast this observation as follows The ergodic region of the model defined by the cost function (\[new-model\]) is delimited by the following critical surface in the $(\alpha,\beta,t)$ space of the tunable parameters $$\label{eq:ergodicityline} \beta_c=\frac{1}{1+t}\Big[\frac{\Delta^2}{1+\sqrt{\alpha}}+t\Delta\Big]\quad \text{with}\quad \Delta=1+\sqrt{\alpha}(1+\sqrt{\alpha})t.$$ At $t=0$, where the model reduces to Hopfield’s scenario, the critical surface correctly collapses over the Amit-Gutfreund-Sompolinsky critical line $\beta_c=(1+\sqrt{\alpha})^{-1}$, but in the large $t$ limit the ergodic region collapses to the axis $T=0$: this may have a profound implication, namely that the ergodic region -during the sleep state- [*phagocytes*]{} the spin-glass region. Since we have already seen that also the retrieval region [*phagocytes*]{} the spin-glass region [^8] this means that spurious states are entirely suppressed with a proper rest, allowing the network to achieve perfect retrieval, as suggested in the pioneering study by Kanter and Sompolinsky [@KanterSompo]. ![Critical lines for ergodicity breaking (dotted curves) and retrieval region boundary (solid curves) for various values of the unlearning time. From the top to the bottom: $t=0$ (black lines, i.e. the Hopfield phase diagram), $t=0.1$ (red lines), $1$ (blue lines) and $1000$ (green lines).[]{data-label="fig:erglines"}](critical.pdf){width="70.00000%"} ![The phase diagram is depicted for different choices of $t$, namely, from left to right, $t=0, 0.1, 1, 1000$. Notice that, as $t$ grows, the retrieval region (blue) and the ergodic region (yellow) get wider at the cost of the spin-glass region (red) which progressively shrinks up to collapse as $t \rightarrow \infty$. Also notice the change in the concavity of the critical line which separates ergodic and spin-glass region.[]{data-label="fig:phasediag"}](PhaseDiagram.pdf){width="\textwidth"} Conclusions and outlooks ======================== In recent years Artificial Intelligence, mainly due to the impressive skills of Deep Learning machines and the GPU-related revolution [@DL1], has attracted the attention of the whole Scientific Community. In particular, the latter includes mathematicians involved in the statistical mechanics of complex systems which has proved to be a fruitful tool in the investigation of neural networks and machine learning, since the early days (not by chance [*Boltzmann machines*]{} are named after [*Boltzmann*]{} [@BM1]). Among the various fields of Artificial Intelligence where, in the present years, statistical mechanics extensively contributed to the cause (e.g. statistical inference and signal processing [@Simona; @Lenka1], combinatorial and computational complexity [@Lenka2; @Zecchina; @Monasson], supervised or unsupervised learning [@Zecchina2; @Huang], deep learning [@Chiara; @Metha], compositional capabilities [@Agliari-PRL1; @Monasson2], and really much more...) the one we deepened in this work deals with the phenomenon of [*dreaming and sleeping*]{}[^9]. In the current work we mathematically described the phenomena of reinforcement and remotion, as pioneered by Crick $\&$ Mitchinson [@Crick], by Hopfield [@HopfieldUnlearning] and by many others in the neuroscience literature, see e.g [@Neuro-1; @Neuro0; @Neuro1; @Neuro2]): interestingly, such mechanisms have been evidenced to lead to an improvement of the retrieval capacity of the system. 
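As a quick numerical illustration of the critical surface in Eq.(\[eq:ergodicityline\]), the sketch below checks the reduction to the Amit-Gutfreund-Sompolinsky line at $t=0$ and shows how the corresponding critical temperature $T_c=1/\beta_c$ decreases as the network sleeps; the values of $\alpha$ and $t$ are illustrative.

```python
import numpy as np

def beta_c(alpha, t):
    """Ergodicity-breaking critical surface of Eq.(eq:ergodicityline)."""
    sa = np.sqrt(alpha)
    delta = 1.0 + sa * (1.0 + sa) * t
    return (delta**2 / (1.0 + sa) + t * delta) / (1.0 + t)

alpha = np.linspace(0.0, 1.0, 5)
print(np.allclose(beta_c(alpha, 0.0), 1.0 / (1.0 + np.sqrt(alpha))))   # AGS line at t = 0

for t in (0.0, 0.1, 1.0, 2.0):
    print(f"t = {t:4.1f}   T_c(alpha=0.5) = {1.0 / beta_c(0.5, t):.3f}")
```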
In particular, in [@Albert2], we showed that the system reaches the expected upper critical capacity $\alpha_c=1$, still preserving robustness with respect to fast noise. However, the statistical mechanical analysis, set at the standard replica symmetric level of description, was carried out via non-rigorous approaches (e.g., replica trick and numerical simulations). In this work we extended a Guerra’s interpolation scheme [@Barra-JSP2010], originally developed to deal with the standard Hopfield model (i.e. equipped with the canonical Hebbian synaptic coupling), to deal with this generalization: at first we showed the equivalence of this model with a three-layer spin-glass where some links among different layers are cloned (hence introducing correlation in the network and in the random fields required for the interpolation) and the third, and novel (w.r.t. the standard equivalence between Hopfield models and two-layers Boltzmann machines [@BarraEquivalenceRBMeAHN; @Barra-RBMsPriors2]), layer is equipped with imaginary real-valued neurons (best suitable to perform spectral analysis[^10]). As a consequence, the resulting interpolating architecture is rather tricky, by far richer than its classical limit yet it turns out to be managable and actually a sum rule for the quenched free energy related to the model can be written and even integrated, under the assumption of replica symmetry: such an expression, as well as those stemming from its extremization for the order parameters, sharply coincides with previous results [@Albert2], confirming them in each detail. We remark that such theorems state also the validity of other previous investigation -all replica trick derived- on unlearning in neural networks (see e.g. [@Dotsenko1; @unlearning1; @KanterSompo]). Beyond confirming previous results, we further systematically developed a fluctuation analysis of the overlap correlation functions, searching for critical behaviour, in order to inspect where ergodicity breaks down and in this investigation we found a very interesting result: as long as the Hopfield model is awake, the critical line is the one predicted by Amit-Gutfreund-Sompolinksy (as it should and as it is known by decades). However, as the network sleeps, the ergodic region starts to invade the spin glass region, ultimately destroying the spin glass states entirely, thus allowing the network (at the end of an entire sleep session) to live [*solely*]{} within a -quite large- retrieval region, surrounded by ergodicity: noticing that at this final stage of sleeping the network approached the Kanter-Sompolinsky model [@KanterSompo], it shines why these Authors called their model [*associative recall of memory without errors*]{}. Acknowledgments {#acknowledgments .unnumbered} =============== The Authors acknowledge partial financial fundings by MIUR, via [*FFABR2018-(Barra)*]{} and via [*Rete Match - Progetto Pythagoras*]{} (CUP:J48C17000250006) and by INFN. [99]{} D.H. Ackley, G.E. Hinton, T.J. Sejnowski, [*A learning algorithm for Boltzmann machines*]{}, Cognitive Sci. **9**.1:147-169, (1985). E. Agliari, et al., [*Multitasking associative networks*]{}, Phys. Rev. Lett. **109**, 268101, (2012). E. Agliari, A. Barra, C. Longo, D. Tantari, [*Neural Networks retrieving binary patterns in a sea of real ones*]{}, J. Stat. Phys. **168**, 1085, (2017). E. Agliari, A. Barra, B. Tirozzi, [*Free energies of Boltzmann Machines: self-averaging, annealed and replica symmetric approximations in the thermodynamic limit*]{}, J. Stat., in press. E. 
Agliari, et al., [*Multitasking attractor networks with neuronal threshold noises*]{}, Neural Networks **49**, 19, (2013). E. Agliari, et al., [*Parallel retrieval of correlated patterns: From Hopfield networks to Boltzmann machines*]{}, Neural Networks **38**, 52, (2013). E. Agliari, et al, [*Immune networks: multitasking capabilities near saturation*]{}, J.Phys.A: Math. $\&$ Theor. **46**(41):415003, (2003). M. Aizenman, P. Contucci, [*On the stability of the quenched state in mean-field spin-glass models*]{}, J. Stat. Phys. **92**(5-6):765, (1998). D.J. Amit, [*Modeling brain functions*]{}, Cambridge Univ. Press (1989). D. Amit, H. Gutfreund, H. Sompolinsky, [*Spin-glass models of neural networks*]{}, Phys. Rev. A **32**.2:1007, (1985). D. Amit, H. Gutfreund, H. Sompolinsky, [*Storing infinite numbers of patterns in a spin-glass model of neural networks*]{}, Phys. Rev. Lett. **55**.14:1530, (1985). A. Engel, C. Van den Broeck, [*Statistical mechanics of learning*]{}, Cambridge University Press (2001). C. Baldassi, A. Braunstein, N. Brunel, and R. Zecchina, [*Efficient supervised learning in networks with binary synapses*]{}, Proc. Natl. Acad. Sci. **104**, 11079, (2007). M. Baity-Jesi, et al., [*Comparing dynamics: Deep neural networks versus glassy systems*]{}, preprint arXiv:1803.06969, (2018). A. Barra, M. Beccaria, A. Fachechi,[*A new mechanical approach to handle generalized Hopfield neural networks*]{}, Neural Networks (2018). A. Barra, et al., [*On the equivalence among Hopfield neural networks and restricted Boltzman machines*]{}, Neural Networks **34**, 1-9, (2012). A. Barra, et al., [*Phase transitions of Restricted Boltzmann Machines with generic priors*]{}, Phys. Rev. E **96**, 042156, (2017). A. Barra, et al., [*Phase Diagram of Restricted Boltzmann Machines $\&$ Generalized Hopfield Models*]{}, Phys. Rev. E **97**, 022310, (2018). A. Barra, G. Genovese, F. Guerra, [*The replica symmetric approximation of the analogical neural network*]{}, J. Stat. Phys. **140**(4):784, (2010). A. Barra, G. Genovese, F. Guerra, [*Equilibrium statistical mechanics of bipartite spin systems*]{}, J. Phys. A **44**, 245002, (2011). A. Barra, F. Guerra, [*About the ergodic regime of the analogical Hopfield neural network*]{}, J. Math. Phys. **49**, 125217, (2008) A. Bovier, V. Gayrard, [*Hopfield models as generalized random mean field models*]{}, Mathematical aspects of spin glasses and neural networks, 3-89, Birkhauser, Boston (1998). A. Bovier, V. Gayrard, P. Picco, [*Gibbs states of the Hopfield model in the regime of perfect memory*]{}, Prob. Theor. $\&$ Rel. Fields **100**(3):329, (1994). A. Bovier, V. Gayrard, P. Picco, [*Gibbs states of the Hopfield model with extensively many patterns*]{}, J. Stat. Phys. **79**(1-2):395, (1995). P. Carmona, Y. Hu, [*Universality in Sherrington–Kirkpatrick’s spin glass model*]{}, Ann. Henri Poincarè **42**, 2, (2006). S. Cocco, R. Monasson, [*Adaptive cluster expansion for inferring Boltzmann machines with noisy data*]{}, Phys. Rev. Lett. **106**.9: 090601, (2011). A.C.C. Coolen, R. Kuhn, P. Sollich, [*Theory of neural information processing systems*]{}, Oxford Press (2005). P. Contucci, C. Giardinà, [*Spin-Glass Stochastic Stability: a Rigorous Proof*]{}, Annales Henri Poincar[é]{} **6**:915-923, (2005). F. Crick, G. Mitchinson, [*The function of dream sleep*]{}, Nature **304**, 111, (1983). S. Diekelmann, J. Born, [*The memory function of sleep*]{}, Nature Rev. Neuroscience **11**(2):114, (2010). V. Dotsenko, N.D. Yarunin, E.A. 
Dorotheyev, [*Statistical mechanics of Hopfield-like neural networks with modified interactions*]{}, J. Phys. A **24**, 2419, (1991). V. Dotsenko, B. Tirozzi, [*Replica symmetry breaking in neural networks with modified pseudo-inverse interactions*]{}, J. Phys. A **24**:5163-5180, (1991). A. Fachechi, E. Agliari, A. Barra, [*Dreaming neural networks: forgetting spurious memories and reinforcing pure ones*]{}, submitted to Neural Nets available at arXiv:1810.12217 (2018). E. Gardner, [*The space of interactions in neural network models*]{}, J. Phys. A **21**(1):257, (1988). G. Genovese, [*Universality in bipartite mean field spin glasses*]{}, J. Math. Phys. **53**(12):123304, (2012). I. Goodfellow, Y. Bengio, A. Courville, [*Deep Learning*]{}, M.I.T. press (2017). A. Hern, [*Yes, androids do dream of electric sheep*]{}, The Guardian, Technology and Artificial Intelligence (2015). J.A. Hobson, E.F. Pace-Scott, R. Stickgold, [*Dreaming and the brain: Toward a cognitive neuroscience of conscious states*]{}, Behavioral and Brain Sciences **23**, (2000). J.J. Hopfield, [*Neural networks and physical systems with emergent collective computational abilities*]{}, Proceedings of the national academy of sciences 79.8 (1982): 2554-2558. J.J. Hopfield, D.I. Feinstein, R.G. Palmer, [*Unlearning has a stabilizing effect in collective memories*]{}, Nature Lett. **304**, 280158, (1983). J.A. Horas, P.M. Pasinetti, [*On the unlearning procedure yielding a high-performance associative memory neural network*]{}, J. Phys. A **31**, L463-L471, (1998). H. Huang, K. Y. Michael Wong, and Y. Kabashima, [*Entropy landscape of solutions in the binary perceptron problem*]{}, J. Phys. A **46**, 375002, (2013). I. Kanter, H. Sompolinsky, [*Associative recall of memory without errors*]{}, Phys. Rev. A **35**.1:380, (1987). F. Krzakala, M. Mezard, F. Sausset, Y.F. Sun, L. Zdeborova, [*Statistical-physics-based reconstruction in compressed sensing*]{}, Phys. Rev. X **2**(2), 021005, (2012). F. Krzakała, A. Montanari, F. Ricci-Tersenghi, G. Semerjian, L. Zdeborova, [*Gibbs states and the set of solutions of random constraint satisfaction problems*]{}, Proc. Natl. Acad. Sci. **104**:(25),10318, (2007). Y. Le Cun, Y. Bengio, G. Hinton, [*Deep learning*]{}, Nature **521**:436-444, (2015). P. Maquet, [*The role of sleep in learning and memory*]{}, Science **294**.5544:1048, (2001). J.L. McGaugh, [*Memory - a century of consolidation*]{}, Science **287**.5451:248-251, (2000). M. Mezard, G. Parisi, M.A. Virasoro, [*Spin glass theory and beyond: an introduction to the replica method and its applications*]{}, World Scientific, Singapore (1987) M. Mezard, G. Parisi, R. Zecchina, [*Analytic and algorithmic solution of random satisfiability problems*]{}, Science **297**.5582:812-815, (2002). P. Mehta, D.J. Schwab, [*An exact mapping between the variational renormalization group and deep learning*]{}, preprint, arXiv:1410.3831, (2014). R. Monasson, R. Zecchina, S. Kirkpatrick, B. Selman, L. Troyansky, [*Determining computational complexity from characteristic phase transitions*]{}, Nature **400**(6740), 133, (1999). K. Nokura, [*Spin glass states of the anti-Hopfield model*]{}, J. Phys. A **31**, 7447, (1998). K. Nokura, [*Paramagnetic unlearning in neural network models*]{}, Phys. Rev. E **54**(5):5571, (1996). L. Pastur, M. Shcherbina, B. Tirozzi, [*The replica-symmetric solution without replica trick for the Hopfield model*]{}, J. Stat. Phys. **74**(5-6):1161, (1994). L. Pastur, M. Shcherbina, B. 
Tirozzi, [*On the replica symmetric equations for the Hopfield model*]{}, J. Math. Phys. **40**(8): 3930, (1999). L. Personnaz, I. Guyon, G. Dreyfus, [*Information storage and retrieval in spin-glass like neural networks*]{}, J. Phys. Lett. **46**, L-359:365, (1985). R. Salakhutdinov, G. Hinton, [*Deep Boltzmann machines*]{}, Artificial Intelligence and Statistics (2009). R. Salakhutdinov, H. Larochelle, [*Efficient learning of deep Boltzmann machines*]{}, Proc. thirteenth int. conf. on artificial intelligence and statistics, 693, 2010. H.S. Seung, H. Sompolinsky, N. Tishby, [*Statistical mechanics of learning from examples*]{}, Phys. Rev. A **45**(8):6056, (1992). M. Talagrand, [*Rigorous results for the Hopfield model with many patterns*]{}, Prob. Theor. $\&$ Rel. Fiel. **110**(2):177, (1998). M. Talagrand, [*Exponential inequalities and convergence of moments in the replica-symmetric regime of the Hopfield model*]{}, Ann. Prob. 1393-1469, (2000). J. Tubiana, R. Monasson, [*Emergence of Compositional Representations in Restricted Boltzmann Machines*]{}, Phys. Rev. Lett. **118**.13:138301, (2017). S. Wimbauer, J. Leo van Hemmen, [*Hebbian unlearning*]{}, Analysis of Dynamical and Cognitive Systems, Springer, Berlin, 1995. [^1]: We stress that, given the equivalence between restricted Boltzmann machines and Hopfield neural networks [@BarraEquivalenceRBMeAHN], also learning via e.g. [*contrastive divergence*]{} [@Hinton1] ultimately falls into the Hebbian category [@Agliari-Dantoni; @Agliari-Isopi]. [^2]: The growth in the number of spurious states is roughly exponential in the number of stored patterns, namely -in the high storage regime- in the number of neurons. [^3]: Actually the network seems to perform even [*better*]{}, returning its maximal capacity to be $\alpha_c \sim 1.07 > 1$: this is obviously not possible and, as explained by Dotsenko and Tirozzi [@Dotsenko1; @Dotsenko2], it is a chimera of the replica-symmetric regime at which the theory is developed. [^4]: As a matter of notation, we stress that the denominator $1/(\mathbb I+tC)$ in the generalized kernel is intended as the inverse matrix $(\mathbb I+tC)^{-1}$. [^5]: As extensively discussed in [@Barra-RBMsPriors1; @Barra-RBMsPriors2] by varying the nature of the neurons as well as of the pattern entries, for instance ranging from Boolean (Ising) to standard Gaussians, the retrieval performances of the network vary sensibly and, in some limits, are entirely lost: in this sense neural networks do not share [*universality*]{} with standard spin-glasses. [^6]: We arbitrarily (but with no loss of generality) nominated the first pattern as the retrieved one. [^7]: This request is obviously perfectly consistent with the replica-symmetric ansatz when approaching the problem via the replica trick [@Coolen; @Albert2]. [^8]: Note that the ergodic line does not affect the retrieval region, they simply [*fade*]{} one into the other. This is because the critical surface is calculated assuming an ergodic regime (hence, it does not takes into account the signal) and, more importantly, the retrieval region is delimited by a first order phase transition, that is not detected by a second order inspection as that needed for criticality. [^9]: We point out that dreaming has been recently connected to compositional capabilities [@Guardian], the latter being natural properties of diluted retricted Boltzmann machines [@Agliari-Dantoni; @Agliari-Immune; @Monasson2]. 
[^10]: We plan to report soon on the learning algorithms for this generalized restricted Boltzmann machine, where the properties of the spectral layers will spontaneously shine.
{ "pile_set_name": "ArXiv" }
--- author: - 'Ronald A. Remillard, Edward H. Morgan (MIT)' - 'Jeffrey E. McClintock (CFA), Charles D. Bailyn (Yale)' - 'Jerome A. Orosz (Penn State) & Jochen Greiner (AIP, Potsdam)' title: 'Multifrequency Observations of the Galactic Microquasars GRS1915+105 and GROJ1655-40' --- \#1\#2\#3\#4[[\#1]{} [**\#2**]{}, \#3 (\#4)]{} Introduction ============ The two sources of superluminal radio jets [@mirrod] [@tin] [@hj95] in the Galaxy, GRS1915+105 and GRO J1655-40 have been quite active during 1996. These X-ray sources were originally detected during May 1992 and July 1994, respectively, and they have persisted well beyond the typical time scale for X-ray transients [@clg]. Optical study of the companion star in GROJ1655-40 has yielded a binary mass function (3.2 ) that indicates an accreting black hole [@cb] [@ob]. In the case of GRS1915+105, interstellar extinction limits optical/IR studies to weak detections at wavelengths $> 1$ micron [@mir94]. The compact object in this system is supsected of being a black hole due to the spectral and temporal similarities with GROJ1655-40 and other black hole binaries. Both of these microquasars have now been detected with OSSE [@g97] out to photon energies of 600 keV. Investigations of microquasars are motivated by several broad and interrelated purposes: to search for clues regarding the origin of relativistic jets, to probe the properties of the compact objects, and to understand the various spectral components and their evolution as the sources journey through different accretion states. Several research programs are described herein, with emphasis on new results from the Rossi X-ray Timing Explorer (RXTE). RXTE Observations of GRS1915+105 ================================ The RXTE All Sky Monitor [@lev96] began operation during 1996 Jan 5-13, and continuous observing with a 40% duty cycle has been achieved since 1996 Feb 20. GRS1915+105 was found to be bright and incredibly active [@mor], as ASM time series data revealed high amplitude modulations at 10-50 s. These results initiated a series of weekly pointings for the PCA and HEXTE instruments. The yield is approaching ten billion photons in an immensely complex and exciting archive that is fully available as ‘public’ data. The ASM light curve of GRS1915+105 (1996 Feb 20 – 1997 Jan 23) is shown in Fig. 1. These results are derived using version 2 (1/97) of the model for the instrumental response to X-ray shadows through the coded masks. The top panel shows the normalized intensity for the full range (2–12 keV) of the ASM cameras, in which the Crab nebula produces 75.5 c/s. The vertical lines in the upper region show the times of the PCA / HEXTE observations in the public archive. Below this light curve, one of the ASM hardness ratios is displayed; $HR2$ is the ratio of normalized flux at 5–12 keV relative to the flux in the 3–5 keV band. The spectrum of GRS1915+105 is harder than the Crab ($HR2$ = 1.07). Since there is an anticorrelation between the count rate and $HR2$ in GRS1915+105, we caution against the presumption that the ASM flux is a direct measure of X-ray luminosity. During 1997, significant progress is expected from efforts to combine the ASM results with those of BATSE and radio monitors, including the newly organized Greenbank Interferometer project. This effort will build on earlier work [@h95] to investigate the multifrequency evolution of X-ray outbursts and radio flares. 
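For reference, the bookkeeping behind the quantities plotted in Fig. 1 amounts to the following; the Crab rate of 75.5 c/s and the band definitions are those quoted above, while the rates fed to the helpers are made-up numbers rather than ASM measurements.

```python
CRAB_ASM_RATE = 75.5        # c/s in the full 2-12 keV ASM band (from the text)

def crab_units(rate_2_12):
    """Normalize a 2-12 keV ASM count rate to Crab units."""
    return rate_2_12 / CRAB_ASM_RATE

def hr2(rate_5_12, rate_3_5):
    """ASM hardness ratio HR2 = (5-12 keV rate) / (3-5 keV rate)."""
    return rate_5_12 / rate_3_5

# Illustrative inputs only (not real data): intensity in Crab units and HR2,
# where HR2 > 1.07 means a spectrum harder than the Crab.
print(crab_units(60.0), hr2(33.0, 27.0))
```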
The PCA observations of GRS1915+105 immediately showed dramatic intensity variations [@gmr] with a complex hierarchy of quasi-periodic dips on time scales from 10 s to hours. Complex and yet repeatable ‘stalls’ in the light curve were preceeded by rapid dips in which the count rate dropped by as much as 90% in a few seconds. These variations were interpreted as an inherent accretion instability, rather than absorption effects, since there was spectral softening during these dips. There were also occasions of flux overshooting after X-ray stalls. These repetitive, sharp variations and their hierarchy of time scales are entirely unrelated to the phenomenology of absorption dips [@gmr]. The dips represent large changes in an absolute sense; the pre-dip or post-dip luminosity in GRS1915+105 is as high as $2\times 10^{39} ~{\rm ergs}~{\rm cm}^{-2}~{\rm s}^{-1}$ at 2-60 keV, assuming the distance of 12.5 kpc inferred from 21 cm HI absorption profiles [@mirrod]. The phenomenology of wild source behavior in GRS1915+105 has expanded since the first series of observations. Three examples are shown in Fig. 2. The Oct 7 display of quasiperiodic stalls preceeded by rapid dips (middle panel) is highly organized and repetitive, while the Jun 16 light curve (top panel) shows complex, interrupted stalls that are not preceeded by rapid dips. In the bottom panel, an entirely new type of oscillatory instability is displayed; hundreds of these ringing features were recorded during Oct 13 and 15 with a recurrence time near 70 s. During Oct 15 the recurrence time increases (see Fig. 2), leading to a long X-ray stall and subsequent flux overshoot. The nature of these astonishing X-ray instabilities is currently a mystery. Note, however, that most of the PCA observations show ‘normal’ light curves with variations limited to rapid flickering at 10-20 % of the mean rate. A penetrating analysis of GRS1915+105 was made by investigating the X-ray power spectra and comparing them with the characteristics of the ASM light curve [@mrg]. The shape of the broad-band power continuum and the properites of rapid QPOs (0.01 to 10 Hz) are correlated with the brightness, spectral hardness, and the long-term variations seen with the ASM. Four emission states were found, labelled in Fig. 1 as chaotic (CH), bright (B), flaring (FL), and low-hard (LH). We see QPOs and nonthermal spectral components during all four states, implying that they are new variants of the ‘very high state’ rarely seen in other X-ray binaries [@vdk94] [@vdk96]. The combination of the intense QPOs and the high throughput of the PCA enabled phase tracking of individual oscillations. Four QPO cases were chosen from three different states [@mrg], with frequencies ranging from 0.07 to 2.0 Hz. The results are remarkably similar: the QPO arrival phase (relative to the mean frequency) exhibits a random walk with no correlation between the amplitude and the time between subsequent events. Furthermore the mean ‘QPO-folded’ profiles are roughly sinusoidal with increased amplitude at higher energy, and with a distinct phase lag of $\approx$ 0.03 between 3 and 15 keV. At photon energies above 10 keV, the high amplitudes and sharp profiles of the QPOs are inconsistent with any scenario in which the phase delay is caused by scattering effects. Alternatively, it appears that the origin of the hard X-ray spectrum itself (i.e. the creation of energetic electrons in the inverse Compton model) is functioning in a quasiperiodic manner. 
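A minimal sketch of the kind of Fourier analysis described above: a binned count-rate series containing an oscillation whose phase performs a slow random walk (mimicking the behaviour reported for the QPO arrival phase) plus Poisson counting noise, and its Leahy-normalized power spectrum, in which the oscillation shows up as a broadened QPO peak. All parameters are illustrative and are not fitted to the GRS1915+105 data.

```python
import numpy as np

rng = np.random.default_rng(3)
dt, n = 1 / 128, 2**15                      # 128 Hz binning, ~256 s of data (illustrative)
t = np.arange(n) * dt
f_qpo, rate, frac = 2.0, 5000.0, 0.10       # 2 Hz QPO, mean count rate, 10% modulation

phase = np.cumsum(rng.normal(0.0, 0.05, n))               # slow phase random walk
expected = rate * dt * (1.0 + frac * np.sin(2 * np.pi * f_qpo * t + phase))
counts = rng.poisson(expected)                            # Poisson counting noise

power = 2.0 * np.abs(np.fft.rfft(counts))**2 / counts.sum()   # Leahy normalization
freq = np.fft.rfftfreq(n, dt)
peak = freq[1:][np.argmax(power[1:])]                     # skip the DC term
print(f"strongest power at ~{peak:.2f} Hz")               # close to the injected 2 Hz
```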
These results fundamentally link X-ray QPOs with the most luminous component of the X-ray spectrum in GRS1915+105. In addition to the frequent X-ray QPOs below 10 Hz, a transient yet ‘stationary’ QPO at 67 Hz has been discovered [@mrg]. This feature is seen on 6 of the first 31 PCA observations of GRS1915+105. Typically, the amplitude is 1% of the flux and the QPO width is 3.5 Hz. This QPO exhibits a strong energy dependence, rising (e.g. on 1996 May 6) from 1.5 % at 3 keV to 6% at 15 keV. One may attempt to associate this frequency with the mass and spin rate of an accreting black hole, but the competing models include such concepts as instabilities at the minimum stable orbit of $3 Rs$, implying a mass of 33  for a nonrotating black hole [@mrg], to relativistic modes of oscillation in the inner accretion disk, implying 10  for a nonrotating black hole [@now]. Recent Observations of GRO J1655-40 =================================== During much of 1995 and early 1996, GRO J1655-40 was in a low or quiescent accretion state, permitting a clear optical view of the companion star (near F4 IV). Orosz and Bailyn [@ob] improved the determinations of the binary period (2.62157 days) and the mass function. They further measured the ‘ellipsoidal variations’ arising from the rotation of the gravitationally distorted companion star. Their analysis, using B,V,R, and I bandpasses, provide an exceptionally good fit for the binary inclination angle (69.5 deg) and the mass ratio. From these results, they deduce masses of $7.0 \pm 0.2$ and $2.34 \pm 0.12$  for the black hole and companion star, respectively. The ASM recorded a renewed outburst from GRO J1655-40 that began on 1996 April 25. The ASM light curve (Feb 1996 to Jan 1997) is shown in the lower half of Fig. 1. With great fortune, our optical campaign had lasted until April 24, and Orosz has shown [@ob2] that optical brightening preceeded the X-ray ascent by 6 days, beginning first in the I band and then accelerating quickly in blue light. These results provide concrete evidence favoring the accretion disk instability as the cause of the X-ray nova. Theorists may now attempt to model the brightness gradients and delay times in the effort to develop a deeper understanding of this outburst. The ASM $HR2$ measures (Fig. 1) show an initially soft spectrum that becomes brighter and harder for several months during mid outburst. The PCA observations from our GO program confirm this evolution, as the power-law component (photon index $\approx 2.6$) dominates the spectrum during the brightest cases. The great majority of PCA measurments of GRO J1655-40 follow single tracks on the intensity:color and color:color diagrams, with a positive correlation between hardness and brightness. PCA power spectra show transient QPOs in the range of 8–22 Hz that are clearly associated with the strength of the power-law component. Using a PCA-based hardness ratio, $PCA\_HR2$ = flux above 9.6 keV / flux at 5.2–7.0 keV, we detect QPO in the range of 8–22 Hz whenever $PCA\_HR2 > 0.22$. Furthermore, in the 7 ‘hardest’ observations ($PCA\_HR2 > 0.3$), there is evidence of a high-frequency QPO near 300 Hz. In Fig. 3 we show the sum of PCA power spectra in these 3 intervals of $PCA\_HR2$, illustrating the QPO centered at 298 Hz. The Poisson noise has been subtracted, with inclusion of deadtime effects [@mrg]. The integrated feature has a significance of $14 \sigma$, a width of 120 Hz, and an amplitude near 0.8%. 
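The "minimum stable orbit" estimate quoted above (67 Hz corresponding to 33 solar masses for a non-rotating hole) follows from identifying the QPO frequency with the Keplerian frequency at $r=6GM/c^2$. The arithmetic is sketched below for the Schwarzschild case only (spin changes the numbers substantially, as noted in the text); the same relation is applied to the 298 Hz feature in the next paragraph.

```python
import math

G = 6.674e-11          # m^3 kg^-1 s^-2
c = 2.998e8            # m/s
M_SUN = 1.989e30       # kg

def isco_mass(f_qpo):
    """Mass (solar units) if f_qpo is the Keplerian frequency at r = 6GM/c^2 (no spin)."""
    return c**3 / (2.0 * math.pi * 6.0**1.5 * G * f_qpo) / M_SUN

print(f"67 Hz  -> {isco_mass(67.0):.1f} Msun")    # ~33 Msun, as quoted above
print(f"298 Hz -> {isco_mass(298.0):.1f} Msun")   # ~7.4 Msun, used in the next paragraph
```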
Applying the ‘last stable orbit’ model to this feature yields a mass of 7.4 $M_{\odot}$ for a non-rotating black hole. While this is astonishingly similar to the optically determined mass, we caution that other models can give similar results in the case of significant black hole rotation. We further note that none of the models discussed [@mrg] for the high-frequency QPOs in GRO J1655-40 and GRS1915+105 adequately address the spectral signature of this oscillation, which is more directly associated with the power law component rather than the disk (thermal) component. Nevertheless, the existence of these QPOs, which almost certainly originate very near the accreting compact objects, will remain a vigorous research topic throughout the RXTE Mission.

References {#references .unnumbered}
==========

[99]{} C.D. Bailyn, J.A. Orosz, J.E. McClintock, and R.A. Remillard, . W. Chen, M. Livio, and N. Gehrels, . J. Greiner, E.H. Morgan, and R. A. Remillard, . E. Grove, these proceedings. B.A. Harmon , . R.M. Hjellming & M.P. Rupen, . A.M. Levine, H. Bradt, W. Cui, J.G. Jernigan, E.H. Morgan, R.A. Remillard, R.E. Shirey, and D.A. Smith, . I.F. Mirabel, , . I.F. Mirabel and L.F. Rodriguez, . E.H. Morgan, R. A. Remillard, and J. Greiner, . E.H. Morgan, R. A. Remillard, and J. Greiner, IAUC 6392, May 2, 1996. M.A. Nowak, R.V. Wagoner, M.C. Begelman, and D.E. Lehr, ApJ, submitted. J.A. Orosz and C.D. Bailyn, . J.A. Orosz, R. A. Remillard, C.D. Bailyn, & J. E. McClintock, . S.J. Tingay , . M. van der Klis in [*X-ray Binaries*]{}, eds. W. Lewin, J. van Paradijs, and E. van den Heuvel (Cambridge University Press, Cambridge, 1996) p. 252. M. van der Klis, .
---
abstract: |
    We study the classical flat full causal bulk viscous FRW cosmological model through the factorization method. The method shows that there exists a relationship between the viscosity parameter $s$ and the parameter $\gamma$ entering the equations of state of the model. Also, the factorization method allows one to find some new exact parametric solutions for different values of the viscous parameter $s$. Special attention is given to the well known case $s=1/2$, for which the cosmological model admits scaling symmetries. Furthermore, some exact parametric solutions for $s=1/2$ are obtained through the Lie group method.

    **Keywords**: Exact solutions, Full Causal Bulk viscosity, factorization method, Lie groups.
author:
- 'O. Cornejo-Pérez'
- 'J. A. Belinchón'
date: ': '
title: |
    Exact solutions of a Flat Full Causal Bulk viscous FRW cosmological model\
    through factorization
---

Introduction.
=============

Factorization of linear second order differential equations is a well established method to find exact solutions through algebraic procedures. It was widely used in quantum mechanics and developed since Schrödinger’s works on the factorization of the Sturm-Liouville equation. At the present time, very good informative reviews on the factorization method can be found in the open literature (see for instance [@mielnik; @rosu2]). However, in recent times the factorization method has been applied to find exact solutions of nonlinear ordinary differential equations (ODE) [@berkovich; @cornejo1; @wang1; @cornejo2; @estevez]. In [@cornejo1], based on previous works by Berkovich [@berkovich], a systematic way to apply the factorization method to nonlinear second order ODE has been provided. In [@wang1], Wang and Li extended the application to more complex nonlinear second and third order ODE. The factorization of some ODE may be restricted due to constraints which appear in a natural way within the factorization procedure. However, here it is shown that by performing a transformation of coordinates, one can obtain exact parametric solutions of an ODE which does not allow its factorization or presents cumbersome constraints.

The purpose of the present work is to apply the factorization method to study the full causal bulk viscous cosmological model with flat FRW symmetries. Since Misner’s suggestion [@Mi66] that the observed large scale isotropy of the Universe may be due to the action of the neutrino viscosity when the Universe was about one second old, there have been numerous works pointing out the importance of the physical processes involving viscous effects in the evolution of the Universe (see for instance [@ChJa96]). Under this assumption, dissipative processes are supposed to play a fundamental role in the evolution of the early Universe. The theory of relativistic dissipative fluids created by Eckart [@Ec40] and Landau and Lifshitz [@LaLi87] has many drawbacks, and it is known to be incorrect in several respects, mainly those concerning causality and stability. Israel [@Is76] formulated a new theory in order to overcome these drawbacks. This theory was later developed by Israel and Stewart [@IsSt76] into what is called transient or extended irreversible thermodynamics. The best currently available theory for analyzing dissipative processes in the Universe is the full causal thermodynamics developed by Israel and Stewart [@IsSt76], Hiscock and Lindblom [@HiLi89] and Hiscock and Salmonson [@HiSa91].
The full causal bulk viscous thermodynamics has been extensively used to study the evolution of the early Universe and some astrophysical process [@HiLi87; @Ma95]. The paper is organized as follows. In Section II, we start by reviewing the main components of a flat bulk viscous FRW cosmological model, and introduce the factorization technique as applied to the cosmological model. Field equations (FE) of the classical bulk viscous FRW cosmological model [@Ma95] reduce to a single nonlinear second order ODE, the fundamental dynamical equation for the Hubble rate. By performing a transformation of both the dependent and independent variables and using the factorization method, this equation is transformed into a nonlinear first order ODE. The order reduction of the equation for the Hubble rate allows to find a variety of new exact parametric solutions of the FE for the viscous FRW cosmological model. Furthermore, the factorization technique provides relationships for parameters entering the factorized equation. Then, a noteworthy result is that the viscosity parameter $s$ is not longer assumed to be independent of the values of parameter $\gamma$. Such parameter relationships have not been previously reported. In Section III, several particular models for $s\neq1/2$ are studied. We obtain new exact parametric solutions through factorization and compare with the ones obtained by several authors [@C1; @Ch97; @H1; @H2; @H3; @H4; @H5; @H6] who use different approaches. Section IV is devoted to the special case $s=1/2$, for which the model admits scaling symmetries. The scaling solution, previously studied by many authors is obtained. In order to obtain more new solutions and compare the solutions obtained through factorization for $s=1/2$, we consider the Lie group method for this special case in Section V. Some conclusions end up the paper in Section VI. The model. ========== We consider a flat FRW Universe with line element $$ds^{2}=-dt^{2}+f^{2}(t)\left( dx^{2}+dy^{2}+dz^{2}\right) , \label{1}$$ where the energy-momentum tensor of a bulk viscous cosmological fluid is given by [@Ma95]: $$T_{i}^{k}=\left( \rho+p+\Pi\right) u_{i}u^{k}+\left( p+\Pi\right) \delta_{i}^{k}, \label{2}$$ where $\rho$ is the energy density, $p$ the thermodynamic pressure, $\Pi$ the bulk viscous pressure and $u_{i}$ the four-velocity satisfying the condition $u_{i}u^{i}=-1$. We use the units $8\pi G=c=1$. The gravitational field equations together with the continuity equation, $T_{i;k}^{k}=0,$ are given as follows$$\begin{aligned} 2\dot{H}+3H^{2} & =-p-\Pi,\label{fe1}\\ 3H^{2} & =\rho,\\ \Pi+\tau\dot{\Pi} & =-3\xi H-\frac{1}{2}\tau\Pi\left( 3H+\frac{\dot{\tau}}{\tau}-\frac{\dot{\xi}}{\xi}-\frac{\dot{T}}{T}\right) ,\\ \dot{\rho} & =-3\left( \gamma\rho+\Pi\right) H, \label{fe4}$$ where $H=\dot{f}/f.$ In order to close the system of equations we are assuming the following equations of state [@Ma95] $$p=\left( \gamma-1\right) \rho,\quad\xi=\alpha\rho^{s},\quad T=\beta\rho ^{r},\quad\tau=\xi\rho^{-1}=\alpha\rho^{s-1}, \label{steq1}$$ where $T$ is the temperature, $\xi$ the bulk viscosity coefficient and $\tau$ the relaxation time. The parameters satisfy $\gamma\in\left[ 1,2\right] ,$ $s\geq0$, and $r=\left( 1-\frac{1}{\gamma}\right) $. 
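Before the system is reduced to a single equation for the Hubble rate below, it may help to note that Eqs. (\[fe1\])-(\[fe4\]) together with the equations of state (\[steq1\]) already close on the pair $(H,\Pi)$, since $\rho=3H^{2}$. The following sketch integrates that closed system numerically; the values of $\alpha$, $\gamma$, $s$ and the initial data are illustrative choices of ours, not values used in the paper.

```python
import numpy as np
from scipy.integrate import solve_ivp

# Illustrative parameter values (not taken from the paper).
gamma, s, alpha = 4.0 / 3.0, 0.5, 1.0
r = 1.0 - 1.0 / gamma

def rhs(t, u):
    H, Pi = u
    rho = 3.0 * H**2                           # from 3H^2 = rho
    Hdot = -1.5 * gamma * H**2 - 0.5 * Pi      # from 2*Hdot + 3H^2 = -p - Pi with p = (gamma-1)*rho
    tau_inv = rho**(1.0 - s) / alpha           # 1/tau, using tau = alpha*rho^(s-1)
    # causal transport equation, with tau'/tau - xi'/xi = -rho'/rho and T'/T = r*rho'/rho
    bracket = 3.0 * H - (1.0 + r) * 2.0 * Hdot / H
    Pidot = -tau_inv * Pi - 3.0 * rho * H - 0.5 * Pi * bracket
    return [Hdot, Pidot]

sol = solve_ivp(rhs, (1.0, 50.0), [1.0, -0.1], rtol=1e-8, dense_output=True)
for t in (1.0, 5.0, 20.0, 50.0):
    H, Pi = sol.sol(t)
    l = abs(Pi) / ((gamma - 1.0) * 3.0 * H**2)
    print(f"t = {t:5.1f}   H = {H:7.4f}   Pi = {Pi:9.5f}   l = |Pi|/p = {l:5.3f}")
```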
The growth of entropy has the following behavior$$\Sigma\left( t\right) \thickapprox-3k_{B}^{-1}\int_{t_{0}}^{t}\Pi Hf^{3}T^{-1}dt.$$ The Israel-Stewart-Hiscock theory is derived under the assumption that the thermodynamical state of the fluid is close to equilibrium, i.e., the non-equilibrium bulk viscous pressure should be small when compared to the local equilibrium pressure, $|\Pi|\ll p=(\gamma-1)\rho$. Then, we may define the $l(t)$ parameter as: $l=|\Pi|/p.$ If this condition is violated then one is effectively assuming that the linear theory also holds in the nonlinear regime far from equilibrium. For a fluid description of the matter, the condition ought to be satisfied. To see if a cosmological model inflates or not it is convenient to introduce the deceleration parameter $q=dH^{-1}/dt-1$. The positive sign of the deceleration parameter corresponds to standard decelerating models, whereas the negative sign indicates inflation.

The fundamental dynamical equation for the Hubble rate is given by [@Ma95] $$\ddot{H}-A\frac{\dot{H}^{2}}{H}+\left( 3H+CH^{2-2s}\right) \dot{H}+DH^{3}+EH^{4-2s}=0, \label{eq1}$$ where $$A=\left( 1+r\right) =2-\frac{1}{\gamma},\quad B=3,\quad C=3^{1-s},\quad D=\frac{9}{4}\left( \gamma-2\right) ,\quad E=\frac{1}{2}3^{2-s}\gamma.$$ Let us perform the following transformation of the dependent and independent variables $$H=y^{1/2},\qquad d\eta=y^{1/2}dt, \label{cv1}$$ then Eq. (\[eq1\]) turns into $$\frac{d^{2}y}{d\eta^{2}}-\frac{A}{2y}\left( \frac{dy}{d\eta}\right) ^{2}+\left( 3+Cy^{\frac{1}{2}-s}\right) \frac{dy}{d\eta}+2y(D+Ey^{\frac{1}{2}-s})=0. \label{eq2}$$

Let us consider now the following factorization scheme [@cornejo1; @wang1]. The nonlinear second order equation $$y^{\prime\prime}+f\left( y\right) y^{\prime2}+g(y)y^{\prime}+h(y)=0, \label{eq2-2}$$ where $y^{\prime}=\frac{dy}{d\eta}=D_{\eta}y$, can be factorized in the form $$\left[ D_{\eta}-\phi_{1}(y)y^{\prime}-\phi_{2}(y)\right] \left[ D_{\eta}-\phi_{3}(y)\right] y=0, \label{eq2-3}$$ under the conditions$$\begin{aligned} & f\left( y\right) =-\phi_{1},\\ & g(y)=\phi_{1}\phi_{3}y-\phi_{2}-\phi_{3}-\frac{d\phi_{3}}{dy}y,\label{eq2-4}\\ & h(y)=\phi_{2}\phi_{3}y.\end{aligned}$$ If we assume $\left[ D_{\eta}-\phi_{3}(y)\right] y=\Omega(y)$, then the factorized Eq. (\[eq2-3\]) can be rewritten as $$\begin{aligned} y^{\prime}-\phi_{3}y & =\Omega,\label{eq3}\\ \Omega^{\prime}-\left( \phi_{1}y^{\prime}+\phi_{2}\right) \Omega & =0. \label{eq4}\end{aligned}$$ We can introduce the functions $\phi_{i}$ by comparing Eqs. (\[eq2\]) and (\[eq2-2\]). Then, $\phi_{1}=\frac{A}{2y}$, $\phi_{2}=a_{1}^{-1}$ and $\phi_{3}=2a_{1}(D+Ey^{\frac{1}{2}-s})$, where $a_{1}(\neq0)$ is an arbitrary constant, are proposed. Eq. (\[eq4\]) can be easily solved for the chosen factorizing functions, yielding $\Omega=\kappa_{1}e^{\eta/a_{1}}y^{A/2}$, where $\kappa_{1}$ is an integration constant. Then, Eq. (\[eq3\]) turns into the equation $$y^{\prime}-2a_{1}\left( D+Ey^{\frac{1}{2}-s}\right) y-\kappa_{1}e^{\eta/a_{1}}y^{A/2}=0, \label{PALOMA}$$ whose solution is also a solution of Eq. (\[eq2\]). Furthermore, the following relationship is obtained from Eq. (\[eq2-4\]), $$Aa_{1}D-a_{1}^{-1}-2a_{1}D + a_{1}E(A-3+2s)y^{\frac{1}{2}-s}= 3 + Cy^{\frac{1}{2}-s}. \label{eq2-5}$$ Eq. (\[eq2-5\]) is a noteworthy result which provides the explicit form of $a_{1}$ and the relationship among the parameters entering Eq. (\[eq2\]). Then, the viscous parameter $s$ as a function of the parameter $\gamma$ is obtained. Comparing both sides of Eq.
(\[eq2-5\]) and assuming $r=1-\frac{1}{\gamma}$, leads to obtain:$$s\left( \gamma\right) _{\pm}=\frac{\pm\sqrt{2}+\gamma^{3/2}}{2\gamma^{3/2}}. \label{lisa1}$$ Then, $s_{-}\in\lbrack0,.25]$ $\forall\gamma\in\lbrack1.2599,2]$, and $s_{+}\in(.75,1.2071068]$ $\forall\gamma\in\lbrack1,2)$. Also, the explicit form of $a_{1}$ is $$a\left( \gamma\right) _{1\pm}=\pm\frac{2\gamma^{1/2}}{3(\sqrt{2}\mp \gamma^{1/2})}. \label{lisa2}$$ Then, $a_{1-}\in\lbrack-1/3,-.29499]$ $\forall\gamma\in\lbrack1.2599,2]$, and $a_{1+}\in\lbrack1.60947,\infty)$ $\forall\gamma\in\lbrack1,2)$. We find the following significative values $$\begin{array} [c]{|c|c|c|c|c|}\hline \gamma & s_{-} & a_{1-} & s_{+} & a_{1+}\\\hline\hline 1 & & & 1.2071 & 1.6095\\\hline \frac{4}{3} & 4.0721\times10^{-2} & -0.299\,66 & 0.95928 & 2.9663\\\hline 2 & \frac{1}{4} & -\frac{1}{3} & & \\\hline \end{array}$$ The main difference of these results from other approaches is expressed through Eq. (\[lisa1\]), which represents an advantage of the factorization method as opposed to different approaches studied by other authors. This equation provides the relationship between the parameters $s$ and $\gamma$ in such a way that by fixing $s$ we get a particular value of $\gamma$. The main dynamical variables of the FE are given in parametric form as follows $$\begin{aligned} f\left( \eta\right) & =f_{0}\exp\left( \eta-\eta_{0}\right) ,\label{para1}\\ H\left( \eta\right) & = y^{1/2}\left( \eta\right) ,\\ q(\eta) & = y^{1/2}\left( \eta\right) \frac{d}{d\eta}\left( \frac {1}{H\left( \eta\right) }\right) -1,\\ \rho\left( \eta\right) & = 3y\left( \eta\right) ,\\ p\left( \eta\right) & = 3\left( \gamma-1\right) y\left( \eta\right) ,\\ \Pi\left( \eta\right) & = -\left( 3\gamma y\left( \eta\right) +\frac{dy}{d\eta}\right) ,\\ l\left( \eta\right) & = \frac{\left\vert \Pi\right\vert }{p},\\ \Sigma\left( \eta\right) & = -3k_{B}\int\Pi(\eta)f^{3}(\eta)H(\eta )T(\eta)^{-1}y(\eta)^{-1/2}d\eta. \label{para2}$$ The authors have not been able to find the most general solution of Eq. (\[PALOMA\]). However, this equation can be studied for some specific cases providing particular solutions of physical interest. In Sections III and IV, the cosmological solutions as obtained for the viscosity parameter $s\neq1/2$ and $s=1/2$ are studied. Solution with $s\neq1/2$. ========================= In this section, some particular cases of Eq. (\[PALOMA\]) for $s\neq1/2$ are studied to obtain exact particular solutions of FE (\[fe1\])-(\[fe4\]). By setting $\kappa_{1}=0$, Eq. (\[PALOMA\]) simplifies as $$y^{\prime}-2a_{1}\left( D+Ey^{\frac{1}{2}-s}\right) y=0, \label{PALOMA2}$$ whose solution is given by$$y(\eta)=\left( \kappa_{2}e^{a_{1}D(2s-1)\eta}-\frac{E}{D}\right) ^{2/(2s-1)}, \label{sec3-eq2}$$ where $\kappa_{2}$ is an integration constant. Therefore, the parametric form of the time function is obtained from Eq. (\[cv1\]) as follows$$t\left( \eta\right) =\int y^{-1/2}(\eta)d\eta=\int\left( \kappa_{2}e^{a_{1}D(2s-1)\eta}-\frac{E}{D}\right) ^{1/(1-2s)}d\eta. \label{sec3-eq3}$$ Case $s=0$. ----------- The first special case considered corresponds to $s=0$, which means that the bulk viscosity coefficient $\xi=const$. The following particular solution is obtained $$y(\eta)=\left( \kappa_{2}e^{-a_{1}D\eta}-\frac{E}{D}\right) ^{-2},\qquad\text{and\qquad}t\left( \eta\right) =-\frac{1}{Da_{1}}\left( \kappa_{2}e^{-\eta Da_{1}}+\eta Ea_{1}\right) . \label{eq3-13}$$ Eqs. 
(\[lisa1\]) and (\[lisa2\]) provide the corresponding constant parameters $a_{1-}=-0.295$ and $\gamma=\sqrt[3]{2}$, respectively. A particular equation of state is obtained once again through Eq. (\[lisa1\]). In Figs. \[sec3bpic1\] and \[sec3bpic2\], the behavior of the FE main quantities for different values of constant $\kappa_{2}$ is plotted. ![Solution with $s=0.$ Plots of energy density $\rho(t)$, bulk viscosity $\Pi(t)$ and entropy $\Sigma(t)$. Dashed line for $\kappa_{2}=-1.$ Solid line for $\kappa_{2}=-2.$ Long dashed line for $\kappa_{2}=-3.$[]{data-label="sec3bpic1"}](sec3Bpic1.eps){height="1.2228in" width="5.9352in"} ![Solution with $s=0.$ Plots of the deceleration parameter $q(t)$ and parameter $l(t)$. Plots of energy density $\rho(t)$, bulk viscosity $\Pi(t)$ and entropy $\Sigma(t)$. Dashed line for $\kappa_{2}=-1.$ Solid line for $\kappa_{2}=-2.$ Long dashed line for $\kappa_{2}=-3.$[]{data-label="sec3bpic2"}](sec3Bpic2.eps){height="1.211in" width="4.7684in"} As we can see, the solution for $\kappa_{2}=-1$ is non-singular since $\rho(0)=const$. For $\kappa_{2}=-2$ and $\kappa_{2}=-3$, the energy density has a singular behavior when $t=0$, since it runs to infinity when time tends to zero, i.e., $\rho(0)\rightarrow\infty$. The bulk viscosity, $\Pi$, is negative for all values of $t$, i.e., $\Pi\left( t\right) <0$ $\forall t\in\mathbb{R}^{+},$ which is a thermodynamically consistent result as expected for $\kappa_{2}=-1$. For $\kappa_{2}=-2$ and $\kappa_{2}=-3,$ the solution is valid only when $t>t_{0},$ i.e., $\Pi\left( t\right) <0$ $\forall t>t_{0}$, while $\Pi\left( t\rightarrow0\right) >0$. Then, for this interval of time, $t\in\left( 0,t_{0}\right) $, the solution has no physical meaning. The entropy behaves like a strictly growing time function; then, there are a large amount of comoving entropy during the expansion of the universe. The deceleration parameter runs from $q(0)=-0.5$ to $q(t)=-1$. Then, the solution is accelerating, i.e., it is inflationary. The deceleration parameter tends to $-1$ as $t\rightarrow\infty$ (accelerating solutions) but shows a singular behavior when time runs to zero. The parameter $l(t)$ shows that all the plotted solutions are far from equilibrium since they are inflationary solutions, which is a consistent result. To the best of our knowledge this solution is new. Case $s=1/4$. ------------- The second case considered corresponds to $s=1/4$. In this case, Eqs. (\[lisa1\]) and (\[lisa2\]) provide $a_{1}=-1/3$ and $\gamma=2$. Therefore, Eq. (\[PALOMA\]) simplifies as $$y^{\prime}+2(3)^{3/4}y^{5/4}-\kappa_{1}e^{-3\eta}y^{3/4}=0. \label{berta}$$ If we perform the transformation $z=y^{1/4}$ in Eq. (\[berta\]), then we get the Riccati equation $$z^{\prime}+\frac{3^{3/4}}{2}z^{2}-\frac{1}{4}\kappa_{1}e^{-3\eta}=0, \label{eq3-15}$$ whose general solution is given in terms of Bessel $J_{n}$ and Neumman $N_{n}$ functions, $$z(\eta)=-\xi(\eta)\frac{J_{1}(\xi(\eta))+\kappa_{2}N_{1}(\xi(\eta))}{J_{0}(\xi(\eta))+\kappa_{2}N_{0}(\xi(\eta))}, \label{eq3-16}$$ where $\xi(\eta)=\frac{\sqrt{2\kappa_{1}}}{2\cdot3^{3/8}}e^{-3\eta/2}$ and $\kappa_{2}$ is an integration constant. Therefore, the following special solution for Eq. (\[berta\]) is obtained: $$y(\eta)=\left( \xi(\eta)\frac{J_{1}(\xi(\eta))+\kappa_{2}N_{1}(\xi(\eta ))}{J_{0}(\xi(\eta))+\kappa_{2}N_{0}(\xi(\eta))}\right) ^{4},\quad t(\eta)=\int^{\eta}\left( \xi(\eta)\frac{J_{1}(\xi(\eta))+\kappa_{2}N_{1}(\xi(\eta))}{J_{0}(\xi(\eta))+\kappa_{2}N_{0}(\xi(\eta))}\right) ^{-2}d\eta. 
\label{eq3-17}$$ In order to study the behavior of the FE dynamical variables in their parametric form, the calculation of Eq. (\[eq3-17\]) has been numerically addressed. The solution depends strongly on the value of the numerical constants, in such a way that our solution is physical only for $\kappa_{2}<0$ and for negative and relatively small values ($<20$) of $\kappa_{1}$. Numerical analysis of the solution plotted in Fig. \[sec3c1pic1\] shows that the solution is singular since the energy density tends to infinity when $t\rightarrow0.$ The bulk viscosity is positive, $\Pi>0$, in the region $\left( 0,t_{\ast}\right) $ so the solution has physical meaning only when $t>t_{\ast}$, for this era $\Pi$ becomes negative as expected from the thermodynamical point of view and tending to zero in the large time limit. In the same interval of time $\left( 0,t_{\ast }\right) $ the entropy production is negative, $\Sigma(t)<0$ (unphysical situation), nevertheless when $t>t_{\ast},$ a large amount of comoving entropy is produced during the expansion of the universe. ![Solution with $s=1/4$ and $\gamma=2.$ Plots of energy density $\rho(t)$, bulk viscosity $\Pi(t)$ and entropy $\Sigma(t)$. Dashed line for $\kappa_{1}=4$, $\kappa_{2}=-10.$ Solid line for $\kappa_{1}=19$, $\kappa_{2}=-3.$ Long dashed line for $\kappa_{1}=3$, $\kappa_{2}=-0.7.$[]{data-label="sec3c1pic1"}](sec3C1pic1.eps){height="1.4981in" width="6.294in"} Regarding the dynamical behavior of solution (\[eq3-17\]), in Fig. \[sec3c1pic2\] the behavior of parameters $q$ and $l$ has been plotted. As we can see, the deceleration parameter shows that the universe starts in a non-inflationary phase, but quickly entering a inflationary one since $q<0.$ The plots of $l(t)$ are consistent with this behavior, showing that the solution starts in a thermodynamical equilibrium but in a finite time they are far from equilibrium since they are inflationary solutions. ![Solution with $s=1/4$ and $\gamma=2.$ Plots of the deceleration parameter $q(t)$ and parameter $l(t)$. Dashed line for $\kappa_{1}=4$, $\kappa_{2}=-10.$ Solid line for $\kappa_{1}=19$, $\kappa_{2}=-3.$ Long dashed line for $\kappa_{1}=3$, $\kappa_{2}=-0.7.$[]{data-label="sec3c1pic2"}](sec3C1pic2.eps){height="1.5416in" width="5.1738in"} A similar solution has been obtained by Mak et al [@H6] but, as we have shown, our solution is qualitatively different, with a very different physical meaning. ### A particular solution for the case $s=1/4$. If we set $\kappa_{1}=0$ in Eq. (\[berta\]), then we get the very simple ODE $$y^{\prime}+2\left( 3\right) ^{3/4}y^{5/4}=0,$$ whose solution is given as $$y(\eta)=\left( \frac{\left( 3\right) ^{3/4}}{2}\eta+\kappa_{2}\right) ^{-4},\qquad\text{and\qquad}t\left( \eta\right) =\frac{1}{4}\sqrt{3}\eta ^{3}+\frac{1}{2}3^{\frac{3}{4}}\eta^{2}\kappa_{2}+\eta\kappa_{2}^{2},$$ where $\kappa_{2}$ is an integration constant. In Figs. \[sec3cpic1\] and \[sec3cpic2\] the behavior of the FE main quantities has been plotted for different values of the constant $\kappa_{2}.$ ![Particular solution for $s=1/4$. Plots of energy density $\rho(t)$, bulk viscosity $\Pi(t)$ and entropy $\Sigma(t)$. Dashed line for $\kappa_{2}=0.$ Long dashed line for $\kappa_{2}=1.$ Solid line for $\kappa_{2}=2.$[]{data-label="sec3cpic1"}](sec3Cpic1.eps){height="1.2986in" width="6.1554in"} ![Particular solution for $s=1/4$. Plots of the deceleration parameter $q(t)$ and parameter $l(t)$. 
Dashed line for $\kappa_{2}=0.$ Long dashed line for $\kappa_{2}=1.$ Solid line for $\kappa_{2}=2.$[]{data-label="sec3cpic2"}](sec3Cpic2.eps){height="1.1804in" width="4.2035in"} The solution has been plotted for three different values of constant $\kappa_{2}$. The energy density presents a singular behavior only for $\kappa_{2}=0$, while the other two solutions show a non-singular behavior when $t=0$. The solution for $\kappa_{2}=2$ runs quickly to zero. The bulk viscosity is always a negative time function for $\kappa_{2}=1$ and $\kappa_{2}=2$, but the solution for $\kappa_{2}=0$ is valid only for $t>t_{0}$ since $\Pi(t\rightarrow0)>0$, which means that it lacks of physical meaning in the interval of time $t\in\left( 0,t_{0}\right) $. The entropy always behaves like a growing time function but for the case $\kappa_{2}=0$ the universe starts with a non-vanishing entropy, i.e., $\Sigma(0)=const.$, while for the other two solutions $\Sigma(0)\rightarrow0.$ The plots in Fig. \[sec3cpic1\] show that a large amount of entropy is produced during the expansion of the universe. Regarding the deceleration parameter, the plotted solutions run to an acceleration region since $q(t)\rightarrow-1/2$ in a finite time. For this reason, the solution starts in an equilibrium regimen but quickly run to a non-equilibrium state as shown by plots of $l(t)$. A particular solution of this case has been studied by Harko et al [@H3] obtaining different behavior of the FE main quantities. Case $s=1$. ----------- The second important case considered corresponds to $s=1$. According to Eqs. (\[lisa1\]) and (\[lisa2\]), this solution is valid only for the equation of state with $\gamma=\sqrt[3]{2}\thickapprox1.25992$. Other authors have already studied similar cases for $s=1$, but with different equation of state (see for instance [@H4] with $\gamma=2$) obtaining different results. Then, according to Eqs. (\[sec3-eq2\]) and (\[sec3-eq3\]), the following particular parametric solution is obtained: $$y(\eta)=\left( \kappa_{2}e^{a_{1}D\eta}-\frac{E}{D}\right) ^{2},\qquad\text{and}\qquad t\left( \eta\right) =\frac{1}{Ea_{1}}\left[ \ln\left( -\frac{E}{Dk_{2}}+e^{\eta Da_{1}}\right) -\eta Da_{1}\right] . \label{eq3-11}$$ Then, the FE main dynamical variables can be explicitly obtained through Eqs. (\[para1\])-(\[para2\]). In Figs. \[sec3Apic1\] and \[sec3Apic2\], the behavior of the main quantities by giving different values to the constant $\kappa_{2}$ has been plotted. ![Solution with $s=1.$ Plots of energy density $\rho(t)$, bulk viscosity $\Pi(t)$ and entropy $\Sigma(t)$. Dashed line for $\kappa_{2}=0.1.$ Long dashed line for $\kappa_{2}=10.$ Solid line for $\kappa_{2}=100.$[]{data-label="sec3Apic1"}](sec3Apic1.eps){height="1.4791in" width="6.3187in"} ![Solution with $s=1.$ Plots of the deceleration parameter $q(t)$ and parameter $l(t)$. Dashed line for $\kappa_{2}=0.1.$ Long dashed line for $\kappa_{2}=10.$ Solid line for $\kappa_{2}=100.$[]{data-label="sec3Apic2"}](sec3Apic2.eps){height="1.6063in" width="5.4389in"} As we can see the solution is valid only for $t>t_{0}.$ The energy density is a decreasing function, but the function behaves like a constant for a $t>t_{c}$. The behavior of the bulk viscous parameter shows that the solution is valid only for $t>t_{0}$ since the solution is positive when $t\rightarrow 0$, decreasing and going to a negative constant value during the cosmological evolution, which is consistent from the thermodynamical point of view. 
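The $\kappa_{1}=0$ solutions used in this section ($s=0$ above and $s=1$ here) can be verified by direct substitution of Eq. (\[sec3-eq2\]) into Eq. (\[PALOMA2\]). A short numerical residual check is sketched below; the values of $a_{1}$, $D$, $E$ and $\kappa_{2}$ are arbitrary test numbers (chosen only so that the expression stays real), not the physical values fixed by Eqs. (\[lisa1\])-(\[lisa2\]).

```python
import numpy as np

def residual(s, a1, D, E, k2, etas):
    """Max residual of y' - 2*a1*(D + E*y**(1/2 - s))*y for y given by Eq. (sec3-eq2)."""
    lam = a1 * D * (2.0 * s - 1.0)
    p = 2.0 / (2.0 * s - 1.0)
    u = k2 * np.exp(lam * etas) - E / D
    y = u**p
    yprime = p * u**(p - 1.0) * k2 * lam * np.exp(lam * etas)
    return np.max(np.abs(yprime - 2.0 * a1 * (D + E * y**(0.5 - s)) * y))

etas = np.linspace(0.0, 5.0, 200)
for s in (0.0, 1.0):
    print(f"s = {s:.0f}: max residual = {residual(s, a1=-0.3, D=-1.0, E=2.0, k2=1.0, etas=etas):.1e}")
```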
Likewise, the entropy behaves like a growing function only for $t>t_{0}$, showing that a large amount of comoving entropy is produced. Nevertheless, the deceleration parameter shows that the universe starts in a non-inflationary phase, but quickly enters an inflationary one since $q\rightarrow-1$ $\forall\kappa_{2}.$ The plots of $l(t)$ show that the plotted solutions are far from equilibrium since they are inflationary solutions.

Solution with $s=1/2$.
======================

We consider now the very special case $s=1/2$. This has been the most important and most studied case (see for example [@C1],[@Ch97],[@H1],[@H2]) within the framework of the bulk viscous cosmological models since, as it has been pointed out by several authors, this solution is stable from the dynamical systems point of view [@CoHoMa96] as well as from the renormalization group approach [@TonyRG]. In this case, Eq. (\[eq1\]) reduces to:$$\ddot{H}-A_{1}\frac{\dot{H}^{2}}{H}+\left( 3+C_{1}\right) H\dot{H}+\left( D_{1}+E_{1}\right) H^{3}=0, \label{nHarko1}$$ where $A_{1}=\left( 1+r\right) =2-\frac{1}{\gamma}$, $C_{1}=\sqrt{3}$, $D_{1}=\frac{9}{4}\left( \gamma-2\right) $, $E_{1}=\frac{3}{2}\sqrt{3}\gamma$, and $r=1-1/\gamma$. Since the coordinate transformation given by Eq. (\[cv1\]) leads to several unphysical solutions for $s=1/2$, we perform the more suitable change of variables given as follows (see also [@C1]), $$H=y^{1/2},\qquad d\eta=3\left( 1+\frac{1}{\sqrt{3}}\right) Hdt. \label{ncv_CH}$$ Then, Eq. (\[nHarko1\]) turns into$$y^{\prime\prime}-\frac{A_{1}}{2y}y^{\prime2}+y^{\prime}+2\gamma by=0, \label{nhelen1}$$ where $\gamma b=\frac{\sqrt{3}}{8}\left( \gamma+6\right) -\frac{3}{2}$. Eq. (\[nhelen1\]) can be solved by factorization, providing new exact parametric solutions for $s=1/2$. Eq. (\[nhelen1\]) admits the factorization $$\left[ D-\frac{A}{2y}y^{\prime}-a_{1}^{-1}\right] \left[ D-2a_{1}\gamma b\right] y=0,$$ which can be rewritten in the form$$\begin{aligned} y^{\prime}-2a_{1}\gamma by & =\Omega,\label{neq4}\\ \Omega^{\prime}-\left( \frac{A}{2y}y^{\prime}+a_{1}^{-1}\right) \Omega & =0, \label{neq5}\end{aligned}$$ or equivalently, $$y^{\prime}-2a_{1}\gamma by-\mathrm{k}_{1}e^{\eta/a_{1}}y^{A/2}=0,$$ where $\mathrm{k}_{1}$ is an integration constant, with solution given as $$y\left( \eta\right) =e^{2a_{1}\gamma b\eta}\left( \frac{a_{1}\mathrm{k}_{1}e^{\left( a_{1}^{-1}-a_{1}b\right) \eta}}{2\gamma(1-a_{1}^{2}b)}+C_{1}\right) ^{2\gamma}, \label{sol}$$ where $C_{1}$ is an integration constant, and the parameter $a_{1}$ is restricted to values given by $$a_{1\pm}=-\frac{4\sqrt{3}\gamma\pm\sqrt{\gamma^{2}\left( 72\sqrt{3}-60\right) -9\gamma^{3}+\gamma\left( 432\sqrt{3}-756\right) }}{3\left( \gamma-4\sqrt{3}+6\right) }, \label{kyla}$$ i.e., $a_{1+}\in\left[ -64.31,-8.38\right] ,$ and $a_{1-}\in\left[ -0.23,-0.01\right] $. In the following Subsections IV.A and IV.B several possible cases of interest are studied.

General solution.
-----------------

In this case it is possible to find an explicit parametric equation for $t$ (from Eq.
(\[sol\])) with $C_{1}\neq0.$ It is given as follows$$t\left( \eta\right) =\frac{\left( \sqrt{3}-3\right) a_{1}\left( 1+\frac{C_{1}\exp\left( \frac{\eta}{2a_{1}\gamma}\left( 2\gamma -a_{1}B\right) \right) }{a_{1}\mathrm{k}_{1}}\right) ^{\gamma}\,}{6\gamma y^{1/2}}\,_{2}F_{1}\left( \frac{-2\gamma^{2}}{a_{1}B-2\gamma},\gamma ,1-\frac{2\gamma^{2}}{a_{1}B-2\gamma},-\frac{C_{1}\exp\left( \frac{\eta }{2a_{1}\gamma}\left( 2\gamma-a_{1}B\right) \right) }{a_{1}\mathrm{k}_{1}}\right) .\label{sol_t}$$ To the best of our knowledge the solution given by Eqs. (\[sol\]) and (\[sol\_t\]) has not been previously reported. The FE main dynamical variables are given in parametric form as follows $$\begin{aligned} f\left( \eta\right) & =f_{0}\exp\left( \eta-\eta_{0}\right) ,\\ H\left( \eta\right) & =y^{1/2}\left( \eta\right) ,\\ q(\eta) & =-\frac{\left( 2+B\right) C_{1}+2\left( a_{1}+\gamma\right) \mathrm{k}_{1}\exp\left( \frac{\eta}{2a_{1}\gamma}\left( 2\gamma -a_{1}B\right) \right) }{2\left( C_{1}+a_{1}\mathrm{k}_{1}\exp\left( \frac{\eta}{2a_{1}\gamma}\left( 2\gamma-a_{1}B\right) \right) \right) },\\ \rho\left( \eta\right) & =3y\left( \eta\right) ,\\ p\left( \eta\right) & =3\left( \gamma-1\right) y\left( \eta\right) ,\\ \Pi\left( \eta\right) & =\frac{C_{1}(B+3\gamma)+\left( 2+3a_{1}\right) \gamma\mathrm{k}_{1}\exp\left( \frac{\eta}{2a_{1}\gamma}\left( 2\gamma -a_{1}B\right) \right) }{C_{1}+a_{1}\mathrm{k}_{1}\exp\left( \frac{\eta }{2a_{1}\gamma}\left( 2\gamma-a_{1}B\right) \right) }y\left( \eta\right) ,\\ \Sigma\left( \eta\right) & =\gamma e^{3\eta}\left( 3y\right) ^{1/\gamma },\\ l(\eta) & =\left\vert \frac{\Pi\left( \eta\right) }{p\left( \eta\right) }\right\vert .\end{aligned}$$ In Figs. \[nfpic1\] and \[nfpic2\], the behavior of the FE main quantities has been plotted. The following constant values have been chosen: $a_{1+}$ as given in Eq. (\[kyla\]) while $B=\frac{\sqrt{3}a_{1}}{4}\left( \gamma+6\right) -3a_{1}$, $\mathrm{k}_{1}=2,\,$ $C_{1}=-1$, and $\gamma=1,4/3,2$ as usual. The solutions with $a_{1-}$ are unphysical. ![Solution for $s=1/2$. Plots of energy density $\rho(t)$, bulk viscosity $\Pi(t)$ and entropy $\Sigma(t)$. Solid line for $\gamma=2.$ Long dashed line for $\gamma=4/3$. Dashed line for $\gamma=1.$[]{data-label="nfpic1"}](nfpic1.eps){height="1.313in" width="6.5976in"} ![Solution for $s=1/2.$ Plots of the deceleration parameter $q(t)$ and parameter $l(t)$. Solid line for $\gamma=2.$ Long dashed line for $\gamma=4/3$. Dashed line for $\gamma=1.$[]{data-label="nfpic2"}](nfpic2.eps){height="1.295in" width="4.8992in"} The energy density shows a singular behavior as $t\rightarrow0$, but in a finite time it behaves as a decreasing time function. This solution is valid for all values of time and $\gamma$. The bulk viscous pressure, $\Pi$, is a negative decreasing time function during the cosmological evolution, $\Pi<0$ $\forall t\in \mathbb{R}^{+}$, as it is expected from a thermodynamical point of view. The viscous pressure also evolves from a singular era but it quickly tends to zero, i.e., in the large limit the viscous pressure vanishes as the viscous coefficient, which also becomes negligible small. The comoving entropy behaves as a growing time function. There exists a fast growth of entropy for $\gamma=4/3$, while for $\gamma=1$ the entropy grows slowly. The entropy evolves from a non-singular state, i.e., $\Sigma(0)=0,$ but it quickly grows in such a way that a large amount of entropy is produced during the cosmological evolution. 
The picture of parameter $q(t)$ shows that all the plotted solutions start in a non-inflationary phase, but they quickly run to an inflationary era since this quantity runs to $-1$ for all the equations of state. For this reason, the parameter $l(t)$ shows that the solutions are far from equilibrium since they are inflationary solutions. Particular solution ------------------- In the case, it is possible to find a particular solution for $t$ from Eq. (\[sol\]) with $C_{1}=0.$ For this case, the solution simplifies as follows$$y\left( \eta\right) =\exp\left( B\eta\right) \left( \frac{a_{1}\mathrm{k}_{1}\exp\left( \frac{\eta}{2a_{1}\gamma}\left( 2\gamma -a_{1}B\right) \right) }{2\gamma-a_{1}B}\right) ^{2\gamma},\text{\qquad and\qquad}t\left( \eta\right) =\left( \sqrt{3}-3\right) \frac{a_{1}}{6\gamma}y^{-1/2}. \label{sc0}$$ Then, the FE main quantities are given in the following form:$$\begin{aligned} f\left( \eta\right) & =f_{0}\exp\left( \eta-\eta_{0}\right) ,\label{sc1}\\ H\left( \eta\right) & =y^{1/2}\left( \eta\right) ,\\ q(\eta) & =-\frac{\left( a_{1}+\gamma\right) }{a_{1}},\\ \rho\left( \eta\right) & =3y\left( \eta\right) ,\\ p\left( \eta\right) & =3\left( \gamma-1\right) y\left( \eta\right) ,\\ \Pi\left( \eta\right) & =-\frac{\left( 2+3a_{1}\right) \gamma}{a_{1}}y\left( \eta\right) ,\\ l\left( \eta\right) & =\frac{1}{3}\left\vert \frac{\left( 2+3a_{1}\right) \gamma}{a_{1}\left( \gamma-1\right) }\right\vert ,\\ \Sigma\left( \eta\right) & =\gamma e^{3\eta}\left( 3y\left( \eta\right) \right) ^{1/\gamma}, \label{sc8}$$ It is possible to recover the known scaling solution studied by several authors [@ZB],[@DZ] and [@Tony] from Eqs. (\[sc1\])-(\[sc8\]):$$\begin{aligned} f & =f_{0}t^{H_{0}}\\ H\left( t\right) & =H_{0}t^{-1},\\ q(t) & =H_{0}^{-1}-1,\\ \rho\left( t\right) & =\rho_{0}t^{-2},\\ p\left( t\right) & =3\left( \gamma-1\right) \rho_{0}t^{-2},\Pi\left( t\right) =-\Pi_{0}\rho\left( t\right) ,\\ l\left( t\right) & =\frac{\Pi_{0}}{3\left( \gamma-1\right) },\\ \Sigma\left( t\right) & \thickapprox\frac{\gamma\Sigma_{0}}{3\gamma H_{0}-2}\left( t^{\frac{1}{\gamma}\left( 3\gamma H_{0}-2\right) }-t_{0}^{\frac{1}{\gamma}\left( 3\gamma H_{0}-2\right) }\right) ,\end{aligned}$$ where $H_{0}=\frac{6\gamma}{\left( \sqrt{3}-3\right) a_{1}}$, $k_{B}^{-1}=1$, $\Sigma_{0}=-3\Pi_{0}H_{0}f_{0}^{3}\rho_{0}^{\frac{1}{\gamma}}\left( \frac{1}{t_{0}}\right) ^{3H_{0}}>0$, and $\Pi_{0}>0$. In Figs. \[nfc2pic1\] and \[nfc2pic2\], the FE main quantities have been plotted using the same numerical values of Figs. \[nfpic1\] and \[nfpic2\]. ![Solution for $s=1/2$ and $C_{1}=0$. Plots of energy density $\rho(t)$, bulk viscosity $\Pi(t)$ and entropy $\Sigma(t)$. Solid line for $\gamma=2.$ Long dashed line for $\gamma=4/3$. Dashed line for $\gamma=1.$[]{data-label="nfc2pic1"}](nfc2pic1.eps){height="1.1975in" width="6.2068in"} ![Solution for $s=1/2$ and $C_{1}=0$. Plots of the deceleration parameter $q(t)$ and parameter $l(t)$. Solid line for $\gamma=2.$ Long dashed line for $\gamma=4/3$. Dashed line for $\gamma=1.$[]{data-label="nfc2pic2"}](nfc2pic2.eps){height="1.1551in" width="4.1349in"} All the plotted solutions have physical meaning $\forall$ $t$. These solutions and the ones presented in the last solution (with $C_{1}\neq0)$ have a similar behavior. We get the following numerical values for parameters $q(t)$ and $l(t)$: $q_{1}=-0.98569$, $q_{4/3}=-0.92081$, $q_{2}=-0.796\,58$, while $l_{4/3}=3.84$ and $l_{2}=1.86$. 
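Before moving on to the Lie group treatment, note that the admissible intervals quoted after Eq. (\[kyla\]) can be reproduced by simply evaluating that expression over $\gamma\in[1,2]$. The sketch below only tabulates the printed formula; it does not re-derive it.

```python
import numpy as np

def a1_pm(gamma, sign):
    """Eq. (kyla): the two admissible values of a_1 as a function of gamma."""
    root = np.sqrt(gamma**2 * (72.0 * np.sqrt(3.0) - 60.0)
                   - 9.0 * gamma**3
                   + gamma * (432.0 * np.sqrt(3.0) - 756.0))
    return -(4.0 * np.sqrt(3.0) * gamma + sign * root) / (3.0 * (gamma - 4.0 * np.sqrt(3.0) + 6.0))

g = np.linspace(1.0, 2.0, 1001)
for sign, label in ((+1.0, "a1+"), (-1.0, "a1-")):
    vals = a1_pm(g, sign)
    print(f"{label}: between {vals.min():7.2f} and {vals.max():7.2f} for gamma in [1, 2]")
# Gives a1+ in [-64.31, -8.38] and a1- in roughly [-0.24, -0.02], close to the intervals quoted in the text.
```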
Solutions through the Lie group method for $s=1/2$
==================================================

In order to find new solutions and compare with the ones obtained through the factorization method, we study the Hubble rate Eq. (\[nHarko1\]) with $s=1/2$ through Eq. (\[nhelen1\]) by applying the Lie group method [@Lie]. Eq. (\[nhelen1\]) admits the following symmetries: $$\begin{aligned} \xi_{1} & = \left[ 1,0\right] ,\qquad\xi_{2}=\left[ 0,y\right] ,\qquad\xi_{3}=\left[ 1,y\right] ,\nonumber\\ \xi_{4,5} & =\left[ 0,y^{\frac{A}{2}}\exp\left( \frac{\eta}{2}\left( \pm a-1\right) \right) \right] ,\nonumber\\ \xi_{6,7} & =\left[ y^{\left( 1-\frac{A}{2}\right) }\exp\left( \frac{\eta}{2}\left( 1\mp a\right) \right) ,\frac{1\pm a}{A-2}\left( y^{\left( 1-\frac{A}{2}\right) }\exp\left( \frac{\eta}{2}\left( 1\mp a\right) \right) \right) \right] ,\end{aligned}$$ where $a=\sqrt{1-8B+4AB}$, and $B=\frac{\sqrt{3}}{8}\left( \gamma+6\right) -\frac{3}{2}$. The non-zero constants $C_{ij}^{k}$ verifying the relationship $\left[ \xi_{i},\xi_{j}\right] =C_{ij}^{k}\xi_{k}$ are$$\left[ \xi_{1},\xi_{4}\right] =C_{14}^{4}\xi_{4},\qquad\left[ \xi_{1},\xi_{5}\right] =C_{15}^{5}\xi_{5},\qquad\left[ \xi_{2},\xi_{4}\right] =C_{24}^{4}\xi_{4},\qquad\left[ \xi_{2},\xi_{5}\right] =C_{25}^{5}\xi_{5}.$$ Then, we shall try to find a suitable change of variables with the symmetries $\xi_{4}$ and $\xi_{5}$. These symmetries, $\xi_{4,5}=\left[ 0,y^{A/2}e^{\eta/2\left( \pm a-1\right) }\right] $, lead us to the following change of variables, which transforms the original ODE into a quadrature. Following the standard procedure we get: $$i=\eta,\,\qquad u(i)=\frac{1}{A-2}\left( e^{\eta/2\left( a\mp1\right) }\left( y^{1-A/2}\left( \pm a-1\right) +y^{A/2}y^{\prime}\left( A-2\right) \right) \right)$$ which leads to the following ODE and the corresponding solution:$$u^{\prime}=\mp au\qquad\Longrightarrow\qquad u=C_{1}e^{\mp ai}.$$ Then, the solution to Eq. (\[nhelen1\]) is given as follows$$y_{\mp}=\left( \mp\frac{1}{2\gamma a}C_{1}e^{\frac{1}{2}\eta\left( \mp a-1\right) }+C_{2}e^{\frac{1}{2}\eta\left( \pm a-1\right) }\right) ^{2\gamma} \label{sol_m}$$ where $a=\sqrt{1+4BA-8B}$, $A=2-\frac{1}{\gamma}$ and $B=\frac{\sqrt{3}}{8}\left( \gamma+6\right) -\frac{3}{2}$. In the following Subsections V.A-V.D, the solutions provided in Eq. (\[sol\_m\]) are separately studied.

Solution $y_{-}$ with $C_{2}\neq0$
----------------------------------

For the solution $$y_{-}\left( \eta\right) =\left( -\frac{1}{2\gamma a}C_{1}e^{\frac{1}{2}\eta\left( -a-1\right) }+C_{2}e^{\frac{1}{2}\eta\left( a-1\right) }\right) ^{2\gamma}, \label{lg1}$$ with $C_{2}\neq0$, it is possible to find an explicit parametric equation for $t$ through Eq. (\[ncv\_CH\]). It is given as$$t_{-}\left( \eta\right) =\frac{\left( 3-\sqrt{3}\right) \left( 1-\frac{2a\gamma C_{2}e^{a\eta}}{C_{1}}\right) ^{\gamma}\,}{3\left( 1+a\right) \gamma y_{-}^{1/2}}\,_{2}F_{1}\left[ \gamma,\frac{\left( 1+a\right) \gamma}{2a},\frac{\gamma+a\left( 2+\gamma\right) }{2a},\frac{2a\gamma C_{2}e^{a\eta}}{C_{1}}\right] . \label{lg1a}$$ As we can see, a similar solution to the one obtained through the factorization method has been found. However, as shown below, they present several important differences.
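That Eq. (\[sol\_m\]) really solves Eq. (\[nhelen1\]) is easy to confirm by direct substitution, which is a useful sanity check on the exponents $(\pm a-1)/2$. A minimal symbolic check for one representative value of $\gamma$ is sketched below; the residual is evaluated numerically at a few points (with arbitrary test values of $C_{1}$, $C_{2}$ and $\eta$) rather than asking for a full simplification.

```python
import sympy as sp

eta, C1, C2 = sp.symbols('eta C1 C2')
gamma = sp.Rational(4, 3)                                # one representative value of gamma
A = 2 - 1 / gamma
B = sp.sqrt(3) / 8 * (gamma + 6) - sp.Rational(3, 2)     # the combination written as "gamma*b" in the text
a = sp.sqrt(1 - 8 * B + 4 * A * B)

# y_- branch of Eq. (sol_m); the y_+ branch is checked in exactly the same way.
y = (-C1 / (2 * gamma * a) * sp.exp(eta * (-a - 1) / 2)
     + C2 * sp.exp(eta * (a - 1) / 2))**(2 * gamma)

res = sp.diff(y, eta, 2) - A / (2 * y) * sp.diff(y, eta)**2 + sp.diff(y, eta) + 2 * B * y
for e in (sp.Rational(1, 10), sp.Rational(1, 2), 1, 2):
    val = res.subs({eta: e, C1: sp.Rational(7, 10), C2: sp.Rational(13, 10)})
    print("eta =", float(e), " |residual| =", float(abs(sp.N(val, 30))))  # essentially zero
```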
The FE main dynamical variables are given in parametric form as follows $$\begin{aligned} f\left( \eta\right) & =f_{0}\exp\left( \eta-\eta_{0}\right) ,\\ H\left( \eta\right) & =y_{-}^{1/2}\left( \eta\right) ,\label{eq3-5}\\ q(\eta) & =\frac{2a\gamma C_{2}e^{a\eta}\left( 2+\gamma\left( a-1\right) \right) +C_{1}\left( \gamma\left( a+1\right) -2\right) }{2\left( C_{1}-2a\gamma C_{2}e^{a\eta}\right) },\\ \rho\left( \eta\right) & =3y_{-}\left( \eta\right) ,\\ p\left( \eta\right) & =3\left( \gamma-1\right) y_{-}\left( \eta\right) ,\label{eq3-8}\\ \Pi\left( \eta\right) & =\frac{\gamma\left( 2a\gamma\left( a+2\right) C_{2}e^{a\eta}+C_{1}\left( a-2\right) \right) }{C_{1}-2a\gamma C_{2}e^{a\eta}}y_{-}\left( \eta\right) ,\\ \Sigma\left( \eta\right) & =\gamma e^{3\eta}\left( 3y_{-}\left( \eta\right) \right) ^{1/\gamma},\\ l\left( \eta\right) ) & =\frac{\left\vert \Pi\left( \eta\right) \right\vert }{p\left( \eta\right) } ,\end{aligned}$$ In Figs. \[nfvgl1pic1\] and \[nfvgl1pic2\] the behavior of the FE main quantities has been plotted. ![Solution for $y_{-}$ with $C_{2}\neq0$. Plots of energy density $\rho(t)$, bulk viscosity $\Pi(t)$ and entropy $\Sigma(t)$. Solid line for $\gamma=2.$ Long dashed line for $\gamma=4/3$. Dashed line for $\gamma=1.$[]{data-label="nfvgl1pic1"}](nfvgl1pic1.eps){height="1.5034in" width="6.1563in"} ![Solution for $y_{-}$ with $C_{2}\neq0$. Plots of the deceleration parameter $q(t)$ and parameter $l(t)$. Solid line for $\gamma=2.$ Long dashed line for $\gamma=4/3$. Dashed line for $\gamma=1.$[]{data-label="nfvgl1pic2"}](nfvgl1pic2.eps){height="1.4267in" width="5.1799in"} As it is shown in Fig. \[nfvgl1pic1\], the solution is not valid for $\gamma=4/3$. For $\gamma=1$ (matter predominance) and $\gamma=2$ (ultra-stiff matter), the energy density behaves as a decreasing time function during the cosmological evolution. This solution is valid for all values of time, except in the case $\gamma=4/3$, where $\rho_{4/3}<0$. The bulk viscosity is a negative increasing time function, except in the case $\gamma=4/3$, where $\Pi_{4/3}>0$. The energy-density, bulk viscosity and entropy have a very similar behavior for the cases $\gamma=1$ and $\gamma=2$. The solution has a singular origin since the energy density tends to infinity as $t\rightarrow0$. The entropy is a growing time function which shows a large amount of comoving entropy during the expansion of the universe. In the case $\gamma=4/3$, the entropy starts growing at $t=60$, although we have ruled out this case. The behavior of parameter $q(t)$ shows that the solution for $\gamma=2$ starts in a non-inflationary phase, but after a period of time the solution enters an inflationary era. Nevertheless, the solution for $\gamma=1$ is inflationary for all values of $t$. The behavior of parameter $l(t)$ shows that the solution for $\gamma=2$ is close to equilibrium, which is thermodynamically consistent. Solution $y_{-}$ with $C_{2}=0$ ------------------------------- For the case $y_{-}$ with $C_{2}=0$, the solution is given by (after simplifying)$$y_{-}\left( \eta\right) =\left( -\frac{1}{2\gamma a}C_{1}e^{\frac{1}{2}\eta\left( -a-1\right) }\right) ^{2\gamma},\qquad\text{and \qquad}t_{-}\left( \eta\right) =\int\left( y_{-}\right) ^{-1/2}d\eta =\frac{\left( 3-\sqrt{3}\right) }{3\left( 1+a\right) \gamma}y_{-}^{-1/2}. 
\label{lg4}$$ and the FE main dynamical variables are given in parametric form as follows $$\begin{aligned} f\left( \eta\right) & =f_{0}\exp\left( \eta-\eta_{0}\right) ,\\ H\left( \eta\right) & =y_{-}^{1/2}\left( \eta\right) ,\\ q(\eta) & =\frac{1}{2}\left( \left( 1+a\right) \gamma-2\right) ,\\ \rho\left( \eta\right) & =3y_{-}\left( \eta\right) ,\\ p\left( \eta\right) & =3\left( \gamma-1\right) y_{-}\left( \eta\right) ,\\ \Pi\left( \eta\right) & =\left( a-2\right) \gamma y_{-}\left( \eta\right) ,\\ l\left( \eta\right) & =\frac{1}{3}\left\vert \frac{\left( a-2\right) \gamma}{\left( \gamma-1\right) }\right\vert ,\\ \Sigma\left( \eta\right) & =\gamma e^{3\eta}\left( 3y_{-}\left( \eta\right) \right) ^{1/\gamma}.\end{aligned}$$ The behavior of the FE main quantities has been plotted in Figs. \[nfvgl1c2pic1\] and \[nfvgl1c2pic2\]. As it is observed, in this case, we may recover the scaling solution. ![Solution $y_{-}$ with $C_{2}=0$. Plots of energy density $\rho(t)$, bulk viscosity $\Pi(t)$ and entropy $\Sigma(t)$. Solid line for $\gamma=2.$ Long dashed line for $\gamma=4/3$. Dashed line for $\gamma=1.$[]{data-label="nfvgl1c2pic1"}](nfvgl1c2pic1.eps){height="1.5061in" width="7.0244in"} ![Solution $y_{-}$ with $C_{2}=0$. Plots of the deceleration parameter $q(t)$ and parameter $l(t)$. Solid line for $\gamma=2.$ Long dashed line for $\gamma=4/3$. Dashed line for $\gamma=1.$[]{data-label="nfvgl1c2pic2"}](nfvgl1c2pic2.eps){height="1.5666in" width="5.2638in"} In this case, as in the last solution with $C_{2}\neq0$, the solution for $\gamma=4/3$ is unphysical. All the main quantities behave in the same way as the last solution with $C_{2}\neq0$ described above. Nevertheless, it is found that $q_{1}=-0.015$ for $\gamma=1$ which represents an inflationary solution, and $q_{2}=0.732$ for $\gamma=2$ which represents a non-inflationary behavior, while $l_{2}=0.845<1$, i.e., the solution is within an equilibrium regime. As it has been shown, most of the known exact solutions of the gravitational FE with a viscous fluid do not satisfy the condition $l<1$, i.e., the condition of thermodynamic consistency, since they show an inflationary behavior. In the case for $\gamma=2$, we have obtained a solution which is thermodynamically consistent and it may describe the early dynamics of a super-dense post-inflationary era when the dissipative effects produced by the bulk viscosity may play an important role. 
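The numbers quoted in this subsection follow directly from the closed-form expressions given above, $q=\frac{1}{2}\left[(1+a)\gamma-2\right]$ and $l=\frac{1}{3}\left\vert(a-2)\gamma/(\gamma-1)\right\vert$; the same check with $a\rightarrow-a$ reproduces the values quoted below for the $y_{+}$, $C_{2}=0$ branch. A short numerical verification of both branches is:

```python
import math

def a_of_gamma(gamma):
    A = 2.0 - 1.0 / gamma
    B = math.sqrt(3.0) / 8.0 * (gamma + 6.0) - 1.5
    return math.sqrt(1.0 - 8.0 * B + 4.0 * A * B)

for branch, sign in (("y_-", +1.0), ("y_+", -1.0)):
    for gamma in (1.0, 4.0 / 3.0, 2.0):
        a = a_of_gamma(gamma)
        q = 0.5 * ((1.0 + sign * a) * gamma - 2.0)
        l = abs((sign * a - 2.0) * gamma / (3.0 * (gamma - 1.0))) if gamma > 1.0 else float("inf")
        print(f"{branch}, gamma = {gamma:5.3f}:  q = {q:9.6f},  l = {l:8.5f}")
# Reproduces q ~ -0.016 (gamma=1) and q ~ 0.732, l ~ 0.845 (gamma=2) for the y_- branch, and
# q ~ -0.984206, -0.905604, -0.732051 with l ~ 3.81121, 1.82137 for the y_+ branch.
```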
Solution $y_{+}$ with $C_{2}\neq0$ ---------------------------------- For the solution given by $$y_{+}\left( \eta\right) =\left( \frac{1}{2\gamma a}C_{1}e^{\frac{1}{2}\eta\left( a-1\right) }+C_{2}e^{-\frac{1}{2}\eta\left( a+1\right) }\right) ^{2\gamma},\label{lg2}$$ with $C_{2}\neq0$, we get the explicit parametric equation for the time function $$t_{+}\left( \eta\right) =\frac{\left( 3-\sqrt{3}\right) }{3}\frac{\left( 1+\frac{C_{1}e^{a\eta}}{2a\gamma C_{2}}\right) ^{\gamma}\,}{\left( 1+a\right) \gamma y_{-}^{1/2}}\,_{2}F_{1}\left[ \gamma,\frac{\left( 1+a\right) \gamma}{2a},\frac{\gamma+a\left( 2+\gamma\right) }{2a},-\frac{C_{1}e^{a\eta}}{2a\gamma C_{2}}\right] .\label{lg2a}$$ The main dynamical variables of the FE are given in parametric form as follows $$\begin{aligned} f_{+}\left( \eta\right) & =f_{0}\exp\left( \eta-\eta_{0}\right) ,\\ H_{+}\left( \eta\right) & =y_{+}^{1/2}\left( \eta\right) ,\\ q_{+}(\eta) & =\frac{2a\gamma C_{2}\left( \gamma\left( a+1\right) -2\right) -C_{1}e^{a\eta}\left( \gamma\left( a-1\right) +2\right) }{2\left( C_{1}-2a\gamma C_{2}e^{a\eta}\right) },\\ \rho_{+}\left( \eta\right) & =3y_{+}\left( \eta\right) ,\\ p_{+}\left( \eta\right) & =3\left( \gamma-1\right) y_{+}\left( \eta\right) ,\\ \Pi_{+}\left( \eta\right) & =\frac{\gamma\left( 2a\gamma\left( a-2\right) C_{2}-C_{1}\left( a+2\right) e^{a\eta}\right) }{C_{1}e^{a\eta }+2a\gamma C_{2}}y_{+},\\ l & =\frac{1}{3}\left\vert \frac{\gamma\left( 2a\gamma\left( a-2\right) C_{2}-C_{1}\left( a+2\right) e^{a\eta}\right) }{\left( \gamma-1\right) \left( C_{1}e^{a\eta}+2a\gamma C_{2}\right) }\right\vert ,\\ \Sigma_{+}\left( \eta\right) & =\gamma e^{3\eta}\left( 3y_{+}\right) ^{1/\gamma}$$ We have plotted the behavior of the FE main quantities in Figs. \[nfvgl2pic1\] and \[nfvgl2pic2\]. ![Solution $y_{+}$ with $C_{2}\neq0$. Plots of energy density $\rho(t)$, bulk viscosity $\Pi(t)$ and entropy $\Sigma(t)$. Solid line for $\gamma=2.$ Long dashed line for $\gamma=4/3$. Dashed line for $\gamma=1.$[]{data-label="nfvgl2pic1"}](nfvgl2pic1.eps){height="1.4078in" width="6.7176in"} ![Solution $y_{+}$ with $C_{2}\neq0$. Plots of the deceleration parameter $q(t)$ and parameter $l(t)$. Solid line for $\gamma=2.$ Long dashed line for $\gamma=4/3$. Dashed line for $\gamma=1.$[]{data-label="nfvgl2pic2"}](nfvgl2pic2.eps){height="1.4655in" width="6.2086in"} This solution shows a behavior quite similar to the one obtained through the factorization method. The energy density is a decreasing time function and it is valid for all values of time. The bulk viscous pressure is a negative increasing time function, while the entropy is a positive growing time function. As in the case of the factorization method, the obtained solution is valid for all the possible values of parameter $\gamma$. We find a fast growth of entropy for $\gamma=2$, while it grows slowly during the evolution of the universe for $\gamma=1$. The behavior of parameter $q(t)$ shows that all the plotted solutions start in an inflationary phase, since this quantity is close to $-1$ for every value of $\gamma$. The behavior of parameter $l(t)$ shows that the solutions are far from equilibrium since these are inflationary solutions. Solution $y_{+}$ with $C_{2}=0$ ------------------------------- In the case of solution $y_{+}$ with $C_{2}=0$ we get $$y_{+}\left( \eta\right) =\left( \frac{1}{2\gamma a}C_{1}e^{\frac{1}{2}\eta\left( a-1\right) }\right) ^{2\gamma},\qquad\text{and\qquad}t_{+}\left( \eta\right) =\frac{\left( \sqrt{3}-3\right) }{3\left( a-1\right) \gamma}y_{+}^{-1/2}. 
\label{lg3}$$ The FE main dynamical variables are given in parametric form as follows: $$\begin{aligned} f\left( \eta\right) & =f_{0}\exp\left( \eta-\eta_{0}\right) ,\\ H\left( \eta\right) & =y_{+}^{1/2}\left( \eta\right) ,\\ q(\eta) & =\frac{1}{2}\left( \left( 1-a\right) \gamma-2\right) ,\\ \rho\left( \eta\right) & =3y_{+}\left( \eta\right) ,\\ p\left( \eta\right) & =3\left( \gamma-1\right) y_{+}\left( \eta\right) ,\\ \Pi\left( \eta\right) & =\left( a+2\right) \gamma y_{+},\\ l & =\frac{1}{3}\left\vert \frac{\left( a+2\right) \gamma}{\left( \gamma-1\right) }\right\vert ,\\ \Sigma\left( \eta\right) & =\gamma e^{3\eta}\left( 3y_{+}\right) ^{1/\gamma}.\end{aligned}$$ We may recover the scaling solution as above. ![Solution $y_{+}$ with $C_{2}=0$. Plots of energy density $\rho(t)$, bulk viscosity $\Pi(t)$ and entropy $\Sigma(t)$. Solid line for $\gamma=2.$ Long dashed line for $\gamma=4/3$. Dashed line for $\gamma=1.$[]{data-label="nfvgl2c2pic1"}](nfvgl2c2pic1.eps){height="1.4466in" width="5.8802in"} ![Solution $y_{+}$ with $C_{2}=0$. Plots of the deceleration parameter $q(t)$ and parameter $l(t)$. Solid line for $\gamma=2.$ Long dashed line for $\gamma=4/3$. Dashed line for $\gamma=1.$[]{data-label="nfvgl2c2pic2"}](nfvgl2c2pic2.eps){height="1.2697in" width="4.911in"} In Figs. \[nfvgl2c2pic1\] and \[nfvgl2c2pic2\], the behavior of the FE main quantities has been plotted. As it can be seen, a very similar behavior to the scaling solution obtained through the factorization method has been obtained. Therefore, we get the same description and conclusions. It is worth mentioning that the following values for the deceleration parameter $q(t)$ are obtained: $q_{1}=-0.984206$, $q_{4/3}=-0.905604$, and $q_{2}=-0.732051$, while for parameter $l(t)$ we obtain $l_{4/3}=3.81121$, and $l_{2}=1.82137$, i.e., the same values as the ones obtained for the scaling solution. Conclusions. ============ In this work, we have studied a flat FRW cosmological model with a matter model described as a full causal bulk viscous fluid. By assuming the state equations given in Eq. (\[steq1\]), the cosmological model simplifies to a nonlinear second order ODE, the Hubble rate equation, for which a coordinate transformation is performed in order to apply the factorization method. Due to the coordinate transformation developed on the Hubble rate equation, parametric exact solutions have been found. The standard procedure of factorization provides the first order ODE (\[PALOMA\]), and the restriction condition given in Eq. (\[eq2-5\]) which provides a relationship between the viscous parameter $s$ and $\gamma$. Then, the analysis developed through factorization allows to study the model for all the values of $s$ determined by Eq. (\[lisa1\]), instead of constructing a particular ODE for a single given value of $s$ and arbitrary or specific values of $\gamma$, as it has been previously studied by several authors. We have studied several models for different values of $s$. Firstly, we have studied and discussed the model for $s=0$, and $\gamma=\sqrt[3]{2}$. The second model is studied for $s=1/4$, and $\gamma=2,$ finding two solutions. The third case corresponds to $s=1$, and $\gamma=\sqrt[3]{2}\thickapprox 1.25992$. For the very special case $s=1/2$, the restriction equation (\[eq2-5\]) provides the explicit form of parameter $a_{1}$. However, the obtained solutions have not restriction on the values of $\gamma$. 
For this important case, we have been able to obtain a new solution which reduces, as particular solution, to the known scaling solution. To the best of our knowledge, the parametric solutions obtained for all these cases are new. In order to obtain more new solutions, the case $s=1/2$ has been studied through the Lie group method. The analysis carried out allows to obtain two solutions. The solution (\[lg1\])-(\[lg1a\]) is new, and solution (\[lg2\])-(\[lg2a\]) presents the same behavior as the one obtained through the factorization method. Regarding the solution (\[lg1\])-(\[lg1a\]), it is pointed out that it is not valid for all state equation $\gamma$. It has been shown that for $\gamma=4/3$ the solution is unphysical, while for $\gamma=2$ it is thermodynamically consistent and could be relevant from the cosmological point of view. [99]{} B. Mielnik, O. Rosas-Ortiz, *J. Phys. A.: Math. Gen.* **37** (2004) 10007. H. C. Rosu, *Short survey of Darboux transformations*, in *Symmetries in Quantum Mechanics and Quantum Optics*, Eds. F. J. Herranz, A. Ballesteros, L. M. Nieto, J. Negro, C. M. Pereña, Servicio de Publicaciones de la Universidad de Burgos, Burgos, Spain, 1999. L. M. Berkovich, *Sov. Math. Dokl.* **45** (1992) 162. O. Cornejo-Pérez and H. C. Rosu, *Prog. Theor. Phys.* **114** (2005) 533. H. C. Rosu and O. Cornejo-Pérez, *Phys. Rev. E* **71** (2005) 046607. D. S. Wang and H. Li, *J. Math. Anal. Appl.* **343** (2008) 273. O. Cornejo-Pérez, *J. Phys. A: Math. Theor.* **42** (2009) 035204. P. G. Estévez, S. Kuru, J. Negro, L. M. Nieto, *J. Phys. A: Math. Gen.* **39** (2006) 3911441. P. G. Estévez, S. Kuru, J. Negro, L. M. Nieto, *J. Phys. A: Math. Theor.* **40** (2007) 9819. C. W. Misner, *Phys. Rev. Lett. **19***, 533 (1966). L. P. Chimento and A. Jakubi, *Phys. Lett. **A212***, 320 (1996). C. Eckart, *Phys. Rev. **58***, 919 (1940). L. D. Landau and E. M. Lifshitz, *Fluid Mechanics*, Oxford: Butterworth Heinemann (1987). W. Israel, *Ann. Phys. (NY) **100***, 310 (1976). W. Israel W and J. M. Stewart, *Phys. Lett. **A58***, 213 (1976). W. A. Hiscock and L. Lindblom, *Ann. Phys. (NY) **151***, 466 (1989). W. A. Hiscock and J. Salmonson, *Phys. Rev. **D43***, 3249 (1991). W. A. Hiscock and L. Lindblom, *Phys. Rev. **D35***, 3723 (1987). R. Maartens, Class. Quantum Grav. **12**, 1455 (1995). L. P. Chimento, A. S. Jakubi. Class. Quantum Grav.**10**,2047 (1993). Phys. Lett. **A 212**, 320 (1996). L. P. Chimento, A. S. Jakubi, V. Mendez and R. Maartens, Class. Quantum Grav. **14**, 3363 (1997). L. P. Chimento and A. S. Jakubi, Class. Quantum Grav. **14**, 1811 (1997). M. K. Mak and T. Harko. Gen. Rel. Grav. **30**, 1171 (1998). M. K. Mak and T. Harko, Gen. Rel. Grav. **31**, 273 (1999). M. K. Mak and T. Harko. J. Math. Phys. **39**, 5458 (1998). M. K. Mak and T. Harko. Australian Journal of Physics. **53,**241 (2000)$\mathbf{.}$ M. K. Mak and T. Harko. Euro. Phys. Lett. **56**, 762 (2001)$.$ T. Harko and M. K. Mak. IJTP. **38**, 1561 (1999)$.$ M. K. Mak and T. Harko, IJMPD **13,** 273 (2004). A. A. Coley, R. J. van den Hoogen and R. Maartens, Phys. Rev. **D54**, 1393 (1996). J. A. Belinchón, T. Harko and M. K. Mak. Class. Quantum Grav. **19**,3003 (2002). W. Zimdahl and A. B. Balakin. Entropy **4**, 49 (2002). R. A. Daishev and W. Zimdahl. Class. Quantum Grav. **20,** 5017 (2003). J. A. Belinchón. qr-qc/0412092. N. H. Ibragimov, Elementary Lie Group Analysis and Ordinary Differential Equations. Jonh Wiley & Sons, (1999). G. W. Bluman  and S. C. 
Anco, Symmetry and Integral Methods for Differential Equations. Springer-Verlag (2002).
--- abstract: 'I present recent work on gravitational waves (GWs) from a generic Standard Model-like effective potential for the electroweak phase transition. We derive a semi-analytic expression for the approximate tunneling temperature, and analytic and approximate expressions for the two GW parameters $\alpha$ and $\beta$. A quick summary of our analysis and general results, as well as a list of some specific models which easily fit into this framework, are presented. The work presented here has been done in collaboration with Stefano Profumo [@paper].' address: | Physics Department, University of California Santa Cruz,\ Santa Cruz, CA, US author: - 'John Kehayias [^1]' title: 'Recent Work on Gravitational Waves From a Generic Standard Model-like Effective Higgs Potential' --- Introduction ============ As the temperature is lowered in the finite temperature quantum field theory description of the electroweak Higgs sector (for a review, see e.g. [@tempreview]), it is possible to have a first order phase transition through quantum mechanical tunneling. A degenerate vacuum state develops at $T_c$, and what began as the true vacuum of the theory can become unstable at a lower temperature $T_{dest}$. A potential barrier separates this state from the true vacuum, and tunneling to the lower energy state is probable at a temperature $T_t$, where $T_{dest} \le T_t < T_c$. Gravitational waves (GWs) can arise from a strongly first-order phase transition through both turbulence and bubble nucleation. Here, a bubble is an area of the universe which has transitioned to the true, electroweak symmetry breaking, vacuum. Some fraction of the energy released in these processes gives rise to a stochastic background of gravitational waves. It has been known for some time that the electroweak phase transition in the minimal version of the Standard Model (SM) is not strongly first order, given the experimental bounds on the Higgs mass. However, many models of physics beyond the SM, including supersymmetry, can enhance the phase transition and produce a GW spectrum which might be experimentally observed in the near future. While many models have been studied extensively, there has not been much work done on a general, model-independent analysis. In this note I will very briefly summarize work soon to be submitted on studying generic effective potentials for the electroweak phase transition. The potential is very similar in form to the SM Higgs potential, and the general results are applicable to several models beyond the SM. Analysis of a Generic Effective Higgs Potential =============================================== We consider a potential for the Higgs which mirrors that of the (finite temperature, one loop, high temperature expansion) SM case of the following form: $$\label{eq:genpot} V_{eff}(\phi,T) = \frac{\lambda(T)}{4}\phi^4 - (ET - e)\phi^3 + D(T^2 - T_0^2)\phi^2.$$ In the SM $e = 0$. A semi-analytic expression for the three dimensional Euclidean action, which is the important quantity for finite temperature tunneling, was found in [@se3approx]. The tunneling temperature, $T_t$, is defined as the temperature when the probability to nucleate a bubble in a horizon volume is $\mathcal{O}(1)$, a condition that is well approximated by $$S_{E3}/T_t \sim 140,$$ where we have assumed the temperature scale is $\mathcal{O}(100\textrm{GeV})$ (see, e.g. [@ewgravwave]). Using the results of [@se3approx], we can thus derive an approximation for $T_t$. 
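One standard way to see where the number $140$ comes from is to require roughly one nucleation per horizon volume per Hubble time, $\Gamma\sim T^{4}e^{-S_{E3}/T}\sim H^{4}$, with $H\simeq 1.66\sqrt{g_*}\,T^{2}/M_{\rm Pl}$ during radiation domination; up to $\mathcal{O}(1)$ prefactors this gives $S_{E3}/T\approx 4\ln\left[M_{\rm Pl}/(1.66\sqrt{g_*}\,T)\right]$. A one-line numerical estimate, with an illustrative $g_*$, is:

```python
import math

M_pl = 1.22e19     # Planck mass in GeV
T = 100.0          # electroweak-scale temperature in GeV
g_star = 100.0     # illustrative number of relativistic degrees of freedom

H = 1.66 * math.sqrt(g_star) * T**2 / M_pl        # radiation-era Hubble rate
S_over_T = 4.0 * math.log(T / H)                  # from T^4 * exp(-S/T) ~ H^4
print(f"S_E3/T at nucleation ~ {S_over_T:.0f}")   # ~146, i.e. the quoted ~140
```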
At $T_c$ for our potential the expression for $S_{E3}$ from [@se3approx] has a singularity. It also decreases very rapidly as the temperature is lowered, to $0$ at $T_0$. This implies that $T_t$ will be very close to $T_c$, and so we expand in powers of $\epsilon$: $T \rightarrow T_c - \epsilon$. The final lowest order expression for $S_{E3}/T$ has all the parameters of the potential and is proportional to $1/\epsilon^2$. The singularity as $\epsilon \rightarrow 0$ remains, and $\epsilon$ is solved for by setting $S_{E3}/T = 140$. Our approximation for the tunneling temperature is then $$T_t \approx T_c - \epsilon,$$ with $\epsilon \ll 1$. From our potential it is possible to calculate the exact GW parameters, $\alpha$ and $\beta$. $\alpha$ characterizes the energy change of the vacuum transition, while $\beta$ characterizes the bubble nucleation rate per unit volume. These are evaluated at $T_t$, which we now have an approximation for, but they are rather lengthy expressions. However, from our approximation for $S_{E3}/T$, a simple expression for $\beta$ is obtained, which, to lowest order, is proportional to $(T_c - \epsilon)/\epsilon^3$. The Parameter Space and Models ============================== We enforce that the potential of eq. (\[eq:genpot\]) describes electroweak symmetry breaking with a Higgs boson. This constrains the vev of $\phi$ to be the usual $v \approx 246\textrm{ GeV}$, which must be a stable minimum, and furthermore that the mass of the Higgs is above the current experimental bound of $114\textrm{ GeV}$. The signs of the parameters (except for $e$) are fixed through this and the potential considered in [@se3approx] (for general stability, etc.), and we then also have $T_0^2 = v(3e + \lambda v)/2D$, which is similar to the SM form. There are constraints on $e$ based on its sign, and $\lambda$ is set based on $e, v,$ and the Higgs mass $m_h$. We also want the theory to be perturbative, so $\lambda < 1$, giving us a mass range of $115\textrm{ GeV} \le m_h < 348\textrm{ GeV}$ (set by the SM case of $0.11 \le \lambda < 1$). Besides varying $\lambda$ we also vary one other parameter at a time. For parameters besides $e$, which has constraints on its range, we vary each up to two orders of magnitude larger and smaller than the SM value. In plotting the $\alpha-\beta$ plane we describe how the various terms in the potential affect the GW spectrum parameters. This covers as much as $12$ orders of magnitude for both $\alpha$ and $\beta$. The most remarkable enhancement for $\alpha$, which would greatly contribute to observing a signal with future experiments, comes at $e < 0$. Several models can provide changes to the parameters in the potential from their SM values. There has been much work (e.g. [@delaunay; @mssm]) on adding non-renormalizable terms to the SM or MSSM which can enhance $E$, adding to the strength of the phase transition. The addition of an $SU(2)_L$ triplet (see, e.g. [@triplet]) provides a contribution to $\lambda$. Since we find that the additional parameter $e$ can greatly enhance $\alpha$, models which add this term to the effective potential are particularly interesting. One such model is the addition of a gauge singlet, arising as a solution to the $\mu$ problem in supersymmetry, for instance. The phenomenology of this model has been studied extensively (e.g. [@singlet]), and it has been shown that it is possible to find evidence of such an additional singlet at the LHC, allowing us to draw an interesting connection between collider and GW physics. 
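To make the $\epsilon$-expansion above concrete, the following sketch solves the nucleation condition for $\epsilon$ assuming the lowest-order form $S_{E3}/T \approx A/\epsilon^2$ and then evaluates the resulting $\beta$ scaling. The coefficient $A$ collects the potential parameters and is model dependent, so the value used here is only a placeholder, as is $T_c$.

```python
import numpy as np

# Placeholder coefficient: near T_c the action behaves as S_E3/T ~ A / eps^2
# with eps = T_c - T.  A collects the potential parameters and is model
# dependent; the value below is illustrative only.
A = 0.05          # [GeV^2], assumed
Tc = 120.3        # critical temperature, placeholder [GeV]

# Nucleation condition S_E3/T_t ~ 140  =>  A/eps^2 = 140
eps = np.sqrt(A / 140.0)
Tt = Tc - eps
print(f"eps = {eps:.4f} GeV  ->  T_t = {Tt:.4f} GeV (very close to T_c)")

# With S_E3/T = A/(Tc - T)^2, the quantity T d(S_E3/T)/dT at T_t gives
# beta/H_* = 2*A*(Tc - eps)/eps^3, i.e. proportional to (Tc - eps)/eps^3.
beta_over_H = 2.0 * A * (Tc - eps) / eps**3
print(f"beta/H_* ~ {beta_over_H:.3e}")
```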
[9]{}
J. Kehayias and S. Profumo, arXiv:0911.0687 \[hep-ph\].
M. Quiros, arXiv:hep-ph/9901312.
F. C. Adams, Phys. Rev. D [**48**]{}, 2800 (1993) \[arXiv:hep-ph/9302321\].
R. Apreda, M. Maggiore, A. Nicolis and A. Riotto, Nucl. Phys. B [**631**]{}, 342 (2002) \[arXiv:gr-qc/0107033\].
C. Delaunay, C. Grojean and J. D. Wells, JHEP [**0804**]{}, 029 (2008) \[arXiv:0711.2511 \[hep-ph\]\].
K. Blum and Y. Nir, Phys. Rev. D [**78**]{}, 035005 (2008) \[arXiv:0805.0097 \[hep-ph\]\].
P. Fileviez Perez, H. H. Patel, M. J. Ramsey-Musolf and K. Wang, arXiv:0811.3957 \[hep-ph\].
S. Profumo, M. J. Ramsey-Musolf and G. Shaughnessy, JHEP [**0708**]{}, 010 (2007) \[arXiv:0705.2425 \[hep-ph\]\].

[^1]: [email protected]
--- abstract: 'We consider the standard double well setup extended with a laser beam in the center to create a “triple well" potential. The beam in the center is much more narrow than the barrier, and it creates a tunable depth well which can support a localized state in the middle. We show that the presence of the localized state in the central well changes the sign of tunneling between the left and right wells and therefore controls the fixed point dynamics of the bosonic Josephson junction.' author: - 'G. Szirmai' - 'G. Mazzarella' - 'L. Salasnich' bibliography: - 'dwlaser.bib' title: The effect of a laser dip in the semiclassical dynamics of bosonic Josephson Junctions --- Introduction ============ A Bose-Einstein condensate of a dilute gas of alkaline atoms in a double well potential realizes the physics of Josephson junctions, which was originally predicted in two superconductors separated by an insulating layer [@josephson1962possible]. The bosonic realization of Josephson junction physics has attracted a great interest both theoretically [@javanainen1986oscillatory; @smerzi1997quantum; @raghavan1999coherent; @milburn1997quantum; @mazzarella2010spontaneous; @mazzarella2011coherence; @julia2010macroscopic; @julia2010bose; @gillet2014tunneling] and experimentally [@albiez2005direct; @levy2007ac; @zibold2010classical] in recent years. On one hand the physics of Josephson junctions can be described by the two coupled nonlinear equations of a non-rigid pendulum, therefore its careful investigation is very tempting, since the model and its mathematics look fairly simple, while they are complicated enough in order to help us understanding some aspects of more elaborate problems, like the Bose-Hubbard model. In particular, bosonic Josephson junctions (BJJs) may be regarded as a two-site realization of the Bose-Hubbard model. On the other hand the mesoscopic coherent dynamics of Bose-Einstein condensate has important issues of its own, such as the validity of semiclassical dynamics and the use of coherent states in few mode and finite atom number systems [@mazzarella2011coherence; @julia2010macroscopic]. The tunneling dynamics of BJJs can serve as a basic tool in interferometry applications [@shin2004atom; @shin2005interference; @schumm2005matter]. The first experiments with repulsively interacting Bose condensates revealed self-trapping and plasma oscillations [@albiez2005direct] and later, with an experimental effort the a.c. Josephson effect was also observed [@levy2007ac]. With the help of atomic Feshbach resonances it is possible to change the magnitude and even the sign of the parameter of on-site interaction. Therefore it is in principle possible to “quench” the dynamics of the BJJ and realize the semiclassical dynamics around the stationary points of the Josephson equations or change the dynamics governed by one particular fixed point to a different one governed by a different fixed point [@zibold2010classical]. This way a setup for very fast macroscopic entanglement generation can be achieved [@micheli2003many; @Vidal2004Entanglement]. The question naturally arises whether it is possible or not to obtain some similar quenching not only with the on-site interaction, but rather by engineering the tunneling amplitude of the junction? In this paper we give an affirmative answer to this question. With the help of an external, tightly focused, red-detuned laser beam one can create a tiny hole in the middle of the double-well barrier. 
When the depth of this dip is increased, at some point a bound state localized inside the dip potential appears, and by further increasing the potential depth the tunneling constant between the original left and right wells changes sign. The creation of such a static obstacle is fairly simple and therefore gives another knob on the system besides the standard Feshbach resonance technique. The plan of the paper is as follows. In Sec. \[sec:dwd\] we consider the single particle problem where a dip potential is superimposed on the standard double well. In Sec. \[sec:bjj\] we apply the two-mode approximation to the problem when the doublet formed by the Wannier states of the left and right wells are sufficiently separated from the other energy levels and consider the Josephson dynamics. We summarize in Sec. \[sec:sum\]. The stability analysis of the stationary points of the dynamics is moved to the Appendix. Double well with a dip in the middle {#sec:dwd} ==================================== ![image](LowestStates){width="\textwidth"} The double well setup considered here consists of a symmetric potential $$\label{eq:dwpot} V_{\text{DW}}(x)=\frac{1}{2}m\omega_H^2 x^2+V_1\, e^{-\frac{x^2}{2 w^2}},$$ where $m$ is the mass of the atoms, $\omega_H$ is the frequency of the parabolic confinement, $V_1$ is the height and $w$ is the width of the double well barrier. We consider tight confinement in the perpendicular directions and treat the system as one-dimensional. In addition to the double well potential there is a tightly focused laser beam in the center which is red detuned from the atomic transition creating a further attractive potential for the atoms, $$\label{eq:centerwell} V_{L}(x)=-I_0\,e^{-\frac{x^2}{2 \sigma^2}}$$ where $I_0$ is the strength and $\sigma\ll w$ is the width of the optical potential. The full single particle Hamiltonian is $$\label{eq:ham} \hat H=-\frac{\hbar^2}{2m}\frac{d^2}{dx^2} + V_{\text{DW}}(x)+V_L(x).$$ The perturbing potential $V_L(x)$ opens up a narrow dip in the center of the double well barrier, as illustrated in Fig. \[fig:lowest\_states\]. When varying the strength $I_0$, one can interpolate between a symmetric double well potential and a triple well one. For $I_0=0$, with our choice of parameters (for ${}^{87}\mathrm{Rb}$), which is close to experimental applications ($m=87\,\mathrm{amu}$, $\omega_H=2\pi\times15\,\mathrm{Hz}$, $w=5\,\mathrm{\mu m}$, $V_1=5 m \omega_H^2 w^2$, and $\sigma=0.5\,\mathrm{\mu m}$) the lowest two energy eigenvalues are almost degenerate and they form the low energy doublet of the double well problem. The corresponding wave functions are the symmetric and antisymmetric combinations of the Wannier orbits, which themselves are localized states around the left and right energy minima of the potential. Other energy eigenvalues are much higher and one can rely on a two-mode approximation when treating the problem. ![(Color online) The three lowest energy eigenvalues plotted as a function of $I_0$. One can observe an avoided crossing. The second energy level is unaffected by the perturbing potential, while the lowest energy and the third energy eigenvalue tilt down with increasing $I_0$.[]{data-label="fig:energy_crossing"}](energy_crossing){width="\columnwidth"} When $I_0$ is increased gradually, as shown in the subsequent plots in Fig \[fig:lowest\_states\], a central well starts to form in the middle of the potential barrier. 
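A minimal numerical illustration of this single-particle problem: diagonalize a finite-difference version of the Hamiltonian of eq. (\[eq:ham\]) and follow the three lowest levels as the dip is deepened. Only the physical parameters quoted above are taken from the text; the grid, box size, and the dip-depth scan (expressed here in units of $\hbar\omega_H$ for convenience) are my own choices.

```python
import numpy as np

# Parameters quoted in the text (Rb-87); the grid choices below are mine.
amu   = 1.66053906660e-27
hbar  = 1.054571817e-34
m     = 87 * amu
omega = 2 * np.pi * 15.0          # omega_H
w     = 5e-6                      # barrier width
V1    = 5 * m * omega**2 * w**2   # barrier height
sigma = 0.5e-6                    # width of the laser dip

def lowest_levels(I0, n_levels=3, L=5 * w, npts=1500):
    """Lowest eigenvalues (in units of hbar*omega_H) of the 1D Hamiltonian
    H = p^2/2m + V_DW(x) + V_L(x) on a finite-difference grid."""
    x  = np.linspace(-L, L, npts)
    dx = x[1] - x[0]
    V  = (0.5 * m * omega**2 * x**2
          + V1 * np.exp(-x**2 / (2 * w**2))
          - I0 * np.exp(-x**2 / (2 * sigma**2)))
    t  = hbar**2 / (2 * m * dx**2)
    H  = np.diag(V + 2 * t) - t * (np.eye(npts, k=1) + np.eye(npts, k=-1))
    E, psi = np.linalg.eigh(H)
    return E[:n_levels] / (hbar * omega), x, psi[:, :n_levels] / np.sqrt(dx)

for I0_units in [0.0, 2.0, 5.0, 8.0]:          # dip depth in units of hbar*omega_H (illustrative)
    E, _, _ = lowest_levels(I0_units * hbar * omega)
    print(f"I0 = {I0_units:>4.1f} hbar*omega_H :  E/(hbar*omega_H) =", np.round(E, 3))
```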
For small values of $I_0$ the central well doesn’t support a localized state and its effect is just a small perturbation of the energy eigenvalues and an even smaller one on the wave functions. The three lowest energy eigenvalues are plotted in Fig. \[fig:energy\_crossing\]. One eigenvalue of the doublet is basically unchanged by the perturbation, namely the one which corresponds to the antisymmetric wave function, which has a node at the position of the perturbation. The other eigenvalue is shifted a little bit downwards. As $I_0$ increases, the central well deepens, and the third energy eigenvalue approaches the low energy doublet. As this third energy eigenvalue comes closer and closer, the two-mode description becomes more and more inaccurate. One can observe an avoided crossing in the three lowest energy eigenvalues. For small values of $I_0$ the lowest two eigenvalues form the doublet of the symmetric and antisymmetric combinations of the Wannier orbits. On the other side of the crossing, i.e. for large values of $I_0$, the single lowest energy eigenvalue correspond to the state localized in the central well, while the next two eigenvalues form now the doublet of the antisymmetric and symmetric combinations of the Wannier orbits localized at the left and right valleys. Bosonic Josephson Junction {#sec:bjj} ========================== When the splitting of the low energy doublet is much smaller than the energy difference between the doublet and the closest other energy eigenvalue, the two-mode approximation gives a sufficiently accurate description of the tunneling dynamics between the left and right wells. In this limit the other states are non-resonant and energy conservation decouples them from the tunneling dynamics. With the present parameters it means approximately either $I_0<I_{c,1}\approx6$, or $I_0>I_{c,2}\approx7$. When $I_0<I_{c,1}$ the Wannier functions are given by: $w_1(x)=(v_1(x)+v_2(x))/\sqrt{2}$, $w_2(x)=(v_1(x)-v_2(x))/\sqrt{2}$, for the left and right wells, respectively. For $I_0>I_{c,2}$ the first and second excited states give the Wannier functions, and they read as: $w_1(x)=(v_2(x)+v_3(x))/\sqrt{2}$, and $w_2(x)=(v_2(x)-v_3(x))/\sqrt{2}$, for the left and right wells, respectively. In second quantized form the non-interacting Hamiltonian can be cast to the following form: $$\label{eq:hamfree} \hat H_0=\epsilon\left(\hat b_1^\dagger \hat b_1+\hat b_2^\dagger \hat b_2\right)-J\left(\hat b_1^\dagger \hat b_2+\hat b_2^\dagger \hat b_1\right),$$ where the parameters are given by $\epsilon=\left< w_1\right|\hat H\left| w_1\right>$, and $J=-\left< w_1\right|\hat H\left| w_2\right>$. In the two-mode approximation the total atom number $\hat N=\hat b_1^\dagger \hat b_1+\hat b_2^\dagger \hat b_2$ is a constant of motion, therefore the first term in Eq. can be dropped. The parameter $J$ shows a “resonance" like behavior as a function of $I_0$, as illustrated in Fig. \[fig:Jres\]. We note that the central part of the figure, where the crossing of the energy levels takes place, is not reliable, since the two-mode approximation breaks down. Nevertheless, the tunneling amplitude changes sign at the crossing and the lower energy orbital of the doublet changes from ungerade to gerade symmetry. 
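The sign change of $J$ can be extracted directly from such a diagonalization. The sketch below reuses `lowest_levels`, `hbar`, and `omega` from the previous snippet; with the convention of eq. (\[eq:hamfree\]) the symmetric doublet member has energy $\epsilon - J$ and the antisymmetric one $\epsilon + J$, so $J = (E_{\mathrm{antisym}} - E_{\mathrm{sym}})/2$. Which two of the three lowest levels form the doublet is decided here by a simple closest-pair heuristic, which is unreliable right at the crossing, where the two-mode picture breaks down anyway.

```python
import numpy as np

def tunneling_J(I0):
    """Tunneling amplitude J (units of hbar*omega_H) from the doublet splitting;
    uses lowest_levels, hbar, omega from the previous sketch."""
    E, x, psi = lowest_levels(I0)
    # pick the two levels forming the left/right doublet (heuristic: the
    # closer-spaced pair among the three lowest levels)
    i, j = (0, 1) if (E[1] - E[0]) < (E[2] - E[1]) else (1, 2)
    parity = lambda k: np.sign(np.sum(psi[:, k] * psi[::-1, k]))   # +1 symmetric, -1 antisymmetric
    E_sym  = E[i] if parity(i) > 0 else E[j]
    E_anti = E[j] if parity(i) > 0 else E[i]
    return 0.5 * (E_anti - E_sym)

for I0_units in [0.0, 3.0, 5.0, 8.0, 10.0]:     # illustrative scan, units of hbar*omega_H
    J = tunneling_J(I0_units * hbar * omega)
    print(f"I0 = {I0_units:>5.1f} hbar*omega_H :  J = {J:+.4e} hbar*omega_H")
```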
![(Color online) The tunneling ratio as a function of the depth of the central well, $I_0$.[]{data-label="fig:Jres"}](Jres){width="\columnwidth"} ![image](grid_figure){width="\textwidth"} In the presence of interaction the Hamiltonian is modified to $\hat H=\hat H_0 + \hat H_I$, with $$\hat H_I=\frac{U}{2}\left(\hat b_1^\dagger\hat b_1^\dagger\hat b_1\hat b_1+\hat b_2^\dagger\hat b_2^\dagger\hat b_2\hat b_2\right),$$ where $U$ characterizes the on-site interaction. At sufficiently low temperatures the bosons form a Bose-Einstein condensate, and in the semi-classical approximation the atomic operators are replaced with c-numbers: $b_k=\sqrt{N_k}(t)e^{i\theta_k(t)}$, where $N_k(t)$ is the atom number in well $k$ at time $t$, and $\theta_k(t)$ is the corresponding phase. The total atom number $N_1(t)+N_2(t)\equiv N$ is constant. It is convenient to introduce the fractional population difference of the two wells, $z(t)=[N_1(t)-N_2(t)]/N$, and the relative phase $\theta(t)=\theta_2(t)-\theta_1(t)$. Using this substitution in the Hamiltonian $\hat H$ one can arrive to the semi-classical energy function [@smerzi1997quantum] $$\label{eq:semen} \mathcal{H}(z,\theta)=-2JN\sqrt{1-z^2}\cos(\theta)+\frac{U}{2}N^2z^2,$$ from which the semi-classical equations, known as the bosonic Josephson junction equations can be derived as \[eqs:BJJ\] $$\begin{aligned} \dot z&=-\frac{1}{N}\frac{\partial \mathcal{H}}{\partial \theta}=-2J\sqrt{1-z^2}\sin(\theta),\\ \dot\theta&=\frac{1}{N}\frac{\partial \mathcal{H}}{\partial z}=\left(U\,N+\frac{2J}{\sqrt{1-z^2}}\cos(\theta)\right)z.\end{aligned}$$ Here and from now on we work with $\hbar=1$. Equations have 4 stationary solutions $\dot{\bar{z}}=0$ and $\dot{\bar{\theta}}=0$: It has two zero imbalance solutions with $\mathbf{X}_1=(\bar z=0, \bar \theta=0)$, and $\mathbf{X}_2=(\bar z=0, \bar \theta=\pi)$. Furthermore there are two finite imbalance solutions: $\mathbf{X}_3=(\bar z=\sqrt{1-(2J/U N)^2}, \bar \theta=0)$, and $\mathbf{X}_4=(\bar z=\sqrt{1-(2J/U N)^2}, \bar \theta=\pi)$. By substituting the stationary solutions to the semi-classical energy function , one can immediately see, that for $U>0$ the zero imbalance solutions always have the lowest energy. Also depending on the sign of $J$ the minimal energy solution is either with $\bar\theta=0$ for $J>0$, and $\bar\theta=\pi$ for $J<0$. For attractive interaction $U<0$ the finite imbalance solutions are energetically more favorable for $(UN)^2>4J^2$, and the tunneling dynamics exhibits self-trapping [@raghavan1999coherent]. Thus, points of $(\bar{z},0)$ with $\bar{z} \neq 0$ are stable fixed points for the ODEs only in the presence of attractive on-site interactions $U$ provided that $U<-2|J|/N$. Under inital conditions $(z(0),0)$ - with $z(0)<(2/\Gamma) (\Gamma-1)^{0.5}$ ($\Gamma=|UN/2J|$) - the solutions of these ODEs describes oscillations of the fractional imbalance and relative phase about a nonzero time averaged value and zero, respectively. By suitably tuning $I_0$, one can change the sign of $J$ by moving from the left side of the resonance to the right side of it (see Fig. \[fig:Jres\]). All the above condition thus can be satisfied and one can quench between self-trapping and Josephson dynamics (and vice versa), even with repulsive boson-boson interaction. In Fig. \[fig:quench\] we illustrate the quench dynamics for a repulsive Bose condensate prepared initially for $(z(0)=0.5, \theta(0)=0)$. At $t=0$ the the dip potential is turned off and we have a symmetric double well potential with $J>0$. 
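The quench protocol just described can be reproduced by integrating eqs. (\[eqs:BJJ\]) directly. Below is a sketch with time in units of $|J|^{-1}$, $\hbar=1$, and $UN=3.5\,J$ as in the figure; the integrator settings are my own choice.

```python
import numpy as np
from scipy.integrate import solve_ivp

UN = 3.5                      # U*N in units of |J|

def J_of_t(t):
    """Quench protocol: J -> -J at t = 10/|J| when the dip is switched on."""
    return 1.0 if t < 10.0 else -1.0

def bjj(t, y):
    z, th = y
    J = J_of_t(t)
    dz  = -2.0 * J * np.sqrt(1.0 - z**2) * np.sin(th)
    dth = (UN + 2.0 * J * np.cos(th) / np.sqrt(1.0 - z**2)) * z
    return [dz, dth]

sol = solve_ivp(bjj, (0.0, 25.0), [0.5, 0.0], max_step=0.01, dense_output=True)
t = np.linspace(0, 25, 1001)
z = sol.sol(t)[0]
print("mean z before quench:", z[t < 10].mean())   # expected to oscillate around zero (Josephson)
print("mean z after  quench:", z[t >= 10].mean())  # expected to stay on one side (self-trapping)
```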
The system starts Josephson (plasma) oscillations. In panel (a) we show the phase space trajectories and fixed points of the semi-classical Hamiltonian for $U N=3.5 J$. The shading corresponds to the energy, where the central (orange) region is the energy minimum and the outer (green) regions correspond to higher energies. The thick line shows the trajectory of the initial Josephson oscillation. On panel (b) we show the population imbalance as a function of time, measured in units of $|J|^{-1}$. The system parameters are left unchanged for $t=10 J^{-1}$. At $t=10 J^{-1}$ we switch on abruptly a dip potential with $I_0$ such to go to the other side of the resonance with $J\rightarrow -J$. Now the phase space diagram is depicted in panel (c). As we see, due to the change of the sign of $J$, the energy landscape changes by $\theta\rightarrow \theta+\pi$, and the finite imbalance (unstable) fixed points corresponding to the energy maxima are moved to the center. The Bose condensate continues its dynamics in the modified landscape, around the $\mathbf{X}_3$ fixed point, which is selected by its instantaneous state $(z(10|J|^{-1}), \theta(10|J|^{-1}))$. This self-trapping dynamics is shown also in panel (b) for $t>10 |J|^{-1}$. Another indicator of the change of the type of the dynamics is the change in the oscillation frequency, which (at least for small oscillations around the fixed points) can be calculated by the linear stability analysis of the fixed points, as summarized in Appendix \[sec:stabanal\]. In Fig. \[fig:freq\] we plot the oscillation frequency as a function of $I_0$. As we increase $I_0$ at the left hand side of the resonance, the Josephson oscillation frequency $\omega_J$ starts to grow first, since $J$ increases, and then at the right hand side where $J<0$ it decreases again, since $|J|$ decreases. Then at some point, when $U$ becomes bigger than $2|J|$ the fixed point for the Josephson oscillation becomes unstable and instead the self-trapping frequency $\omega_{\text{ST}}$ appears. ![(Color online) The oscillation frequencies as function of the strength of the potential dip. The given fixed point becomes unstable, when the curve goes below zero.[]{data-label="fig:freq"}](lambdas){width="\columnwidth"} Summary {#sec:sum} ======= In this paper we have considered the effect of an additional central well added to the center of the symmetric double well barrier. We have shown that by suddenly opening up this narrow central well the tunneling amplitude of the bosonic Josephson junction can be “quenched” to almost arbitrary values. Therefore in experiments one can have an additional tunable parameter on the double well system and change the dynamics in-situ from plasma oscillations to the a.c Josephson dynamics or even to self-trapping without modifying the scattering properties. Acknowledgements {#acknowledgements .unnumbered} ================ GSZ acknowledges support from the Hungarian National Office for Research and Technology under the contract ERC\_HU\_09 OPTOMECH, the Hungarian Academy of Sciences (Lendület Program, LP2011-016), the Hungarian Scientific Research Fund (grant no. PD104652) and the János Bolyai Scholarship. GM and LS acknowledge financial support from Università di Padova (Progetto di Ateneo grant No. CPDA 118083), Cariparo Foundation (Eccellenza grant 2011/2012), and MIUR (PRIN grant No. 2010LLKJBX). 
Fixed point stability {#sec:stabanal} ===================== In order to check the stability of the solutions, we look for small perturbations around the stationary points and calculate the linear stability matrix of Eqs. $$\label{eq:stabmat} \left(\begin{array}{c} \delta \dot z\\ \delta \dot \theta \end{array}\right)=\left( \begin{array}{c c} \frac{2J\bar z\sin(\bar\theta)}{\sqrt{1-\bar z^2}}&-2J\sqrt{1-\bar z^2}\cos(\bar\theta)\\ UN+\frac{2J\cos(\bar\theta)}{(1-\bar z^2)^{3/2}}&-\frac{2J\bar z\sin(\bar\theta)}{\sqrt{1-\bar z^2}} \end{array}\right) \left(\begin{array}{c} \delta z\\ \delta \theta \end{array}\right).$$ The linear stability matrix has the following eigenvalues: $$\label{eq:evals} \lambda=\pm i\sqrt{4 J^2\left[\frac{\cos(2\bar\theta)}{1-\bar z^2}+\frac{UN}{2J}\sqrt{1-\bar z^2}\cos(\bar\theta)\right]}.$$ For purely imaginary eigenvalues, the stationary solution is marginally stable: small perturbations around the solution result in periodic oscillations. The frequency of the oscillation is, $\omega=\mathrm{Im}\,\lambda$. On the other hand, when the quantity under the square root becomes negative, the eigenvalues become a pair of real numbers with equal magnitude and opposite sign, and the perturbations can exponentially grow in time. By directly substituting the stationary solutions to the eigenvalues we get \[eqs:evalevals\] $$\begin{aligned} \lambda|_{\mathbf{X}_1}&=\pm i \sqrt{4J^2\left(1+\frac{U N}{2 J}\right)},&\text{stable if: } &\frac{U N}{2 J}>-1,\\ \lambda|_{\mathbf{X}_2}&=\pm i \sqrt{4J^2\left(1-\frac{U N}{2 J}\right)},&\text{stable if: } &\frac{U N}{2 J}<1,\\ \lambda|_{\mathbf{X}_3}&=\pm i \sqrt{(U N)^2-4J^2},&\text{stable if: } &(U N)^2>4J^2,\\ \lambda|_{\mathbf{X}_4}&=\pm i \sqrt{(U N)^2-4J^2},&\text{stable if: } &(U N)^2>4J^2.\end{aligned}$$ For the zero imbalance solutions, $\mathbf{X}_1$ and $\mathbf{X}_2$, the frequency $\omega$ is the Josephson frequency $\omega_\text{J}$. During the dynamics around $\bar{z} = 0$ there is population inversion, i.e. $z(t)$ changes sign. Instead, for $\mathbf{X}_3$ and $\mathbf{X}_4$, when $\bar{z} \neq 0$, this frequency is the self-trapping frequency $\omega_{\text{ST}}$; during the dynamics there is no population inversion, i.e. $z(t)$ does not change sign.
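A short sketch evaluating these eigenvalues at the four fixed points, returning the oscillation frequency $\omega = \mathrm{Im}\,\lambda$ where the argument of the square root is positive and flagging the fixed point as unstable otherwise; the parameter values are illustrative.

```python
import numpy as np

def frequencies(U_N, J):
    """Arguments of the square root in eqs. (evalevals) at the four fixed points;
    a negative value means the fixed point is unstable (lambda real),
    otherwise omega = sqrt(value)."""
    vals = {
        "X1 (z=0, th=0)":  4 * J**2 * (1 + U_N / (2 * J)),
        "X2 (z=0, th=pi)": 4 * J**2 * (1 - U_N / (2 * J)),
        "X3/X4 (z!=0)":    U_N**2 - 4 * J**2,
    }
    return {k: (np.sqrt(v) if v >= 0 else "unstable") for k, v in vals.items()}

for J in (+1.0, -1.0):                       # the two sides of the J(I0) resonance
    print(f"J = {J:+.0f}, U*N = 3.5 :", frequencies(3.5, J))
```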
{ "pile_set_name": "ArXiv" }
--- abstract: 'We offer a new theoretical interpretation for the effect of enhanced electron density at $^7$Be nucleus encapsulated in fullerene C$_{60}$. Our [*[ab initio]{}*]{} Hartree-Fock calculations show that electron density at the $^7$Be nucleus in $^7$Be@C$_{60}$ increase due to [*attractive*]{} effective potential well generated by the fullerene. The 2$s$ state in the isolated Be atom turns into 3$s$ state in the joint potential. This new state has higher energy, and slightly larger amplitude at the Be nucleus than the previous 2$s$ state. Moreover the 3$s$ wave function has additional [*node*]{} appeared at the distance $r \simeq 5a_B$ from the center. The node imitates repulsion between the Be electron and the fullerene wall, because the electron has zero probability to occupy this region. Such imitation of the repulsion by means of the node in attractive potential has direct physical analogy in the theory of $\alpha$-$\alpha$ and $N$-$N$ nuclear interactions.' author: - 'E.V. Tkalya' - 'A.V. Bibikov' - 'I.V. Bodrenko' title: | Electron Capture $\beta$-Decay of $^7$Be Encapsulated in C$_{60}$:\ Origin of Increased Electron Density at the $^7$Be Nucleus. --- [^1] Introduction ============ In the $^7$Be $\beta$-decay via electron capture (EC) process, the nucleus absorbs an electron from the atomic or molecular shell and is transformed to $^7$Li in the reaction $p+e^-\rightarrow{}n+\nu_e$. The decay rate is proportional to the electron density at the nucleus and therefore depends on the chemical environment of the radioactive isotope. $^7$Be has been used in investigations of the electron capture decay rate in various chemical states since the first studies by Segre [@Segre-47] and Daudel [@Daudel-47]. By now, experimentalists have published the results of more than sixty measurements relating to $K$- and $L$-shell electron capture by $^7$Be in different chemical forms and media. In 2005-2007, two teams studied the EC decay of $^7$Be inside the fullerene C$_{60}$ [@Ohtsuki-04; @Ray-06; @Ohtsuki-07]. It was found, that the half-life of $^7$Be in metallic beryllium measured at room temperature exceeds the one in $^7$Be@C$_{60}$ at room temperature by 0.83% [@Ohtsuki-04], and by 1.5% [@Ohtsuki-07] if the latter is measured at 5$^{\circ}K$. This difference between the $^7$Be EC $\beta$-decay constants is the largest among available experiments. A density functional theory (DFT) based numerical calculations of the electron density at the Be nucleus have been presented in [@Ohtsuki-07; @Morisato-08] along with the experimental data. It was found that, in accordance with the experiment, the electron density at Be encapsulated in the center of C$_{60}$ is larger than the one at the nuclei in metallic beryllium. Qualitatively, in the metal, the electrons are shared, decreasing their local density at the nuclei, while the beryllium atom in C$_{60}$ remains intact. A more careful analysis, however, has shown that the electron density at $^7$Be@C$_{60}$ is larger by 0.17% even in comparison with that in an isolated beryllium atom. The authors explain this result as a “compression” of the Be’s 2$s$ orbital inside C$_{60}$. The reason of the compression may be the “repulsive interaction” between Be and the C$_{60}$ cage according to [@Lu-02] . 
Examination within the framework of Hartree-Fock based methods ============================================================== Detailed understanding of the structure of atoms encapsulated in fullerens is important, in particular, for developing the concept of the fullerene as an isolating cage which “does not affect” the trapped single atom and “protects” it from the outer environment. In the present work, we study the Be-C$_{60}$ interaction and its effect on the electron density at the Be nucleus within the framework of Hartree-Fock (HF) based methods. Although our value of the relative decay rate difference between metallic Be and Be@C$_{60}$ is in qualitative agreement with the experimental one as well as with the results of the previous theoretical studies, we suggest a new theoretical interpretation of the physical nature of the enhanced electron density at Be in C$_{60}$. We started with the structural optimization of Be position inside C$_{60}$. The fullerene’s geometry was taken from experiment [@Johnson-91] (the length of the long and the short bonds are 1.448 [Å]{} and 1.404 [Å]{}, respectively) and fixed during the optimization, as the endohedral doping has, as expected, a small effect [@Lu-02]. The total energy of Be@C$_{60}$ complex at every trial configuration was calculated by the restricted (singlet spin state) Hartree-Fock method, with the 6-31G$^{**}$ molecular basis set [@Francl-82; @BS] in a cartesian form. Besides, the electronic correlations were taken into account within the second order perturbation method (MP2). For the calculations, we used our original program which employs the resolution of the identity (RI) method for the electron-electron interaction integrals and allows to perform the Hartree-Fock based calculations for large systems with moderate computational resources (see [@Artemyev-05; @Nikolaev-08] for details). For both the Hatrtee-Fock and the MP2 variants, the optimization results in the position of the Be atom at the center of the fullerene are in full agreement with the previous DFT based studies [@Ohtsuki-07; @Lu-02]. In order to evaluate the interaction energy defined as $\Delta E = E_{\textrm{Be}@\textrm{C}_{60}}-(E_{\textrm{Be}} + E_{\textrm{C}_{60}})$, we have performed additional calculations of the total energies of Be atom, $E_{\textrm{Be}}$, and the fullerene, $E_{\textrm{C}_{60}}$. Besides, in a separate calculation, the basis set superposition error was taken into account by the counterpoise (CP) method [@Boys-70]. The results are summarized in Table \[tab:DE\]. [c|c|c]{} & HF & HF+MP2\ CP corrected & 0.91 & -0.41\ uncorrected & 1.00 & -0.63\ We therefore conclude that Be atom’s equilibrium position at the center of the fullerene belongs to the attractive region of the Van-der-Waals interaction, i.e. the Be-C$_{60}$ interaction is attractive and the Be@C$_{60}$ complex is stable in the ground state with respect to decay to Be and C$_{60}$. This result is in contrast with the one of Lu et.al. [@Lu-02] who concluded a “slightly repulsive” Be-C$_{60}$ interaction from a DFT calculations, and obtained the value of +1.05 eV for the interaction energy. We apply their speculations to our Hartree-Fock results, which also give the repulsive interaction as the pure HF method may not account of the dispersion energy. The consideration of Lu et.al. is based on the modifications in the energy levels of the HOMO/LUMO (highest occupied / lowest unoccupied molecular orbitals) of the Be atom and C$_{60}$ upon formation of the Be@C$_{60}$ complex. 
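Before turning to that comparison, a brief aside on the counterpoise correction used for Table \[tab:DE\]: the bookkeeping follows the standard Boys-Bernardi recipe of [@Boys-70], spelled out below with placeholder total energies that are not the HF/MP2 values of this work.

```python
# Counterpoise (Boys-Bernardi) bookkeeping for Delta E = E(Be@C60) - E(Be) - E(C60).
# All total energies below are placeholders (hartree), not the actual HF/MP2 values.
E_complex      = -2286.000   # Be@C60 in the full (complex) basis
E_Be_fullbas   = -14.573     # Be alone, computed in the full basis (ghost C60 functions)
E_C60_fullbas  = -2271.461   # C60 alone, computed in the full basis (ghost Be functions)
E_Be_monobas   = -14.572     # Be in its own basis
E_C60_monobas  = -2271.460   # C60 in its own basis

hartree_to_eV = 27.2114
dE_uncorrected = (E_complex - E_Be_monobas - E_C60_monobas) * hartree_to_eV
dE_cp          = (E_complex - E_Be_fullbas - E_C60_fullbas) * hartree_to_eV
print(f"Delta E (uncorrected)  = {dE_uncorrected:+.2f} eV")
print(f"Delta E (CP corrected) = {dE_cp:+.2f} eV")
```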
Our results, presented in Figure \[fig1\], differ qualitatively from those of [@Lu-02], calculated with the B3LYP density functional with 6-31G$^{**}$ molecular basis set. In [@Lu-02], Be’s atomic 2$s$ level lies in the middle of C$_{60}$’s LUMO-HOMO gap. Moreover, C$_{60}$’s LUMO-HOMO gap increases in [@Lu-02] upon endohedral doping with Be, because the HOMO is slightly lowered by 0.03 eV, while the LUMO is elevated by 0.04 eV. These results were interpreted by Lu et al. as the evidence of a slight repulsion between the Be atom and the fullerene cage in the Be@C$_{60}$ complex. Our calculations show that the Be’s 2$s$ orbital in the Be@C$_{60}$ complex lies 0.14 eV lower than the C$_{60}$’s HOMO (see in Figure \[fig1\]). (The Be’s 2$s$ orbital in isolated Be atom lies 0.69 eV lower than the C$_{60}$’s HOMO.) Furthermore, C$_{60}$’s LUMO-HOMO gap does not change upon the endohedral doping, and both orbitals are just slightly lowered by 0.01 eV. Thus, there is little evidence of the “repulsive interaction”, according to the definition of Lu at.al., between Be and C$_{60}$ within the Hartree-Fock calculations, even if the calculated interaction energy is positive. The Mulliken population analysis of our HF calculations with the 6-31G$^{**}$ molecular basis set gives the charge of +0.03 at Be compared with -0.14 in the DFT calculation of Lu et.al with the same basis set. This makes the concept of a slight hybridization of between Be and the fullerene orbitals unacceptable within the Hatree-Fock method. However, strictly speaking, the Mulliken charges are not suitable basis set independent parameters for analysing the non-chemical interactions. Then, we consider the local electron density at the beryllium nucleus. The electron density for a polyatomic system is given by the standard relation, $\rho({\bf{r}})=\Sigma_n \kappa_n|\varphi_n({\bf{r}})|^2$, where $\varphi_n({\bf{r}})$ are normalized molecular orbitals, $\kappa_n$ – corresponding occupation numbers. In our case, the orbitals are the Hartree-Fock orbitals calculated with Danning’s cc-pVTZ molecular basis set ([@Dunning-89]) used in a Cartesian form. We employed two methods for the evaluation of the electron density at the nucleus which is formally located at the coordinate origin, $r=0$. At first, we added narrow Gaussian s-functions (with exponents $\alpha$ between $10^3$ and $10^8$ in inverse squared Bohr radius, $a_B^{-2}$) to the standard basis set. Then, we extrapolated the local electron density near the nucleus to the $r=0$ point by using the known (the consequence of the Kato theorem [@Kato-57]) non-relativistic asymptotic of the electron density for many-electron system at the Coulomb center with charge $Z$, $(d\rho(r)/dr)_{r=0}=-2Z\rho(0)$. The coincidence of the both values of electron density at the Be nucleus with up to 0.02 a.u. (the atomic unit of density is equal to one electron in cubic Bohr radius, $a_B^{-3}$), i.e. about 0.05%, reflects the level of numerical error for our method. The calculated electron densities at single Be atom, metallic Be and Be@C$_{60}$ are summarized in Table \[tab2\]. (Details of the procedures for calculations of electron density at metallic Be will be presented elsewhere.) [lccccc]{} &\ & 1-st & 2-nd & Others & Total \ Be@C$_{60}$ & 34.22 & 1.24 & 0.02 & 35.48\ Be atom & 34.25 & 1.13 & - & 35.38\ Be metal & 34.11 & 0.32 & 0.33 & 34.78\ Our results show that $$\frac{\rho(0)_{\textrm{Be}@\textrm{C}_{60}}-\rho(0)_{\textrm{Be metal}}}{\rho(0)_{\textrm{Be metal}}}100\% \simeq 2.0\% ,$$ i.e. 
a 2% decrease of the electron density at the nucleus from Be@C$_{60}$ to metallic Be what is in qualitative agreement with the experimentally determined change of the decay rate at 5$^{\circ}K$. Though, the absolute value is somewhat larger then the measured 1.5% as well as the value of 1.7% obtained by the DFT calculations [@Ohtsuki-07; @Morisato-08]. On the other hand our value is in excellent agreement with experimental data of Kraushaar et al. [@Kraushaar-53] of direct measurement of the $^7$Be half-life in the metal source $T_{1/2}^{^7\textrm{Be metal}} = 53.61\pm 0.17$ days and the half-life of $^7$Be in $^7$Be@C$_{60}$ $T_{1/2}^{^7\textrm{Be}@\textrm{C}_{60}} = 52.47\pm 0.04$ days from the work [@Ohtsuki-07]: $$\frac{T_{1/2}^{^7\textrm{Be metal}} - T_{1/2}^{^7\textrm{Be}@\textrm{C}_{60}}}{T_{1/2}^{^7\textrm{Be metal}}}100\% \simeq 2.1\% .$$ In the present paper we do not discuss these minor quantitative discrepancies neither between available experimental data, nor between various theoretical results. Experimental data require the further specification. As to numerical results it is necessary to take into account that both methods of calculation contain approximations. The Hartree-Fock method correctly takes account of the exchange, but lacks for the electronic correlations. The model density functionals, on the other hand, make approximations for both the exchange and the correlations. To make reasoning about the accuracy of the methods, especially in the case of the non-chemically, weakly bounded molecules, one would refer to a more robust theories like the coupled clusters or the configuration interaction methods. Instead, we are focussing on a qualitative phenomenon — the electron density difference between Be@C$_{60}$ and isolated Be atom. The corresponding relative decrease of the electron density in our calculations is $0.28$%, in qualitative accord with the value of 0.17% in [@Ohtsuki-07; @Morisato-08]. The reason of the enhanced electron density in Be@C$_{60}$ might be a slight hybridization of Be’s and C$_{60}$’s orbitals, as suggested in [@Lu-02], so that the beryllium grabs the electron density from the fullerene. In that case, some of fullerene’s orbitals would contribute to the electron density at Be. However, only two orbitals (those originated from atomic 1$s$ and 2$s$) make apparent contribution to the electron density at the Be nucleus. Thus, the hybridization concept has no support from the Hartree-Fock results. The structure of the Be is therefore changed due to the potential effect of the fullerene. In particular, fullerene’s electrostatic field might prevent beryllium’s electrons to spread out the cage acting as a strong repulsive potential wall. In that case, a more compact 1$s$ orbital would not be affected. However, the 2$s$ one would rapidly vanish after certain distance from the center and would be compressed (due to the normalization) in the internal region resulting in the enhanced density at the nucleus. This concept, which also was expressed in [@Ohtsuki-07], is in full accord with the data from Table \[tab2\] as well as with the increased energy of the 2$s$ orbital of Be in C$_{60}$. Nevertheless, a more detailed analysis of the spherically averaged electron density curves for the 1$s$ and 2$s$ orbitals of atomic Be and Be in C$_{60}$ presented on Figure \[fig2\] rules out the concept of [*repulsive*]{} potential wall. 
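As an aside, the cusp-condition extrapolation used above to obtain $\rho(0)$ can be illustrated on synthetic data: near a nucleus of charge $Z$ the spherically averaged density behaves, to leading order consistent with the Kato condition, as $\rho(r) \simeq \rho(0)e^{-2Zr}$, so a fit with the slope fixed by the cusp condition recovers $\rho(0)$ from values at small $r$. The radii and noise level below are arbitrary.

```python
import numpy as np

np.random.seed(0)
Z = 4.0                                   # beryllium nuclear charge
rho0_true = 35.38                         # a.u., value we pretend not to know

# Synthetic "measured" spherically averaged density at a few small radii,
# built to follow the cusp behaviour rho(r) ~ rho(0) * exp(-2 Z r).
r = np.array([0.002, 0.004, 0.006, 0.008, 0.010])      # Bohr radii
rho = rho0_true * np.exp(-2 * Z * r) * (1 + 1e-4 * np.random.randn(r.size))

# Kato cusp condition (d rho / d r)|_0 = -2 Z rho(0):
# fit log(rho) = log(rho0) - 2 Z r with the slope fixed by the cusp condition.
rho0_est = np.exp(np.mean(np.log(rho) + 2 * Z * r))
print(f"extrapolated rho(0) = {rho0_est:.2f} a.u.  (input value {rho0_true})")
```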
Indeed, Be’s 2$s$ orbital inside the fullerene, though tends to zero at $r \simeq 5a_B$, then rapidly increasing and becomes even larger than the one of isolated Be atom. Below, we suggest a different solution to this intriguing issue. ![Color online. Electron densities of 1$s$ and 2$s$ states for isolated Be atom, and the first and the second orbitals of the Be atom in the Be@C$_{60}$ complex: 1 (5) — 2$s$ (1$s$) Be, Hartree-Fock; 2 (6) — 2$s$ (1$s$) Be@C$_{60}$, Hartree-Fock; 3 (7) — 2$s$ (1$s$) Be@C$_{60}$, model potential in Figure \[fig3\]; 4 (8) — 2$s$ (1$s$) Be@C$_{60}$, model potential with model potential well in Figure \[fig3\].[]{data-label="fig2"}](Fig2.eps){width="10cm"} The electron density of the 2$s$ orbital vanishes at $r \simeq 5a_B$ since the corresponding wave function crosses zero and changes its sign. Zero probability for the electrons to occupy the vicinity of $r \simeq 5a_B$ imitates the repulsive core for the 2$s$ electrons of Be in C$_{60}$. However, the origin of the additional node in the wave function is the [*attractive*]{} potential well at $5a_B \lesssim{} r \lesssim 8a_B$. Spherically averaged electrostatic potential extracted from the Hartree-Fock electron density of C$_{60}$ (dashed-line curve in Figure\[fig3\]) is attractive. To illustrate the phenomenon, we designed a model spherical potential for the 2$s$ orbital of Be (see in Figure \[fig3\]). It consists of the screened Coulomb potential as well as the spherical attractive potential well centered at $r \simeq 6.7a_B$. The screening constant was chosen to reproduce qualitatively the 2$s$ orbital for the isolated atom. It turns out, that the addition of the attractive potential into the model potential results in the appearance of the second node for 2$s$ orbital, in the increase of its energy ( in full agreement with the result of HF calculation, see in Figure\[fig1\]) and also in the increase of the electron density at $r = 0$. ![Color online. Potentials for the Be 2$s$ orbital in the Be@C$_{60}$ complex: 1 — Coulomb potential; 2 — model potential (screened Coulomb potential); 3 — potential well extracted from Hartree-Fock electron density; 4 — model potential “2” with model potential well.[]{data-label="fig3"}](Fig3.eps){width="8cm"} Analytically solvable model =========================== The reason of this phenomenon becomes clear from the following analytically solvable model. Let us compare the 2$s$ and 3$s$ electron wave functions and the energy levels in the Coulomb potential (Figure \[fig4\] (a) ) and in the new potential shown in Figure\[fig4\] (b). (This is the same Coulomb potential combined with the spherical potential layer.) ![Color online. Energy levels, electron wave functions, and electron densities of 2$s$ and 3$s$ states in: (a) Coulomb potential; (b) Coulomb potential combined with a spherical potential layer. (c) Energy level and electron wave function of 1$s$ state in a single spherical potential layer.[]{data-label="fig4"}](Fig4.eps){width="17cm"} We consider here only $s$ wave functions $\varphi_{ns}(r)$, because $\varphi_{np}(0)=0$ for all $p$ states in non relativistic limit, and these $p$ states do not give a contribution to the decay of $^7$Be. The electron radial wave functions for the combined potential in Figure \[fig4\] (b) are $$\varphi_{s}(r) = \left \{ \begin{array}{lll} a_1 \exp(-\kappa{}r)_1F_1(1-Z/\kappa;2;2\kappa{}r) , & 0 \leq r < R_1, \\ a_2 (\sin(kr)+b_2\cos(kr))/r , & R_1 \leq r < R_2 , \\ a_3 \exp(-\kappa{}r)/r , & R_2 \leq r . 
\label{eq:WF} \end{array} \right.$$ Here, $_1F_1$ is the confluent hypergeometric function, the wave numbers are $\kappa = \sqrt{2m|E|}$, $k = \sqrt{2m(E -V)}$, where $E$ is the energy of the state ($E,V<0$). The coefficients $a_{1-3}$, $b_2$ and the energy $E$ are determined from the conditions of continuity and differentiability of the wave function at the boundaries $r=R_1$ and $r=R_2$, where $R_{1,2}$ are internal and external radiuses of the potential layer correspondingly. For definiteness, we take charge $Z=2$ for the Coulomb potential and depth $V=-50$ eV for the spherical layer located between $R_1 = 5.5$ a$_B$ and $R_2 = 7.5$ a$_B$. The energy and the electron density at the nucleus of the 2$s$ state in this potential are $E_{2s}=-13.61$ eV and $\rho_{2s}(0)=4$ a$_B^{-3}$, correspondingly. If we add the spherical potential layer to the Coulomb potential, as it is shown in Figure \[fig4\] (b), and will gradually increase its depth up to the value of 50 eV, we will see the following. The energy of the 2$s$ state decreases; its wave function gradually moves from the region of the Coulomb potential to the region of the spherical layer and becomes similar to the wave function of the 1$s$ state in the isolated potential layer shown in Figure \[fig4\] (c). The electron density of the 2$s$ state at the nucleus decreases according to Figure \[fig5\]. ![Color online. Electron densities of 2$s$ and 3$s$ states at the point $r=0$ as a function of spherical potential layer depth $V$.[]{data-label="fig5"}](Fig5.eps){width="8cm"} At the same time the 3$s$ state gradually takes up the space region occupied previously by the 2$s$ state. Depending on the width of the spherical potential layer and on its depth $V$, the energy of the 3$s$ state in the new combined potential may lie both above and below of the 2$s$ energy level in the Coulomb potential, and may have either a higher or a smaller electron density at the nucleus (see in Figure \[fig5\]). Qualitatively, the mechanism of the variation of electron density of the 3$s$ state at the nucleus, shown in Figure \[fig5\], follows. The electron density in the origin, $\rho_{ns}(0)$, depends on the energy of the state $E_{ns}$. For example, $\rho_{ns}(0)\propto{}|E_{ns}|^{3/2}$ in the Coulomb potential, $\rho_{ns}(0)\propto{}|E_{ns}|$ in the infinite potential well and so on. The absolute value of the energy, $|E_{3s}|$, of the 3$s$ state increases, when the depth $|V|$ of the spherical potential layer grows. At the same time, gradual redistribution of the 3$s$ electronic density between the area of Coulomb potential and the spherical potential layer starts. For a relatively small $|V|$, the space occupied by the 3$s$ wave function in the area of the potential spherical layer increases comparatively slowly, because the area of localization of the wave function practically does not vary, and the spherical potential layer does not have its own binding state. If the layer has a large depth, the situation changes. There is a binding state in the deep isolated spherical potential layer now. The energy of the 3$s$ state in the joint potential verges towards the energy of such binding state, moving simultaneously from the position of the binding state in the pure Coulomb field. A part of the wave function occupies the forbidden for classical movement area (between the Coulomb potential and the spherical potential layer), and the considerable enhancement of the 3$s$ electron density occurs in the area of the spherical potential layer. 
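A numerical cross-check of this model (the mechanism is discussed further in the next paragraph): solve the $s$-wave radial equation for $V(r) = -Z/r$ plus the square layer on a finite-difference grid for $u(r) = r\varphi(r)$, in atomic units, and track the energies and $\varphi(0)^2$ of the second and third $s$ levels as the layer is deepened. The grid size and the set of depths are my own choices; for $V=0$ the hydrogen-like values $E_{2s} = -13.6$ eV and $\rho_{2s}(0) = 4\,a_B^{-3}$ quoted above should be recovered approximately.

```python
import numpy as np

Z, R1, R2 = 2.0, 5.5, 7.5          # model parameters from the text (atomic units)
hartree_eV = 27.2114

def s_levels(V_layer_eV, Rmax=30.0, n=1500):
    """s-wave levels of V(r) = -Z/r + V*[R1<=r<=R2], finite differences on
    u(r) = r*phi(r) with u(0) = u(Rmax) = 0; returns energies in eV and
    phi(0)^2 (radial density at the origin) for the lowest s states."""
    r  = np.linspace(Rmax / n, Rmax, n)
    dr = r[1] - r[0]
    V  = -Z / r + (V_layer_eV / hartree_eV) * ((r >= R1) & (r <= R2))
    H  = (np.diag(1.0 / dr**2 + V)
          - 0.5 / dr**2 * (np.eye(n, k=1) + np.eye(n, k=-1)))
    E, u = np.linalg.eigh(H)
    u = u / np.sqrt(dr)                                  # normalize int u^2 dr = 1
    f = u[:2, :] / r[:2, None]                           # phi = u/r near the origin
    phi0 = f[0] - r[0] * (f[1] - f[0]) / (r[1] - r[0])   # linear extrapolation to r = 0
    return E * hartree_eV, phi0**2

for V in (0.0, -20.0, -50.0):                            # layer depth in eV
    E, rho0 = s_levels(V)
    print(f"V = {V:6.1f} eV :  E(2nd,3rd s level) = {E[1]:8.2f}, {E[2]:8.2f} eV ;"
          f"  phi(0)^2 = {rho0[1]:5.2f}, {rho0[2]:5.2f} a_B^-3")
```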
The reduction of the wave function in the area of the Coulomb potential is not compensated anymore by growth of the $|E_{3s}|$ energy, and the electronic density at the nucleus decreases fast, returning at first to the initial value (see in Figure \[fig5\]), and tends to zero at the further increase of the spherical potential layer depth. By applying the above considerations to the Be@C$_{60}$ system one concludes the following. The C$_{60}$ fullerene modifies the Coulomb potential of the Be atom in such way, that the 3$s$ state in the joint potential occupies approximately the same position as the 2$s$ state in the isolated Be atom. That is, the 3$s$ state has approximately the same energy and practically the same electron density inside the Be atom as the 2$s$ state in Coulomb potential. Moreover, corresponding wave function $\varphi_{3s}(r)$ has the second node, which imitates the repulsion of the electrons of the Be atom from the C$_{60}$ cage at $r \simeq 5a_B$. As regarding the Be 2$s$ state, the energy of this state becomes considerably smaller in the new potential. Furthermore, its wave function moves to the potential well formed by the fullerene. In that way, the small increase of electron density at the Be nucleus has casual character. One could obtain another result with different parameters of the “fullerene potential well” (for example, by doping of C$_{60}$ with certain atoms). Thus, we obtain infrequent possibility to control the $^7$Be decay by means of a non-chemical interaction. In the considered particular case, this is the electrostatic interaction between the Be atom and the fullerene. To confirm, that the proposed mechanism does not depend on the model, we also have considered two different analytically solvable models – a particle inside two “independent” and two connected spherical potential wells. The results obtained are in a good qualitative agreement with those described above. Comparison with repulsive core in $\alpha$-$\alpha$ and $N$-$N$ interactions ============================================================================ It is interesting to note in the end, that the problem considered here is not physically new and has a vague similarity with the well known problem of the repulsive core in nuclear physics. The concept of repulsive core in $\alpha$-$\alpha$ and $N$-$N$ interactions had been accepted as correct right until the seventies of the last century. This repulsive core arose at small distances as the consequence of the Pauli exclusion principle. It was established later that this concept was simplified. In some cases, the repulsive core must be understood in terms of the nodal wave function for relative motion in an attractive potential (see in Refs. [@Saito-69; @Neudatchin-83] and references therein). In other words, the zero probability for particle to occupy certain region can be achieved both by the infinite potential wall (repulsive core) and by the node of the wave function in an attractive potential. In this sense the phenomenon considered in the present paper resembles the above mentioned effects in the $\alpha$-$\alpha$ and the $N$-$N$ interactions. Conclusion ========== In summary, according to our Hartree-Fock calculations with the electronic correlations accounted at the MP2 level, the lowest energy singlet configuration of Be atom encapsulated in C$_{60}$ is the one at the center of the fullerene, – in accordance with previous DFT studied. 
The Be atom resides in the attractive region of the Van-der-Waals interaction with an interaction energy of about -0.6 eV, in contrast with +1.05 eV from the DFT. Thus, the $^7$Be@C$_{60}$ complex is stable with respect to decay to $^7$Be and C$_{60}$. The HF electron density at the beryllium nucleus in Be@C$_{60}$ exceeds the one in metallic Be by 2%, in qualitative agreement with the relative difference of the corresponding $^7$Be EC decay rates measured by Ohtsuki et al. as well as with the DFT calculations. The electron density at the Be nucleus in $^7$Be@C$_{60}$ also exceeds the one in the isolated Be atom. The origin of this increase is neither the modification of Be's 2$s$ orbital in C$_{60}$ by hybridization with the fullerene orbitals nor a repulsive potential wall at the region of the fullerene's atoms. Rather, it is the replacement of the Be 2$s$ state by the 3$s$ orbital of the new potential, which is the joint Coulomb potential of the Be atom and the attractive effective potential well generated by the fullerene. The 3$s$ state has an additional node at distance $r \simeq 5a_B$ from the center. This node imitates the repulsion between the electrons of the Be atom and the C$_{60}$ cage, which has a direct physical analogy in the theory of $\alpha$-$\alpha$ and $N$-$N$ nuclear interactions.

Acknowledgements
================

E. Tkalya thanks Prof. Frederik Scholtz and Dr. Alexander Avdeenkov for the hospitality and for the given opportunity to do a part of this work at the National Institute for Theoretical Physics, South Africa.

[^1]: Submitted to Phys.Rev.C
--- abstract: | Submodular function minimization (SFM) is a fundamental and efficiently solvable problem in combinatorial optimization with a multitude of applications in various fields. Surprisingly, there is only very little known about constraint types under which SFM remains efficiently solvable. The arguably most relevant non-trivial constraint class for which polynomial SFM algorithms are known are parity constraints, i.e., optimizing only over sets of odd (or even) cardinality. Parity constraints capture classical combinatorial optimization problems like the odd-cut problem, and they are a key tool in a recent technique to efficiently solve integer programs with a constraint matrix whose subdeterminants are bounded by two in absolute value. We show that efficient SFM is possible even for a significantly larger class than parity constraints, by introducing a new approach that combines techniques from Combinatorial Optimization, Combinatorics, and Number Theory. In particular, we can show that efficient SFM is possible over all sets (of any given lattice) of cardinality $r \bmod{m}$, as long as $m$ is a constant prime power. This covers generalizations of the odd-cut problem with open complexity status, and has interesting links to integer programming with bounded subdeterminants. To obtain our results, we establish a connection between the correctness of a natural algorithm, and the nonexistence of set systems with specific combinatorial properties. We introduce a general technique to disprove the existence of such set systems, which allows for obtaining extensions of our results beyond the above-mentioned setting. These extensions settle two open questions raised by Geelen and Kapadia \[Combinatorica, 2017\] in the context of computing the girth and cogirth of certain types of binary matroids. title: Submodular Minimization Under Congruency Constraints --- Introduction ============ Submodular function minimization (SFM) is a central combinatorial optimization problem with numerous applications in many fields, including speech analysis, image segmentation, combinatorial optimization, and integer programming (see [@schrijver_2003_combinatorial; @mccormick_2005_submodular; @iwata_2008_submodular; @chakrabarty_2017_subquadratic; @artmann_2017_strongly] and references therein). A set function $f\colon2^N \rightarrow \mathbb{R}$ on a finite ground set $N$ is submodular if $$f(A)+f(B)\geq f(A\cup B)+f(A\cap B) \quad \forall A,B\subseteq N\enspace.$$ The high relevance of SFM is explained by the fact that the above condition, which defines submodularity, is equivalent to the diminishing returns property, which is a very natural property of set functions appearing in various contexts.[^1] Typical examples of submodular functions include valuation functions in economics, cut functions, matroid rank functions, the Shannon entropy of joint distributions, and coverage functions, just to name a few. A cornerstone result in Combinatorial Optimization, known since the early ’80s, is that SFM is efficiently solvable, only assuming value oracle access to the submodular function [@groetschel_1981_ellipsoid], which is the usual model in the field and assumed throughout this paper. 
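To fix ideas, the brute-force sketch below checks the defining inequality for an undirected cut function on a toy graph and performs (exponential-time) unconstrained SFM; the point of the oracle results just cited is precisely that the minimization step can instead be done in polynomial time. The graph and all choices below are purely illustrative.

```python
from itertools import combinations

# A standard example of a submodular function: the cut function of a small
# undirected graph.  Everything here is brute force and purely illustrative.
N = list(range(5))
edges = [(0, 1), (1, 2), (2, 3), (3, 4), (4, 0), (1, 3)]

def cut(S):
    return sum(1 for u, v in edges if (u in S) != (v in S))

subsets = [frozenset(c) for k in range(len(N) + 1) for c in combinations(N, k)]

# check the defining inequality f(A) + f(B) >= f(A | B) + f(A & B)
ok = all(cut(A) + cut(B) >= cut(A | B) + cut(A & B) for A in subsets for B in subsets)
print("cut is submodular:", ok)

# brute-force unconstrained SFM (exponential; the cited results do this in
# polynomial time).  For a cut function the minimum is trivially the empty set,
# which is one reason why constrained variants are the interesting ones.
best = min(subsets, key=cut)
print("unconstrained minimizer:", set(best), "value:", cut(best))
```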
Typically, results on SFM easily carry over to lattices, implying that efficient SFM is possible over any lattice of the ground set.[^2] Since the early results on SFM, there has been exciting progress on the subject with some recent impressive speedups in the best-known running times for solving SFM [@cunningham_1985_submodular; @schrijver_2000_combinatorial; @iwata_2001_combinatorial; @iwata_2009_simple; @lee_2015_faster; @chakrabarty_2017_subquadratic]. Unfortunately, the picture is much less satisfactory for constrained SFM. A canonical extension of the unconstrained case is obtained by only considering non-empty sets, a problem that can easily be reduced to unconstrained SFM by guessing one element of an optimal solution. Another relatively direct extension that includes the case of non-empty sets is that SFM is efficiently solvable over intersecting or crossing set families (see [@schrijver_2003_combinatorial Volume B]).[^3] Surprisingly, very little is known beyond these relatively direct extensions. In particular, Grötschel, Lovász, and Schrijver [@groetschel_1981_ellipsoid] (see also [@groetschel_1984_corrigendum; @groetschel_1993_geometric]) showed that SFM can be solved efficiently over all odd or even sets. This extended a well-known earlier result by Padberg and Rao [@padberg_1982_odd], showing that minimum odd cuts can be found efficiently, and also a later extension by Barahona and Conforti [@barahona_1987_construction] to even cuts. More precisely, Grötschel, Lovász, and Schrijver [@groetschel_1984_corrigendum] show that SFM can be solved efficiently over any family of sets $\mathcal{F}\subseteq 2^N$ that is a *triple family*, which they define as follows: For any $A,B \subseteq N$, if three of the four sets $A,B,A\cup B$, and $A\cap B$ are not in $\mathcal{F}$, then none of the four sets are in $\mathcal{F}$. One can easily check that all even or odd sets indeed form a triple family. The most general constraint family under which SFM is known to be efficiently solvable was introduced by Goemans and Ramakrishnan [@goemans_1995_minimizing]. They showed that SFM can be efficiently solved over a generalization of triple families, which they called *parity family*. A set family $\mathcal{F}\subseteq 2^N$ is a parity family if for any pair of sets $A,B\subseteq N$ with $A\notin\mathcal{F}$ and $B\not\in\mathcal{F}$, either both of $A\cup B$ and $A\cap B$ are in $\mathcal{F}$, or none of the two. The difficulty in identifying relevant constraint classes under which SFM can be done efficiently is partially explained by the fact that SFM can quickly become very hard, even under constraint types for which other problems, like submodular maximization, can still be solved approximately. More precisely, Svitkina and Fleischer [@svitkina_2011_submodular] showed that even with a single cardinality lower bound, monotone SFM is impossible to approximate in the oracle model up to a factor $o(\sqrt{\sfrac{n}{\log n}})$, where $n\coloneqq |N|$ is the size of the ground set. The goal of this work is to present a new natural constraint class under which efficient submodular minimization is possible, and which is motivated by recent progress in linear integer programming with bounded subdeterminants, and by recent open questions related to binary matroids. More precisely, we consider the following natural generalization of parity-constrained submodular minimization. 
[**Congruency-Constrained Submodular Minimization (CCSM):**]{} Let $f\colon\mathcal{L} \rightarrow \mathbb{Z}$ be a submodular function defined on a lattice $\mathcal{L}\subseteq 2^N$, and let $m \in \mathbb{Z}_{>0}$, $r\in \{0,\ldots, m-1\}$. The task is to find a minimizer of $$\label{eq:CCSM}\tag{CCSM} \min\{f(S) \mid S\in \mathcal{L}, \; |S| \equiv r \pmod*{m}\}\enspace.$$ We call $m$ the *modulus* of the problem. Moreover, we highlight that $N$ is a finite ground set throughout this paper. Notice that the case $m=2$ captures odd/even submodular minimization, and thus in particular the odd cut problem. More generally, one can observe that also the $T$-cut problem, which only considers cuts with an odd number of vertices within a vertex set $T$, can easily be cast as .[^4] Apart from naturally extending known SFM settings, our study of  is motivated by an open question in integer programming, namely whether integer linear programs (ILPs) with constraint matrices having constantly bounded subdeterminants can be solved efficiently. More precisely, it was recently shown in [@artmann_2017_strongly] that bimodular ILPs can be solved efficiently, which are problems of the form $\max\{c^T x \mid Ax \leq b, x\in \mathbb{Z}^n\}$, where $A$ has full column rank and each $n\times n$ submatrix of $A$ has a determinant within $\{-2,-1,0,1,2\}$. This result implies that any ILP such that all subdeterminants of $A$ are within $\{-2,-1,0,1,2\}$ can be solved efficiently (see [@artmann_2017_strongly] for more details), thus extending the well-known fact that ILPs with totally unimodular constraint matrices are efficiently solvable. However, whether ILPs with larger subdeterminants can still be solved efficiently seems to be a question beyond current techniques. Interestingly, a key algorithmic tool used in [@artmann_2017_strongly] to show that bimodular ILPs are efficiently solvable is efficient odd submodular minimization, or at least efficient algorithms to find minimum directed $T$-cuts, since the submodular minimization problems appearing in [@artmann_2017_strongly] can be reformulated as directed $T$-cut problems. Conversely, a directed $T$-cut problem can naturally be modeled as a bimodular ILP. ILPs with subdeterminants up to $m$ include a natural extension of the directed $T$-cut problem, namely the problem of finding a cut of smallest value among all cuts of cardinality $r \bmod{m}$. This is clearly a special case of , by choosing $f$ to be the directed cut function. Hence, to make progress on the question of ILPs with bounded subdeterminants, one needs to be able to solve  for $f$ being an arbitrary directed cut function. Furthermore, due to the approach presented in [@artmann_2017_strongly], there is hope that this subproblem may be an important building block for finding an efficient procedure to solve ILPs with bounded subdeterminants. Moreover, we consider the following generalized version of , which nicely highlights the versatility of our approach and captures several open problems raised by Geelen and Kapadia [@geelen_2017_computing] in the context of computing the girth and cogirth of perturbed graphic matroids. In the definition below, as well as later in the paper, we use the shorthand $[k]\coloneqq \{1,\ldots,k\}$. [**Generalized Congruency-Constrained Submodular Minimization (GCCSM):**]{} Let $f\colon\mathcal{L} \rightarrow \mathbb{Z}$ be a submodular function defined on a lattice $\mathcal{L}\subseteq 2^N$, and let $m \in \mathbb{Z}_{>0}$. 
Moreover, let $k\in\mathbb{Z}_{>0}$, $S_1,\ldots, S_k\subseteq N$ and $r_1,\ldots, r_k\in \{0,\ldots, m-1\}$. The task is to find a minimizer of $$\label{eq:GCCSM} \min\{f(S) \mid S\in \mathcal{L}, \; |S\cap S_i| \equiv r_i \pmod*{m}\;\;\forall i\in [k]\}\enspace. \tag{GCCSM}$$ In particular,  captures the $t$-Set Even-Cut Problem and $t$-Set Odd-Cut Problem defined in [@geelen_2017_computing]. There, one is given a constant $t$, an undirected graph $G=(V,E)$, and sets $T_1,\ldots, T_t\subseteq V$. The task is to find a cut $S\subseteq V$ with a minimum number of edges $|\delta(S)|$ among all cuts whose intersections with the sets $T_i$ are all even or all odd, respectively. Geelen and Kapadia identified the $t$-Set Even-Cut Problem as a special case of the so-called $t$-Dimensional Even-Cut Problem. While the latter is key to their algorithm for computing the cogirth of perturbed graphic matroids, they consider the $t$-Set Even-Cut problem as a purer form of the problem, which they believe to be of independent interest, as well as the natural variation of the $t$-Set Odd-Cut problem. Geelen and Kapadia present a randomized algorithm for the $t$-Set Even-Cut Problem, based on an adaptation of Karger’s contraction algorithm [@karger_1993_global; @karger_1996_new], and they raise the following open questions which we address through our work: 1. They ask about a deterministic procedure for the $t$-Set Even-Cut problem, which they mention as one of the main shortcomings of their approach. As noted in [@geelen_2017_computing], Conforti and Rao [@conforti_1987_some] found an efficient deterministic algorithm for the $1$-Set Even-Cut Problem. However, even for the $2$-Set version, no deterministic procedure is known. 2. They raise the question about the complexity of the Odd-Cut problem, stating that the method of Padberg and Rao [@padberg_1982_odd] for finding an odd cut extends to the $2$-Set Odd-Cut setting; however, even for the $3$-Set Odd-Cut problem, the complexity remains open. The main technical contribution of this paper is to introduce a new approach based on techniques from Combinatorics and Number Theory to analyze a natural algorithm for  and . Our results ----------- We start by stating the implications of our techniques on  and , and provide an overview of the techniques in Section \[subsec:overviewTech\]. Our main result for  is the following. \[thm:mainCCSM\] For any $m\in \mathbb{Z}_{>0}$ that is a prime power, can be solved in time $n^{2m+O(1)}$. Hence, we can efficiently solve  for any modulus $m$ that is a prime power bounded by a constant. Notice that an upper bound on $m$ is required to obtain an efficient algorithm. Indeed, in particular if $m=n\coloneqq |N|$, the congruency constraint simply models a cardinality constraint. However, as mentioned in the introduction, SFM subject to a cardinality constraint is impossible to approximate up to any factor $o(\sqrt{\sfrac{n}{\log n}})$ in the oracle model. It is not hard to observe that this implies that even for any $\epsilon >0$, with modulus $m=\Omega(n^\epsilon)$ cannot be solved exactly in polynomial time. Our key contribution, which leads to Theorem \[thm:mainCCSM\], is a connection of the correctness of a natural procedure, which we introduce in Section \[subsec:overviewTech\], and the nonexistence of certain set systems. To disprove the existence of such set systems, we employ tools from Combinatorics and Number Theory, in particular Fermat’s Little Theorem. 
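To make the setting of Theorem \[thm:mainCCSM\] concrete before turning to the techniques, the following small Python sketch (ours, purely illustrative and not part of the algorithmic results) solves a toy (CCSM) instance by brute force: the submodular function is the cut function of a small undirected graph, the lattice is the full power set $2^N$, and the congruency constraint with $m=2$ and $r=1$ asks for a minimum cut of odd cardinality.

```python
from itertools import combinations

# Toy (CCSM) instance: f is the (submodular) cut function of a small
# undirected graph, and we look for a minimum-value set S with
# |S| = r (mod m).  Exponential brute force, for illustration only.
edges = [(0, 1), (1, 2), (2, 3), (3, 0), (0, 2)]   # illustrative graph
N = list(range(4))
m, r = 2, 1                                        # odd-cardinality sets

def cut_value(S):
    S = set(S)
    return sum(1 for u, v in edges if (u in S) != (v in S))

feasible = [set(S) for k in range(len(N) + 1)
            for S in combinations(N, k) if k % m == r]
best = min(feasible, key=cut_value)
print(sorted(best), cut_value(best))   # a minimum odd cut of the toy graph
```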
A main advantage of our techniques is that they are very versatile, and allow in particular for an adaptation to (GCCSM), leading to the following result.

\[thm:mainGCCSM\] For any $m\in \mathbb{Z}_{>0}$ that is a prime power, (GCCSM) can be solved in time $n^{2km+O(1)}$.

Notice that Theorem \[thm:mainGCCSM\] solves the two open questions by Geelen and Kapadia [@geelen_2017_computing] mentioned in the introduction. Moreover, we want to highlight that our algorithms for solving (CCSM) and (GCCSM) consist of repeatedly solving unconstrained submodular function minimization problems, namely at most $n^{2(m-1)}$ many for (CCSM) and $n^{2k(m-1)}$ many for (GCCSM). Using a strongly polynomial algorithm for submodular function minimization, the running time guarantees of Theorems \[thm:mainCCSM\] and \[thm:mainGCCSM\] are achieved.

Overview of main steps of our technique {#subsec:overviewTech}
---------------------------------------

We start by stating a natural algorithm, highlighted below, that we use to derive both of our main results, Theorems \[thm:mainCCSM\] and \[thm:mainGCCSM\]. Our algorithm is parameterized by an integer $d\in \mathbb{Z}_{> 0}$, which we call the *depth* of the algorithm. Its input is a value oracle for a submodular function $f\colon\mathcal{L} \rightarrow \mathbb{Z}$ defined on a lattice $\mathcal{L}\subseteq 2^N$, and a family $\mathcal{F}\subseteq 2^N$, capturing additional constraints we want to satisfy. In particular, for (CCSM) we have $\mathcal{F}=\{S\subseteq N \mid |S|\equiv r \pmod{m}\}$, and for (GCCSM), the set $\mathcal{F}$ is given by $\mathcal{F}=\{S\subseteq N \mid |S\cap S_i|\equiv r_i \pmod{m} \;\forall i\in[k]\}$. We assume that $\mathcal{F}$ is given by a membership oracle, which can be queried for any set $S\subseteq N$, and returns whether $S\in \mathcal{F}$.

1.  \[algitem:enum\] For all $A,B \subseteq N$ with $|A|,|B|\leq d$ and $A\cap B=\emptyset$, compute a minimal minimizer of $f$ over the lattice $$\mathcal{L}_{AB} \coloneqq \{S\in \mathcal{L} \mid A\subseteq S \subseteq N\setminus B\}\enspace. \vspace*{-0.3em}$$ Let $\mathcal{S}$ be the family of all computed minimal minimizers for all pairs of $A$ and $B$.

2.  Return a set $S\in \mathcal{S}$ of minimum value among all sets in $\mathcal{S}\cap\mathcal{F}$.

The algorithm is a natural extension of a procedure suggested in [@goemans_1995_minimizing], which corresponds to the case $d=1$. In step \[algitem:enum\], we repeatedly solve unconstrained submodular minimization problems for minimal minimizers. To this end, one can observe that many submodular function minimization algorithms do actually return minimal minimizers. Alternatively, for integer-valued submodular functions, we can observe that a set is a minimal minimizer of $f$ if and only if it is a minimizer of the submodular function $g$ given by $g(S)=(n+1)f(S)+|S|$. Hence, it suffices to find any minimizer of $g$ to obtain a minimal minimizer of $f$.

Notice that the algorithm clearly runs in polynomial time for any constant depth $d$. However, depending on the structure of the constraint set $\mathcal{F}$, and the choice of $d$, the above algorithm may fail to return a set in $\mathcal{F}$ with minimum submodular value. In particular, it may even happen that no feasible solution is found, i.e., $\mathcal{S}\cap \mathcal{F} = \emptyset$. In the following, we show the main steps that we used to derive our main result for (CCSM), i.e., Theorem \[thm:mainCCSM\]. In Section \[sec:GCCSM\], we show how to extend the results to (GCCSM).
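The following Python sketch is our own illustration of the above procedure for small ground sets; all function and variable names are ours, a brute-force search over the lattice stands in for a strongly polynomial SFM oracle, and the minimal minimizer in step \[algitem:enum\] is obtained via the function $g(S)=(n+1)f(S)+|S|$ discussed above.

```python
from itertools import chain, combinations

def small_subsets(N, d):
    # all subsets of N of cardinality at most d
    return chain.from_iterable(combinations(N, k) for k in range(d + 1))

def enumeration_algorithm(N, f, in_lattice, in_F, d):
    """Depth-d enumeration sketch: for all disjoint A, B with |A|,|B| <= d,
    collect a minimal minimizer of f over L_AB = {S in L : A <= S <= N\\B},
    and return a cheapest collected set lying in F (or None)."""
    N = list(N)
    n = len(N)
    lattice = [frozenset(S) for k in range(n + 1)
               for S in combinations(N, k) if in_lattice(frozenset(S))]

    def minimal_minimizer(sets):
        # any minimizer of (n+1)*f(S) + |S| is a minimal minimizer of f
        return min(sets, key=lambda S: (n + 1) * f(S) + len(S), default=None)

    candidates = []
    for A in small_subsets(N, d):
        for B in small_subsets(N, d):
            A_set, B_set = set(A), set(B)
            if A_set & B_set:
                continue                      # A and B must be disjoint
            L_AB = [S for S in lattice if A_set <= S and not (S & B_set)]
            S_min = minimal_minimizer(L_AB)
            if S_min is not None:
                candidates.append(S_min)

    feasible = [S for S in candidates if in_F(S)]
    return min(feasible, key=f) if feasible else None
```

For a (CCSM) instance with modulus $m$ and residue $r$, one would call this with `in_F = lambda S: len(S) % m == r` and, over the full power set, `in_lattice = lambda S: True`; the analysis below determines for which depth $d$ this enumeration is guaranteed to be correct.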
To analyze the correctness of for , we show that if fails to return an optimal solution to , then this implies the existence of a set system with the following properties. For brevity, we call a set system satisfying these properties an $(m,d)$-system. Let $N$ be a finite ground set, and let $m,d\in \mathbb{Z}_{>0}$. We say that a set system $\mathcal{H}\subseteq 2^N$ is an $(m,d)$-system (on $N$) if 1. \[item:MDIntClosed\] $\mathcal{H}$ is closed under intersection, i.e., $H_1\cap H_2\in \mathcal{H} \;\;\forall H_1,H_2\in \mathcal{H}$, 2. \[item:MDDiffParity\] $|H| \not\equiv |N| \pmod{m} \;\;\forall H\in \mathcal{H}$, and 3. \[item:MDCoverage\] for any $S\subseteq N$ with $|S|\leq d$, there is a set $H\in \mathcal{H}$ with $S\subseteq H$. Note that in particular, we require property \[item:MDIntClosed\] also for disjoint sets: If there are $H_1,H_2\in\mathcal{H}$ with $H_1\cap H_2=\emptyset$, then $\emptyset\in\mathcal{H}$. On the other hand, if $\emptyset\not\in\mathcal{H}$, we can conclude that all sets have at least one element in common. Also observe that property \[item:MDCoverage\] implies that the sets of an $(m,d)$-system $\mathcal{H}$ cover the ground set, i.e., we always have $N=\bigcup_{H\in\mathcal{H}}H$. The following theorem formalizes a crucial link between nonexistence of $(m,d)$-systems and correctness of , and reduces the correctness of to a purely combinatorial question. Here (and throughout the rest of this paper), *nonexistence of $(m,d)$-systems* without explicit reference to a ground set is to be understood to hold for any ground set, i.e., no matter what finite ground set $N$ is chosen, there does not exist an $(m,d)$-system on $N$. \[thm:EnumDGoodIfNoBadSys\] Let $m,d\in \mathbb{Z}_{>0}$. If no $(m,d)$-system exists, then returns an optimal solution to any problem with modulus $m$. Notice that Theorem \[thm:EnumDGoodIfNoBadSys\] does not depend on the lattice $\mathcal{L}$ underlying the problem. For specific lattices $\mathcal{L}\subseteq 2^N$, the above conditions can be slightly weakened. In particular, it suffices to consider a weaker definition of $(m,d)$-systems, where $\mathcal{H}$ needs to be a subfamily of $\mathcal{L}$. Section \[sec:reduceToSetSys\] provides further details on this. However, for the congruency constraints we consider, we do not need the weaker requirements for specific lattices, and we thus decided to avoid these details here in the interest of simplifying the presentation. Finally, our approach is completed by deriving the following result, which completes the last step of our proof, and, together with Theorem \[thm:EnumDGoodIfNoBadSys\], implies Theorem \[thm:mainCCSM\]. \[thm:noMM-1Sys\] For $m\in \mathbb{Z}_{>0}$ being a prime power, there is no $(m,m-1)$-system. Moreover, we want to mention that Gopi [@gopi2017systems], after hearing a presentation of this paper, found an elegant proof showing that for $m$ not being a prime power, there do exist $(m,m-1)$-systems. This shows an interesting discrepancy between prime power moduli and non-prime power moduli, and it suggests that an extension of our techniques to the latter case requires new ideas. Organization of the paper ------------------------- In Section \[sec:reduceToSetSys\], we show how the correctness of can be reduced to the nonexistence of $(m,d)$-systems, thus proving Theorem \[thm:EnumDGoodIfNoBadSys\]. Section \[sec:noBadSetSystem\] shows Theorem \[thm:noMM-1Sys\], the nonexistence of $(m,m-1)$-systems for $m$ being a prime power. 
The techniques presented in Section \[sec:noBadSetSystem\] comprise a general framework based on results from Combinatorics and Number Theory to disprove existence of certain types of set systems. In Section \[sec:GCCSM\], we show how these techniques can be extended to , thus implying our main result for , Theorem \[thm:mainGCCSM\]. Section \[sec:barriers\] identifies a combinatorial barrier to extending our proof techniques beyond $m$ being a prime power. Section \[sec:existenceMMm2Systems\] shows that our choice of the depth $d$ of is smallest possible for the problems we consider. Reducing correctness of to properties of set systems {#sec:reduceToSetSys} ==================================================== The main goal of this section is to prove Theorem \[thm:EnumDGoodIfNoBadSys\]. In fact, we show a slight strengthening, which allows us to derive results for , and may lead to further applications for constraints beyond congruency constraints. For this we generalize the notion of $(m,d)$-system to the notion of an $(\mathcal{F},d)$-system, where the role of all sets of cardinality $r\bmod{m}$ is replaced by a general constraint family $\mathcal{F}\subseteq 2^N$ on $N$. Moreover, we will be explicit about the underlying lattice, which leads to stronger statements that may be helpful for extending our results to further contexts. Let $\mathcal{L}\subseteq 2^N$ be a lattice, $\mathcal{F}\subseteq \mathcal{L}$, and let $d\in \mathbb{Z}_{>0}$. A family $\mathcal{H}\subseteq \mathcal{L}$ is called an $(\mathcal{F},d)$-system if the following holds, where $Q\coloneqq \bigcup_{H\in \mathcal{H}}H$: 1. \[item:FDQin\] $Q\in \mathcal{F}$, 2. \[item:FDIntClosed\] $\mathcal{H}$ is closed under intersection, 3. \[item:FDSetsNotFeasible\] $H\not\in \mathcal{F} \;\;\forall H\in \mathcal{H}$, and 4. \[item:FDCoverage\] for any $S\subseteq Q$ with $|S|\leq d$, there is a set $H\in \mathcal{H}$ with $S\subseteq H$. Using the notion of $(\mathcal{F},d)$-systems, we can now define the following strengthening of Theorem \[thm:EnumDGoodIfNoBadSys\], where for any set family $\mathcal{F}\subseteq \mathcal{L}$ defined on a lattice $\mathcal{L}$, we denote by $\operatorname{comp}(\mathcal{F})$ the complement family, i.e., $\operatorname{comp}(\mathcal{F})\coloneqq\{N\setminus F \mid F\in \mathcal{F}\}$, which we will interpret as a subfamily of the lattice $\operatorname{comp}(\mathcal{L})$. \[thm:EnumDGoodIfNoBadSysGen\] Let $\mathcal{L}\subseteq 2^N$ and $\mathcal{F}\subseteq \mathcal{L}$, and let $d\in \mathbb{Z}_{>0}$. If no $(\mathcal{F},d)$-system and no $(\operatorname{comp}(\mathcal{F}),d)$-system exists, then returns an optimal solution to any submodular function minimization problem over $\mathcal{F}$. We start by observing that Theorem \[thm:EnumDGoodIfNoBadSysGen\] indeed implies Theorem \[thm:EnumDGoodIfNoBadSys\]. Consider a problem $\min\{f(S) \mid S\in \mathcal{L}, |S|\equiv r \pmod{m}\}$. 
Hence, the set family $\mathcal{F}$ over which we want to minimize the function $f$ is given by $$\mathcal{F} = \{S\in \mathcal{L} \mid |S| \equiv r \pmod*{m}\}\enspace,$$ and its complement family is therefore $$\begin{aligned} \operatorname{comp}(\mathcal{F}) = \{S\in \operatorname{comp}(\mathcal{L}) \mid |N\setminus S| \equiv r \pmod*{m}\} = \{S\in \operatorname{comp}(\mathcal{L}) \mid |S| \equiv |N| -r \pmod*{m}\}\enspace.\end{aligned}$$ The proof now follows by observing that any $(\mathcal{F},d)$-system or $(\operatorname{comp}(\mathcal{F}),d)$-system is also an $(m,d)$-system on a potentially different ground set. Indeed, consider an $(\mathcal{F},d)$-system $\mathcal{H}$, and let $Q=\bigcup_{H\in \mathcal{H}}H$. (The case of a $(\operatorname{comp}(\mathcal{F}),d)$-system is analogous.) Then $\mathcal{H}$ is an $(m,d)$-system on $Q$ because properties \[item:FDIntClosed\] and \[item:FDCoverage\] of the definition of an $(\mathcal{F},d)$-system correspond to properties \[item:MDIntClosed\] and \[item:MDCoverage\] of an $(m,d)$-system on $Q$, respectively; moreover, properties \[item:FDQin\] and \[item:FDSetsNotFeasible\] of an $(\mathcal{F},d)$-system imply property \[item:MDDiffParity\] of an $(m,d)$-system. It remains to prove Theorem \[thm:EnumDGoodIfNoBadSysGen\]. Proof of Theorem \[thm:EnumDGoodIfNoBadSysGen\] ----------------------------------------------- We start by stating a key property of set systems $\mathcal{F}\subseteq \mathcal{L}$ that is crucial in our analysis to show that returns an optimal solution. This is an extension of a property used in [@goemans_1995_minimizing] for parity constraints. Let $\mathcal{L}\subseteq 2^N$ be a lattice and $\mathcal{F}\subseteq \mathcal{L}$. We say that the tuple $(\mathcal{F}, \mathcal{L})$ is *$d$-good*—or simply that $\mathcal{F}$ is *$d$-good* if $\mathcal{L}$ is clear from context—if for any submodular function $f\colon\mathcal{L} \rightarrow \mathbb{Z}$, and any minimizer $S^*$ of $\min\{f(S) \mid S\in \mathcal{F}\}$, there exists a set $A\subseteq S^*$ with $|A|\leq d$ satisfying $$f(S) \geq f(S^*) \quad \forall S\in \mathcal{L} \text{ with } A\subseteq S \subseteq S^*\enspace.$$ We now prove Theorem \[thm:EnumDGoodIfNoBadSysGen\] in two steps. First, we show that if a set system $\mathcal{F}$ and its complement family $\operatorname{comp}(\mathcal{F})$ are $d$-good, then our algorithm will return an optimal solution. \[lem:dGoodToOpt\] Let $\mathcal{L}\subseteq 2^N$ be a lattice, $\mathcal{F}\subseteq \mathcal{L}$, and $d\in \mathbb{Z}_{>0}$. If $(\mathcal{F},\mathcal{L})$ and $(\operatorname{comp}(\mathcal{F}),\operatorname{comp}(\mathcal{L}))$ are both $d$-good, then returns an optimal solution to any submodular minimization problem on $\mathcal{F}$. Conversely, if a constraint set $\mathcal{F}$ is not $d$-good, then we can derive the existence of an $(\mathcal{F},d)$-system out of it as shown by the following lemma, which, together with Lemma \[lem:dGoodToOpt\], immediately implies Theorem \[thm:EnumDGoodIfNoBadSysGen\], as desired. \[lem:notDGoodToSys\] Let $\mathcal{L}\subseteq 2^N$ be a lattice, $\mathcal{F}\subseteq \mathcal{L}$, and $d\in \mathbb{Z}_{>0}$. If $(\mathcal{F},\mathcal{L})$ is not $d$-good, then there exists an $(\mathcal{F},d)$-system. The proof strategies for Lemmas \[lem:dGoodToOpt\] and \[lem:notDGoodToSys\] are heavily inspired by an approach presented in [@goemans_1995_minimizing] for parity families. 
We remark that the proof of Lemma \[lem:dGoodToOpt\] strengthens the proof approach presented in [@goemans_1995_minimizing], which allows us to use simpler requirements for the definition of a $d$-good system than what would have been necessary by following the proof approach in [@goemans_1995_minimizing] more closely. To prove Lemma \[lem:dGoodToOpt\], we show that under the assumption that $(\mathcal{F},\mathcal{L})$ and $(\operatorname{comp}(\mathcal{F}),\operatorname{comp}(\mathcal{L}))$ are both $d$-good, returns a set with function value equal to the function value of a minimal optimal solution. As the following lemma shows, arguing about minimal (with respect to inclusion) optimal solutions allows us to obtain a stronger result from $(\mathcal{F},\mathcal{L})$ being a $d$-good set system. \[lem:dGoodForMinimal\] Let $(\mathcal{F},\mathcal{L})$ be a $d$-good set system. Then, for any submodular function $f\colon \mathcal{L} \rightarrow \mathbb{Z}$, and any minimal minimizer $S^*$ of $\min\{f(S) \mid S\in \mathcal{F}\}$, there exists a set $A\subseteq S^*$ with $|A|\leq d$ satisfying $$f(S) > f(S^*) \quad \forall S\in \mathcal{L} \text{ with } A\subseteq S \subsetneq S^*\enspace.$$ Fix a submodular function $f\colon \mathcal{L} \rightarrow \mathbb{Z}$, and a minimal minimizer $S^*$ of $\min\{f(S) \mid S\in \mathcal{F}\}$. Consider the function $g\colon \mathcal{L} \rightarrow \mathbb{Z}$ given by $$g(S) = |N|f(S) + |N||S\setminus S^*| + |S| \quad \text{for all } S\in\mathcal{L}\enspace.$$ The function $g$ is submodular because it is a conic combination of the three submodular functions $S\mapsto f(S)$, $S\mapsto |S\setminus S^*|$, and $S\mapsto |S|$. Moreover, we claim that $S^*$ is a minimizer of $\min\{g(S) \mid S\in\mathcal{F}\}$. Indeed, by definition of $S^*$, we have $f(S) \geq f(S^*)$ for all $S\in\mathcal{F}$. If $f(S)\geq f(S^*)+1$, we get $$\begin{aligned} g(S) \geq |N| f(S) \geq |N| (f(S^*)+1) \geq |N| f(S^*) + |S^*| = g(S^*)\enspace. \end{aligned}$$ If, in the other case, $f(S) = f(S^*)$, then minimality of $S^*$ implies that $S\setminus S^* \neq \emptyset$, hence $|S\setminus S^*| \geq 1$, so $$\begin{aligned} g(S) \geq |N| f(S) + |N| |S\setminus S^*| \geq |N| f(S^*) + |N| \geq |N| f(S^*) + |S^*| = g(S^*)\enspace. \end{aligned}$$ Applying the property that $(\mathcal{F},\mathcal{L})$ is $d$-good to the submodular function $g$ and the minimizer $S^*$ of $\min\{g(S) \mid S\in\mathcal{F}\}$, we obtain that there exists a set $A\subseteq S^*$ with $|A|\leq d$ satisfying $$g(S)\geq g(S^*) \quad \forall S\in \mathcal{L} \text{ with } A\subseteq S \subseteq S^*\enspace.$$ To conclude, it suffices to see that for all $S\in\mathcal{L}$ with $S\subsetneq S^*$, the inequality $g(S) \geq g(S^*)$ implies $f(S) > f(S^*)$. Note that $S\subsetneq S^*$ implies $|S\setminus S^*|=0$, so the inequality $g(S) \geq g(S^*)$ can be rewritten as $$|N| f(S) + |S| \geq |N| f(S^*) + |S^*|\enspace.$$ The assumption $S\subsetneq S^*$ also implies $|S| < |S^*|$, hence, from the last inequality, we conclude $f(S)>f(S^*)$. With the above strengthening, we are ready to prove Lemma \[lem:dGoodToOpt\]. Let $f\colon\mathcal{L} \rightarrow \mathbb{Z}$ be a submodular function and let $S^*$ be a minimal minimizer of $\min\{f(S) \mid S\in\mathcal{F}\}$. 
Using that $(\mathcal{F},\mathcal{L})$ is $d$-good and applying Lemma \[lem:dGoodForMinimal\], we obtain the existence of a set $A\subseteq S^*$ with $|A|\leq d$ satisfying $$\label{eq:innerIneq} f(S) > f(S^*) \quad \forall S\in \mathcal{L} \text{ with } A\subseteq S \subsetneq S^*\enspace.$$ Note that the function $g\colon \operatorname{comp}(\mathcal{L})\to\mathbb{Z}$ given by $g(S)=f(N\setminus S)$ is submodular, and $N\setminus S^*$ is a minimizer of $\min\{ g(S) \mid S\in\operatorname{comp}(\mathcal{F})\}$. So using the assumption that $(\operatorname{comp}(\mathcal{F}),\operatorname{comp}(\mathcal{L}))$ is $d$-good, we obtain the existence of a set $B\subseteq N\setminus S^*$ with $|B|\leq d$ satisfying $$g(S) \geq g(N\setminus S^*) \quad \forall S\in \operatorname{comp}(\mathcal{L}) \text{ with } B\subseteq S \subseteq N\setminus S^*\enspace.$$ Rewriting the above in terms of $f$ and replacing $S$ by $N\setminus S$, we get $$\label{eq:outerIneq} f(S) \geq f(S^*) \quad \forall S \in \mathcal{L} \text{ with } S^* \subseteq S \subseteq N\setminus B\enspace.$$ Let $T$ be a minimal minimizer of $f$ over the lattice $\mathcal{L}_{AB}=\{S\in\mathcal{L} \mid A\subseteq S\subseteq N\setminus B\}$. Note that sets of this type are found in the first step of the algorithm when considering the sets $A$ and $B$ given above. We claim that in fact $T=S^*$, proving that the minimizer $S^*$ is found in the first step of the algorithm. Consequently, the set returned by the algorithm has optimal value $f(S^*)$, which is what we wanted to prove.

It remains to see that $T=S^*$. As $S^*\in\mathcal{L}_{AB}$, we have $f(S^*)\geq f(T)$. Together with submodularity of $f$, we get $$2f(S^*) \geq f(S^*) + f(T) \geq f(S^*\cap T) + f(S^* \cup T)\enspace.$$ If $S^*\cap T \subsetneq S^*$, then \[eq:innerIneq\] implies $f(S^*\cap T)>f(S^*)$. Moreover, \[eq:outerIneq\] implies $f(S^*\cup T)\geq f(S^*)$. Together, we obtain a contradiction to the previous inequality. Consequently, we have $S^*\cap T = S^*$ or, in other words, $S^*\subseteq T$. Minimality of $T$ implies $S^* = T$, as desired.

Finally, we prove Lemma \[lem:notDGoodToSys\], which is the last missing piece in our proof of Theorem \[thm:EnumDGoodIfNoBadSysGen\]. Assume that $(\mathcal{F}, \mathcal{L})$ is not $d$-good. Hence, there is a submodular function $f\colon\mathcal{L}\to\mathbb{Z}$ and a minimizer $S^*$ of $\min\{f(S) \mid S\in \mathcal{F}\}$ such that for any set $A\subseteq S^*$ with $|A|\leq d$, there is a set $S_A\in \mathcal{L}$ with $A\subseteq S_A \subseteq S^*$ satisfying $f(S_A) < f(S^*)$. For each such $A$, we choose $S_A$ to be maximal (inclusion-wise) among all such sets. Let $\mathcal{H}\subseteq \mathcal{L}$ be the family of all sets that can be obtained as intersections of the sets $\{S_A\}_{A\subseteq S^*, |A|\leq d}$, where we include the sets $S_A$ themselves also in the family $\mathcal{H}$. We claim that $\mathcal{H}$ is an $(\mathcal{F},d)$-system. Clearly, $\mathcal{H}\subseteq \mathcal{L}$, because each set $S_A$ satisfies $S_A \in \mathcal{L}$ and the lattice $\mathcal{L}$ is closed under intersection. Moreover, we have $Q = \bigcup_{H \in \mathcal{H}} H = S^*$ because each set in $\mathcal{H}$ is contained in $S^*$, and for each element $e\in S^*$, the set $S_{\{e\}}\in \mathcal{H}$ contains $e$. Property \[item:FDQin\] of an $(\mathcal{F},d)$-system follows from $Q=S^*\in \mathcal{F}$. Moreover, \[item:FDIntClosed\] holds because $\mathcal{H}$ is intersection-closed by construction.
Property \[item:FDCoverage\] is fulfilled because for each $A\subseteq Q$ with $|A|\leq d$, the set $S_A\in \mathcal{H}$ fulfills $A\subseteq S_A$. It remains to show that $\mathcal{H}$ fulfills property \[item:FDSetsNotFeasible\] of an $(\mathcal{F},d)$-system, i.e., that each set $H\in \mathcal{H}$ satisfies $H\not\in \mathcal{F}$. Recall that each set $H\in \mathcal{H}$ can be written as $$\label{eq:HAsIntersection} H = \bigcap_{i=1}^k S_{A_i}\enspace,$$ where $k\in \mathbb{Z}_{\geq 1}$, and $A_1, \ldots, A_k \in \mathcal{L}$ with $A_i \subseteq S^*$ and $|A_i| \leq d$ for $i\in [k]$. We show that $f(H) < f(S^*)$ by induction on $k$. Notice that this implies $H\not\in \mathcal{F}$ because $S^*$ is a minimizer of $\min\{f(S) \mid S \in \mathcal{F}\}$, and hence, no other set in $\mathcal{F}$ can have a smaller $f$-value. The case $k=1$ corresponds to sets $H=S_A$, where $A\in \mathcal{L}$, $A\subseteq S^*$, and $|A|\leq d$. By our choice of the sets $S_A$, we have $f(S_A) < f(S^*)$ for these sets. Now consider a set $H$ as described in  for $k\geq 2$, and assume that for any set $H'$ that can be described as the intersection of at most $k-1$ sets $S_{A_i}$, it holds that $f(H') < f(S^*)$. Let $H' = \bigcap_{i=1}^{k-1} S_{A_i}$, and hence, $H= H' \cap S_{A_k}$. By submodularity of $f$ we have $$\label{eq:HAsIntSAk} f(H') + f(S_{A_k}) \geq f(H'\cup S_{A_k}) + f(H)\enspace.$$ By definition, $S_{A_k}$ is a maximal subset of $S^*$ containing $A_k$ and satisfying $f(S_{A_k})<f(S^*)$. The chain $A_k \subseteq S_{A_k} \subseteq H'\cup S_{A_k} \subseteq S^*$ of inclusions thus lets us conclude $f(S_{A_k}) \leq f(H' \cup S_{A_k})$. Combined with , this implies $$f(H') \geq f(H)\enspace,$$ and the result now follows by the induction hypothesis, which implies $f(S^*)>f(H')$. Disproving the existence of $(m,m-1)$-systems {#sec:noBadSetSystem} ============================================= In this section we prove Theorem \[thm:noMM-1Sys\], i.e., that no $(m,m-1)$-system exists for $m$ being a prime power. To this end, we present a variety of techniques to transform set systems into more structured ones. Using those transformations, we show that any $(m,m-1)$-system, for $m=p^{\alpha}$ being a prime power, could be transformed into a $(p,1)$-system $\mathcal{H}$, on a possibly different ground set, such that each set $H\in \mathcal{H}$ is in the same congruence class with respect to $\bmod\ p$, i.e., there is an $r\in \{0,\ldots, p-1\}$ with $|H| \equiv r \pmod{p}$ for all $H\in \mathcal{H}$. Such systems can quite easily be seen not to exist, which is shown by the next lemma. Notice that the lemma does not depend on $p$ being a prime or prime power; only the transformations we introduce later depend on this. \[lem:structuredM1SystemNotPossible\] Let $N$ be a finite set. There is no non-empty intersection-closed set system $\mathcal{H}\subseteq 2^N$, and integers $p\in \mathbb{Z}_{>0}, r\in \{0,\ldots, p-1\}$ such that 1. \[item:impSystemCardN\] $|N|\not\equiv r \pmod{p}$, 2. \[item:impSystemCardH\] $|H|\equiv r \pmod{p} \;\;\forall H\in \mathcal{H}$, and 3. \[item:impSystemCovering\] for any $e\in N$, there is a set $H\in\mathcal{H}$ with $e\in H$, i.e., $N=\bigcup_{H \in \mathcal{H}}H$. Assume for the sake of contradiction that there exists a set system $\mathcal{H}\subseteq 2^N$ with the properties stated in the lemma. We first observe that we can assume without loss of generality that $r=0$. 
Indeed, by introducing $p-r$ new elements that we add to $N$ and every set in $\mathcal{H}$, a new set system is obtained that fulfills the properties of the lemma with $r=0$. As $N=\bigcup_{H\in \mathcal{H}}H$, we can compute $|N|$ by the inclusion-exclusion principle: $$\begin{aligned} |N| = \sum_{k=1}^{|\mathcal{H}|} (-1)^{k+1} \sum_{\substack{\mathcal{F}\subseteq \mathcal{H}\\ |\mathcal{F}|=k}} \left\vert \bigcap_{F\in \mathcal{F}} F\right\vert\enspace.\end{aligned}$$ However, when considering the above equation modulo $p$, a contradiction arises because each set $\bigcap_{F\in \mathcal{F}} F$ on the right-hand side is contained in $\mathcal{H}$, as $\mathcal{H}$ is intersection-closed, and thus $|\bigcap_{F\in \mathcal{F}} F|\equiv 0 \pmod{p}$; this implies that the right-hand side is $0 \bmod{p}$, which contradicts $|N|\not\equiv 0\pmod{p}$. To illustrate some of our techniques, we first present a transformation of sets systems that proves Theorem \[thm:noMM-1Sys\] for $m$ being a prime. Later, in Section \[subsec:setTransformations\], we introduce a general framework of set transformations, which we can use, as we will show in Section \[subsec:proofOfThmNoMM-1Sys\], to handle prime powers. Moreover, the versatility of these set transformations also allows us to extend our results to , which we show in Section \[sec:GCCSM\]. Set transformations and nonexistence of $(m,m-1)$-systems for $m$ prime {#subsec:mPrime} ----------------------------------------------------------------------- To prove that no $(m,m-1)$-system exists for $m$ prime, assume for the sake of contradiction that there is an $(m,m-1)$-system $\mathcal{H}\subseteq 2^N$. Notice that without loss of generality we can assume that $|N|\equiv 0 \pmod{m}$, and consequently $|H|\not\equiv 0 \pmod{m}$ for $H\in \mathcal{H}$. Indeed, if $|N|\equiv r \pmod{m}$, then we can construct a new set system by introducing $m-r$ new elements which get added to the ground set $N$ and also to every set in $\mathcal{H}$. One can easily observe that this leads to another $(m,m-1)$-system with $|N|\equiv 0 \pmod{m}$. Our goal is now to transform $\mathcal{H}$ into a new set system, on a different ground set $W$, such that the cardinality of each set changes in a well-defined way. More precisely, we want that a set of cardinality $x$ gets transformed into a set of cardinality $g(x)=x^{m-1}$. For $m$ being prime, Fermat’s Little Theorem implies $x^{m-1} \equiv 1 \pmod{m}$ for any $x\not\equiv 0 \pmod{m}$. Hence, such a transformation would have the desired effect that any set $H\in \mathcal{H}$ will be transformed to a set in the same congruence class; moreover, the cardinality of the image of the ground set would remain $0 \pmod{m}$. Furthermore, for the resulting system to be an $(m,1)$-system, we need two additional properties: First, each element of the new ground set needs to be contained in at least one transformed set, and additionally, the transformed system needs to retain the property of being intersection-closed. We now describe how a set transformation $G\colon2^N \rightarrow 2^W$ with the properties described above can be obtained. The new ground set is $$W = N^{m-1} \coloneqq \underbrace{N\times N \times \ldots \times N}_{\text{$m-1$ times}}\enspace.$$ Moreover, a set $S\subseteq N$ gets transformed into the set $$G(S) = \{(e_1,\ldots, e_{m-1}) \mid e_1,\ldots, e_{m-1}\in S\}\subseteq W\enspace.$$ Clearly, the cardinality of the transformed set $G(S)$ is $|G(S)| = |S|^{m-1}$. 
Hence, the change of cardinalities is indeed described by the function $g(x) = x^{m-1}$, as desired. Consequently, if we look at the transformed set system $G(\mathcal{H})\coloneqq\{G(H)\mid H\in\mathcal{H}\}\subseteq 2^W$ on the new ground set $W$, we have

1.  $|W| = g(|N|) = |N|^{m-1} \equiv 0 \pmod{m}$, because $|N|\equiv 0 \pmod{m}$, and

2.  $|G(H)|\! = \! g(|H|)\! =\! |H|^{m-1}\! \equiv\! 1 \pmod{m} \;\forall H\in \mathcal{H}$, by Fermat’s Little Theorem and $|H|\not\equiv 0 \pmod{m}$.

Moreover, $G(\mathcal{H})$ is indeed intersection-closed because the definition of $G$ implies $$G(S\cap T) = G(S) \cap G(T) \quad \forall S,T\subseteq N\enspace.$$ Finally, $G(\mathcal{H})$ is an $(m,1)$-system, as each element $(e_1, \ldots, e_{m-1})\in W$ is covered by a set in $G(\mathcal{H})$ due to the following. Because $\mathcal{H}$ is an $(m,m-1)$-system, there is a set $H\in \mathcal{H}$ such that $\{e_1,\ldots, e_{m-1}\}\subseteq H$, and hence $(e_1, \ldots, e_{m-1})\in G(H)$.

Hence, $G(\mathcal{H})$ is an $(m,1)$-system with all sets in $G(\mathcal{H})$ being in the same congruence class $\bmod\ m$, which, by Lemma \[lem:structuredM1SystemNotPossible\], does not exist and thus leads to the desired contradiction. This disproves the existence of $(m,m-1)$-systems for $m$ being a prime, and implies via Theorem \[thm:EnumDGoodIfNoBadSys\] that our enumeration procedure works for prime moduli.

For $m$ being a prime, the algorithm with depth $d=m-1$ returns an optimal solution to (CCSM) with modulus $m$.

Whereas the above product space transformation was enough to deal with prime moduli and allowed for highlighting several important ideas, we need more involved transformations to deal with prime powers and (GCCSM). In the next section, we therefore formalize and discuss in more generality a large class of cardinality transformations $g$ that can be achieved, and how they can be combined.

A general framework based on set transformations {#subsec:setTransformations}
------------------------------------------------

We start by formalizing the idea of a transformation that changes the cardinality of a set $S$ in a well-defined way by some function $g$ and also maintains the intersection-closed property, analogous to the transformation described in Section \[subsec:mPrime\]. Moreover, the notion of the *level of $g$*, which we also define below, allows us to give a simple condition to guarantee that elements in the new ground set remain covered by transformed sets.

\[def:cardTrans\] A map $g\colon\mathbb{Z}_{\geq 0} \rightarrow \mathbb{Z}_{\geq 0}$ is a *cardinality transformation function* if for every finite set $N$, there is a finite set $W$ and a map $G\colon2^N \rightarrow 2^W$ such that

1.  $G(N) = W$,

2.  $|G(S)| = g(|S|) \;\;\forall S\subseteq N$, and

3.  \[item:ctfIntClosed\] $G(S)\cap G(T) = G(S\cap T) \;\;\forall S,T\subseteq N$.

Moreover, for $\ell\in \mathbb{Z}_{\geq 0}$, we say that $g$ is of *level $\ell$* if $G$ can be chosen such that for every $w\in W$, there exists a set $S\subseteq N$ with $|S|\leq \ell$ such that $w\in G(S)$. In this case we call $G$ a set transformation of level $\ell$. We call $G$ a *$g$-realizing set transformation* for the ground set $N$. Conversely, $g$ is called the *cardinality transformation function corresponding to $G$*.

Notice that property \[item:ctfIntClosed\] implies that for any intersection-closed family $\mathcal{H}\subseteq 2^N$, its image $G(\mathcal{H})$ is intersection-closed as well.
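As a concrete illustration of Definition \[def:cardTrans\], the following self-contained Python check (ours, not part of the formal development) instantiates the product-space transformation from Section \[subsec:mPrime\] realizing $g(x)=x^{m-1}$ on a small ground set, and verifies the three defining properties, the level bound, and the congruence provided by Fermat's Little Theorem.

```python
from itertools import combinations, product

m = 3                      # a prime modulus for this small sanity check
k = m - 1                  # g(x) = x**k, realized by the product space

def G(S):
    # product-space transformation: G(S) = S x ... x S  (k times)
    return set(product(sorted(S), repeat=k))

N = set(range(6))
W = G(N)                                                     # G(N) = W
subsets = [set(S) for r in range(len(N) + 1) for S in combinations(N, r)]

assert all(len(G(S)) == len(S) ** k for S in subsets)        # |G(S)| = g(|S|)
assert all(G(S) & G(T) == G(S & T)                           # intersection
           for S in subsets for T in subsets)                # compatibility
assert all(w in G(set(w)) for w in W)                        # level m-1
assert all(len(G(S)) % m == 1                                # Fermat's Little
           for S in subsets if len(S) % m != 0)              # Theorem
print("all checks passed")
```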
Furthermore, a set transformation function is always monotone, i.e., $G(S) \subseteq G(T)$ for $S\subseteq T \subseteq N$. This follows from $G(S) = G(S\cap T) = G(S) \cap G(T) \subseteq G(T)$ for any $S\subseteq T$. Moreover, we recall that we want to find a set transformation $G$ that would transform an $(m,m-1)$ system, for $m$ being a prime power, to a system with the properties stated in Lemma \[lem:structuredM1SystemNotPossible\], which leads to a contradiction by the same lemma. Hence, we want to find a set transformation $G$ such that the transformed set system still covers the ground set. For this, observe that by applying a set transformation of level $\ell$ to any set system $\mathcal{H}\subseteq 2^N$ satisfying that for any $U\subseteq N$ with $|U|\leq \ell$, there is a set $S\in \mathcal{H}$ such that $U\subseteq S$, a new set system that covers the whole ground set is obtained. To better quantify this property in a way that allows us later to combine several set transformations, we introduce the notion of a $k$-covering set system. For $k\in \mathbb{Z}_{\geq 1}$, a set family $\mathcal{H}\subseteq 2^N$ is *$k$-covering* if, for any $U\subseteq N$ with $|U|\leq k$, there exists a set $S\in \mathcal{H}$ such that $U\subseteq S$. Hence, any $(m,d)$-system (and also any $(\mathcal{F},d)$-system) is a $d$-covering set system by definition. Moreover, a $1$-covering set system is a system covering the whole ground set. The following observation highlights how the coverage of a set system changes through set transformations of a certain level. \[lem:transCoverage\] Let $\mathcal{H}\subseteq 2^N$ be a $k$-covering set system, and let $G$ be a set transformation of level $\ell\in \mathbb{Z}_{\geq 1}$. Then $G(\mathcal{H})$ is a $\left\lfloor \frac{k}{\ell}\right\rfloor$-covering system. Let $W=G(N)$ be the ground set of the transformed set system $G(\mathcal{H})$, and let $U\subseteq W$ with $|U|\leq \lfloor \frac{k}{\ell}\rfloor$. We have to show that there is a set $Y\in G(\mathcal{H})$ with $U\subseteq Y$. Because $G$ is of level $\ell$, for each element $u\in U$ there is a set $S_u\subseteq N$ with $|S_u|\leq \ell$ and $u\in G(S_u)$. Notice that $S\coloneqq \bigcup_{u\in U} S_u$ has thus size at most $|S| \leq \ell |U| \leq k$. Because $\mathcal{H}$ is $k$-covering, there exists $X\in \mathcal{H}$ with $S\subseteq X$. We finish the proof by showing that for $Y=G(X)$ we indeed have $U \subseteq Y$, which holds because we have that for all $u\in U$, $$G(X) \supseteq G(S) \supseteq G(S_u) \ni u\enspace,$$ where we use monotonicity of $G$ on the sets $X\supseteq S \supseteq S_u$, and the fact that $u\in G(S_u)$. In summary, the following provides a sufficient condition to disprove the existence of an $(m,m-1)$-system. Note that in the following statement, the number $p$ is not required to be prime. \[thm:targetCTF\] Let $m\in \mathbb{Z}_{\geq 1}$ and $d\in \mathbb{Z}_{\geq 1}$. There does not exist an $(m,d)$-system if there exists an integer $p\in \mathbb{Z}_{\geq 1}$ and a cardinality transformation function $g$ of level $d$ such that for $x\in \mathbb{Z}_{\geq 0}$: $$\label{eq:goodCTF} g(x) \equiv \begin{cases} 0 \pmod{p} & \text{if } x\equiv 0 \pmod{m},\\ 1 \pmod{p} & \text{if } x\not\equiv 0 \pmod{m}. \end{cases}$$ With the goal of deriving a contradiction, assume that there exists both a cardinality transformation function as stated in the theorem and an $(m,d)$-system $\mathcal{H}\subseteq 2^N$ on some finite ground set $N$. 
Let $r\in \{0,\ldots, m-1\}$ be such that $|N| \equiv r \pmod{m}$. Notice that, as in the proof of the above statement for prime numbers $m$ in Section \[subsec:mPrime\], we can assume $r=0$, because if $r\neq 0$, then we can add $m-r$ new elements to $N$ and each set in $\mathcal{H}$, thus obtaining an $(m,d)$-system on a larger ground set with $r=0$. Hence, assume $r=0$. The theorem now follows by observing that $G(\mathcal{H})$, where $G$ is a $g$-realizing set transformation for $N$ of level $d$, is a set system fulfilling the conditions of Lemma \[lem:structuredM1SystemNotPossible\] with $r=1$, which is impossible by the same lemma. Notice that the fact that $G(\mathcal{H})$ covers the whole transformed ground set $G(N)$ follows from Lemma \[lem:transCoverage\], as any $(m,d)$-system is by definition $d$-covering, and $G$ is of level $d$.

The following two lemmas present a large class of cardinality transformation functions with low level. In Section \[subsec:proofOfThmNoMM-1Sys\], we will see that this class is rich enough to disprove the existence of $(m,m-1)$-systems for $m$ being a prime power via Theorem \[thm:targetCTF\].

\[lem:basicCTF\] The following cardinality transformation functions $g\colon\mathbb{Z}_{\geq 0} \rightarrow \mathbb{Z}_{\geq 0}$ exist for every $k\in \mathbb{Z}_{\geq 1}$:

1.  \[item:basicCTFConst\] $g(x) = k$ of level $0$,

2.  \[item:basicCTFMonomial\] $g(x) = x^{k}$ of level $k$, and

3.  \[item:basicCTFBinomial\] $g(x) = \binom{x}{k}$ of level $k$.[^5]

Throughout this proof, let $N$ be an arbitrary finite ground set. We have to show that there is a $g$-realizing set transformation $G$ for $N$ of the claimed level.

1.  Let $W$ be a set of cardinality $k$, and define $G(S) = W$ for every $S\subseteq N$. One can easily verify that $G$ is a $g$-realizing set transformation for $g(x) =k$; moreover, it has level $0$ since $G(\emptyset) = W$.

2.  The existence of such a cardinality transformation function was shown in our example in Section \[subsec:mPrime\], where $k$ corresponds to $m-1$.

3.  For a finite set $A$ and $a \geq 1$, we denote by $\binom{A}{a}$ the family of all subsets of $A$ of cardinality $a$. The transformed ground set $W=G(N)$ is set to be $W = \binom{N}{k}$, and we define $G\colon2^N \rightarrow 2^W$ to be $G(S) = \binom{S}{k}$, i.e., the family of all subsets of $S$ of size $k$, also called $k$-subsets of $S$. This $G$ clearly fulfills $G(N)=W$ and $|G(S)| = g(|S|)$ for all $S\subseteq N$. Moreover, for $S,T\subseteq N$, we have $$G(S) \cap G(T) = \{\text{all $k$-subsets of $S$}\} \cap \{\text{all $k$-subsets of $T$}\} = \{\text{all $k$-subsets of $S\cap T$}\} = G(S\cap T)\enspace.$$ Finally, the level of $G$ is indeed $k$, because any $w\in W$ corresponds to a $k$-subset of $N$, i.e., $w=\{e_1,\ldots, e_k\}\subseteq N$, and we have $G(\{e_1,\ldots, e_k\}) = \{w\}$.

\[lem:addCTF\] Let $g_1, g_2 \colon \mathbb{Z}_{\geq 0} \rightarrow \mathbb{Z}_{\geq 0}$ be two cardinality transformation functions of level $\ell_1$ and $\ell_2$, respectively. Then $g_1 + g_2$ is a cardinality transformation function of level $\max\{\ell_1,\ell_2\}$.

Let $N$ be a finite ground set, and let $G_i\colon2^N \rightarrow 2^{W_i}$ for $i\in \{1,2\}$ be a $g_i$-realizing set transformation of level $\ell_i$. Moreover, we choose the sets $W_1=G_1(N)$ and $W_2=G_2(N)$ to be disjoint.
We claim that $G\colon2^N \rightarrow 2^{W_1\cup W_2}$ defined by $G(S) = G_1(S) \cup G_2(S)$ is a $(g_1+g_2)$-realizing set transformation of level $\max\{\ell_1,\ell_2\}$ as desired. Indeed, $G(N) = G_1(N) \cup G_2(N) = W_1\cup W_2$. Moreover, for any $S\subseteq N$, $$\begin{aligned} |G(S)|&=|G_1(S) \cup G_2(S)| = |G_1(S)| + |G_2(S)| = g_1(|S|) + g_2(|S|) = (g_1+g_2)(|S|)\enspace,\end{aligned}$$ where the second equality follows from $G_i(S) \subseteq W_i$ for $i\in \{1,2\}$ and $W_1$ and $W_2$ were chosen to be disjoint. Furthermore, for any $S,T\subseteq N$, we have $$\begin{aligned} G(S) \cap G(T) &= (G_1(S) \cup G_2(S)) \cap (G_1(T) \cup G_2(T)) = (G_1(S) \cap G_1(T)) \cup (G_2(S) \cap G_2(T))\\ &= G_1(S\cap T) \cup G_2(S\cap T) = G(S\cap T)\enspace,\end{aligned}$$ again exploiting disjointness of images with respect to $G_1$ and $G_2$, and the fact that $G_1$ and $G_2$ fulfill the intersection property of set transformations. Finally, $G$ is indeed of level $\max\{\ell_1,\ell_2\}$, because for any element $w\in W_1\cup W_2$ there is an $i\in \{1,2\}$ such that $w\in W_i$, and due to the fact that $G_i$ is of level $\ell_i$, there exists a set $S\subseteq N$ with $|S|\leq \ell_i$ and $w\in G_i(S) \subseteq G(S)$. By combining Lemma \[lem:basicCTF\] and Lemma \[lem:addCTF\], we obtain the following. \[cor:polyBinomCTF\] For any $k\in \mathbb{Z}_{\geq 1}$ and $a$, $b_1, \ldots, b_k$, $c_1,\ldots, c_k \in \mathbb{Z}_{\geq 0}$, the function $$g(x) = a + \sum_{i=1}^k b_i x^i + \sum_{i=1}^k c_i \binom{x}{i}$$ is a cardinality transformation function of level $k$. Theorem \[thm:targetCTF\] together with the existence of a rich set of cardinality transformation functions of low level, as stated by the above corollary, lead to a general approach to disprove the existence of $(m,d)$-systems in a concise way. Moreover, the approach can be adjusted to further settings, as we will see in Section \[sec:GCCSM\], when talking about . In particular, the proof of why no $(m,m-1)$-system exists for $m$ being prime can now be rephrased as follows in a concise way. By Corollary \[cor:polyBinomCTF\] (or even just by Lemma \[lem:basicCTF\]) the function $g(x)=x^{m-1}$ is a cardinality transformation function of level $m-1$; moreover, it has property  stated in Theorem \[thm:targetCTF\] for $p=m$, due to Fermat’s Little Theorem. Thus, by Theorem \[thm:targetCTF\], an $(m,m-1)$-system, for $m$ prime, does not exist, which, by Theorem \[thm:EnumDGoodIfNoBadSys\], implies that returns an optimal solution to any  problem with modulus $m$, as desired. Proof of Theorem \[thm:noMM-1Sys\] {#subsec:proofOfThmNoMM-1Sys} ---------------------------------- To disprove the existence of an $(m,m-1)$-system for $m=p^{\alpha}$ being a prime power, we consider the following cardinality transformation function of level $m-1$, whose existence is guaranteed by Corollary \[cor:polyBinomCTF\]: $$\label{eq:gForPrimePowers} g(x) = \sum_{\substack{1\leq k < m,\\ k \text{ odd}}} \binom{x}{k} + (p-1)\sum_{\substack{1\leq k < m,\\ k \text{ even}}} \binom{x}{k}\enspace.$$ To show in Lemma \[lem:gForPrimePowers\] that $g$ fulfils the conditions of Theorem \[thm:targetCTF\], we use the following relation for binomial coefficients over a field $\mathbb{F}_p$ for $p$ prime, which follows from elementary techniques. \[lem:binomMod\] Let $p$ be a prime, and let $a,b\in \mathbb{Z}_{\geq 0}$ and $\alpha \in \mathbb{Z}_{\geq 1}$ with $b< p^{\alpha}$. 
Then, it holds that $$\binom{a}{b} \equiv \binom{a \bmod p^{\alpha}}{b} \pmod{p}\enspace.$$

As a first step, we show that if $a - p^\alpha\geq 0$, then $\binom{a}{b}\equiv \binom{a-p^\alpha}{b}\pmod{p}$. Using Vandermonde’s identity, we obtain $$\binom{a}{b} = \binom{(a-p^\alpha)+p^\alpha}{b} = \sum_{k=0}^b \binom{a-p^\alpha}{b-k}\binom{p^\alpha}{k}\enspace.$$ Note that $p^\alpha>k\geq 1$ implies that $\binom{p^\alpha}{k} = \frac{p^\alpha}{k}\binom{p^\alpha-1}{k-1}$ is divisible by $p$. As $b<p^\alpha$, we see that after reducing the above equation $\bmod\ p$, the only possibly non-zero summand is $\binom{a-p^\alpha}{b}$, as desired. Iteratively applying $\binom{a}{b}\equiv \binom{a-p^\alpha}{b}\pmod{p}$, we immediately obtain that for all $\ell\in\mathbb{Z}_{\geq0}$ satisfying $a-\ell p^\alpha \geq 0$, we have $$\binom{a}{b} \equiv \binom{a-\ell p^\alpha}{b} \pmod{p}\enspace.$$ Choosing $\ell = \left\lfloor\frac{a}{p^\alpha}\right\rfloor$, we get that $a-\ell p^\alpha\in\{0,\ldots,p^\alpha-1\}$ is the residue of $a$ modulo $p^\alpha$, and the result follows.

\[lem:gForPrimePowers\] Let $m=p^{\alpha}$ be a prime power. Then, the function $g$ defined by \[eq:gForPrimePowers\] fulfills property \[eq:goodCTF\], i.e., for $x\in \mathbb{Z}_{\geq 0}$ we have $$g(x) \equiv \begin{cases} 0 \pmod{p} & \text{if } x\equiv 0 \pmod{m},\\ 1 \pmod{p} & \text{if } x\not\equiv 0 \pmod{m}. \end{cases}$$

Let $x\in \mathbb{Z}_{\geq 0}$. Because every binomial coefficient appearing in $g$ has lower index less than $m=p^{\alpha}$, Lemma \[lem:binomMod\] yields $g(x) \equiv g(x\bmod m) \pmod{p}$. Hence, we can assume $x\in \{0,\ldots, m-1\}$. For $x=0$ we clearly have $g(x) = 0$. Thus, assume $x\in \{1,\ldots, m-1\}$, and it remains to show $g(x)\equiv 1 \pmod{p}$, which holds due to $$\begin{aligned} g(x) &= \sum_{\substack{1\leq k < m,\\ k \text{ odd}}} \binom{x}{k} + (p-1)\sum_{\substack{1\leq k < m,\\ k \text{ even}}} \binom{x}{k}\\ &\equiv \sum_{\substack{1\leq k < m,\\ k \text{ odd}}} \binom{x}{k} - \sum_{\substack{1\leq k < m,\\ k \text{ even}}} \binom{x}{k}&\pmod{p} \\ &= 1-\sum_{k=0}^{m-1} (-1)^{k} \binom{x}{k} \\ &= 1-\sum_{k=0}^{x} (-1)^{k} \binom{x}{k} \\ &= 1-(1-1)^x = 1\enspace. \tag*{\qedhere}\end{aligned}$$

Combining the results of Lemma \[lem:gForPrimePowers\] and Theorem \[thm:targetCTF\], we obtain that for any prime power $m=p^\alpha$, there does not exist an $(m,m-1)$-system, which finishes the proof of Theorem \[thm:noMM-1Sys\].

Extension to (GCCSM) {#sec:GCCSM}
=====================

The methods for proving Theorem \[thm:mainGCCSM\], which states polynomial-time solvability of (GCCSM) for prime power moduli, closely follow those presented above for (CCSM). As before, we establish a link between failure of the algorithm and set systems with certain properties. While this link leads to $(m,d)$-systems for (CCSM) problems, we need the more general notion of $(m,k,d)$-systems for (GCCSM). For two vectors $x,y\in\mathbb{Z}^k$, we write $x\not\equiv y\pmod*{m}$ if there exists $i\in [k]$ with $x_i\not\equiv y_i\pmod*{m}$.

\[def:mkdSystem\] Let $N$ be a finite ground set, let $m,k,d\in \mathbb{Z}_{>0}$, and let $S_1,\ldots,S_k\subseteq N$. We say that a set system $\mathcal{H}\subseteq 2^N$ is an $(m,k,d)$-system (with respect to $(S_1,\ldots,S_k)$ on $N$) if

1.  \[item:MKDIntClosed\] $\mathcal{H}$ is closed under intersections,

2.  \[item:MKDDiffParity\] $(|H\cap S_1|,\ldots,|H\cap S_k|) \not\equiv (|S_1|,\ldots,|S_k|) \pmod{m} \quad\forall H\in \mathcal{H}$, and

3.  \[item:MKDCoverage\] for any $S\subseteq N$ with $|S|\leq d$, there is an $H\in \mathcal{H}$ with $S\subseteq H$.

Note that property \[item:MKDCoverage\] precisely states that every $(m,k,d)$-system is $d$-covering.
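To illustrate Definition \[def:mkdSystem\], the following short Python helper (our own illustrative sketch, exponential in $|N|$ and not used in any of the proofs) checks the three defining properties of an $(m,k,d)$-system for an explicitly given family of sets.

```python
from itertools import combinations

def is_mkd_system(H, N, S_list, m, d):
    """Check whether the family H of subsets of N is an (m,k,d)-system
    with respect to the sets in S_list (k = len(S_list)), following the
    definition above.  Purely illustrative and exponential in |N|."""
    H = [frozenset(A) for A in H]
    N = frozenset(N)
    S_list = [frozenset(S) & N for S in S_list]
    # (1) closed under intersections
    if any(A & B not in H for A in H for B in H):
        return False
    # (2) the residue vector of every member differs from that of N mod m
    target = tuple(len(S) % m for S in S_list)   # = (|S_1|,...,|S_k|) mod m
    if any(tuple(len(A & S) % m for S in S_list) == target for A in H):
        return False
    # (3) d-covering: every set of size at most d lies in some member of H
    return all(any(set(U) <= A for A in H)
               for size in range(d + 1)
               for U in combinations(sorted(N), size))
```

For $k=1$ and $S_1=N$, this specializes to a check for the $(m,d)$-systems used in the analysis of (CCSM).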
Using the tools developed in Section \[sec:reduceToSetSys\], we can immediately prove the following analogue of Theorem \[thm:EnumDGoodIfNoBadSys\], thus reducing the correctness of the algorithm for (GCCSM) to a combinatorial question about nonexistence of $(m,k,d)$-systems. As for $(m,d)$-systems, nonexistence of $(m,k,d)$-systems without explicit reference to a ground set is to be understood to hold for any ground set, i.e., for every finite ground set $N$, there does not exist an $(m,k,d)$-system on $N$.

Let $m,k,d\in\mathbb{Z}_{>0}$. If no $(m,k,d)$-system exists, then the algorithm with depth $d$ returns an optimal solution to any (GCCSM) problem with modulus $m$ and $k$ congruency constraints.

Consider a (GCCSM) problem $\min\{f(S) \mid S\in \mathcal{L}, \; |S\cap S_i| \equiv r_i \pmod*{m} \;\;\forall i\in [k]\}$ with $k$ congruency constraints. Consequently, the set family $\mathcal{F}$ over which we want to minimize the function $f$ is given by $$\mathcal{F} = \{ S\in \mathcal{L} \mid |S\cap S_i| \equiv r_i \pmod*{m} \;\;\forall i\in [k]\}\enspace.$$ By Theorem \[thm:EnumDGoodIfNoBadSysGen\], it is sufficient to see that no $(\mathcal{F},d)$-systems and no $(\operatorname{comp}(\mathcal{F}),d)$-systems exist. To finish the proof, we show that each of these two types of systems is also an $(m,k,d)$-system. To this end, consider an $(\mathcal{F}, d)$-system $\mathcal{H}$, and let $Q=\bigcup_{H\in\mathcal{H}} H$. Without loss of generality, we may assume that $S_i\subseteq Q$ for all $i\in [k]$ (if not, we simply delete the elements in $S_i\setminus Q$ for all $i\in [k]$). By property \[item:FDQin\] of an $(\mathcal{F},d)$-system, we have $Q\in\mathcal{F}$, and hence $|S_i| = |Q\cap S_i| \equiv r_i \pmod{m} \;\forall i\in [k]$. Together with property \[item:FDSetsNotFeasible\] of $(\mathcal{F},d)$-systems, this implies property \[item:MKDDiffParity\] of an $(m,k,d)$-system. Moreover, properties \[item:FDIntClosed\] and \[item:FDCoverage\] of an $(\mathcal{F},d)$-system correspond to properties \[item:MKDIntClosed\] and \[item:MKDCoverage\] of an $(m,k,d)$-system. Moreover, for a $(\operatorname{comp}(\mathcal{F}),d)$-system, note that we have $$\begin{aligned} \operatorname{comp}(\mathcal{F}) &= \{ S\in\operatorname{comp}(\mathcal{L}) \mid |(N\setminus S)\cap S_i| \equiv r_i \pmod*{m} \;\;\forall i\in [k]\}\\ &= \{ S\in\operatorname{comp}(\mathcal{L}) \mid |S\cap S_i| \equiv |N\cap S_i|-r_i \pmod*{m} \;\;\forall i\in [k]\}\enspace,\end{aligned}$$ so $\operatorname{comp}(\mathcal{F})$ has the same form as $\mathcal{F}$, and the same argument shows that any $(\operatorname{comp}(\mathcal{F}),d)$-system is an $(m,k,d)$-system as well.

The previous theorem implies that in order to prove Theorem \[thm:mainGCCSM\], it remains to show that no $(m,k,d)$-systems exist for $m$ being a prime power and some $d = km + O(1)$. This is the content of the following theorem.

\[thm:noMKKM-1Sys\] For $m\in\mathbb{Z}_{>0}$ being a prime power, there is no $(m,k,k(m-1))$-system.

The idea for proving Theorem \[thm:noMKKM-1Sys\] is to assume existence of an $(m,k,k(m-1))$-system and apply set system transformations to obtain more structured systems. More precisely, our proof involves two transformations. First, we apply a transformation very similar to the one given in \[eq:gForPrimePowers\] that we used in the proof of Theorem \[thm:noMM-1Sys\]. Through this transformation, we obtain a well-structured $(m,k,k)$-system in which the vectors $(|H\cap S_1|,\ldots,|H\cap S_k|)$ take only very restricted values $\bmod\ p$, where $p$ is the prime such that $m=p^{\alpha}$ for some $\alpha\in \mathbb{Z}_{\geq 1}$.
In a second step, we show that the previously obtained system can in turn be transformed to a system contradicting Lemma \[lem:structuredM1SystemNotPossible\]. This step requires a more general type of transformation functions than the ones seen before, which we introduce in the next section, before finally proving Theorem \[thm:noMKKM-1Sys\]. Set transformations for the generalized setting ----------------------------------------------- Generalized cardinality transformation functions are very similar to the cardinality transformation functions seen earlier in Definition \[def:cardTrans\]. Here, the cardinality $|G(S)|$ of a transformed set $S\subseteq N$ depends on the sizes of $|S\cap S_i|$ for $i\in [k]$, instead of just the size of $S$. Formally, the definition is as follows. \[def:genCardTrans\] A map $g\colon\mathbb{Z}^k_{\geq 0} \rightarrow \mathbb{Z}_{\geq 0}$ is a *generalized cardinality transformation function* if for every finite set $N$ and all sets $S_1,\ldots,S_k\subseteq N$, there is a finite set $W$ and a map $G\colon2^N \rightarrow 2^W$ such that 1. $G(N) = W$, 2. \[item:gctfCardinalities\] $|G(S)| = g(|S\cap S_1|,\ldots,|S\cap S_k|) \;\;\forall S\subseteq N$, and 3. \[item:gctfIntClosed\] $G(S)\cap G(T) = G(S\cap T) \;\;\forall S,T\subseteq N$. Moreover, for $\ell\in \mathbb{Z}_{\geq 1}$, we say that $g$ is of *level $\ell$* if $G$ can be chosen such that for every $w\in W$, there exists a set $S\subseteq N$ with $|S|\leq \ell$ such that $w\in G(S)$. In this case we call $G$ a set transformation of level $\ell$. We call $G$ a *$g$-realizing set transformation* for the ground set $N$ and the sets $S_1,\ldots,S_k$. Conversely, $g$ is called the *cardinality transformation function corresponding to $G$*. As pointed out before, the only difference to cardinality transformation functions as introduced in Definition \[def:cardTrans\] is property \[item:gctfCardinalities\]. For this reason, the properties that we proved for cardinality transformation functions also hold true for generalized cardinality transformation functions. In particular, if $G$ is a set transformation function of level $\ell$ realizing a generalized cardinality transformation function, and $\mathcal{F}$ is a set system, we have the following. If $\mathcal{F}$ is intersection-closed, then so is $G(\mathcal{F})$ (this follows from property \[item:gctfIntClosed\] above), and if $\mathcal{F}$ is $k$-covering, then $G(\mathcal{F})$ is $\lfloor\frac{k}{\ell}\rfloor$-covering (analogous to Lemma \[lem:transCoverage\]). Moreover, $G$ is a monotone function. There are various ways to construct generalized cardinality transformation functions, but we restrict our attention to the precise function that we need for our proofs. \[lem:productGCTF\] For every $k\in\mathbb{Z}_{\geq 1}$, the function $g(x_1,\ldots,x_k) = x_1x_2\cdots x_k$ is a generalized cardinality transformation function of level $k$. Let $N$ be a finite set and let $S_1,\ldots,S_k\subseteq N$. Let $W=S_1\times\ldots\times S_k$ and define $G\colon 2^N\rightarrow 2^W$ by $$G(S) = (S\cap S_1) \times \ldots \times (S\cap S_k)$$ for all $S\subseteq N$. We claim that $G$ is a $g$-realizing set transformation function. Indeed, it is easy to see that $G(N)=W$ by definition. 
Moreover, we have $$\begin{aligned} |G(S)|=|(S\cap S_1) \times \ldots \times (S\cap S_k)| = |S\cap S_1| \cdot\ldots\cdot |S\cap S_k| = g(|S\cap S_1|, \ldots, |S\cap S_k|)\enspace.\end{aligned}$$ To see that $G$ also fulfills property \[item:gctfIntClosed\] in Definition \[def:genCardTrans\], note that for all sets $S,T\subseteq N$, having $e\in(S\cap T\cap S_1) \times \ldots \times (S\cap T\cap S_k)$ is equivalent to having $e\in(S\cap S_1) \times \ldots \times (S\cap S_k)$ and $e\in(T\cap S_1) \times \ldots \times (T\cap S_k)$. Hence, $G(S\cap T) = G(S)\cap G(T)$, as desired. To see that $g$ is of level $k$, note that every $w\in W$ is a sequence of elements $(s_1,\ldots,s_k)$ with $s_i\in S_i$ for $i\in [k]$. Let $S_w = \{s_1,\ldots,s_k\}$, then $w\in G(S_w)$ and $|S_w| \leq k$ (notice that we may have $|S_w|< k$, because some of $s_i$ may be identical). Thus $g$ is of level $k$. Disproving existence of $(m,k,k(m-1))$-systems ---------------------------------------------- As outlined above, the proof of Theorem \[thm:noMKKM-1Sys\], namely that there do not exist $(m,k,k(m-1))$-systems, has two steps. In a first step, we disprove the existence of a very structured version of an $(m,k,k)$-system. In a second step, we prove Theorem \[thm:noMKKM-1Sys\] by showing that any $(m,k,k(m-1))$-system for $m$ being a prime power can be reduced to this structured version of an $(m,k,k)$-system. \[lem:GCTFapplication\] Let $m,k,p\in\mathbb{Z}_{\geq 0}$, let $N$ be a finite set and let $S_1,\ldots,S_k\subseteq N$. There does not exist a non-empty $(m,k,k)$-system $\mathcal{H}$ with respect to $(S_1,\ldots,S_k)$ on $N$ such that 1. $|S_i|\equiv 1 \pmod{p} \;\;\forall i\in [k]$, and 2. $(|H\cap S_1|,\ldots,|H\cap S_k|)\in\{0,1\}^k\setminus\{(1,\ldots,1)\} \pmod{p} \;\;\forall H\in\mathcal{H}$. Fix $m,k\in\mathbb{Z}_{\geq 0}$, a finite set $N$ and $S_1,\ldots,S_k\subseteq N$, and assume with the goal of deriving a contradiction that the system $\mathcal{H}$ specified in Lemma \[lem:GCTFapplication\] exists. Consider the generalized cardinality transformation function $g(x_1,\ldots,x_k)=x_1\cdots x_k$, and let $G$ be a $g$-realizing set transformation of level $k$ for the ground set $N$ and the sets $S_1,\ldots,S_k$, whose existence is guaranteed by Lemma \[lem:productGCTF\]. The lemma now follows by observing that $G(\mathcal{H})$ is a set system that satisfies all conditions of Lemma \[lem:structuredM1SystemNotPossible\] with $r=0$. Indeed, we see that the new ground set $G(N)$ has cardinality $|G(N)|=g(|S_1|,\ldots,|S_k|)=|S_1|\cdot\ldots\cdot |S_k|\equiv1\pmod{p}$ by the first assumption, settling property \[item:impSystemCardN\] in the assumptions of Lemma \[lem:structuredM1SystemNotPossible\]. On the other hand, every set in $G(\mathcal{H})$ is of the form $G(H)$ for some $H\in\mathcal{H}$, and has cardinality $|G(H)|=g(|H\cap S_1|,\ldots,|H\cap S_k|)=|H\cap S_1|\cdot\ldots\cdot|H\cap S_k|\equiv 0\pmod{p}$ because, by the second assumption, at least one of the factors vanishes $\bmod\ {p}$. This proves that $G(\mathcal{H})$ has property \[item:impSystemCardH\] of Lemma \[lem:structuredM1SystemNotPossible\]. Property \[item:impSystemCovering\] follows from the fact that $\mathcal{H}$ is $k$-covering and $g$ is of level $k$, hence $G(\mathcal{H})$ is still $1$-covering. Moreover, as the image of a non-empty intersection-closed set system, $G(\mathcal{H})$ is non-empty and intersection-closed, as well. 
This shows that $G(\mathcal{H})$ fulfills all conditions of Lemma \[lem:structuredM1SystemNotPossible\]. Consequently, by the same lemma, we obtain the desired contradiction. We assume for the sake of contradiction that for some prime power $m=p^\alpha$, there exists an $(m,k,k(m-1))$-system $\mathcal{H}$ with respect to $(S_1,\ldots,S_k)$ on $N$ for some finite ground set $N$ and subsets $S_i\subseteq N$ for $i\in [k]$. If, for some $i\in [k]$, $|S_i|\not\equiv 0\pmod{m}$, let $r_i\in\{1,\ldots,m-1\}$ such that $|S_i|\equiv r_i \pmod{m}$. We introduce $m-r_i$ new elements and add them to $S_i$ and all sets in $\mathcal{H}$ to obtain a new $(m,k,k(m-1))$-system with $|S_i|\equiv 0\pmod{m}$. After doing so for all $i\in [k]$ with $|S_i|\not\equiv 0\pmod{m}$, we obtain a corresponding set system with $|S_i|\equiv 0\pmod{m}$ for all $i\in[k]$. Let $g$ be the cardinality transformation function of level $m-1$ defined in , and let $g'\colon \mathbb{Z}_{\geq0}\to\mathbb{Z}_{\geq0}$ be defined by $g'(x)=1+(p-1) g(x)$. By Lemma \[lem:addCTF\], $g'$ is a cardinality transformation function of level $m-1$. Moreover, by Lemma \[lem:gForPrimePowers\], we have $$\label{eq:g'mod} g'(x) \equiv 1-g(x) \equiv \begin{cases} 1 \pmod{p} & \text{if } x\equiv 0 \pmod{m},\\ 0 \pmod{p} & \text{if } x\not\equiv 0 \pmod{m}. \end{cases}$$ Let $G'$ be a $g'$-realizing set transformation function, and note that $G'(\mathcal{H})$ is an $(m,k,k)$-system with respect to $(G'(S_1),\ldots,G'(S_k))$ on $G'(N)$. To see this, we verify the properties in Definition \[def:mkdSystem\]. Note that $G'(\mathcal{H})$ is indeed closed under intersections because $\mathcal{H}$ is, and $G'$ preserves intersections. Furthermore, by , we have $$\label{eq:setsTo1} |G'(S_i)|\equiv g'(|S_i|)\equiv 1\pmod{p}$$ for all $i\in[k]$. Moreover, any set in $G'(\mathcal{H})$, which is of the form $G'(H)$ for some $H\in\mathcal{H}$, fulfills $$\label{eq:vecsNotTo1} \begin{aligned} (|G'(H)\cap G'(S_1)|,\ldots,|G'(H)\cap G'(S_k)|) &= (|G'(H\cap S_1)|,\ldots,|G'(H\cap S_k)|)\\ &= (g'(|H\cap S_1|),\ldots,g'(|H\cap S_k|))\\ &\not\equiv (1,\ldots,1)\enspace, &\pmod{p} \end{aligned}$$ which follows from and the assumption $(|H\cap S_1|,\ldots,|H\cap S_k|)\not\equiv (0,\ldots,0)\pmod{m}$. Together,  and  imply property \[item:MKDDiffParity\] in Definition \[def:mkdSystem\]. Finally, observe that $G'(\mathcal{H})$ is $k$-covering because $\mathcal{H}$ is $k(m-1)$-covering and $G'$ is of level $m-1$. Hence, $G'(\mathcal{H})$ is indeed an $(m,k,k)$-system. Together with  and , we see that $G'(\mathcal{H})$ fulfills all conditions of Lemma \[lem:GCTFapplication\], and hence we obtain the desired contradiction. Barriers for extensions beyond prime powers {#sec:barriers} =========================================== In this section, we reveal limits of our techniques by showing that they cannot extend beyond prime power moduli. This points to an interesting structural difference for $m$ being a prime power versus $m$ having at least two different prime factors, and opens up the question whether  or  may be substantially harder for $m$ not being a prime power. This may also shed further light on the complexity of ILPs with a constraint matrix containing subdeterminants that are not prime powers. 
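Before turning to the details of these barriers, the following toy sketch (illustrative only; the instance and all variable names are ours) instantiates the product transformation of Lemma \[lem:productGCTF\], which is at the heart of the reductions above, and checks by brute force the defining properties of Definition \[def:genCardTrans\] on a tiny ground set:

```python
from itertools import chain, combinations, product

def powerset(U):
    U = list(U)
    return chain.from_iterable(combinations(U, k) for k in range(len(U) + 1))

def G(S, groups):
    # product transformation of Lemma [lem:productGCTF]: G(S) = (S & S_1) x ... x (S & S_k)
    return set(product(*[tuple(set(S) & Si) for Si in groups]))

# toy instance: ground set N and k = 2 groups S_1, S_2
N = {1, 2, 3, 4}
groups = [{1, 2}, {2, 3, 4}]

W = G(N, groups)                                  # property 1: G(N) = W by construction
for S in map(set, powerset(N)):
    # property 2: |G(S)| = |S & S_1| * |S & S_2|
    assert len(G(S, groups)) == len(S & groups[0]) * len(S & groups[1])
    for T in map(set, powerset(N)):
        # property 3: G(S) & G(T) = G(S & T)
        assert G(S, groups) & G(T, groups) == G(S & T, groups)
# level k: every w in W is covered by the image of a set of size <= k
assert all(any(w in G(set(Sw), groups) for Sw in combinations(N, len(groups))) for w in W)
```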
When proving correctness of for and , a crucial step that requires the restriction to prime power moduli $m$ is Lemma \[lem:gForPrimePowers\], where we prove that a suitable transformation function $g\colon\mathbb{Z}_{\geq0}\to\mathbb{Z}_{\geq0}$ has the property $$\label{eq:crucialProperty} g(x) \equiv \begin{cases} 0 \pmod{p} & \text{if } x\equiv 0 \pmod{m},\\ 1 \pmod{p} & \text{if } x\not\equiv 0 \pmod{m} \end{cases}$$ for some prime number $p$. The following two theorems present strong implications that result from imposing the above condition with composite moduli $m$. \[thm:barriersSuperpolyBound\] Let $m,p,r\in\mathbb{Z}_{>0}$, where $p$ is prime, and $m$ is a composite number with $r$ different prime factors. There is a constant $c=c(m)>0$ such that for every cardinality transformation function $g\colon\mathbb{Z}_{\geq 0}\to\mathbb{Z}_{\geq 0}$ fulfilling  with respect to $p$ and $m$, there is a constant $\kappa\in\mathbb{Z}_{\geq 0}$ with the property that for all $n\in\mathbb{Z}$ with $n\geq\kappa$, $$\label{eq:superpolyUpperBoundCTF} g(n) \geq n^{c\cdot\left(\frac{\log n}{\log\log n}\right)^{r-1}} \enspace.$$ Notice that the cardinality transformation functions that we used, which are all of the form described by Corollary \[cor:polyBinomCTF\], have the property that for any constant level, they are polynomially bounded. Hence, the above theorem implies that for an extension beyond prime power moduli based on the cardinality transformation functions we introduced, we would need a superconstant level. As our algorithmic approach relies on Theorems \[thm:targetCTF\] and \[thm:EnumDGoodIfNoBadSys\], this would in turn imply that the corresponding enumeration procedure has superconstant depth, prohibiting our algorithm to be efficient. However, the above theorem does not exclude that there may be other cardinality transformation functions, not covered by Corollary \[cor:polyBinomCTF\], that have constant level and fulfill . The next theorem rules out this possibility. More precisely, the next theorem shows that no cardinality transformation function with property  exists even if the level is allowed to depend on $n$, i.e., the size of the ground set, and grows moderately in terms of $n$. To capture this setting in the following, we allow the level $\ell$ of a cardinality transformation function $g$ to be a function $\ell\colon\mathbb{Z}_{\geq 0} \rightarrow \mathbb{Z}_{\geq 0}$, with the semantics that on any ground set of cardinality $n$, there is a $g$-realizing set transformation of level $\ell(n)$. To emphasize this difference to our original definition of level, which did not depend on $n$, we will also talk about *generalized level*. \[thm:barriersLargeLevel\] Let $m,p,r\in\mathbb{Z}_{>0}$ such that $p$ is prime, and $m$ is a composite number with $r$ different prime factors. Every cardinality transformation function $g\colon\mathbb{Z}_{\geq 0}\to\mathbb{Z}_{>0}$ fulfilling  with respect to $p$ and $m$ has a generalized level $\ell$ that satisfies $$\label{eq:superconstUpperBoundLevel} \ell = \Omega\left( \left(\frac{\log n}{\log \log n}\right)^{r-1} \right)\enspace.$$ We highlight that in the above $\Omega$-notation, $m$ and $p$ are considered to be constant. The barriers highlighted by the above theorems originate from combinatorial results on set systems with restricted intersections. On the one hand, we have the following classical result by Frankl and Wilson. 
\[thm:FranklWilson\] Let $p$ be a prime number, let $s\in [p-1]$, and let $\mu_0,\ldots,\mu_s\in\{0,\ldots,p-1\}$ be distinct numbers. Let $\mathcal{H}$ be a set system on a ground set of $n$ elements such that for some $k\in \mathbb{Z}_{\geq 0}$ with $k\equiv \mu_0 \pmod{p}$,

1. $\mathcal{H}$ is a $k$-uniform set system, i.e., $|H|=k$ for all $H\in\mathcal{H}$, and

2. for all distinct $H_1,H_2\in\mathcal{H}$, we have $|H_1\cap H_2| \equiv \mu_i \pmod{p}$ for some $i\in[s]$.

Then, $|\mathcal{H}|\leq\binom{n}{s}$.

While the above theorem shows that restricting the cardinalities of intersections modulo a prime number reduces the size of the set system to a polynomial in the size of the ground set, the situation changes if the prime modulus is replaced by a composite number. This surprising fact was observed by Grolmusz, who proved the following theorem.

\[thm:Grolmusz\] Let $m\in\mathbb{Z}_{\geq 0}$ be a composite number with $r>1$ different prime divisors. Then, there is a constant $c_0=c_0(m)>0$ with the property that for every $n\in\mathbb{Z}_{> 0}$, there exists a set system $\mathcal{H}$ on a ground set of $n$ elements such that

1. $|\mathcal{H}|\geq n^{c_0\cdot\left(\frac{\log n}{\log\log n}\right)^{r-1}}$,

2. for all $H\in\mathcal{H}$, we have $|H|\equiv 0\pmod{m}$, and

3. for all distinct $H_1,H_2\in\mathcal{H}$, we have $|H_1\cap H_2| \not\equiv 0\pmod{m}$.

The value of the constant $c_0$ in the above theorem equals roughly $p_r^{-r}$, where $p_r$ is the largest prime divisor of $m$ [@grolmusz_2000], and the constant $c$ in Theorem \[thm:barriersSuperpolyBound\] depends on $c_0$. We actually show that $c<c_0$ is a feasible choice. The proofs of Theorems \[thm:barriersSuperpolyBound\] and \[thm:barriersLargeLevel\] follow a common idea. In both, we assume existence of the respective transformation functions, and then use these functions to transform a set system of the type given by Theorem \[thm:Grolmusz\] into a new set system. Adjusting the new set systems so that they fulfill the assumptions of Theorem \[thm:FranklWilson\] gives an upper bound on their size, and combining these bounds with the lower bound coming from Theorem \[thm:Grolmusz\] allows for deducing the results.

We show that for every composite number $m$, we can choose any constant $c<c_0$, where $c_0$ is the corresponding constant guaranteed by Theorem \[thm:Grolmusz\]. Let $m$ be a composite number, and let $g\colon\mathbb{Z}_{\geq0}\to\mathbb{Z}_{\geq0}$ be a cardinality transformation function fulfilling property  with respect to the prime number $p$ and $m$. For $n\in\mathbb{Z}_{>0}$, let $\mathcal{H}$ be a set system on a ground set of size $n$ fulfilling the properties listed in Theorem \[thm:Grolmusz\] with respect to the composite number $m$. The set system $\mathcal{H}$ is not necessarily a uniform set system, but it contains a large uniform subsystem. To see this, define $\mathcal{H}_i=\{H\in\mathcal{H} \mid |H|=i\}$ for $i\in[n]$ and let $\ell\in\argmax_{i\in[n]} |\mathcal{H}_i|$. Then $\mathcal{H}_\ell$ is an $\ell$-uniform set system with $|\mathcal{H}_\ell|\geq\frac{1}{n}|\mathcal{H}|$. Let $G$ be a $g$-realizing set transformation function and consider the set system $G(\mathcal{H}_\ell)$. We claim that $G(\mathcal{H}_\ell)$ is a set system on a ground set of size $g(n)$ that fulfills the assumptions of Theorem \[thm:FranklWilson\].
Obviously, $G(\mathcal{H}_\ell)$ is a uniform system, as for all $H\in\mathcal{H}_\ell$, we have $|H|=\ell$ and hence $|G(H)|=g(|H|)=g(\ell)$, so the set system is $g(\ell)$-uniform. Moreover, as $|H|\equiv0\pmod{m}$ by assumption, property  implies $g(\ell)=g(|H|)\equiv 0\pmod{p}$. Note that for any two distinct sets $H_1,H_2\in\mathcal{H}$, we have $$\label{eq:intersectionInImage} |G(H_1)\cap G(H_2)| = |G(H_1\cap H_2)| = g(|H_1\cap H_2|) \equiv 1 \pmod{p} \enspace,$$ where we used the assumption that $|H_1\cap H_2|\not\equiv 0 \pmod{m}$ for all distinct $H_1,H_2\in\mathcal{H}$, and property . Hence, $G(\mathcal{H}_\ell)$ fulfills the conditions of Theorem \[thm:FranklWilson\] with $s=1$, $\mu_0=0$, $\mu_1=1$, and $k=g(\ell)$. As $\mathcal{H}_\ell$ is a system on a ground set of size $n$, $G(\mathcal{H}_\ell)$ is one on a ground set of size $g(n)$. By Theorem \[thm:FranklWilson\], we thus obtain the upper bound $|G(\mathcal{H}_\ell)| \leq g(n)$.

Note that if in , $G(H_1)$ and $G(H_2)$ are not distinct, then $|G(H_1)\cap G(H_2)|=|G(H_1)|=g(|H_1|)\equiv 0\pmod{p}$. This contradicts , hence $H_1=H_2$ whenever $G(H_1)=G(H_2)$, so $G$ is injective when restricting its domain to $\mathcal{H}$. Injectivity of $G$ on $\mathcal{H}$ implies $$|G(\mathcal{H}_\ell)|=|\mathcal{H}_\ell| \geq \frac{|\mathcal{H}|}{n} \geq n^{c_0\cdot\left(\frac{\log n}{\log\log n}\right)^{r-1}-1}\enspace.$$ Combining the obtained upper and lower bounds on $|G(\mathcal{H}_\ell)|$, we get the inequality $$g(n) \geq n^{c_0\cdot\left(\frac{\log n}{\log\log n}\right)^{r-1}-1}\enspace.$$ From the above, it is easy to see that whenever $c<c_0$, there exists a constant $\kappa\in\mathbb{Z}_{\geq0}$ such that every $n\in\mathbb{Z}$ with $n\geq\kappa$ satisfies $$g(n) \geq n^{c\cdot\left(\frac{\log n}{\log\log n}\right)^{r-1}}\enspace.\qedhere$$

To present the proof of Theorem \[thm:barriersLargeLevel\], we introduce the concept of *atoms* of a set system. When applying a set system transformation of level $\ell$, we cannot directly bound the size of the ground set of the new set system. Nonetheless, we can show that the size of the new ground set can be reduced to a polynomial in the size of the ground set of the initial set system without losing the system’s structure. The key ingredient for this procedure is bounding the number of atoms in the transformed set system. This is formalized in Lemma \[lem:atoms\] and will be an important building block for the proof of Theorem \[thm:barriersLargeLevel\].

Let $\mathcal{H}$ be a set system on a finite ground set $N$. A non-empty set $A\subseteq N$ is an *atom* of $\mathcal{H}$ if it is a maximal set with the property that for all $H\in\mathcal{H}$, we have $A\subseteq H$ or $A\subseteq N\setminus H$. In particular, the above definition implies that two elements of the ground set $N$ are not in the same atom if and only if the set system $\mathcal{H}$ contains a set separating the two elements.

\[lem:atoms\] Let $g\colon\mathbb{Z}_{\geq0}\to\mathbb{Z}_{\geq0}$ be a cardinality transformation function of level $\ell\in\mathbb{Z}_{\geq 0}$. Let $N$ be a set of size $n$, and let $G$ be a $g$-realizing set transformation function for the ground set $N$. Then, $G(2^N)$ has at most $1+\ell n^\ell$ atoms.

Since $g$ is of level $\ell$, for every $w\in W$ there is a set $S_w \subseteq N$ with $w\in G(S_w)$ and $|S_w|\leq \ell$. Among all such sets, let $S_w$ be one that is inclusion-wise minimal. (Actually, one can observe that $S_w$ is unique; however, we do not need this later.)
Moreover, we denote by $A_w\subseteq G(N)$ the atom of $G(2^N)$ containing $w$. Because $|S_w|\leq \ell$ for all $w\in W$, the number of different sets $S_w$ can be bounded from above by the number of subsets of $N$ of size at most $\ell$, i.e., $$|\{ S_w \mid w\in W \}| \leq \sum_{i=0}^\ell \binom{n}{i} \leq 1+\ell n^\ell\enspace.$$ To finish the proof, we show that the map $A_w\mapsto S_w$ is an injection. If so, we get $|\{ A_w \mid w\in W \}| \leq |\{ S_w \mid w\in W \}|$, which, together with the above bound, proves the lemma. To see injectivity, let $w_1,w_2\in W$ with $A_{w_1}\neq A_{w_2}$, i.e., $w_1$ and $w_2$ are not in the same atom. Then, there is a set $G(S)\in G(2^N)$ separating the two elements, for some set $S\subseteq N$. Without loss of generality, assume that $w_1\in G(S)$, while $w_2\not\in G(S)$. On the one hand, this implies $w_1\in G(S)\cap G(S_{w_1})=G(S\cap S_{w_1})$, hence by minimality of $S_{w_1}$, we get $S\cap S_{w_1}=S_{w_1}$. On the other hand, we have $w_2\notin G(S)\cap G(S_{w_2})=G(S\cap S_{w_2})$, hence $S\cap S_{w_2}\subsetneq S_{w_2}$. This implies $S_{w_1}\neq S_{w_2}$, and hence injectivity of the map $A_w\mapsto S_w$, as desired.

Before we start the proof of Theorem \[thm:barriersLargeLevel\], we remark that both the statement and the proof of the previous lemma remain unchanged even if we allow for using the notion of generalized level, thus leading to an upper bound of $1+\ell(n) n^{\ell(n)}$ atoms.

Let $m$ be a composite number, and let $g\colon\mathbb{Z}_{\geq0}\to\mathbb{Z}_{\geq0}$ be a cardinality transformation function that fulfills the property  for some prime number $p$. Moreover, let $\ell\colon\mathbb{Z}_{\geq 0}\to\mathbb{Z}_{\geq 0}$ denote the (generalized) level of $g$. We know that for every $n\in\mathbb{Z}_{\geq 0}$, there exists a set system $\mathcal{H}$ on a ground set $N$ of size $n$ fulfilling the properties listed in Theorem \[thm:Grolmusz\] with respect to the composite number $m$. Let $G$ be a $g$-realizing set transformation function on $N$ and consider $G(\mathcal{H})$. Note that by property  and the assumptions on $\mathcal{H}$, every set $G(H) \in G(\mathcal{H})$ satisfies $|G(H)|\equiv 0\pmod{p}$, while for every two distinct sets $G(H_1),G(H_2)\in G(\mathcal{H})$, we have $|G(H_1)\cap G(H_2)|=|G(H_1\cap H_2)|=g(|H_1\cap H_2|)\equiv 1\pmod{p}$. As we are only interested in the size of sets in $G(\mathcal{H})$ and their intersections $\bmod\ p$, and because every such set is a disjoint union of atoms of $G(\mathcal{H})$, we can delete elements of the ground set in the following way without losing the observed properties. For every atom $A$ of $G(\mathcal{H})$, if $|A|\equiv a\pmod{p}$ with $a\in\{1,\ldots,p\}$, we can delete any $|A|-a$ elements of $A$ from the ground set $G(N)$, and update the sets in $G(\mathcal{H})$ correspondingly by removing the deleted elements from all sets containing them. By doing so, we thus obtain a new set system $\mathcal{I}$ with atoms of cardinality at most $p$. Note that none of the atoms were deleted completely, and thus distinct sets in $G(\mathcal{H})$ before the deletion of elements remain distinct after the deletion, i.e., in $\mathcal{I}$. Thus $$\label{eq:sameCardAfterDel} |\mathcal{I}| = |G(\mathcal{H})|\enspace.$$ In particular, the number of atoms in $\mathcal{I}$ equals the number of atoms in $G(\mathcal{H})$, which, by Lemma \[lem:atoms\], is bounded by $1+\ell n^\ell$.
Altogether, $\mathcal{I}$ is a set system on a ground set of size at most $p(1+\ell n^\ell)$ such that $|I|\equiv 0\pmod{p}$ for all $I\in\mathcal{I}$, and $|I_1\cap I_2|\equiv 1\pmod{p}$ for all distinct sets $I_1, I_2\in \mathcal{I}$. In order to apply Theorem \[thm:FranklWilson\], we need a large uniform subsystem of $\mathcal{I}$. Thereto, let $\mathcal{I}_i=\{I\in\mathcal{I} \mid |I|=i \}$ for $i\in[p(1+\ell n^\ell)]$ be all uniform subsystems, and let $k\in[p(1+\ell n^\ell)]$ be such that $\mathcal{I}_k$ is one of maximum cardinality. The $k$-uniform set system $\mathcal{I}_k$ satisfies the assumptions of Theorem \[thm:FranklWilson\] with $s=1$, $\mu_0=0$ and $\mu_1=1$, so by the same theorem, we get $$\label{eq:IkSmall} |\mathcal{I}_k|\leq p(1+\ell n^\ell)\enspace.$$ For a lower bound, first note that $|\mathcal{I}_{k}| \geq \frac{1}{p(1+\ell n^\ell)}\,|\mathcal{I}|$. Furthermore, as already observed in the proof of Theorem \[thm:barriersSuperpolyBound\], $G$ is injective over the domain $\mathcal{H}$, and thus, we have $|G(\mathcal{H})|=|\mathcal{H}|$. Putting this together and using the lower bound on the size of $\mathcal{H}$, we get $$|\mathcal{I}_{k}| \geq \frac{|\mathcal{I}|}{p(1+\ell n^\ell)} = \frac{|\mathcal{H}|}{p(1+\ell n^\ell)} \geq \frac{n^{c_0\cdot\left(\frac{\log n}{\log\log n}\right)^{r-1}}}{p(1+\ell n^\ell)} \enspace,$$ where the equality follows from  and $|G(\mathcal{H})|=|\mathcal{H}|$. Combining this with  and rearranging terms, we obtain $$1+\ell n^\ell \geq \frac{n^{\frac{c_0}{2}\left(\frac{\log n}{\log\log n}\right)^{r-1}}}{p}\enspace.$$ Note that when transforming a system on a ground set of size $n$, the level is always at most $n$, i.e., $n\geq \ell$. Using this and absorbing constants into the asymptotic notation, we obtain $$n^{\ell+1} \geq \ell n^\ell = n^{\Omega\left(\left(\frac{\log n}{\log\log n}\right)^{r-1}\right)}\enspace,$$ which implies the desired $\ell=\Omega\left(\left(\frac{\log n}{\log\log n}\right)^{r-1}\right)$.

Minimality of the enumeration depth $d$ {#sec:existenceMMm2Systems}
=======================================

In this section, we show that for any $m \in \mathbb{Z}_{>0}$, does in general not solve  with modulus $m$ correctly if $d < m-1$. This shows in particular that our choice $d=m-1$ of the depth of is the smallest depth for which successfully solves  for prime power moduli $m$. This also implies the existence of $(m,m-2)$-systems for $m\in \mathbb{Z}_{>0}$; this follows from Theorem \[thm:EnumDGoodIfNoBadSys\], but can also be seen directly from our construction.

We show that $d\geq m-1$ is necessary by constructing an explicit example where $d=m-2$ is not enough for to solve . Thereto, let $N=\{0,1,\ldots,n\}$ for some $n\in\mathbb{Z}$ with $n\geq m$. Consider the lattice $\mathcal{L}=2^N$ and define the modular (and thus also submodular) function $f\colon\mathcal{L}\to\mathbb{Z}$ by $$f(S) = \begin{cases} |S| & \text{if } 0\notin S,\\ |S|-1-m & \text{if } 0\in S \end{cases}$$ for all $S\subseteq N$. This function is indeed modular since it assigns weight $-m$ to the element $0$ and weight $1$ to all other elements of $N$, and the weight $f(S)$ of a subset $S\subseteq N$ is obtained by summing the weights of its elements. The problem that we consider is minimizing the function $f$ over the subfamily $$\mathcal{F} = \{ S\in\mathcal{L} \mid |S|\equiv 0 \pmod{m} \}\enspace.$$ It is easy to see that $\min\{f(S)\mid S\in\mathcal{F}\}=-1$, with minimizers being precisely all $m$-element subsets of $N$ containing $0$.
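For illustration only, the claim above can be checked by brute force for small values of $m$ and $n$. The following sketch (names and parameter choices are ours, not part of the construction) enumerates all feasible subsets and verifies that the minimum equals $-1$ and that all minimizers are $m$-element sets containing $0$:

```python
from itertools import combinations

def f(S, m):
    # modular function: weight -m on element 0, weight 1 on every other element
    return len(S) - 1 - m if 0 in S else len(S)

def check_claim(m, n):
    N = range(n + 1)  # ground set {0, 1, ..., n}
    best, minimizers = None, []
    for k in range(n + 2):
        for S in combinations(N, k):
            if len(S) % m != 0:      # congruency constraint |S| = 0 (mod m)
                continue
            val = f(S, m)
            if best is None or val < best:
                best, minimizers = val, [S]
            elif val == best:
                minimizers.append(S)
    assert best == -1
    assert all(len(S) == m and 0 in S for S in minimizers)
    return best, len(minimizers)

for m in (2, 3, 4):
    print(m, check_claim(m, n=m + 2))
```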
However, does not solve this problem, as we now show. Consider a step of , i.e., fix $A,B\subseteq N$ with $|A|,|B|\leq m-2$ and $A\cap B=\emptyset$. It is easy to see that $$\argmin \{f(S) \mid S\in\mathcal{L}_{AB} \} = \begin{cases} \{A\} & \text{if } 0\in B,\\ \{A\cup\{0\}\} & \text{if } 0\notin B. \end{cases}$$ In both cases, the minimizers found will be sets of size at most $m-1$, while the actual minimizers of $f$ over $\mathcal{F}$ are of size $m$. So does indeed not solve this problem.

As indicated above, existence of an $(m,m-2)$-system thus follows from Theorem \[thm:EnumDGoodIfNoBadSys\]. This system can be constructed by following the proof of Lemma \[lem:notDGoodToSys\], resulting in a system on a ground set of $m$ elements containing all sets of size at most $m-1$ containing a fixed element.

Conclusions
===========

We presented a new approach to deal with submodular function minimization problems under congruency constraints. The core of our approach is the analysis of a very natural algorithm that enumerates over small subsets of elements to be included, respectively excluded, in a minimizer. Our analysis reduces the correctness of this procedure to a purely combinatorial question about the nonexistence of certain set systems, which we can settle when the modulus of the involved congruency constraints is a prime power, by using techniques from combinatorics and number theory. This leads to polynomial time algorithms for  and when the modulus $m$ is a prime power bounded by a constant. The techniques we introduced to disprove the existence of such set systems can be seen as a general framework, which we hope may be useful for future extensions to solve submodular function minimization problems under even more general constraint families.

It remains open whether and can be solved efficiently for a constant modulus $m$ that is not a prime power. However, as we highlighted in Section \[sec:barriers\], this would require new ingredients. A recent construction by Gopi [@gopi2017systems], which was found after submission of this work, strengthens the barriers pointed out in Section \[sec:barriers\] by showing that $(m,m-1)$-systems do actually exist if $m$ is not a prime power. Gopi’s construction is based on results by Barrington, Beigel, and Rudich [@barrington_1994_representing] on the representation of Boolean functions. Results in [@barrington_1994_representing] were also leveraged by Grolmusz in his proof of Theorem \[thm:Grolmusz\].

Moreover, we highlight that our proofs imply that our enumeration algorithm, when applied to  or , enumerates *all* minimal optimal solutions. In particular, this shows that in the discussed settings where our approach finds a minimal optimal solution in polynomial time, the total number of minimal optimal solutions is polynomially bounded.

Since both for $m\geq 3$ and for $m\geq 2$ are not captured by triple or parity families, and neither do they generalize these families, it remains open to find a common generalization. In particular, submodular function minimization over the intersection of a constant number of parity families would be such a common generalization. It remains open whether this problem can be solved efficiently.

Acknowledgments {#acknowledgments .unnumbered}
===============

We thank Karthekeyan Chandrasekaran for interesting discussions on related topics. Moreover, we are grateful to the anonymous referees for various comments and suggestions that helped to improve the quality of the presentation.

[10]{}

S. Artmann, R. Weismantel, and R. Zenklusen.
A strongly polynomial algorithm for bimodular integer linear programming. In [*Proceedings of the 49th Annual ACM Symposium on Theory of Computing (STOC)*]{}, pages 1206–1219, 2017.

F. Barahona and M. Conforti. A construction for binary matroids. , 66(3):213–218, 1987.

D. A. Mix Barrington, R. Beigel, and S. Rudich. Representing Boolean functions as polynomials modulo composite numbers. , 4(4):367–382, 1994.

D. Chakrabarty, A. Sidford, Y. T. Lee, and S. C. Wong. Subquadratic submodular function minimization. In [*Proceedings of the 49th Annual ACM Symposium on Theory of Computing (STOC)*]{}, pages 1220–1231, 2017.

M. Conforti and M. R. Rao. Some new matroids on graphs: Cut sets and the max cut problem. , 12(2):193–204, 1987.

W. H. Cunningham. On submodular function minimization. , 5(3):185–192, 1985.

P. Frankl and R. M. Wilson. Intersection theorems with geometric consequences. , 1(4):357–368, 1981.

S. Fujishige. . Elsevier, 2005.

J. Geelen and R. Kapadia. Computing girth and cogirth in perturbed graphic matroids. , 2017. Available online at <http://dx.doi.org/10.1007/s00493-016-3445-3>.

M. X. Goemans and V. S. Ramakrishnan. Minimizing submodular functions over families of sets. , 15(4):499–513, 1995.

S. Gopi. Private communication, 2017.

V. Grolmusz. Superpolynomial size set-systems with restricted intersections mod 6 and explicit Ramsey graphs. , 20(1):71–86, 2000.

M. Gr[ö]{}tschel, L. Lov[á]{}sz, and A. Schrijver. The ellipsoid method and its consequences in combinatorial optimization. , 1(2):169–197, 1981.

M. Gr[ö]{}tschel, L. Lov[á]{}sz, and A. Schrijver. Corrigendum to our paper “[T]{}he ellipsoid method and its consequences in combinatorial optimization”. , 4(4):291–295, 1984.

M. Gr[ö]{}tschel, L. Lov[á]{}sz, and A. Schrijver. , volume 2 of [*Algorithms and Combinatorics*]{}. Springer, second corrected edition, 1993.

S. Iwata. Submodular function minimization. , 112(1):45–64, March 2008.

S. Iwata, L. Fleischer, and S. Fujishige. A combinatorial strongly polynomial algorithm for minimizing submodular functions. , 48:761–777, July 2001.

S. Iwata and J. B. Orlin. A simple combinatorial algorithm for submodular function minimization. In [*Proceedings of the 20th Annual ACM-SIAM Symposium on Discrete Algorithms (SODA)*]{}, pages 1230–1237, 2009.

D. R. Karger. Global min-cuts in [RNC]{}, and other ramifications of a simple min-cut algorithm. In [*Proceedings of the 4th Annual ACM-SIAM Symposium on Discrete Algorithms (SODA)*]{}, pages 21–30, 1993.

D. R. Karger and C. Stein. A new approach to the minimum cut problem. , 43(4):601–640, 1996.

Y. T. Lee, A. Sidford, and S. C. Wong. A faster cutting plane method and its implications for combinatorial and convex optimization. In [*Proceedings of the 56th Annual IEEE Symposium on Foundations of Computer Science (FOCS)*]{}, pages 1049–1065, 2015.

S. T. McCormick. Submodular function minimization. In [*Discrete Optimization*]{}, volume 12 of [*Handbooks in Operations Research and Management Science*]{}, pages 321–391. Elsevier, 2005. Updated version of 2013 available at: <https://pdfs.semanticscholar.org/903d/12346be328623e41e7bea2791a6e6df570fc.pdf>.

M. W. Padberg and M. R. Rao. Odd minimum cut-sets and b-matchings. , 7(1):67–80, 1982.

A. Schrijver. A combinatorial algorithm minimizing submodular functions in strongly polynomial time. , 80(2):346–355, 2000.

A. Schrijver. . Springer, 2003.

Z. Svitkina and L. Fleischer. Submodular approximation: Sampling-based algorithms and lower bounds. , 40(6):1715–1737, 2011.
[^1]: A set function $f\colon2^N \rightarrow \mathbb{R}$ on a finite ground set $N$ satisfies the diminishing returns property if $f(A\cup\{e\}) - f(A) \geq f(B\cup \{e\}) -f(B)$ for all $A\subseteq B\subseteq N$ and $e\in N\setminus B$. For more information on submodular functions, we refer the interested reader to [@schrijver_2003_combinatorial; @mccormick_2005_submodular; @fujishige_2005_submodular].

[^2]: A lattice $\mathcal{L}\subseteq 2^N$ over a ground set $N$ is a set family that is closed under unions and intersections, i.e., for any $A,B\in \mathcal{L}$, we have $A\cup B, A\cap B \in \mathcal{L}$. Whenever a lattice is given, we make the standard assumption that it is given by a compact encoding in terms of a digraph (see [@groetschel_1993_geometric Section 10.3]). What we call a lattice is sometimes also called a lattice family, a ring family, or a distributive lattice.

[^3]: A set family $\mathcal{F}\subseteq 2^N$ is *intersecting* if for any $A,B\in \mathcal{F}$ such that $A\setminus B, B\setminus A, A\cap B \neq \emptyset$, we have $A\cup B, A\cap B\in \mathcal{F}$. Moreover, $\mathcal{F}$ is *crossing* if for any $A,B\in \mathcal{F}$ with $A\setminus B, B\setminus A, A\cap B, N\setminus (A\cup B)\neq\emptyset$, we have $A\cup B, A\cap B \in \mathcal{F}$.

[^4]: Indeed, we can observe that any problem of the form $\min\{f(S) \mid S\in \mathcal{L}, \; |S\cap T| \equiv r \pmod*{2}\}$, for a submodular function $f\colon\mathcal{L} \rightarrow \mathbb{Z}$ with $\mathcal{L}\subseteq 2^N$ and a given set $T\subseteq N$, can be cast as a problem with respect to an auxiliary submodular function $g$ over a lattice $\mathcal{L}'$ as follows. For every $x\in N\setminus T$, introduce a new element $x'$, let $N'\coloneqq N\cup \{x'\mid x\in N\setminus T\}$, and let $\mathcal{L}'\coloneqq \{S\subseteq N' \mid S\cap N \in \mathcal{L} \text{ and }|S\cap\{x,x'\}|\neq 1\ \forall x\in N\setminus T\}$. Define $g\colon\mathcal{L}'\to\mathbb{Z}$ by $g(S)=f(S\cap N)$ for all $S\in\mathcal{L}'$. Then, for any $S^*\in\argmin\{g(S)\mid S\in\mathcal{L}',\; |S|\equiv r\pmod*{2}\}$, we can observe that $S^*\cap N$ solves the original problem. The same construction can be applied for moduli $m$ different from $2$ by introducing $m-1$ copies for each element in $N\setminus T$.

[^5]: We employ the usual convention that $\binom{n}{k}=0$ for $k>n$.
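To make the reduction sketched in footnote 4 concrete, the following brute-force check compares, on a tiny instance, the optimum of the original congruency-on-$|S\cap T|$ problem with that of the reduced cardinality-constrained problem. This is an illustrative sketch only: all names are ours, we take $\mathcal{L}=2^N$ for simplicity, and an arbitrary small set function is used since the equivalence itself does not rely on submodularity.

```python
from itertools import chain, combinations

def powerset(U):
    U = list(U)
    return chain.from_iterable(combinations(U, k) for k in range(len(U) + 1))

def brute_force_check(N, T, r, f, m=2):
    # direct problem: min f(S) over S in 2^N with |S & T| = r (mod m)
    direct = min(f(frozenset(S)) for S in powerset(N)
                 if len(set(S) & T) % m == r)

    # reduction of footnote 4: add m-1 copies of every x in N \ T
    copies = {x: [('copy', x, j) for j in range(m - 1)] for x in set(N) - T}
    Nprime = set(N) | {c for cs in copies.values() for c in cs}

    def feasible(S):  # membership in L': x and its copies enter all together or not at all
        return all(len(S & ({x} | set(copies[x]))) in (0, m) for x in copies)

    def g(S):  # auxiliary function g(S) = f(S & N)
        return f(frozenset(S & set(N)))

    reduced = min(g(frozenset(S)) for S in powerset(Nprime)
                  if feasible(set(S)) and len(S) % m == r)
    return direct, reduced

N, T = {1, 2, 3, 4}, {1, 2}
f = lambda S: len(S) * (4 - len(S)) + (0 if 3 in S else 2)  # arbitrary small set function
direct, reduced = brute_force_check(N, T, r=1, f=f, m=2)
assert direct == reduced   # the two optima coincide, as claimed in footnote 4
print(direct, reduced)
```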
{ "pile_set_name": "ArXiv" }
--- abstract: 'We introduce a new incremental preference elicitation procedure able to deal with noisy responses of a Decision Maker (DM). The originality of the contribution is to propose a Bayesian approach for determining a preferred solution in a multiobjective decision problem involving a *combinatorial* set of alternatives. We assume that the preferences of the DM are represented by an aggregation function whose parameters are unknown and that the uncertainty about them is represented by a density function on the parameter space. Pairwise comparison queries are used to reduce this uncertainty (by Bayesian revision). The query selection strategy is based on the solution of a mixed integer linear program with a combinatorial set of variables and constraints, which requires to use columns and constraints generation methods. Numerical tests are provided to show the practicability of the approach.' address: | Sorbonne Université, CNRS, LIP6, F-75005 Paris, France,\ email: [email protected] author: - Nadjet Bourdache - Patrice Perny - Olivier Spanjaard bibliography: - 'biblio.bib' title: Bayesian preference elicitation for multiobjective combinatorial optimization --- Multiple objective programming,Bayesian preference elicitation ,weighted sum ,ordered weighted average Introduction ============ The increasing complexity of problems encountered in applications is a permanent motivation for the development of intelligent systems for human decision support. Among the various difficulties to overcome for decision making in complex environments we consider here three sources of complexity that often coexist in a decision problem: 1) the combinatorial nature of the set of feasible alternatives 2) the fact that multiple points of view, possibly conflicting, about the value of solutions may coexist, 3) the need of formulating recommendations that are tailored to the objectives and preferences of users and that takes into account the uncertainty in preference elicitation (due to possible mistakes in the responses of users to preference queries). The first difficulty occurs as soon as the solutions to be compared are characterized by the combinations of elementary decisions. This is the case for instance for the selection problem of an optimal subset within a reference set, under a budget constraint (a.k.a. knapsack problem) where a solution is characterized by elementary decisions concerning items of the reference set. This difficulty prevents the explicit evaluation of all solutions and the determination of the best option requires implicit enumeration techniques. The second difficulty appears in multiagent decision contexts when the agents have different individual value systems or objectives leading to possibly conflicting preferences. It also appears in single-agent decision contexts when the alternatives are assessed w.r.t. different criteria. Finally, it appears in decision under uncertainty when several scenarios that have different impacts on the outcomes of the alternatives are considered. In all these situations, preference modeling requires the definition of multiple objectives to be optimized simultaneously. The combination of difficulties 1 and 2 is at the core of multiobjective combinatorial optimization [@Ehrgott05]. Let us now come to the third difficulty. The coexistence of multiple objectives makes the notion of optimality subjective and requires additional preference information to be collected from the users in order to discriminate between Pareto-optimal solutions. 
In multiobjective decision problems, the “optimal” solution fully depends on the relative importance attached to the different objectives under consideration and on how performances are aggregated. A standard tool used to generate compromise solutions tailored to the decision maker (DM) value system is to optimize a parameterized aggregation function summarizing the performance vector of any solution into a scalar value. This makes it possible to reformulate the initial problem as a single-objective optimization problem (see e.g., [@Steuer86]). However, a precise specification of the preference parameters (e.g., weighting coefficients), prior to the exploration of the set of alternatives, may be cumbersome because it requires a significant amount of preference information. To overcome this problem, incremental decision procedures aiming to integrate and combine the elicitation of preference parameters and the exploration of the set of feasible solutions are appealing (alternatively, one may also consider the approach consisting in computing the non-dominated solutions according to a scalarizing function whose parameters are only partially specified [@Kaddani17]). They make it possible to focus the elicitation burden on the information that is really useful to separate competing solutions during the optimization process, and this significantly reduces the number of queries asked to the user. In the fields of operations research and artificial intelligence, numerous contributions have addressed the problem of incrementally eliciting preferences. A first stream of research concerns preference elicitation for decision making in explicit sets (i.e., non-combinatorial problems), to assess multiattribute utility functions [@WhiteSD84], weights of criteria in aggregation functions [@BenabbouPV17], multicriteria sorting models [@ozpeynirci2018interactive], utility functions for decision making under risk [@Chajewska00; @WangBoutilier03; @hines10; @PernyVB16], or individual utilities in collective decision making [@LuB11]. Preference elicitation for decision support on combinatorial domains is a challenging issue that has also been studied in various contexts such as constraint satisfaction [@gelain10], matching under preferences [@DrummondB14], sequential decision making under risk [@Regan11; @WengP13; @gilbertSVW15; @BenabbouP17], and multiobjective combinatorial optimization [@branke2016using; @Benabbou18; @BourdacheP19]. Almost all incremental elicitation procedures mentioned above proceed by progressive reduction of the parameter space until an optimal decision can be identified. At every step of the elicitation process, a preference query is asked to the DM and the answer induces a constraint on the parameter space, thus a polyhedron including all parameter values compatible with the DM’s responses is updated after each answer (*polyhedral method* [@toubia04]). Queries are selected to obtain a fast reduction of the parameter space, in order to enforce a fast determination of the optimal solution. However, such procedures do not offer any opportunity to the DM to revise her opinion about alternatives and the final result may be sensitive to errors in preference statements. A notable exception in the list of contributions mentioned above is the approach proposed by Chajewska et al. [@Chajewska00]. The approach relies on a prior probabilistic distribution over the parameter space and uses preference queries over gambles to update the initial distribution using Bayesian methods. 
It is more tolerant to errors and inconsistencies over time in answering preference queries. The difficulties with this approach may lie in the choice of a prior distribution and in the computation of Bayesian updates at any step of the procedure. A variant, proposed in [@GuoS10], relies on simpler questions under certainty, so as to reduce the cognitive load.

#### Motivation of the paper

As far as we know, the works mentioned in the last paragraph have not been extended for decision making on combinatorial domains. Our goal here is to fill this gap and to propose a Bayesian approach for determining a preferred solution in a multiobjective combinatorial optimization problem. The main issue in this setting is the determination of the next query to ask to the DM, as there is an exponential number of possible queries (due to the combinatorial nature of the set of feasible solutions).

#### Related work

Several recently proposed Bayesian preference elicitation methods may be related to our work.

– Sauré and Vielma [@saure19] proposed an error-tolerant variant of the polyhedral method, where the polyhedron is replaced by an ellipsoidal credibility region computed from a multivariate normal distribution on the parameter space. This distribution, and thus the ellipsoidal credibility region, is updated in a Bayesian manner after each query. In contrast with their work, where the set of alternatives is explicitly defined, our method applies to implicit sets of alternatives. Besides, although our method also involves a multivariate normal density function on the parameter space, our query selection strategy is based on the whole density function and not only on a credibility region.

– Vendrov et al. [@VeLHB20] proposed a query selection procedure able to deal with large sets of alternatives (up to hundreds of thousands) based on *Expected Value Of Information* (EVOI). The EVOI criterion consists in determining a query maximizing the expected utility of the recommended alternative conditioned on the DM’s answer (where the probability of each answer depends on a response model, e.g., the logistic response model). However, the subsequent optimization problem becomes computationally intractable with a large set of alternatives. The authors consider a continuous relaxation of the space of alternatives that allows a gradient-based approach. Once a query is determined in the relaxed space, the corresponding pair of fictive alternatives is projected back into the space of feasible alternatives. In addition, a second contribution of the paper is to propose an elicitation strategy based on *partial comparison queries*, i.e., queries involving partially specified multi-attribute alternatives, which limits the cognitive burden when the number of attributes is large. We tackle here another state-of-the-art query selection strategy that aims at minimizing the max regret criterion (instead of maximizing the EVOI criterion), a popular measure of recommendation quality.

– In a previous work [@BourdachePS19], we introduced an incremental elicitation method based on Bayesian linear regression for assessing the weights of rank-dependent aggregation functions used in decision theory (typically OWA and Choquet integrals). The query selection strategy we proposed is based on the minimax regret criterion, similarly to the one we use in the present work. However, the method can only be applied to *explicit* sets of alternatives and does not scale to combinatorial domains.
The computation of regrets in the provided procedure (in order to determine the next query) indeed requires the enumeration of all possible pairs of solutions, which is impractical if the set of solutions is combinatorial in nature. The change in scale is considerable. For illustration, instances involving 100 alternatives were considered in the numerical tests of our previous work [@BourdachePS19] while in the multi-objective knapsack instances under consideration in Section 4 of the present paper, there are about $2^{99}$ feasible solutions, among which several million are Pareto-optimal. In order to scale to such large problems, we propose a method based on mixed integer linear programming that allows us to efficiently compute MER and MMER values on combinatorial domains.

#### Organization of the paper

In Section 2, we describe the incremental elicitation procedure proposed in [@BourdachePS19] for the determination of an optimal solution over *explicit* sets of solutions and we point out the main issues to overcome to extend the approach to combinatorial sets of solutions. Section 3 is devoted to the method we propose to compute expected regrets on combinatorial domains. The results of the numerical tests we carried out are presented in Section 4; they show the practicability of the proposed procedure.

Incremental elicitation {#sec:incremental}
=======================

We first provide a general overview of the incremental Bayesian elicitation procedure on an *explicit set* and then discuss the extension to cope with *combinatorial optimization* problems. Let $\mathcal{X}$ denote the set of possible solutions. Since we are in the context of multiobjective optimization, we assume that a utility vector $u(x)\!\in\!\mathbb{R}^n$ is assigned to any solution $x\!\in\! \mathcal{X}$. Then we consider the problem of maximizing over $\mathcal{X}$ a scalarizing function of the form $f_w(x)\!=\!\sum_{k=1}^n w_k g_k(u(x))$ where $w_k$ are positive weights and $g_k\!:\!\mathbb{R}^n\!\rightarrow\!\mathbb{R}$ are *basis functions* [@bishop2006pattern] (introduced to extend the class of linear models to nonlinear ones). For simplicity, the reader can assume in the following that $g_k(u(x))\!=\!u_k(x)$, i.e., the $k$-th component of $u(x)$, and that $w_k$ is the (imperfectly known) weight of criterion $k$.

At the start of the procedure, a prior density function $p$ is associated with the parameter space $W\!=\!\{w\in[0,1]^n|\sum_kw_k\!=\!1\}$, where the unknown weighting vector $w$ takes its value. Then, at each step, the DM responds to a pairwise comparison query and, based on this new preference information, the density function is updated in a Bayesian manner. The aim is, in a minimum number of queries, to acquire enough information about the weighting vector $w$ to be able to recommend a solution $x\!\in\!\mathcal{X}$ that is near optimal. We present here the three main parts of the decision process: query selection strategy, Bayesian updating after each query and stopping condition.

Query selection strategy {#subsec:qss}
------------------------

At each step of the algorithm, a new preference statement is needed to update the density function on $W$. In order to select an informative query, we use an adaptation of the *Current Solution Strategy* (CSS) introduced in [@BoutilierPPS06] and based on regret minimization. In our probabilistic setting, regrets are replaced by *expected* regrets.
Before describing more precisely the query selection strategy, we recall some definitions about expected regrets [@BourdachePS19]. Given a density function $p$ on $W$ and two solutions $x$ and $y$, the pairwise expected regret is defined as follows: $${\mathrm{PER}}(x,y,p) = \int \max\{0,f_{w}(y)\!-\!f_{w}(x)\}p({w}) d{w}.$$ In other words, the Pairwise Expected Regret (PER) of $x$ with respect to $y$ represents the expected utility loss when recommending solution $x$ instead of solution $y$. In practice, the PER is approximated using a sample $S$ of weighting vectors drawn from $p$. This discretization of $W$ enables us to convert the integral into an arithmetic mean: $$\label{eq:per} {\mathrm{PER}}(x,y,S)=\frac{1}{|S|}\sum_{w \in S}\max\{0,f_{w}(y)-f_{w}(x)\}$$ Given a set $\mathcal{X}$ of solutions and a density function $p$ on $W$, the max expected regret of $x\!\in\!\mathcal{X}$ and the minimax expected regret over $\mathcal{X}$ are defined by: $$\begin{aligned} {\mathrm{MER}}(x,\mathcal{X},p)&=\max_{y\in \mathcal{X}} {\mathrm{PER}}(x,y,p),\\ {\mathrm{MMER}}(\mathcal{X},p)&=\min_{x \in \mathcal{X}} {\mathrm{MER}}(x,\mathcal{X},p). \end{aligned}$$ Put another way, the max expected regret of $x$ is the maximum expected utility loss incurred in selecting $x$ in $\mathcal{X}$ while the minimax expected regret is the minimal max expected regret value of a solution in $\mathcal{X}$. As for the PER computation, the MER and the MMER can be approximated using a sample $S$ of weight vectors: $$\label{eq:mer} {\mathrm{MER}}(x,\mathcal{X},S)=\max_{y\in \mathcal{X}} {\mathrm{PER}}(x,y,S)$$ $$\label{eq:mmer} {\mathrm{MMER}}(\mathcal{X},S)=\min_{x \in \mathcal{X}} {\mathrm{MER}}(x,\mathcal{X},S)$$ We can now describe the adaptation of CSS to the probabilistic setting. The max expected regret of a solution is used to determine which solution to recommend (the lower, the better) in the current state of knowledge characterized by $p$. At a step $i$ of the elicitation procedure, if the stopping condition (that will be defined below) is met, then a solution $x^{(i)}\!\in\!\arg\min_{x \in \mathcal{X}} {\mathrm{MER}}(x,\mathcal{X},S)$ is recommended. But if the knowledge about the value of $w$ needs to be better specified to make a recommendation, the DM is asked to compare $x^{(i)}$ to its best challenger $y^{(i)}\!\in\!\arg\max_{y\in \mathcal{X}} {\mathrm{PER}}(x^{(i)},y,S)$ (best challenger in the current state of knowledge). In the next subsection, we describe how one uses the DM’s answer to update the density function $p$. Bayesian updating {#subsec:bayesianUpsating} ----------------- At step $i$ of the procedure, a new query of the form ‘‘$x^{(i)} \succsim y^{(i)}?$" is asked to the DM. Her answer is translated into a binary variable $a^{(i)}$ that takes value $1$ if the answer is yes and $0$ otherwise. Using Bayes’ rule, the posterior density function reads as follows: $$\label{eq:bayesRule} p(w|a^{(i)}) = \frac{p(w)p(a^{(i)}|w)}{p(a^{(i)})}$$ where $p(w)$ is assumed to be multivariate Gaussian (the initialization used for $p(w)$ will be specified in the numerical tests section). The posterior density function $p(w|a^{(i)})$ is hard to compute analytically using Equation \[eq:bayesRule\]. Indeed, the likelihood $p(a^{(i)}|w)$ follows a Bernoulli distribution and no conjugate prior is known for this likelihood function in the multivariate case. 
Therefore, one uses a data augmentation method [@albert93] that consists in introducing a latent variable $z^{(i)}\!=\!w^Td^{(i)}\!+\!\varepsilon^{(i)}$ that represents the utility difference between the two compared solutions, where $w$ is a given weighting vector, $d^{(i)}$ is an explanatory variable defined by $d^{(i)}\!=\!x^{(i)}\!-\!y^{(i)}$ and $\varepsilon^{(i)}\!\sim\! \mathcal{N}(0,\sigma)$ is a Gaussian noise accounting for the uncertainty about the DM’s answer. The Gaussian nature of the density function for $\varepsilon^{(i)}$ implies that the conditional distribution $z^{(i)}|w$ is also Gaussian: $z^{(i)}|w\!\sim\!\mathcal{N}(w^T d^{(i)},\sigma)$. In order to make $z^{(i)}$ consistent with the DM’s answer, one forces $z^{(i)}\!\ge\!0$ if $a^{(i)}\!=\!1$ and $z^{(i)}\!<\!0$ otherwise. Thus, one obtains the following truncated density function: $$p(z^{(i)}|w, a^{(i)}) \propto \left\{ \begin{array}{ll} \mathcal{N}(w^T d^{(i)}, \sigma) \mathds{1}_{z^{(i)} \ge 0} & \text{if } a^{(i)} = 1\\ \mathcal{N}(w^T d^{(i)}, \sigma) \mathds{1}_{z^{(i)} < 0} & \text{otherwise} \end{array} \right.$$ Using the latent variable, the posterior distribution $p(w|a^{(i)})$ is formulated as: $$\label{eq:p_w_ai} p(w|a^{(i)})\!=\!\!\!\int\!p(w,z|a^{(i)})dz\!=\!\!\!\int\!p(w|z)p(z|a^{(i)})dz$$ If the prior density $p(w)$ is multivariate Gaussian then $p(w|z)$ is multivariate Gaussian too, as well as $p(w|a^{(i)})$. To approximate $p(w|a^{(i)})$, Tanner and Wong proposed in [@tanner87] an iterative procedure based on the fact that $p(z|a^{(i)})$ depends in turn on $p(w|a^{(i)})$: $$\label{eq:p_z_ai} p(z|a^{(i)})=\int p(\omega|a^{(i)})p(z|\omega,a^{(i)})~d\omega$$ The procedure consists in solving the fixed point equation obtained by replacing $p(z|a^{(i)})$ in Equation \[eq:p\_w\_ai\] by its expression in Equation \[eq:p\_z\_ai\]. More precisely, one alternately performs the drawing of a sample of $m$ values $z_1, \dots, z_m$ from $z|a^{(i)}$ and the update of the posterior density function by $p(w|a^{(i)})\!=\!\frac{1}{m}\sum_{j=1}^m p(w|z_j)$, which is Gaussian because every $w|z_j, j\!\in\!\llbracket1, m\rrbracket$ is Gaussian. Note that each value of the sample $z_1, \dots, z_m$ is obtained by iteratively drawing a value $w$ from the current distribution $p(w)$ then drawing $z_j$ from $p(z_j|w, a^{(i)})$ for all $j$. For more details, we refer the reader to Algorithm $2$ in [@BourdachePS19].

Stopping condition {#subsec:stoppingCond}
------------------

The principle of the incremental elicitation procedure is to alternate queries and update operations on the density $p(w)$ until the uncertainty about the weighting vector $w$ is sufficiently reduced to be able to make a recommendation with a satisfactory confidence level. A stopping condition that satisfies this specification consists in waiting for the ${\mathrm{MMER}}(\mathcal{X}, S)$ value to drop below a predefined threshold, which can be defined as a percentage of the initial ${\mathrm{MMER}}$ value.
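To recap the interaction loop of this section on a small *explicit* set of alternatives, here is a compact, self-contained sketch combining the sampled regrets, the CSS query selection, the data-augmentation update and the stopping condition described above. It is illustrative only: all names and numerical values are ours, the weighted sum is used as aggregation function, the simplex constraint on $w$ is ignored, the mixture of Gaussians is approximated by moment matching, and the simulated DM answers without noise.

```python
import numpy as np

rng = np.random.default_rng(0)

def per(x, y, S):                       # sampled pairwise expected regret of x w.r.t. y
    return np.mean([max(0.0, w @ y - w @ x) for w in S])

def mer(x, X, S):                       # sampled max expected regret of x over X
    return max(per(x, y, S) for y in X)

def css_query(X, S):                    # Current Solution Strategy on an explicit set
    x_star = min(X, key=lambda x: mer(x, X, S))
    y_star = max(X, key=lambda y: per(x_star, y, S))
    return x_star, y_star

def truncated_normal(mean, sigma, positive, max_tries=1000):
    for _ in range(max_tries):          # naive rejection sampler, fine for a sketch
        z = rng.normal(mean, sigma)
        if (z >= 0) == positive:
            return z
    return 0.0                          # fallback for extreme cases

def update(mu, Sigma, d, answer, sigma=0.2, m=100):
    # data-augmentation update of the Gaussian density on w after the answer to
    # "x preferred to y ?", with d = x - y (moment-matching approximation)
    Sigma_inv = np.linalg.inv(Sigma)
    Sigma_post = np.linalg.inv(Sigma_inv + np.outer(d, d) / sigma ** 2)  # cov of w | z
    means = []
    for _ in range(m):
        w = rng.multivariate_normal(mu, Sigma)                 # w from the current density
        z = truncated_normal(float(w @ d), sigma, answer)      # z | w, answer
        means.append(Sigma_post @ (Sigma_inv @ mu + d * z / sigma ** 2))  # mean of w | z
    M = np.array(means)
    return M.mean(axis=0), Sigma_post + np.cov(M, rowvar=False)

# toy run: 3 criteria, small explicit set, simulated DM with hidden weights
X = [np.array(u) for u in [(0.9, 0.1, 0.3), (0.4, 0.5, 0.6), (0.2, 0.8, 0.5), (0.6, 0.6, 0.1)]]
w_true = np.array([0.2, 0.5, 0.3])
mu, Sigma = np.full(3, 1 / 3), 0.05 * np.eye(3)                # prior on w

S = rng.multivariate_normal(mu, Sigma, size=300)
threshold = 0.1 * min(mer(x, X, S) for x in X)                 # 10% of the initial MMER
for step in range(20):
    S = rng.multivariate_normal(mu, Sigma, size=300)           # sample S from the current density
    x_q, y_q = css_query(X, S)
    if mer(x_q, X, S) <= threshold:
        break
    answer = bool(w_true @ x_q >= w_true @ y_q)                # noiseless simulated answer
    mu, Sigma = update(mu, Sigma, x_q - y_q, answer)
print("recommendation:", x_q)
```

The combinatorial case, where $\mathcal{X}$ cannot be enumerated as above, is the subject of the next section.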
Main obstacles for extending the approach
-----------------------------------------

The main obstacles encountered while extending the approach to a combinatorial setting are related to the computation of MER and MMER values as they are defined in Equations \[eq:mer\] and \[eq:mmer\]:

- both values require an exponential number of pairwise comparisons to be computed (because there is an exponential number of feasible solutions);

- in addition, the use of linear programming to compute these values is not straightforward because the constraint $\max\{0,.\}$ in Equation \[eq:per\] is not linear.

These issues are all the more critical given that the MER and MMER values are computed at every step of the incremental elicitation procedure to determine whether it should be stopped or not, and to select the next query.

Computation of regrets {#sec:mmerComputation}
======================

While the use of mathematical programming is standard in minmax regret optimization, the framework of minmax *expected* regret optimization is more novel. We propose here a new method to compute ${\mathrm{MER}}(x, \mathcal{X}, S)$ and ${\mathrm{MMER}}(\mathcal{X}, S)$ by mixed integer linear programming, where $\mathcal{X}$ is implicitly defined by a set of linear constraints and $S$ is a sample drawn from the current density $p(w)$. We consider in this section that $f_w(x)$ is linear in $u(x)$, but the presented approach is adaptable to non-linear aggregation functions if there exist appropriate linear formulations (e.g., the linear formulation of the ordered weighted averages [@Ogryczak03]). We also assume that $f_w(x)\!\in\! [0,1]$.

Linear programming for MER computation
--------------------------------------

To obtain a linear expression for ${\mathrm{MER}}(x, \mathcal{X}, S)$, we replace the function $\max\{0,f_w(y)-f_w(x)\}$ in Equation \[eq:per\] by $b_w[f_w(y)-f_w(x)]$ for each weighting vector $w\!\in\!S,$ where $b_w$ is a binary variable such that $b_w\!=\!1$ if $f_w(y)-f_w(x)\!>\!0$ and $b_w\!=\!0$ if $f_w(y)-f_w(x)\!<\!0$ (the value of $b_w$ does not matter if $f_w(y)-f_w(x)\!=\!0$, because $b_w[f_w(y)-f_w(x)]\!=\!0$ anyway). For this purpose, we need the following additional constraints: $$\left\{ \begin{array}{llr} b_w \le f_w(y)-f_w(x)+1 & \forall w \in S & \qquad (c_\le)\\ b_w \ge f_w(y)-f_w(x) & \forall w \in S & \qquad (c_\ge) \end{array} \right.$$

\[prop:max0\] Given $w\!\in\!S$, $x\!\in\!\mathcal{X}$ and $y\!\in\!\mathcal{X}$, if $f_w$ is an aggregation function defined such that $f_w(z)\!\in\![0,1]$ for any $z\!\in\!\mathcal{X}$ and $w\!\in\!S$, and if $b_w$ satisfies the constraints $(c_\ge)$ and $(c_\le)$, then: $$\max\{0,f_w(y)-f_w(x)\}=b_w[f_w(y)-f_w(x)].$$

Let us denote by $d_w$ the value $f_w(y)-f_w(x)$ for any $w\!\in\!S$. First note that $d_w\!\in\![-1,1], \forall w\!\in\!S$, because $f_w$ is such that $f_w(z)\!\in\![0,1], \forall z\!\in\!\mathcal{X}$. For any $w\!\in\!S$, three cases are possible: *Case 1.* $w$ is such that $d_w\!>\!0$: $(c_\ge)$ becomes $b_w\!\ge\!d_w\!>\!0$, thus $b_w\!=\!1$ and we indeed have $b_wd_w\!=\!d_w\!\ge\!0$. *Case 2.* $w$ is such that $d_w\!<\!0$: $(c_\le)$ becomes $b_w\!\le\!d_w\!+\!1\!<\!1$ and implies $b_w\!=\!0$ and thus $b_wd_w\!=\!0$. *Case 3.* $w$ is such that $d_w\!=\!0$: then $b_wd_w\!=\!0, \forall b_w\in\{0,1\}$. In all three cases we thus have $b_wd_w\!=\!\max\{0,d_w\}$.

The constraints $(c_\le)$ and $(c_\ge)$ are linear as $f_w(x)$ is linear in $u(x)\!=\!(u_1(x),\ldots,u_n(x))$.
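As a quick numeric sanity check of Proposition \[prop:max0\] (illustrative only, with a randomly drawn $d_w$ standing for $f_w(y)-f_w(x)$), one can verify by brute force that any binary $b_w$ satisfying $(c_\le)$ and $(c_\ge)$ yields $b_wd_w=\max\{0,d_w\}$:

```python
import numpy as np

rng = np.random.default_rng(1)
for d_w in rng.uniform(-1.0, 1.0, size=10_000):
    # binary values of b_w compatible with (c_<=): b_w <= d_w + 1 and (c_>=): b_w >= d_w
    feasible = [b for b in (0, 1) if b <= d_w + 1 and b >= d_w]
    assert feasible                                               # a feasible b_w always exists
    assert all(abs(b * d_w - max(0.0, d_w)) < 1e-12 for b in feasible)
```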
Nevertheless, using variables $b_w$ and their constraints in the formulation of ${\mathrm{MER}}(x, \mathcal{X}, S)$ gives a system of linear constraints with a quadratic objective function: $$\begin{array}{ll} \max \frac{1}{|S|} \sum_{w \in S} b_w [f_w(y) - f_w(x)] \\ ~~ b_w \le f_w(y)-f_w(x)+1 & \forall w \in S \\ ~~ b_w \ge f_w(y)-f_w(x) & \forall w \in S \\ ~~ b_w \in \{0, 1\} & \forall w \in S \\ ~~ y \in \mathcal{X} \end{array}$$ The objective function is quadratic because the term $b_w f_w(y)$ is quadratic in variables $b_w$ and $y$. To linearize the program, we introduce a positive real variable $p_w$ for each $w\!\in\!S$, which replaces the product term $b_w f_w(y)$. Note that the term $b_wf_w(x)$ does not need linearization because solution $x$ is fixed in the MER computation. The obtained linear program is: $$(P_{{\mathrm{MER}}}): \begin{array}{ll} \max \frac{1}{|S|} \sum_{w \in S} [p_w - b_w f_w(x)] \\ ~~ b_w \le f_w(y)-f_w(x)+1 & \forall w \in S \\ ~~ b_w \ge f_w(y)-f_w(x) & \forall w \in S \\ ~~ p_w \le b_w & \forall w \in S \\ ~~ p_w \le f_w(y) & \forall w \in S \\ ~~ p_w \ge b_w + f_w(y) - 1 & \forall w \in S \\ ~~ b_w \in \{0, 1\} & \forall w \in S \\ ~~ p_w \in \mathbb{R}^+ & \forall w \in S \\ ~~ y \in \mathcal{X} \end{array}$$ It is easy to see that $p_w=b_wf_w(y)$ for all $w\!\in\!S$ thanks to the constraints on $p_w$. We indeed have $p_w\!=\!0$ when $b_w\!=\!0$ thanks to the constraint $p_w\!\le\!b_w$, and $p_w\!=\!f_w(y)$ when $b_w\!=\!1$ thanks to constraints $p_w\!\le\!f_w(y)$ and $p_w\!\ge\!b_w\!+\!f_w(y)\!-\!1\!=\!f_w(y)$. Overall, $2|S|$ variables are involved in the linearization of the expression $\frac{1}{|S|} \sum_{w \in S} \max\{0, f_w(y) - f_w(x)\}$: $|S|$ binary variables $b_w$ are used to linearize the $\max\{0,.\}$ function, and $|S|$ real variables $p_w$ are used to linearize the product term $b_wf_w(y)$.

Linear programming for MMER computation
---------------------------------------

For computing ${\mathrm{MMER}}(\mathcal{X}, S)$, the objective function $$\min_{x\in \mathcal{X}}\max_{y\in \mathcal{X}}\frac{1}{|S|}\sum_{w \in S} \max\{0,f_w(y)\!-\!f_w(x)\}$$ can be linearized by using $|\mathcal{X}|$ constraints (standard linearization of a $\min\max$ objective function, where the max is taken over a finite set): $$\begin{array}{lr} \min t \\ ~~ t \ge \frac{1}{|S|} \sum_{w \in S} \max\{0, f_w(y)\!-\! f_w(x)\} ~ \forall y \in \mathcal{X} & (*)\\ ~~ t \in \mathbb{R} \end{array}$$ Note that computing the minmax expected regret over $\mathcal{X}$ requires the introduction of one binary variable $b_w^y$ for *each* solution $y\!\in\! \mathcal{X}$, so that $$\max\{0,f_w(y)-f_w(x)\}=b_w^y(f_w(y)-f_w(x))$$ for all $y \in \mathcal{X}$ (while computing the max expected regret of a *given* solution $x$ only required the introduction of a *single* binary variable $b_w$ such that $\max\{0,f_w(\hat{y})\!-\!f_w(x)\}\!=\!b_w(f_w(\hat{y})\!-\!f_w(x))$ for $\hat{y}\!\in\!\arg\max_{y \in \mathcal{X}}{\mathrm{PER}}(x,y,S)$).
Let us consider the following program $P_{\mathrm{MMER}}$, involving quadratic constraints: $$\begin{array}{lr} \min t \\ ~~ t \ge \frac{1}{|S|} \sum_{w \in S} b^y_w [f_w(y) - f_w(x)] & \forall y \in \mathcal{X}\\ ~~ b^y_w \le f_w(y)-f_w(x)+1 & \forall w, y \in S \times \mathcal{X} \\ ~~ b^y_w \ge f_w(y)-f_w(x) & \forall w, y \in S \times \mathcal{X} \\ ~~ b^y_w \in \{0, 1\} & \forall w, y \in S \times \mathcal{X} \\ ~~ x \in \mathcal{X} \\ ~~ t \in \mathbb{R} \end{array}$$ A solution $x^*\!\in\!\mathcal{X}$ optimizing $P_{\mathrm{MMER}}$ is such that: $${\mathrm{MER}}(x^*, \mathcal{X}, S)\!=\!{\mathrm{MMER}}(\mathcal{X}, S).$$ \[prop:QPmmer\] We denote by $t^*$ the optimal value of $P_{\mathrm{MMER}}$. We now prove that $t^*$ is equal to ${\mathrm{MMER}}(\mathcal{X}, S)$. For a given instance of $x$, constraint $(*)$ must be satisfied for *any* possible instance of $y$. Thus, by Proposition \[prop:max0\], we have that $t\!\ge\!{\mathrm{PER}}(x,y,S)$ for all $y\!\in\! \mathcal{X}$ because: $$\abovedisplayskip = 8pt \frac{1}{|S|}\sum_{w \in S} b_w^y [f_w(y)\!-\!f_w(x)]\!=\!{\mathrm{PER}}(x,y,S). \belowdisplayskip = 8pt$$ It implies that $t\!\ge\!\max_y {\mathrm{PER}}(x,y,S)\!=\!{\mathrm{MER}}(x,\mathcal{X},S)$. As the objective function is $\min t$, for each instance of $x$, the variable $t$ takes value ${\mathrm{MER}}(x,\mathcal{X},S)$. The $\min$ objective function implies that (1) $t\!=\!{\mathrm{MER}}(x, \mathcal{X}, S)$ for a given $x$. Finally, varying $x$ over $\mathcal{X}$, we can easily see that $t^*\!\le\!{\mathrm{MER}}(x, \mathcal{X}, S)\ \forall x\!\in\!\mathcal{X}$, and thus (2) $t^*\!=\!{\mathrm{MMER}}(\mathcal{X}, S)$. The result follows from (1) and (2). The quadratic terms $b^y_w f_w(x)$ are linearized by introducing $|S|\!\times\!|\mathcal{X}|$ positive real variables $p_w^y$: $$(P_\mathcal{X})\!: \begin{array}{lr} \min t \\ ~ t \ge \frac{1}{|S|} \sum_{w \in S} b^y_w [f_w(y) - p^y_w] & \forall y \in \mathcal{X} \\ ~ b^y_w \le f_w(y)-f_w(x)+1 & \forall w, y \in S\!\times\!\mathcal{X} \\ ~ b^y_w \ge f_w(y)-f_w(x) & \forall w, y \in S\!\times\!\mathcal{X} \\ ~ p^y_w \le b^y_w & \forall w, y \in S\!\times\!\mathcal{X} \\ ~ p^y_w \le f_w(x) & \forall w, y \in S\!\times\!\mathcal{X} \\ ~ p^y_w \ge b^y_w + f_w(x) - 1 & \forall w, y \in S\!\times\!\mathcal{X} \\ ~ b^y_w \in \{0, 1\} & \forall w, y \in S\!\times\!\mathcal{X} \\ ~ p^y_w \in \mathbb{R}^+ & \forall w, y \in S\!\times\!\mathcal{X} \\ ~ x \in \mathcal{X} \\ ~ t\in \mathbb{R} \end{array}$$ One comes up with a mixed integer linear program $P_\mathcal{X}$ involving $|S|\!\times\!|\mathcal{X}|$ binary variables $b_w^y$, $|S|\!\times\!|\mathcal{X}|$ positive real variables $p_w^y$ and $|\mathcal{X}|\!+\!6\!\times\!|S|\!\times\!|\mathcal{X}|$ constraints, hence an exponential number of variables and constraints due to the combinatorial nature of the set $\mathcal{X}$. In the remainder of the section, we propose a method to overcome this issue. MMER computation method {#subsec:mmercomputation} ----------------------- The proposed method is based on mixed integer linear programming with dynamic generation of variables and constraints to compute ${\mathrm{MMER}}(\mathcal{X}, S)$, an optimal solution $x_S^*\!\in\!\arg\min_{x\in\mathcal{X}}{\mathrm{MER}}(x, \mathcal{X}, S)$ and its best challenger $\hat{y}_S\!\in\!\arg\max_{y\in\mathcal{X}}{\mathrm{PER}}(x_S^*, y, S)$. Let us first define a mixed integer linear program $P_A$ that contains only a subset of variables $b^y_w$ and $p^y_w$, and a subset of constraints of type $(*)$. 
Given a subset $A\!\subseteq\!\mathcal{X}$ of solutions, $P_A$ computes the minimax expected regret ${\mathrm{MMER}}_A(\mathcal{X}, S)$ defined by: $$\displaystyle\min_{x \in \mathcal{X}} {\mathrm{MER}}(x, A, S)\!\!=\!\!\min_{x\in\mathcal{X}} \max_{y\in A} {\mathrm{PER}}(x,y,S).$$ Put another way, ${\mathrm{MER}}(x, A, S)$ is the max expected regret of a solution $x\!\in\!\mathcal{X}$ w.r.t. solutions in $A$. More formally, $P_A$ is written as follows: $$(P_A)\!: \begin{array}{lr} \min t \\ ~ t \ge \frac{1}{|S|} \sum_{w \in S} b^y_w [f_w(y) - p^y_w] & \forall y \in A \\ ~ b^y_w \le f_w(y)-f_w(x)+1 & \forall w, y \in S\!\times\!A \\ ~ b^y_w \ge f_w(y)-f_w(x) & \forall w, y \in S\!\times\!A \\ ~ p^y_w \le b^y_w & \forall w, y \in S\!\times\!A \\ ~ p^y_w \le f_w(x) & \forall w, y \in S\!\times\!A \\ ~ p^y_w \ge b^y_w + f_w(x) - 1 & \forall w, y \in S\!\times\!A \\ ~ b^y_w \in \{0, 1\} & \forall w, y \in S\!\times\!A \\ ~ p^y_w \in \mathbb{R}^+ & \forall w, y \in S\!\times\!A \\ ~ x \in \mathcal{X} \\ ~ t \in \mathbb{R} \end{array}$$ Note that $P_A$ now only involves $|S|\!\times\!|A|$ variables $b^y_w$, $|S|\!\times\!|A|$ variables $p^y_w$ and $|A|\!+\!6\!\times\!|S|\!\times\!|A|$ constraints. The algorithm we propose consists in alternatively solving $P_A$ and $P_{\mathrm{MER}}$. Let $x_A$ (resp. $\hat{y}$) denote the optimal solution returned by solving $P_A$ (resp. $P_{{\mathrm{MER}}}$ for $x\!=\!x_A$). The algorithm starts with a small set $A$ of feasible solutions (see Section \[subsec:IncrementalApproach\] for details regarding the initialization), and then iteratively grows this set by adding to $A$ the best challenger $\hat{y}$ of $x_A$. Convergence is achieved when $P_{\mathrm{MER}}$ returns a solution $\hat{y}$ that already belongs to $A$, which implies that ${\mathrm{MMER}}_A(\mathcal{X}, S)\!=\!{\mathrm{MMER}}(\mathcal{X}, S)$. Algorithm \[algo:mmer\] describes the procedure. $\hat{y} \leftarrow null$\ *mmer*$_A$, $x_A$, $\hat{y}$ By abuse of notation, ${\mathrm{MMER}}_A(\mathcal{X}, S)$ is viewed in the algorithm as a procedure returning the couple consisting of the optimal value *mmer*$_A$ of $P_A$ and the corresponding optimal solution $x_A$. Similarly, ${\mathrm{MER}}(x_A, \mathcal{X}, S)$ is viewed as a procedure returning the couple consisting of the optimal value *\_$\,x_A$* of $P_{\mathrm{MER}}$ and the corresponding optimal solution $\hat{y}$. At the termination of the algorithm, *mmer*$_A$ corresponds to ${\mathrm{MMER}}(\mathcal{X}, S)$ and $x_A$ is the MMER solution (and $\hat{y}$ its best challenger). Algorithm \[algo:mmer\] terminates and returns a minmax expected regret solution and its best challenger. \[prop:validiteAlgo\] First, it is easy to see that Algorithm \[algo:mmer\] always terminates. Indeed, at every step of the algorithm, if the stopping condition is not satisfied then a new solution $\hat{y}\!\not\in\!A$ is added to $A$ and a new iteration is performed. In the worst case, all the solutions of $\mathcal{X}$ are added to the set $A$ and the stopping condition is trivially satisfied. We now prove the validity of Algorithm \[algo:mmer\], i.e.: $${\mathrm{MMER}}_A(\mathcal{X}, S)\!=\!{\mathrm{MMER}}(\mathcal{X}, S) \mbox{ if } \hat{y}\!\in\!A.$$ Assume that $A\!\subsetneq\!\mathcal{X}$ (if $A\!=\!\mathcal{X}$ the equality ${\mathrm{MMER}}_A(\mathcal{X},S)\!=\!{\mathrm{MMER}}(\mathcal{X},S)$ is trivially true). 
On the one hand, at any step of the algorithm, we have (1) *mmer*$_A \le$ *mer*$_{x_A}$ because ${\mathrm{MER}}(x_A,A,S)\!\le\!{\mathrm{MER}}(x_A,\mathcal{X},S)$. On the other hand, if $\hat{y}\!\in\!A$ then the constraint $t\!\ge\!\frac{1}{|S|} \sum_{w \in S} b^{\hat{y}}_w [f_w(\hat{y}) - f_w(x_A)]$ is satisfied for $t=$ *mmer*$_A$, i.e., by Proposition \[prop:max0\], *mmer*$_A \ge {\mathrm{PER}}(x_A, \hat{y}, S)$. As ${\mathrm{PER}}(x_A, \hat{y}, S)\!=\!{\mathrm{MER}}(x_A, \mathcal{X}, S)$ by definition of $\hat{y}$, this implies that (2) *mmer*$_A \ge {\mathrm{MER}}(x_A, \mathcal{X}, S) =$ *mer*$_{x_A}$. By (1) and (2), we conclude that *mmer*$_A =$ *mer*$_{x_A}$. Finally, since $A\!\subseteq\!\mathcal{X}$, we have ${\mathrm{MER}}(x, A, S)\!\le\!{\mathrm{MER}}(x, \mathcal{X}, S)$ for every $x\!\in\!\mathcal{X}$, hence *mmer*$_A = \min_{x\in\mathcal{X}}{\mathrm{MER}}(x, A, S) \le {\mathrm{MMER}}(\mathcal{X}, S)$. As, by definition of the MMER, ${\mathrm{MMER}}(\mathcal{X}, S) \le {\mathrm{MER}}(x_A, \mathcal{X}, S) =$ *mer*$_{x_A}$, we obtain *mmer*$_A \le {\mathrm{MMER}}(\mathcal{X}, S) \le$ *mer*$_{x_A} =$ *mmer*$_A$, and thereby *mmer*$_A =$ *mer*$_{x_A} = {\mathrm{MMER}}(\mathcal{X}, S)$.

Clustering the samples {#subsec:clustering}
----------------------

In order to decrease the computation times between queries, we propose to reduce the number of variables and constraints in $P_A$ by applying a clustering method to each sample $S$ drawn from the current density $p(w)$. Let $C$ denote the set of cluster centers. The idea is to replace the $|S|$ sampled weights by the $|C|$ cluster centers, the formula for the pairwise expected regret becoming: $${\mathrm{PER}}(x,y,C)=\sum_{c \in C}\rho_c\max\{0,f_{c}(y)-f_{c}(x)\}$$ where $\rho_c$ is the weight of the cluster center $c\!\in\!C$ and represents the proportion of weighting vectors of $S$ that lie in the cluster of center $c$. The formulas for ${\mathrm{MER}}(x,\mathcal{X},C)$ and ${\mathrm{MMER}}(\mathcal{X},C)$ are adapted in the same way.

Incremental decision making approach {#subsec:IncrementalApproach}
------------------------------------

As detailed in Section \[sec:incremental\], the MMER computation is used to determine which query to ask at each step as well as to trigger the stopping condition. The whole incremental decision making procedure is summarized in Algorithm \[algo:eliciation\]. The set $A$ is heuristically defined as the set of $f_w$-optimal solutions for $w$ in $C$ (Line \[algoLine:defA\]) but can be defined otherwise without any impact on the result of Proposition \[prop:validiteAlgo\]. The variable *mmer* (Line \[algoLine:mmer\]) represents the current minmax expected regret value and is computed using Algorithm \[algo:mmer\] by replacing the sample $S$ by the set of cluster centers $C$. (Algorithm \[algo:eliciation\] in outline: initialize $p(w) \leftarrow p_0(w)$ and $i \leftarrow 1$, then alternate query selection, Bayesian updating of $p(w)$ and sampling until the stopping condition holds, and finally return $x^*$ selected in $\arg\min_{x \in \mathcal{X}} {\mathrm{MER}}(x, \mathcal{X}, C)$.)

Experimental results
====================

Algorithm \[algo:eliciation\] has been implemented in Python using the *SciPy* library for Gaussian sampling, the *Scikit-Learn* library for the clustering operations[^1] and the *gurobipy* module for solving the mixed integer linear programs. The numerical tests have been carried out on $50$ randomly generated instances of the multi-objective *knapsack* and *allocation* problems. For all tests we used an Intel(R) Core(TM) i7-4790 CPU with 15GB of RAM.
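Before describing the benchmark problems, the following Python sketch shows one way the clustering step of Section \[subsec:clustering\] and the master/subproblem loop of Algorithm \[algo:mmer\] can be assembled; `solve_PA` and `max_expected_regret` stand for MILP models such as $(P_A)$ and $(P_{{\mathrm{MER}}})$ above, and all names are our own illustration rather than code from a reference implementation.

```python
import numpy as np
from sklearn.cluster import KMeans

def cluster_sample(S, n_clusters=20, seed=0):
    """Replace the |S| sampled weights by |C| cluster centers with weights rho_c
    (rho_c = proportion of sampled vectors falling in the cluster of center c)."""
    S = np.maximum(S, 0.0)                       # one simple choice before normalizing
    S = S / S.sum(axis=1, keepdims=True)         # normalize the sampled weight vectors
    km = KMeans(n_clusters=n_clusters, random_state=seed, n_init=10).fit(S)
    rho = np.bincount(km.labels_, minlength=n_clusters) / len(S)
    return km.cluster_centers_, rho

def mmer_by_generation(A0, solve_PA, max_expected_regret):
    """Alternate between the master problem P_A and the subproblem P_MER, growing
    the set A with the best challenger until it is already in A (Algorithm [algo:mmer])."""
    A = [tuple(a) for a in A0]
    while True:
        mmer_A, x_A = solve_PA(A)                 # master: min over x of MER(x, A, .)
        _, y_hat = max_expected_regret(x_A)       # subproblem: best challenger of x_A
        if tuple(y_hat) in A:                     # stopping condition
            return mmer_A, x_A, y_hat
        A.append(tuple(y_hat))                    # otherwise grow A and iterate
```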
#### Multi-objective Knapsack Problem (MKP) This vector optimization problem is formulated as $\max z\!=\!Ux$ subject to $\sum_{i=1}^p\alpha_i x_i\le \gamma$, where $U$ is an $n\!\times\!p$ matrix of general term $u_{ki}$ representing the utility of item $i\!\in\!\{1,\ldots,p\}$ w.r.t objective $k\!\!\in\!\!\{1,\ldots,n\}$, $x\!\!=\!\!(x_1,\ldots,x_p)^T$ is a vector of binary decision variables such that $x_i\!=\!1$ if item $i$ is selected and $x_i\!=\!0$ otherwise, $\alpha_i$ is the weight of item $i$ and $\gamma$ is the knapsack’s capacity. The set of feasible knapsacks is $\mathcal{X}\!=\!\{x\!\in\!\{0,1\}^p | \sum_{i=1}^{p} \alpha_i x_i\!\le\!\gamma\}$, and the performance vector $z\!\in\!\mathbb{R}^n$ associated to a solution $x$ is $z\!=\!Ux$. To simulate elicitation sessions, we consider the problem $\max_{x\in\mathcal{X}} f_w(x)$ where $f_w(x)\!=\!\sum_kw_k\sum_iu_{ki}x_i$. The weighting vector $w$ in $W=\{w\!\in\![0,1]^n:\sum_kw_k\!=\!1\}$ is initially unknown. We generated instances of MKP for $n\!=\!5$ objectives and $p\!=\!100$ items. Every item $i$ has a positive weight $\alpha_i$ uniformly drawn in $\{1,\ldots,20\}$, and $\gamma\!=\!\frac{1}{2}\sum_{k=1}^{100}\alpha_k$. Utilities $u_{ki}$ are uniformly drawn in $[0, \frac{1}{p}]$ to make sure that $f_w(x)\!\in\![0,1], \forall x\!\in\!\mathcal{X}$. #### Multi-objective Allocation Problem (MAP) Given $m$ agents, $r\!<\!m$ *shareable* resources, and $b$ a bound on the number of agents that can be assigned to a resource, the set $\mathcal{X}$ of feasible allocations of resources to agents consists of binary matrices $X$ of general term $x_{ij}$ such that $\sum_{j=1}^rx_{ij}\!=\!1, \forall i\!\in\!\{1,\ldots,m\}$ and $\sum_{i=1}^mx_{ij}\!\le\!b, \forall j\!\in\!\{1,\ldots,r\}$, where $x_{ij}$ are decision variables such that $x_{ij}\!=\!1$ if agent $i$ is assigned resource $j$, and $x_{ij}\!=\!0$ otherwise. The cost of an allocation $x$ w.r.t. criterion $k$ is defined by $z_k\!=\!\sum_{i=1}^m\sum_{j=1}^r c_{ij}^kx_{ij}$, where $c_{ij}^k$ is the cost of assigning agent $i$ to resource $j$ w.r.t. criterion $k$. We consider the problem $\min_{x\in\mathcal{X}} f_w(x)$ where $f_w(x)\!=\!\sum_k w_k\sum_i\sum_jc_{ij}^kx_{ij}$ and $w\!\in\!W$ is initially unknown. For the tests, we generated instances with $n\!=\!5$ criteria, $m\!=\!50$ agents, $r\!=\!5$ resources and a bound $b\!=\!15$ on the number of agents that can be assigned to a resource. The values $c_{ij}^k$ are randomly generated in $[0,20]$, then normalized ($c_{ij}^k/\sum_i\sum_j c_{ij}^k$) to ensure that $f_w(x)\!\in\![0,1], \forall x\!\in\!\mathcal{X}$. #### Simulation of the DM’s answers In order to simulate the interactions with the DM, for each instance, the hidden weighting vectors $w$ are uniformly drawn in the canonical basis of $\mathbb{R}^n$ (the more vector $w$ is unbalanced, the worse the initial recommendation). At each query, the answer is obtained using the response model given in Section \[subsec:bayesianUpsating\], i.e., for query $i$, the answer depends on the sign of $z^{(i)}\!=\!w^Td^{(i)}\!+\!\varepsilon^{(i)}$, where $\varepsilon^{(i)}\!\sim\!\mathcal{N}(0, \sigma^2)$. We used different values of $\sigma$ to evaluate the tolerance of the approach to wrong answers. We set $\sigma\!=\!0$ to simulate a DM that is perfectly reliable in her answers. The strictly positive values are used to simulate a DM that may be inconsistent in her answers. For MKP (resp. MAP), setting $\sigma\!=\!0.01$ led to $16\%$ (resp. 
$14\%$) of wrong answers, while $\sigma\!=\!0.02$ led to $24\%$ (resp. $21\%$) of wrong answers. #### Parameter settings in algorithms The prior density in Algorithm \[algo:mmer\] is set to $\mathcal{N}((10, \dots 10)^T, 100I_5)$, where $I_5$ is the identity matrix $5\!\times\!5$, so that the distribution is rather flat. At each step of Algorithm \[algo:eliciation\], a new sample $S$ of 100 weighting vectors is generated; the vectors $w\!\in\!S$ are normalized and partitioned into $20$ clusters. This number of clusters has been chosen empirically after preliminary numerical tests: considering the entire sample or using more than 20 clusters led to higher computation times and did not offer a significant improvement on the quality of the recommendations. Last but not least, we stopped the algorithm after 15 queries if the termination condition was not fulfilled before. #### Illustrative example Before coming to the presentation of the numerical results, let us first illustrate the progress of the elicitation procedure on the following example: we applied Algorithm \[algo:eliciation\] on a randomly generated instance of MKP with $3$ agents, $100$ items, a hidden weighting vector $w\!=\!(0, 1, 0)$, and we set $\sigma\!=\!0.02$, which led to an error rate of $20\%$. Figure \[fig:samples\] illustrates the convergence of the generated samples of weighting vectors (Line \[algoLine:sampling\] of Algorithm \[algo:eliciation\]) toward the hidden weight during the execution of the algorithm. As the weighting vectors are normalized, two components are enough to characterize them. Every graph shows the sample drawn at a given step of the algorithm: before starting the elicitation procedure (Query 0), after query $3$ and query $10$. -------------------------------------------------------------------------------------------------------------- -------------------------------------------------------------------------------------------------------------- -------------------------------------------------------------------------------------------------------------- ![Evolution of the samples toward the hidden weight.[]{data-label="fig:samples"}](Images/sample1.png "fig:") ![Evolution of the samples toward the hidden weight.[]{data-label="fig:samples"}](Images/sample2.png "fig:") ![Evolution of the samples toward the hidden weight.[]{data-label="fig:samples"}](Images/sample3.png "fig:") Query 0 Query 3 Query 10 -------------------------------------------------------------------------------------------------------------- -------------------------------------------------------------------------------------------------------------- -------------------------------------------------------------------------------------------------------------- #### Analysis of the results We first evaluated the efficiency of Algorithm \[algo:eliciation\] according to the value of $\sigma$. We observed the evolution of the quality of the recommendation (the minimax expected regret solution) after every query. The quality of a recommendation $x^*$ is defined by the score $s_{w_h}(x^*)\!=\!f_{w_h}(x^*)/f_{w_h}(x_h)$ for MKP and by $s_{w_h}(x^*)\!=\!\frac{1-f_{w_h}(x^*)}{1-f_{w_h}(x_h)}$ for MAP, where $w_h$ is the hidden weighting vector and $x_h$ is an optimal solution for $w_h$. The obtained curves for MKP are given in Figure \[fig:score\_ext\]. We observe that the quality of the recommendation (measured by the score function $s_{w_h}$) is of course negatively impacted when $\sigma$ increases. 
However, the score of the recommendation at the termination of Algorithm \[algo:eliciation\] is $\ge0.98$ for $\sigma\!\in\!\{0, 0.01\}$, and $\ge\!0.96$ for $\sigma\!=\!0.02$. Regarding the computation times, the mean time between two queries over the $50$ instances was around $4$ seconds. ------------------------------------------- --- ----------------------------------------- ![image](Images/Score_ext100.png)   ![image](Images/Boxes_score100.png) ![image](Images/Affect_score_ext100b.png)   ![image](Images/Det_boxes_score100.png) ------------------------------------------- --- ----------------------------------------- Concerning RAP, the curves are given in Figure \[fig:score\_alloc\]. As for MKP, we observe a negative impact on the quality of the recommendation when $\sigma$ increases. Yet, the score of the recommendation is, in the worst case ($\sigma\!=\!0.01$), around $0.86$ from query $4$. The algorithm converges very quickly for all $\sigma$ values, which may be explained by the fact that, for extreme weights $w$, there is a large number of $f_{w}$-optimal solutions; $w$ is indeed such that $w_{i}\!=\!1$ for a given $i$ and all other components takes value $0$, thus, any assignment $x$ such that all the agents are assigned to resources other than resource $i$ are such that $f_{w}(x)\!=\!0$. Regarding the computation times, the mean time between two queries over the $50$ instances was around $0.9$ seconds for $\sigma\!\in\!\{0, 0.05\}$, and around $1.7$ seconds for $\sigma\!=\!0.1$. Finally, we compared the performances of Algorithm \[algo:eliciation\] on MKP to the performance of a deterministic approach that does not take into account the possible errors in responses [@BourdacheP19] (approach based on the systematic reduction of the parameter space by minimizing the minimax regret at each step). The aim was to evaluate how much the DM’s inconsistencies in her answers impact the two procedures. In this purpose, we set $\sigma\!=\!0.02$. The obtained results are given in the box plots of Figure \[fig:box\_prob\] for Algorithm \[algo:eliciation\], and of Figure \[fig:box\_det\] for the deterministic algorithm. In these figures, the box plots give, for any given question, the score of the recommendation for every considered instance (the bottom and top bands of the whiskers are the minimum and maximum scores over the 50 instances, the bottom and top bands of the boxes are the first and third quartiles, the band in the box is the median, the dotted band is the average, and the circles are isolated values). The histogram gives the number of observed values for every query; the bin $i$ indeed gives the number of instances for which query $i$ is reached before the stopping condition is fulfilled. Figures \[fig:box\_prob\] and \[fig:box\_det\] show the interest of considering our Bayesian elicitation procedure in comparison with a deterministic approach. Indeed, the deterministic approach converges quickly and requires less queries than Algorithm \[algo:eliciation\]; however, the score of the current recommendation at every step of the algorithm does not exceed $0.94$ for any considered instance and is $\le\!0.9$ for $75\%$ of the instances. In contrast, for Algorithm \[algo:eliciation\], the score of the current recommendation is $\ge\!0.95$ in $75\%$ of the instances from query $6$. 
Conclusion
==========

We introduced in this paper a Bayesian incremental preference elicitation approach for solving multiobjective combinatorial optimization problems when the preferences of the decision maker are represented by an aggregation function whose parameters are initially unknown. The proposed approach accounts for possible inconsistencies in the decision maker's answers to pairwise preference queries. It relies on a column-and-constraint generation method for the computation of minmax expected regrets. The approach is general and can be applied to any problem having an efficient mixed integer linear programming formulation. An interesting research direction would be to refine the approach in the case of non-linear aggregation functions. The approach is indeed compatible with such aggregation functions provided they can be linearized (e.g., the linearization of the ordered weighted averages [@Ogryczak03]), but the resulting linear formulations often involve many additional variables and constraints, so dedicated optimizations would be needed to keep the computation times between queries short.

[^1]: We used *k-means* clustering.
{ "pile_set_name": "ArXiv" }
--- abstract: | Physics experiments produce enormous amount of raw data, counted in petabytes per day. Hence, there is large effort to reduce this amount, mainly by using some filters. The situation can be improved by additionally applying some data compression techniques: removing redundancy and optimally encoding the actual information. Preferably, both filtering and data compression should fit in FPGA already used for data acquisition - reducing requirements of both data storage and networking architecture. We will briefly explain and discuss some basic techniques, for a better focus applied to design a dedicated data compression system basing on a sample data from a prototype of a tracking detector: 10000 events for 48 channels. We will focus on the time data here, which after neglecting the headers and applying data filtering, requires on average $\approx$1170 bits/event using the current coding. Encoding relative times (differences) and grouping data by channels, reduces this number to $\approx$ 798 bits/channel, still using fixed length coding: a fixed number of bits used for a given value. Using variable length Huffman coding to encode numbers of digital pulses for a channel and the most significant bits of values (simple binning) reduces further this number to $\approx$ 552bits/event. Using adaptive binning: denser for frequent values, and an accurate entropy coder we get further down to $\approx$ 455 bits/event - this option can easily fit unused resources of FPGA currently used for data acquisition. Finally, using separate probability distributions for different channels, what could be done by a software compressor, leads to $\approx$ 437bits/event, what is 2.67 times less than the original 1170 bits/event. address: | $^{\star}$ [Faculty of Mathematics and Computer Science, Jagiellonian University, Krakow, Poland]{}\ $^{\dagger}$ [Faculty of Physics, Astronomy and Applied Computer Science, Jagiellonian University, Krakow, Poland]{} bibliography: - 'ref.bib' title: | Designing dedicated data compression for physics experiments\ within FPGA already used for data acquisition --- introduction ============ Continuous development of measurement techniques, precision and readout rates, results in increasing data volume generated by readout systems in modern physics experiments. Currently used data reduction mechanisms rely on fast, on-line data analysis and filtering performed by hardware modules equipped with FPGA devices. Those methods implement low-level algorithms, which usually are limited by the architecture of readout systems (e.g. only separate parts of the entire detector can be analyzed on one device) and the nature of FPGAs (e.g. basic arithmetic functions, limited data buffering capabilities). Therefore, such filters select interesting data based on partial information and reject data considered as not interesting, which under more extensive analysis could turn out to be valuable. The great challenge in the design of acquisition system is the right balance between the final amount of generated data and its physical quality. Large data volumes require advanced networking infrastructure and expensive storage space, as the data from detector runs is supposed to be kept for decades. 
Hence introduction of advanced, adaptive data structures and compression algorithms can help reducing both: costs of developing and running experiments and the amount of valuable data rejected by real-time filtering mechanisms.\ We will discuss applying general techniques to design a dedicated data compression system. For a better focus, we will do it basing on a sample of timing data from a prototype of a tracking detector: 10000 events from 48 channels, having in mind required scaling to a larger number of channels. While a software archiver would be also useful, the primary motivation is putting data compression into FPGA already used for data acquisition (employing its unused resources). In this way we will be able not only to reduce storage media usage, but also lower transmission requirements and improve reading time, as decompression ($\approx$ 500MB/s/core) is usually much faster than reading from a data medium (50-120 MB/s for HDD). ![The summary of discussed approaches - while the current coding uses on average 1170 bits/event, the best discussed one requires 437 bits/event. The cost consists of storing 4 types of values: *distance* is time difference between successive activations, *width* is time difference between rising and falling time for a given activation, *start* is the time of first activation (is zero once per event), *pulses* is the number of digital pulses for a given channel. ](table.png){width="8.5cm"} \[table\] Every event corresponds to a particle passing through several channels of the detector and hitting the reference detector. The electronics convert analog impulses from detector channels and produces digital signals with a certain width. Time-to-Digital Converter measures the time of rising and falling edge of such signal and returns a numeric value representing absolute time. We assume that faulty measurements (e.g. one edge missing or corrupted value) are already filtered in FPGA and therefore we only have to encode successive pairs of times (rising and falling). For simplicity and clearance we will omit the general data packet structure (headers) and focus only on essential data coming from time-measurement devices: storing times within an event. This data uses on average 1170bits/event in the current data format, the columns of Fig. \[table\] contain comparison with successive optimizations that we will discuss here. The one before last (455 bits/event) is suitable to fit FPGA, the last one can be implemented in a software compressor.\ This analysis assumes that we encode only meaningful data which will be used in the data processing phase. This means shifting filtering close to digitization, favorably to FPGA, what results in saving of usage of both data storage and transmission lines. Counterargument against such approach is that unfiltered data can be used for the diagnostic purposes. To resolve this issue, there can be used two modes: the main run data saving mode, sometimes switched to the diagnostic mode, producing unfiltered raw data in the original format. Alternatively, the diagnostics can be included in FPGA while data acquisition, which directly reports suspicious behavior. Another counterargument against operating on packed information to save resources is no direct access by a human interpreter. However, it can be cheaply decoded for example by software and transformed into readable form. More important issue is susceptibility to data corruption - a single changed bit can make the entire compressed packet useless. 
This issue is usually neglected in standard protocols, where also a small change can result in large damage. It is usually resolved by including checksum and discarding entire corrupted packets, what can be also used here. In case data corruption is a serious problem, there can be applied a Forward Error Correction layer, which adds some redundancy to packets (increase their size), to be able to repair eventual damages. The discussed data format comes from an universal time measurement device, therefore presents a significant overhead in order to support wide range of its possible configurations. We will present a concept of rearrangement and compression of that data assuming specific configuration and detector characteristics. Those characteristics define correlations between activity on tracking detector channels and the reference detector. Efficient data representation ============================= We will focus on the situation for a single event. The data sample has 48 channels, but we want to design a method which will be scalable for a larger number. Each channel is expected to produce rising-falling pairs of times, each of such pair corresponds to single digital pulse. The original time information consists of three times: $fine\_time\ \in \{0,\ldots,499]$ counts time in 10ps unit, up to 5ns. The $coarse\_time\ \in\{0,\ldots,2047\}$ counts time in 5ns units. Finally 28bit $epoch\_time$ counts time in unit of 5ns $\cdot$ 2048 = 10240ns. Neglecting additional headers for clearance, the current coding consists of 2 types of 32bit data words: - main word consisting of 10 bits for $fine\_time$, 11 bits for $coarse\_time$, 7 bits for channel number, 1 bit for type of edge, - words with mainly 28 bits of $epoch\_counter$. First word type is used for each measurement. In case epoch has changed, the second word type is inserted additionally. In the data sample, the first type uses on average 700bits/event, the second 470 bits/event, what gives 1170 bits/event total. We can see that there is already some wastefulness stemming from the requirement of working in 32 bit blocks: $fine\_time$ uses in fact 9 bits, channel number 6, there are some unused bits in such data blocks - for optimizations we need to use a more flexible structures of data.\ Let us think of efficient data representation for this data. - Writing the channel number every time is a waste - instead, we could group values corresponding to a given channel, - Writing absolute time is ineffective - we could write relative times instead (differences), which should be much smaller and can have some characteristic probability distribution, especially for the difference between rising and falling time and correlations between channels. Let us introduce $time$ value for a given event: $$time=fine\_time + 500(coarse\_time + 2048 epoch\_time) - ref$$ where $ref$ is the reference time value (absolute time) for a given event (e.g. from a reference detector) - it might be required to be stored ($\approx$ 3 bytes per event). As events are usually treated as independent, in many situations this value could be stored using a lower accuracy or completely neglected. A universal way to choose the $ref$ is to take the minimal of all times in this event. This way we can assume here that $time$ is non-negative, takes zero value once per event, and in the analysed data sample it always fits 32 bits. A different choice of $ref$ can be easily handled (generally can allow for negative $time$), and will be omitted in this analysis. 
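A minimal sketch of this reconstruction, assuming the raw words have already been unpacked into $(fine, coarse, epoch)$ triples (the function and variable names are ours, not the actual readout format):

```python
def absolute_time(fine, coarse, epoch):
    """Absolute time in 10 ps units: fine + 500*(coarse + 2048*epoch)."""
    return fine + 500 * (coarse + 2048 * epoch)

def relative_times(event_hits):
    """event_hits: list of (fine, coarse, epoch) triples for one event.
    Returns the reference (earliest hit) and the times relative to it."""
    times = [absolute_time(f, c, e) for (f, c, e) in event_hits]
    ref = min(times)                       # reference: minimal time of the event
    return ref, [t - ref for t in times]   # non-negative, zero once per event
```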
![The proposed representation of time data from a single event and empirical cumulative distribution function (CDF) of values to store obtained from the data sample. For each channel we write the number of measured digital pulses ($pulses$). If it is nonzero, we also write $start$ as the difference between its first signal and the reference time (it is zero once per event), and then a series of $width$ then $distance$ until encoding all pulses. Four graphs show statistics for the analyzed data sample: the graph for *pulses* shows probabilities of successive values, the graphs for *start*, *width* and *distance* show empirical CDF, which was obtained by sorting the observed values.](general.png){width="8.5cm"} \[general\] Instead of writing the channel number, let us group all signals corresponding to a given channel. They are expected to appear in pairs corresponding to a single pulse: rising time, then falling time. Then a given channel waits some time for another digital pulse. So for each channel we need to store the moment of its first signal ($start$) which should be rising edge, then $width$ as the time difference until the signal of falling edge. If there are more pulses for this channel, we should store $distance$ as time difference to the next rising moment, then analogously $width$ and so on. Additionally, we need to indicated the number of pulses for each channel. There are different ways to realize it, for simplicity let us imagine that for each channel we write $pulses$ as the number of pulses for this channel. Finally, the situation looks like if Fig. \[general\]. We could now directly write such successive values using $\lfloor log_2 (max)\rfloor+1$ bits, where $max$ is the maximal value for a given type, getting correspondingly 4, 28, 27, 28 bits/value for $pulses$, $start$, $width$, $distance$ types of value. Multiplying them by average number of such values per event in our data sample and summing, we get on average 798 bits/event, what is 68% of the original 1170 bits/event. The disadvantage is no longer operating on 32 bit blocks, but on 4, 28, 27, 28 length bit blocks occurring in a complex, history dependent order. However, decoding such data is a simple task for a computer. Observe that by the way we have also saved the single bits characterizing type of edge, which is determined from the order here. However, this approach requires that data is already filtered - there is written only meaningful data: pairs of successive rising - falling times in our case. There could be written also data not fitting to such pattern, for example attached in the standard data format at the end of event. These exceptions, which will be usually discarded while analysis, are not fitting the pattern used for optimization, hence their storage is more costly than of the regular data. Note that, while we focus on 48 channels, this solution can be naturally scaled to a larger number of channels, what will be required in the real application. It also perfectly fits the current architecture, where a few peripheral FPGA units, with less resources process data from some separate subsets of channels, and send it to some powerful central FPGA unit, which combines them into data packets of variable length. The peripheral FPGA units obtain time signals which are already sorted in time - it can directly calculate time differences ($width$ and $distance$), eventually perform some filtering: discard meaningless signals (e.g. 
two successive rising edges, extremely long $width$), and maybe also apply some data compression techniques.\ Figure \[general\] contains empirical probability distribution for $pulses$ values for our data sample, and empirical cumulative distribution functions (CDF) for $start$, $width$ and $distance$. Specifically, denoting $val[i]:\ i=1\ldots n$ as sorted values for a given type, such empirical CDF graph is plot of $(v[i],i/n)$ points. Directly storing numbers is optimal if they have uniform probability distribution (CDF is line for values in $(0,1)$), what is not the case here as we can see from the graphs. Entropy coder can be used to nearly optimally encode symbols from a general probability distribution, at cost of variable length coding: lengths of stored values are no longer fixed (like 4, 28, 27, 28), but depend on probabilities. Entropy coding: prefix codes and ANS ==================================== Let us now focus on encoding values obtaining a small number of possibilities like $pulses$ in our case. In the next section we will look at storing larger numerical values. Data sample results in the following frequencies for the number of pulses per channel (starting with 0): $$(0.8825,\ 0.06591,\ 0.01948,\ 0.009375,\ 0.01503,$$ $$0.00653,\ 0.00101,\ 0.00013,\ 2\cdot 10^{-6})$$ These are 9 possibilities, so directly storing them would require 4 bits/value. We could use a base 9 numeral system to store a sequence of 9 possibilities using asymptotically $\log_2(9)\approx 3.17$ bits/value. Prefix codes: Exp-Golomb and Huffman ------------------------------------ These coding options are optimal for uniform probability distributions among possibilities, while here we almost certainly will get $pulses=0$ value, so we should store this value using a smaller number of bits in our case. There is a simple, so called Exp-Golomb code ([@Golomb]) which is effective in such case of quickly decreasing probabilities for small natural numbers. Specifically, to encode a natural number $x$, we first write $\lfloor\log_2(x+1)\rfloor$ times “0” bit, then the entire binary expansion of $x+1$. For example we get $0\to 1,\ 1\to 010,\ 2\to 011,\ 3\to 00100,\ 4\to 00101$ and so on. Generally, value $x$ corresponds to length $2\lfloor\log_2(x+1)\rfloor+1$ bit sequence. Multiplying these lengths by probabilities and summing over all possibilities, we get the expected number of used bits/value, which is 1.30 in our $pulses$ case, what is 3 times better than the original 4 bits/value. However, as we can see in Fig. \[table\], Exp-Golomb coding is completely inefficient for our larger numerical values. For a variable length coding like Exp-Golomb (the number of used bits depends on probability of symbol), we require prefix code condition: that bit sequence for a symbol/value is not a prefix of bit sequence for another value. Thanks of that, we can decode in unique way: the decoder always knows how many bits to use. While Exp-Golomb coding does not take the actual probability distribution into consideration, Huffman coding ([@HC]) finds the optimal prefix code for a given distribution. It is done by repeating: retrieve two least probable symbols, group them into a new symbol with probability being the sum of the two probabilities, and put this new symbol into the alphabet. Each such grouping corresponds to a node in a binary tree, so finally we get a tree with symbols as its leafs. Now path from root to a given symbol corresponds to its bit sequence by translating left/right edges as 0/1 bits. 
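Both codes are straightforward to prototype; the following Python sketch (ours, for illustration only) builds them and evaluates the expected code length on the $pulses$ distribution given above.

```python
import heapq
from itertools import count

def exp_golomb(x):
    """Exp-Golomb code of a non-negative integer: floor(log2(x+1)) zero bits
    followed by the binary expansion of x+1 (0 -> '1', 1 -> '010', 3 -> '00100')."""
    b = bin(x + 1)[2:]
    return "0" * (len(b) - 1) + b

def huffman_code(probs):
    """Huffman code for {symbol: probability}: repeatedly merge the two least
    probable nodes; the bits collected from root to leaf give each code word."""
    tie = count()                                   # tie-breaker for equal probabilities
    heap = [(p, next(tie), {s: ""}) for s, p in probs.items()]
    heapq.heapify(heap)
    while len(heap) > 1:
        p0, _, c0 = heapq.heappop(heap)
        p1, _, c1 = heapq.heappop(heap)
        merged = {s: "0" + code for s, code in c0.items()}
        merged.update({s: "1" + code for s, code in c1.items()})
        heapq.heappush(heap, (p0 + p1, next(tie), merged))
    return heap[0][2]

pulses_probs = {0: 0.8825, 1: 0.06591, 2: 0.01948, 3: 0.009375,
                4: 0.01503, 5: 0.00653, 6: 0.00101, 7: 0.00013, 8: 2e-6}
code = huffman_code(pulses_probs)
mean_len = sum(p * len(code[s]) for s, p in pulses_probs.items())   # ~1.2 bits/value
```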
Finally, we get the following bit sequences for our $pulses$ values: $$(0,10,110,11110,1110,111110,1111110,11111110,111111110)$$ what in this case is very close to so called unary coding: for a value $x$ write “1” bit $x$ times, then write “0”. This time we get a bit better: 1.23 bits/value. We should also leave an option to handle exceptions, like $pulses>8$ possibilities. Assuming that they are very rare, despite the fact that such a single value is more costly, they should have practically negligible impact on average size, hence we are omitting them in this analysis. Shannon limit and ANS entropy coding ------------------------------------ Shannon entropy ([@Shannon]) is the theoretical limit for encoding a sequence from $\{p_s\}$ probability distribution - we asymptotically need on average: $$H= \sum_s p_s \log_2(1/p_s)\quad\textrm{bits/symbol}$$ in the case of $pulses$ values, it is 0.74 bits/value, what is much less than 1.23 for Huffman coding. Prefix codes operate on complete bits: have to use at least 1 bit/symbol. Generally symbol/event of probability $p$ carries $\log_2(1/p)$ bits of information, what can be much less than 1 bit/symbol. For example $pulses=0$ carries $\log_2(1/0.8825)\approx 0.18$ bits of information in our case.\ There are also “accurate” entropy coders like arithmetic coding or range coding ([@ari; @ran]), which allow to effectively approach the Shannon entropy limit by including the fractional numbers of bits. However, they require arithmetic multiplication, which is a costly resource, especially from the point of FPGA. There was recently introduced a more effective approach (Asymmetric Numeral Systems [@last; @pcs2015]), which has already replaced Huffman and range coding in a few compressors to improve performance, like zhuff [@zhuff], lzturbo [@lzturbo], LZA [@LZA] or ZSTD [@ZSTD]. Its tabled variant (tANS) allows to approach Shannon entropy for a large alphabet without using multiplication, requiring a few kilobyte coding tables for 256 size alphabet. Software decoding can process $\approx$ 500MB/s/core for a modern CPU, encoding $\approx 350$MB/s/core ([@fse]). In contrast, zlib implementation of Huffman coding ([@zlib]) has similar encoding speed, but only $\approx 300$MB/s/core decoding speed (and suboptimal compression ratio). tANS was also found suitable for FPGA implementations [@ansfpga]. We will now briefly present direct application of tANS method, more details can be found for example in [@pcs2015]. In this method we build a $L=2^R$ $(R\in \mathbb{N})$ state automaton dedicated for a given probability distribution, like depicted in Fig. \[autom\] for $L=4$ states and $\Pr(a)=3/4,\ \Pr(b)=1/4$ probability distribution. Top-left part of this figure shows encoding step for symbol $a$ (upper) and $b$ (lower): for every symbol there is a set of rules for changing the state and eventually producing some bits (blue numbers on arrows). The top-right part of this figure presents decoding in this example: every state determines $symbol$ to decode, new state is $newX$ plus $nbBits$ of bits from the data stream, where $newX$ and $nbBits$ are also determined by the state. While encoding we start from some chosen initial state ($x=4$ in the example), then encode successive symbols (“$baaaabb$”), leading to a bit sequence (“$00100001$”) and a final state ($x=5$). Decoder needs to know this final state to start with, it process the bit sequence in backward order, producing symbol sequence in backward order. 
The inconvenience of backward decoding is usually resolved by encoding in backward direction in data frames of size of kilobytes, then decoding is straightforward. For FPGA encoding we can encode in forward direction and then software decode in backward direction instead. The cost of storing the final state once per data frame can be compensated by encoding some information in the initial state of encoder. While prefix codes operated on integer number of bits, approximating probabilities by powers of $1/2$, tANS handles fractional number of bits thanks to the state $x\in \{L,\dots, 2L-1\}$ acting as a buffer containing $\log_2(x)\in [R,R+1)$ bits of information. This buffer gathers information and produces accumulated complete bits of information when needed. ![Example of 4 state tANS (top) and its application for stream coding (bottom) for two symbol alphabet of $\Pr(a)=3/4,\ \Pr(b)=1/4$ probabilities. State/buffer $x$ contains $\lg(x)\in [2,3)$ bits of information. Symbol $b$ carries 2 bits of information, while $a$ carries less than 1 - its information is gathered in $x$ until accumulating to a complete bit of information. The $\rho_x$ are probabilities of visiting state $x$ assuming i.i.d. input source.](autom.png){width="8.5cm"} \[autom\] In our example $x$ contains $\log_2(x)\in[2,3)$ bits of information. Symbol $b$ contains $\log_2(2)=2$ bits of information, and so we see that it always produces 2 bits of information here. Symbol $a$ contains $\log_2(4/3)<1$ bits of information, and so it sometimes produces one bit (from $x=6,7$), sometimes zero bits (from $x=4,5$) only increasing the state (accumulating information in the buffer) - symbol $a$ produces on average less than 1 bits/symbol. To quantitatively evaluate performance of such entropy coder, we need first to find the probability distribution of visiting successive states: $\rho_x$ as the stationary probability distribution for such a random walk among states: assuming corresponding i.i.d. input source of symbols. Finally the automaton from Fig. \[autom\] produces on average $H'\approx (0.241+0.188)\cdot 3/4\cdot 1 + 1\cdot 1/4 \cdot 2\approx H+0.01$ bits/symbol, where $H$ is the minimum: Shannon entropy. In other words, the inaccuracy of this entropy coder costs us using $\Delta H=H'-H \approx 0.01$ more bits/symbol than the optimum. It can be reduced by using more states, e.g. $L=8$ state automaton would give $\Delta H\approx 0.0018$ bits/symbol for the $\Pr(a)=3/4,\ \Pr(b)=1/4$ case. Generally, $\delta H$ drops approximately with square of the number of used states ($L$) and grows with square of alphabet size ($m$). So in practical applications there is chosen a fixed $L/m$ proportion, for example as 8 in FSE ([@fse]). In a general case, above decoding/encoding steps, which are the critical loops, can be optimized to a compact form: [0.5]{} Where $decodingTable$ for our example is: $symbol$ are correspondingly $\{a,b,a,a\}$, $nbBits$: $\{1,2,0,0\}$ and $newX$ are $\{6,4,4,5\}$. For encoding in our example we can choose $nb[a]=2,\ nb[b]=12,\ start[a]=-3,\ start[b]=2,\ encodingTable[0..3]=\{4,6,7,5\}$.\ We will now present algorithms to produce such automaton for a general alphabet and probability distribution $\{p_s\}_{s=1..m}$ by generating the tables used in above encoding/decoding steps. Assume that $L=2^R$ and $0<L_s \approx L\cdot p_s$ approximate the probability distribution of symbols, such that $\sum_s L_s=L$. 
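The table generation and the critical encoding/decoding loops described above can be summarized in the following Python sketch (our illustration, not the reference FSE code), for a symbol spread chosen as discussed below; it reproduces the 4-state example of Fig. \[autom\].

```python
def build_tables(spread, Ls, R):
    """Build tANS tables: spread[X] = symbol placed at slot X, Ls[s] = number of
    slots of symbol s, sum(Ls.values()) = L = 2**R."""
    L = 1 << R
    start, acc = {}, 0
    for s in Ls:                                    # start[s] = -Ls + sum_{s'<s} Ls'
        start[s] = acc - Ls[s]
        acc += Ls[s]
    k = {s: R - (Ls[s].bit_length() - 1) for s in Ls}   # k[s] = ceil(lg(L/Ls))
    bound = {s: Ls[s] << k[s] for s in Ls}
    decode_tab = {}                                 # state -> (symbol, nbBits, newX)
    encode_tab = [0] * L                            # encodingTable[start[s] + x]
    nxt = dict(Ls)                                  # per-symbol counter x = Ls..2Ls-1
    for X in range(L):
        s = spread[X]
        x = nxt[s]; nxt[s] += 1
        nbBits = R - (x.bit_length() - 1)
        decode_tab[L + X] = (s, nbBits, x << nbBits)
        encode_tab[start[s] + x] = L + X
    return decode_tab, encode_tab, start, k, bound, L

def encode(symbols, tables):
    decode_tab, encode_tab, start, k, bound, L = tables
    x, out = L, []                                  # any initial state in {L..2L-1}
    for s in symbols:
        nbBits = k[s] - (1 if x < bound[s] else 0)
        out.append((nbBits, x & ((1 << nbBits) - 1)))   # emit the low bits of x
        x = encode_tab[start[s] + (x >> nbBits)]
    return out, x                                   # bit groups and final state

def decode(out, final_state, tables):
    decode_tab = tables[0]
    x, symbols = final_state, []
    for _, bits in reversed(out):                   # backward decoding
        s, nbBits, newX = decode_tab[x]
        symbols.append(s)
        x = newX + bits
    return list(reversed(symbols))

# 4-state example: Pr(a)=3/4, Pr(b)=1/4, symbol spread {a,b,a,a}, initial state 4.
tables = build_tables(["a", "b", "a", "a"], {"a": 3, "b": 1}, R=2)
groups, last = encode("baaaabb", tables)            # 8 bits in total, final state 5
assert "".join(decode(groups, last, tables)) == "baaaabb"
```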
Now we need to choose a symbol spread function: $symbol[X]:\ X\in\{0,\ldots,L-1\}\to\{1,\ldots,m\}$, such that symbol $s$ appears $L_s$ times there: $L_s = \{X:\ symbol[X]=s\}$. This function defines the coding. In our example the symbol spread is correspondingly $\{a,b,a,a\}$, $L_a=3,\ L_b=1$. The optimal choice of $symbol[X]$ symbol spread function is a complicated topic, we present only a fast simple way to spread symbols in a pseudorandom which is used in FSE and usually provide excellent performance. More sophisticated methods can offer a small improvements of $\Delta H$ - many of them can be found and tested in [@toolkit]. [0.2]{} After choosing the symbol spread $symbol[X]$, Method \[gen\] generates the $decodingTable$ for decoding step from Method \[dec0\]. For efficient memory handling while encoding step, the encoding table can be stored in one dimensional form $encodingTable[x + start[s]] \in I$ for $x\in \{L_s,\ldots,2L_s-1\}$, where $start[s]=- L_s + \sum_{s'<s}L_{s'} $. To encode symbol $s$ from state $x$, we need first to transfer $k[s]-1$ or $k[s]$ bits, where $k[s] = \lceil lg(L/L_s) \rceil$. The smallest $x$ for $k[s]$ bits is $bound[s] = L_s \cdot 2^{k[s]} \in \{L,\ldots,2L-1\}$. Finally, preparation and encoding step are written as Methods \[encprep\] and \[enc0\] correspondingly. Simple and adaptive binning =========================== Our numerical values $start$, $width$, $distance$ are rather too large to be directly written by an entropy coder. However, their least significant bits have often nearly uniform probability distribution, so directly writing them can be already effective. In contrast, their most significant bits may have very nonuniform probability distribution - we can use entropy coder to optimize their storage cost. The simplest approach is to write some number of the most significant bits using entropy coder and then directly write the remaining bits, what will be referred as simple binning. The top part of Fig. \[binning\] shows its example for our $start$ values - split into 15 bins for the most significant 4 bits, the remaining 24 bits should be directly written. The probability of a bin can be obtained as a percentage of values using this bin. ![Directly writing the *start* values would cost 28bits/value. In the above 24 bit simple binning we use entropy coder to encode the most significant 4 of these bits, what costs on average 1.92 bits/value and 24 bits to encode the remaining least significant bits. The lower part shows adaptive banning (and magnification for small $start$ values): we divide the range into variable-width bins, optimized to have approximately uniform probability distribution of the least significant bits (CDF is approximately linear there). The choice of bin costs on average 3.15 bits/symbol and 18.09 bits/value is the average cost of storing the remaining least significant bits. 10000 out of 56385 $start$ values were zero, hence the first bin of probability 17.7% just produces the zero value without using the further bits. For 20bit simple binning we get 24.85 bits/value for 237 bins, for 168 adaptive bins we get 21.06 bits/value.](binning.png){width="8.5cm"} \[binning\] The assumption of nearly uniform probability distribution is often violated, especially for small values. For example most of values in the first bin of our $start$ example will be zero - storing 24 low bits is a waste here. 
Hence, adjusting bin sizes to a given case can be beneficial, especially when - there are special cases like $start=0$ in our example, - for quickly decaying probability distributions like Gaussian (see $20.1 \to 15.5$ improvement for $width$ in Fig. \[table\]), - when we want to handle exceptions we can use large bins at both ends, what would require using a large number of simple bins, making it costly from the perspective of entropy coding. For adaptive binning we need two tables: $binStart[i]$ as the minimal value for $i$-th bin, and $binWidth[i]$ as the number of the least significant bits required to choose a value inside this bin. The next bin starts at the successive position: $startBin[i+1]=startBin[i]+2^{binWidth[i]}$. Instead of the bin number, entropy decoder can directly return such two numbers: $binStart$, $binWidth$ and decoded value is $$v=binStart + \textrm{readBits}(binWidth)$$ Encoding is a bit more complicated as we need to determine the bin. One possibility is to use a table which takes some number of the most significant bits of value (e.g. 8) and directly returns the bin number if it is unique. Otherwise, it could point to analogous another table for successive e.g. 8 bits, and so on if required. Special cases like our $start=0$ can be handled by a separate condition. There have remained a question of choosing adaptive binning. The optimal choice seems to be a complicated problem. The presented evaluation used a simple heuristics and manipulation of a $minVal$ parameter: the minimal number of values per bin. Specifically, after sorting values, we construct successive bins by increasing $binWidth$ until exceeding the $minValn$ number of values inside this bin or reaching the end. This simple algorithm quickly approaches some asymptotic average cost, suggesting it is sufficient for practical applications. Separate channel distributions and further optimizations ======================================================== In the previous sections we were assuming that all channels use the same probability distribution, the statistics were obtained by putting all values of given type into one box. Looking separately at each channel: dividing the data into 48 boxes, we can see that data from separate channels seems to be governed by separate statistical rules, as a result of geometry of the experiment. For example Fig. \[separate\] shows $width$ empirical CDFs for separate channels - nearly all of them resemble CDF for Gaussian distribution, but of different ones. These differences are a consequence of geometry of the experiment. We could exploit them by having separate sets of coding tables, each one optimized for a given channel, leading to approximately 4% income for our sample data. Such coding would become more memory demanding, so such full separation is rather restricted to software compressors. However, one could consider intermediate solution to get intermediate improvements, for example classifying channels into a few classes, and use a separate set of coding tables for every class. Additionally, in real experiment an FPGA unit should process data from detectors which are close to each other and so should have similar statistical behavior - there could be used coding tables optimized for a given FPGA. ![Empirical CDF for widths for separate channels - each color corresponds to a different channel. 
Using a single entropy coder for all of them we need on average $\approx$15.5 bits/value, while using a separate optimized for each of them reduces this value to $\approx$14.9 bits/value.](separate.png){width="8.5cm"} \[separate\] There have still remained place for other optimizations. For example we know that exactly once per event there will be $start$=0 value. Additionally, it is more likely to happen for a channel with large $pulses$. Approximately 2 bits/event could be saved by pointing the $start$=0 channel and encode only nonzero $start$ values for the remaining channels. Finally, while we have assumed that signals from separate channels are independent, there should be some hidden correlations which exploitation could lead to further essential savings. For example concentration of hits in some channels may suggest increased activity in other channels. An idealized compressor should first classify type of event and estimate its parameters, then use them to estimate probability distributions for channel activations, to be used for optimized data encoding. This topic requires further research. Conclusions =========== Efficient encoding of data from physics experiments can be one of tools to reduce required resources: data storage and transmission lines. It can also improve the processing speed as decoding is usually much faster than reading from HDD. The basic suggestions for designing such data acquisition systems and protocols are: - Separate diagnostic runs from the proper data acquisition - when early filtering can be used to transmit and store only the meaningful information - which will be actually used in the analysis. FPGA usually used for acquisition may have unused potential to include some initial filtering and optimized encoding. - Instead of writing labels (like channel) for each data block, try to group information having the same label. - Instead of writing absolute values like time, relative values (differences) have usually much smaller and more predictable values, often come from some characteristic distributions like exponential or Gaussian. - Use entropy coder, especially for probability distributions which are far from uniform. - Try to exploit correlations, for example predict values using previous ones and encode the difference.
{ "pile_set_name": "ArXiv" }
--- abstract: 'It has been shown in the last few years that 3-form fields present viable cosmological solutions for inflation and dark energy with particular observable signatures distinct from those of canonical single scalar field inflation. The aim of this work is to explore the dynamics of a single 3-form in five dimensional Randall-Sundrum II braneworld scenario, in which a 3-form is confined to the brane and only gravity propagates in the bulk. We compare the solutions with the standard four dimensional case already studied in the literature. In particular, we evaluate how the spectral index and the ratio of tensor to scalar perturbations are influenced by the presence of the bulk and put constraints on the parameters of the models in the light of the recent Planck 2015 data.' author: - 'Bruno J. Barros and Nelson J. Nunes' title: '3-form inflation in Randall-Sundrum II' --- Introduction ============ Primordial inflation provides solutions for cosmological puzzles such as the flatness and horizon problems and also explains the emergence of the primordial density fluctuations essential for the formation of the large scale structure observed today [@Guth:1980zm; @Linde:1983gd]. Inflation is typically studied considering a self interacting scalar field and has been widely studied in the literature (see [@Bassett:2005xm; @Martin:2013tda] for reviews). The possibility of the energy source of the inflationary expansion to be of a non-scalar nature has, however, never been excluded. It is, therefore, important to understand the nature of higher spin fields and how robust they are in order to fully test their applications in cosmology. Inflation considering higher spinor fields has been investigated in the past and these models are also important due to their connection to string theory scenarios [@Frey:2002qc; @Gubser:2000vg; @Groh:2012tf]. Vector inflation has been studied in Ref. [@Ford:1989me], however, for inflation to proceed, the vector needs a nonminimal coupling and the model appears to feature some instabilities. Inflation with a 2-form field resembles much the vector inflation with the same problems [@Germani:2009iq; @Koivisto:2009sd]. A 3-form has been shown to present viable solutions, not only for inflation [@Koivisto:2009ew; @Koivisto:2009fb; @Mulryne:2012ax; @DeFelice:2012jt], but also for describing dark energy [@Koivisto:2012xm]. Inflation driven by two 3-form fields has also been studied and does presents interesting results [@Kumar:2014oka]. The natural question that arises now is how these properties translate to an extra-dimensional cosmological scenario. For example, in the Randall-Sundrum II model, proposed in 1999 [@Randall:1999vf], our universe is confined to a four dimensional 3-brane, where the standard model particles reside, embedded in a five dimensional slice of an anti-de Sitter (AdS) space-time, the bulk. The presence of the bulk modifies the evolution equations [@Brax:2003fv], more specifically, the Friedmann equation leads to a non-standard expansion law of the universe at high energies, while reproducing the standard four dimensional cosmology at low energies. One particular feature of the RSII model is that the tensor modes are enhanced due to the presence of the five dimensional bulk [@Langlois:2000ns; @Langlois:2002bb]. Chaotic inflation on the brane has been investigated in Ref. [@Maartens:1999hf] and it was shown that the inflationary predictions are modified from those in the four dimensional standard cosmology. 
Quintessential inflation from brane worlds has also been explored in [@Nunes:2002wz], as has inflation in the context of a Gauss-Bonnet brane cosmology [@Lidsey:2003sj]. More recently, simple inflationary models in the context of braneworld cosmology were analysed against the 2015 Planck data [@Okada:2014eva; @Okada:2015bra]. It is important to compare the dynamics of inflation with scalar fields to the dynamics obtained when higher order fields are considered. The purpose of this work is, therefore, to study braneworld inflationary models driven by a single 3-form, confined to the brane, in the light of the Planck 2015 results [@Ade:2015lrj; @Ade:2015xua]. In Sec. \[RSII\] we introduce the 3-form model in the Randall-Sundrum II braneworld. We then rewrite the equations of motion in terms of a first order dynamical system for which we identify the critical points and analyse their stability for a specific form of the potential. We explore the main differences of the dynamics compared with the four dimensional case. In Sec. \[perturbations\] we write the power spectra for the scalar and tensor perturbations, calculate the cosmological parameters, namely the tensor to scalar ratio and the spectral index, and evaluate how sensitive they are to small changes in the brane tension. We find a lower bound on this parameter for a particular potential given the recent Planck data [@Ade:2015lrj; @Ade:2015xua]. Finally in Sec. \[conclusions\] we summarize and discuss our results. 3-form in Randall-Sundrum II {#RSII} ============================ In the RSII scenario, our universe is confined to a single positive tension four dimensional 3-brane embedded in a five dimensional Anti de Sitter spacetime with a negative (bulk) cosmological constant. A single 3-form field $A_{\mu\nu\rho}$ minimally coupled to Einstein gravity is confined to the brane, $$\begin{aligned} \label{action} S &=& -\int d^5 x \sqrt{-g^{(5)}} \left( \frac{R} {2\kappa_5^2} + \Lambda_5 \right) \nonumber \\ &-& \int d^4 x \sqrt{-g^{(4)}} \left(\lambda -\frac{1}{48}F^2 -V(A^2)\right).\end{aligned}$$ Here, $R$ is the Ricci scalar, $\Lambda_5$ is the bulk's cosmological constant, $\lambda$ is the brane tension, $g^{(4)}$ and $g^{(5)}$ are the determinants of the four and five dimensional metrics, respectively. $\kappa^2=8\pi G$ and $F_{\alpha\beta\gamma\delta}$ is the Maxwell tensor given by, $$F_{\alpha\beta\gamma\delta} = 4 \nabla_{[\alpha} A_{\beta\gamma\delta]},$$ where square brackets denote antisymmetrization. In order to avoid an excessive use of indices, we use the notation in which squaring means contracting all the indices, $A^2=A_{\mu\nu\rho} A^{\mu\nu\rho}$, and dotting means contracting the first index, $(\nabla \cdot A )_{\alpha\beta} = \nabla^{\mu} A_{\mu\alpha\beta}$. We consider a Friedmann-Robertson-Walker Universe and take the scalar function $\chi (t)$ to parametrize the background contribution of the 3-form $A_{\mu\nu\rho}$. Thus, the non-vanishing components are given by, $$A_{ijk}=a^3 (t) \epsilon_{ijk} \chi(t),$$ and therefore, $A^2=6\chi^2 (t)$, where $\epsilon_{ijk}$ is the standard Levi-Civita symbol and $i$, $j$ and $k$ denote spatial indices.
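As a quick consistency check of this normalization (a short verification added here, assuming the spatially flat FRW metric with spatial part $g_{ij}=a^2(t)\delta_{ij}$, so that raising the three spatial indices brings a factor $a^{-6}$), $$A^2=A_{ijk}A^{ijk}=a^{-6}\left(a^3\chi\right)^2\epsilon_{ijk}\epsilon_{ijk}=6\chi^2(t),$$ since $\epsilon_{ijk}\epsilon_{ijk}=3!=6$.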
The action (\[action\]) leads to the equations of motion for the 3-form, $$\label{mot} \nabla\cdot F= 12V'(A^2)A,$$ and, due to antisymmetry, implies the additional set of constraints, $$\nabla\cdot V'(A^2)A = 0.$$ The equations of motion in terms of the comoving field, $\chi$, are unmodified with respect to the previously studied four dimensional case because the matter fields are confined to the brane, $$\label{motion} \ddot{\chi} + 3H\dot{\chi} + 3\dot{H}\chi + V_{,\chi} =0,$$ where the third term is a new feature from the 3-form model, not present in the standard scalar field theory. The generalization of the equations of motion to multiple 3-forms was done in Ref. [@Kumar:2014oka]. The presence of the bulk, however, modifies Einstein’s equations [@Brax:2003fv]. The five-dimensional Einstein’s equations lead to the Friedmann equation, $$H^2=\frac{\kappa ^2}{3} \rho\left[ 1 + \frac{\rho}{2 \lambda}\right] + \frac{\Lambda_4}{3} + \frac{\mu}{a^4},$$ where $\Lambda_4$ is the brane four-dimensional cosmological constant and the last term represents the influence of the bulk gravitons on the brane. In what follows we will use units where $\kappa^2=1$ and we will assume that $\Lambda_4=\mu=0$, leaving us with, $$\label{fridmannRSII} H^2=\frac{1}{3} \rho\left[ 1 + \frac{\rho}{2 \lambda}\right].$$ When we inspect Eq. (\[fridmannRSII\]), we note that the expansion rate is larger at high energies ($\rho \gg 2\lambda$), which means that the friction term in Eq. (\[motion\]) is larger in that regime. This means that the field $\chi(t)$ rolls slower and, for the same initial conditions, inflation can last longer in this five-dimensions set up than in the four-dimensional case. The Friedmann equation in the standard cosmology is reproduced in the limit of low energies, $\rho \ll 2\lambda$. We can define the energy density and pressure for the field in the form, $$\begin{aligned} \rho &=& \frac{1}{2} (\dot{\chi} + 3H\chi)^2 + V,\label{energy} \\ p &=& -\frac{1}{2} (\dot{\chi} + 3H\chi)^2 -V + V_{,\chi}\chi.\end{aligned}$$ Dynamics of the 3-form on the brane {#dynamics} ----------------------------------- In order to study the dynamics of the 3-form on the brane we introduce the dimensionless variables, $$\begin{aligned} x &\equiv& \kappa \chi, \label{x}\\ y^2 &\equiv& \frac{V}{\rho} \label{y}, \\ w &\equiv& \frac{\dot{\chi} + 3 H \chi}{\sqrt{2\rho}}, \label{w}\\ \Theta &\equiv& \left( 1+ \frac{\rho}{2 \lambda} \right)^{-1/2}, \label{theta}\end{aligned}$$ where $x$ represents the comoving field $\chi$, $y$ and $w$ are, respectively, the normalized potential and kinetic energies and $\Theta$ represents the correction term in Eq. (\[fridmannRSII\]). These variables are subject to the constraint, that follows from Eq. (\[energy\]), $$\label{constrangimento} w^2 + y^2 =1.$$ Using Eqs. (\[energy\]), (\[x\]), (\[w\]) and (\[theta\]), the modified Friedmann and Raychaudhuri equations can be written as, $$\begin{aligned} H^2 &=& \frac{1}{3} \frac{V}{(1-w^2)} \Theta^{-2}, \label{friedMod}\\ \dot{H} &=& -V_{,x} x \left(\Theta^{-2} - \frac{1}{2}\right). \label{rayMod}\end{aligned}$$ Substituting for $\rho$ in Eq. (\[constrangimento\]) using Eqs. (\[y\]) and (\[theta\]), we obtain the useful relation for $\Theta$ in terms of the $x$ and $w$ variables, $$\Theta^2= \frac{1-w^2}{1-w^2 + \frac{V}{2\lambda}}.$$ Next we follow to rewrite the equation of motion Eq. 
(\[motion\]) in terms of a system of first order differential equations for the new variables such that, $$\begin{aligned} x' &=& 3 \left( \sqrt{\frac{2}{3}} \Theta w - x \right), \label{xeq} \\ w' &=& \frac{3}{2} \frac{V_{,x}}{V} (1-w^2) \left[ xw - \Theta \sqrt{\frac{2}{3}} \right], \label{weq}\end{aligned}$$ where a prime means differentiation with respect to the number of e-folds $N=\ln a(t)$, so that $x'=dx/dN$. This system of equations closes as $\Theta$ depends only on $x$ and $w$. We immediately note that at low energies ($\rho\ll 2\lambda$ and therefore $\Theta\approx1$) we end up recovering the four-dimensional equations studied in Ref. [@Kumar:2014oka], even though there the variables were normalized to $H^2$ instead of $\rho$ as we do here. We would now like to see how the presence of this correction term, $\Theta$, affects the dynamics of the system in comparison with the evolution in the four-dimensional case. Critical points {#critpoints} --------------- Let us assume for now that $\Theta$ evolves sufficiently slowly that we can take it to be constant within a few $e$-folds. We will see later that this assumption is actually supported by the numerical solutions. We can then identify the [*instantaneous*]{} critical points of the dynamical system established by Eqs. (\[xeq\]) and (\[weq\]). These are shown in Table \[tabela\].

       $x$                                $w$                                                $V_{,x}/V$   Description
  --- ---------------------------------- -------------------------------------------------- ------------ --------------------
   A   $\pm \sqrt{\frac{2}{3}} \Theta$    $\pm 1$                                            any          kinetic domination
   B   $x_{\rm ext}$                      $\sqrt{\frac{3}{2}}\frac{1}{\Theta} x_{\rm ext}$   0            potential extrema

  : \[tabela\] Instantaneous critical points of the dynamical system.

The critical points A do not exist for the standard scalar field models [@Copeland:1997et] and result from the extra $3 H \chi$ term in the equation of motion (\[motion\]). One of the eigenvalues vanishes, hence we cannot infer anything regarding their stability from the linear analysis without specifying the form of the potential. The critical point B corresponds to the value of the field at the extrema of the potential, therefore its stability is strongly dependent on whether it corresponds to a minimum or a maximum of the potential. From the analysis of the critical points we can see that, in the five dimensional set up, the critical points have a dependence on the correction term $\Theta$. This means that as the energy decreases, the instantaneous critical points move along the phase space and approach the four dimensional case at low energies, $\Theta =1$. Figs. \[phase1\] and \[phase2\] show the phase space portrait for a potential of the form $V=e^{\chi^2} -1$. Comparing these figures we, again, note that the critical points A (upper and lower dots) are shifted along the $x$ axis as the system evolves and will eventually end at $x=\pm \sqrt{2/3}$ (the 4 dim case). As we will see in Sec. \[inflation\], at the critical points A (top and bottom dots) the universe inflates, and critical point B (central dots) corresponds to the attractor and potential minimum for this potential, where reheating happens as usual [@DeFelice:2012wy]. ![\[phase1\]Phase space $(\tanh (x),w)$ for $V=e^{\chi^2} -1$ at $\Theta=0.3$. ](phase1.png){width="8.5cm"} ![\[phase2\]Phase space $(\tanh (x),w)$ for $V=e^{\chi^2} -1$ at $\Theta=0.9$. 
](phase2.png){width="8.5cm"} An alternative way to study the stability of the critical points is by defining the effective potential, $$V_{{\rm eff},\chi}= 3 \dot{H}\chi + V_{,\chi}.$$ We illustrate the potential and the corresponding effective potential for $V=e^{\chi^2} -1$ in Fig. \[effective1\]. ![\[effective1\] Potential $V(\chi)$ (solid line) and effective potential $V_{eff}$ (dashed lines) for the potential $V=e^{\chi^2} -1$ for different values of $\Theta$.](effective.png){width="8.5cm"} We can observe the shift in the value of the instant critical points as the energies decrease, i.e., as $\Theta$ approaches unity, where the critical points are $x = \pm \sqrt{\frac{2}{3}}$ as we can also verify in Table \[tabela\]. One interesting feature regarding the dynamics of a 3-form in RSII is that the $\Theta$ dependence of the dynamics can change the stability of the critical points as the energy decreases. For example, in Fig. \[effmexican\], we traced the Landau-Ginzburg potential $$V(\chi)=(\chi^2-c^2)^2,$$ with $c=0.5$ (solid), and its effective potential (dashed) at different values of $\Theta$ and we observe that at early times the potential minima at $x = \pm 0.5$ are initially unstable and, as the energy decreases, they become stable. ![\[effmexican\] Potential $V(\chi)$ (solid line) and effective potential $V_{eff}$ (dashed lines) for the potential $V=(\chi^2-0.5^2)^2$ for different values of $\Theta$.](effmexican.png){width="8.5cm"} Initial conditions and slow roll regime --------------------------------------- In order to study inflation we need to understand how the slow-roll parameters are modified in this set up. Analogously to the scalar field as well as 3-forms [@Koivisto:2009ew; @DeFelice:2012jt] the parameters are defined by $\epsilon \equiv -\dot{H} / H^2 = -d\ln H/ dN$ and $\eta=\epsilon ' / \epsilon - 2\epsilon$. One manner to establish a sufficient condition for inflation is, $\epsilon \ll 1$ and $|\eta|\ll 1$, which must last for at least $\approx 50$ $e$-folds. For our RSII model we have, $$\begin{aligned} \epsilon &=& \frac{3}{2} x \frac{V_{,x}}{V} (1-w^2) (2 - \Theta^2), \\ \eta &=& \frac{x'(V_{,x} + V_{,xx}x)}{V_{,x}x} + 6x \frac{V_{,x}}{V} (1-w^2) \frac{\Theta^2 -1}{2-\Theta^2},\end{aligned}$$ where the terms in $\Theta$ signal the new contributions to the slow-roll parameters. 3-form inflation on the brane {#inflation} ----------------------------- In this subsection we present inflationary solutions for the system (\[xeq\])–(\[weq\]). We also compare the evolutions between the four and five dimensional cases. Inspecting Fig. \[exp\] and Fig. \[epsilon\] we note that inflation happens when the field is on the plateau of the evolution that for the four dimensional case is flat and corresponds to the critical point $\chi=\pm \sqrt{2/3}$ [@Kumar:2014oka]. For the RSII case, however, the plateau has a gentle slope due to the dependence of the instantaneous critical points on $\Theta$ (we saw that $\chi = \pm \sqrt{2/3} \Theta$) up to the point in which $\chi=\pm \sqrt{2/3}$. We can also note that, for the same initial conditions, inflation lasts about 30 $e$-folds longer in the five dimensional set up due to the fact that there is additional friction to the field’s evolution. When inflation ends, the field goes to the attractor $\chi=0$ which is the potential minimum (critical point B in Table \[tabela\]). ![\[exp\] Solutions for the system (\[xeq\])–(\[weq\]) for the four dimensional case (dashed line) i.e. 
for $\Theta=1$ already studied in [@Kumar:2014oka] and for the RSII model (solid line) when $\Theta$ is given by Eq. (\[theta\]) for $V=V_0 (e^{\chi^2} -1)$, $V_0=10^{-14}$, $\lambda=10^{-12}$ and for the initial conditions $(x_0,w_0)=(2,0.9055)$. The smaller panel shows the change in $\Theta$, for the RSII model, as the system evolves. ](exp.png){width="8.5cm"} ![\[epsilon\] Change in the slow roll parameter $\epsilon$ for the solutions of the system (\[xeq\])–(\[weq\]) for the RSII model for $V=V_0 (e^{\chi^2} -1)$, $V_0=10^{-14}$, $\lambda=10^{-12}$ and for the initial conditions $(x_0,w_0)=(2,0.9055)$. The dashed line marks $\epsilon=1$ just for reference.](epsilon.png){width="7.5cm"} Cosmological perturbations {#perturbations} ========================== Since the 3-form is confined to the brane, and neglecting any backreaction effects of the metric fluctuations in the fifth dimension [@Maartens:1999hf], the power spectrum of the curvature perturbations reads, $$\label{power} \mathcal{P}_{\zeta} = \left.\frac{2 H^4}{m_{\rm pl}^2 \pi V_{,\chi} \chi c_s} \right|_*,$$ where $*$ indicates horizon crossing, $ c_s k=aH$, and the sound speed is given by [@Koivisto:2009fb; @Mulryne:2012ax], $$c_s^2 = \frac{V_{,\chi\chi} \chi}{V_{,\chi}}.$$ From the Planck 2015 results [@Ade:2015xua], we fix the power spectrum of scalar perturbations as $\mathcal{P}_{\zeta}(k_0) = 2.196 \times 10^{-9}$ for the pivot scale chosen at $k_0 = 0.002$ Mpc$^{-1}$. The spectral index is given by $$\label{spectral} n_s -1 = -5\epsilon - \frac{\dot{c}_s}{c_s H} - \epsilon c_s^2 + \frac{V_{,\chi}}{3\chi H^2} (1+c_s^2),$$ which, like the power spectrum, also depends on the speed of sound. In the Randall-Sundrum model, however, the amplitude of the tensor modes is modified and the respective power spectrum reads [@Langlois:2000ns], $$\label{at} \mathcal{P}_T = \frac{64\pi}{m_{\rm pl}^2} \left( \frac{H}{2\pi} \right)^2 F^2(x_0) |_*,$$ where $F$ is a correction function, $$\label{f} F(x)= \left[ \sqrt{1+x^2} - x^2 \ln \left( \frac{1}{x} + \sqrt{1+ \frac{1}{x^2}} \right) \right]^{-1/2},$$ and $$x_0 = \left(\frac{3}{4\pi\lambda} \right)^{1/2} H M_{\rm Pl}.$$ For $x_0 \ll 1$, $F(x_0) \simeq 1$ and Eq. (\[at\]) reduces to the standard cosmology formula, and for $x_0 \gg 1$, $F(x_0) \simeq \sqrt{3x_0 /2}$. Finally, the tensor to scalar ratio is then, $$\label{tsratio} r\equiv \frac{\mathcal{P}_T}{\mathcal{P}_{\zeta}} = \frac{8}{H^2} V_{,\chi} \chi c_s F^2 (x_0).$$ We are now ready to compare the cosmological parameters of our inflationary setting, the tensor to scalar ratio and the spectral index, with the 2015 Planck data [@Ade:2015lrj]. First we consider a form of the scalar potential which has been proven in Ref. [@Kumar:2014oka] to lead to a viable cosmology in the four dimensional set up (although for a two 3-form system) and to produce a good fit to the Planck 2013 results, $$\label{pot} V=V_0 (\chi^2 + b\chi^4),$$ where $V_0$ and $b$ are free parameters. In Fig. \[results\] the bottom bar represents the prediction for the five dimensional case with $\lambda=10^{-5}$. With this value of the brane tension, the evolution quickly reaches $\Theta \approx 1$, which means that this case is practically indistinguishable from the four dimensional solution. When we lower the brane tension and consequently increase the five dimensional effects, we observe that the predictions worsen due to the presence of the correction $F^2(x_0)$ in Eq. (\[at\]), which enhances the tensor to scalar ratio.
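The size of this enhancement is easy to quantify. The snippet below is only an illustrative sketch added here (the sample values of $x_0$ are arbitrary); it evaluates the correction function of Eq. (\[f\]) and checks the two limiting behaviours quoted above.

```python
import math

def F(x):
    """Correction function of Eq. (f): F -> 1 for x << 1 and F^2 -> 3x/2 for x >> 1."""
    return (math.sqrt(1.0 + x * x)
            - x * x * math.log(1.0 / x + math.sqrt(1.0 + 1.0 / (x * x)))) ** (-0.5)

# x0 grows as the brane tension is lowered (x0 ~ lambda^{-1/2} at fixed H), so F^2,
# and with it the tensor-to-scalar ratio of Eq. (tsratio), is enhanced at low tension.
for x0 in (0.01, 0.1, 1.0, 10.0, 100.0):
    print(f"x0 = {x0:8.2f}   F^2(x0) = {F(x0) ** 2:10.3f}   3*x0/2 = {1.5 * x0:10.3f}")
```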
For $\lambda =10^{-10}$, corresponding to $\lambda \simeq (3.9 \times 10^{16}\,\,{\rm GeV})^4$ (corresponding to the upper bar) the predictions are beyond the Planck TT+lowP contour limits. We find a lower bound, for 60 $e$-folds, of $\lambda \simeq 1.5 \times 10^{-9}$, corresponding to $\lambda \geq (7.6\times 10^{16}\,\,{\rm GeV})^4$, for the inflationary predictions to be within the Planck TT,TE,EE+lowP contour limits. ![\[results\] Comparison of the spectral index and the tensor to scalar ratio against the recent Planck 2015 data [@Ade:2015lrj] for 50 (small dot) and 60 (large dot) $e$-folds for different values of the brane tension $\lambda$. We considered the potential in Eq. (\[pot\]) with $b=-0.245$. The bars represent, from bottom to top, the solutions with $\lambda=10^{-5}$, $\lambda = 3 \times 10^{-9}$ and $\lambda =10^{-10}$ in units $\kappa^2=1$).](results.png){width="8.5cm"} In Fig. \[r1\] we analyse how the brane tension and the tensor to scalar ratio are related as $\lambda$ is lowered for 60 $e$-folds. For $\lambda < 10^{-7}$, $r$ quickly increases due to the presence of $F^2$ in Eq. (\[tsratio\]), making the predictions worse as we also saw in Fig. \[results\]. In Fig. \[ns\] we present the relation between the spectral index and the logarithm of the brane tension $\lambda$. As expected, $n_s$ is almost insensitive to $\lambda$ for large values of this quantity. This is the case because at large $\lambda$ the standard scenario is recovered and as in the scalar picture of the three-form the scalar potential is quadratic, the spectral index must be close to $n_s \sim 0.967$ [@Mulryne:2012ax]. When we lower the brane tension, in order to keep the power spectrum of scalar perturbations fixed as $\mathcal{P}_{\zeta}(k_0) = 2.196 \times 10^{-9}$, for the pivot scale chosen at $k_0 = 0.002$ Mpc$^{-1}$, we also have to change $V_0$ in order to ensure this normalization. This relation is shown in Fig. \[vzero\]. ![\[r1\] $\log \lambda$ vs $r$, for the potential (\[pot\]), with $b=-0.245$, for 60 $e$-folds, for different values of the brane tension $\lambda$. ](r1.png){width="7.5cm"} ![\[ns\] $\log \lambda$ vs $n_s$, for the potential (\[pot\]), with $b=-0.245$, for 60 $e$-folds, for different values of the brane tension $\lambda$. ](ns.png){width="7.5cm"} ![\[vzero\] $\log \lambda$ vs $V_0^* = V_0 \times 10^{12}$, for the potential (\[pot\]), with $b=-0.245$, for 60 $e$-folds, for different values of the brane tension $\lambda$. ](vzero.png){width="7.5cm"} Summary and discussion {#conclusions} ====================== In this work we explored the main differences between the dynamics of a single 3-form in the Randall-Sundrum II braneworld and the standard four dimensional case [@Koivisto:2009ew]. We followed to write the equations of motion for the 3-form model in terms of a system of first order differential equations (\[xeq\])–(\[weq\]). By defining a set of useful variables $(x,y,w,\Theta)$ we identified what we called the instantaneous critical points which now have a dependence on the correction term, $\Theta$, arising from the modified Friedmann equation. We illustrated the effects that take place at high energies by showing the phase space of the system at different stages of the universe, or in other words, for different values of $\Theta$, and by interpreting them as a modification to the effective potential. It was observed that in five dimensions the stability of some instantaneous critical points can change with the energy. 
We presented an inflationary solution for the potential in Eq. (\[pot\]) and computed the respective tensor to scalar ratio (\[tsratio\]) and spectral index (\[spectral\]). We were able to fit the cosmological predictions with the recent Planck 2015 data [@Ade:2015lrj] for a choice of parameters and saw that the effects of the braneworld bring the observables away from the central region of the data contours. By performing this study, we found a lower bound for the brane tension for the potential (\[pot\]) such that the observables’ values remain inside the contours of the Planck TT,TE,EE+lowP. The authors thank Carsten van de Bruck and Tomi Koivisto for comments on the manuscript. N.J.N was supported by the Fundação para a Ciência e Tecnologia (FCT) through the grants EXPL/FIS-AST/1608/2013 and UID/FIS/04434/2013. [9]{} A. H. Guth, Phys. Rev. D [**23**]{} (1981) 347. A. D. Linde, Phys. Lett. B [**129**]{} (1983) 177. B. A. Bassett, S. Tsujikawa and D. Wands, Rev. Mod. Phys.  [**78**]{}, 537 (2006) \[astro-ph/0507632\]. J. Martin, C. Ringeval and V. Vennin, Phys. Dark Univ.  [**5-6**]{}, 75 (2014) \[arXiv:1303.3787 \[astro-ph.CO\]\]. A. R. Frey and A. Mazumdar, Phys. Rev. D [**67**]{}, 046006 (2003) \[hep-th/0210254\]. S. S. Gubser, hep-th/0010010. K. Groh, J. Louis and J. Sommerfeld, JHEP [**1305**]{}, 001 (2013) \[arXiv:1212.4639 \[hep-th\]\]. L. H. Ford, Phys. Rev. D [**40**]{}, 967 (1989). C. Germani and A. Kehagias, JCAP [**0903**]{}, 028 (2009) \[arXiv:0902.3667 \[astro-ph.CO\]\]. T. S. Koivisto, D. F. Mota and C. Pitrou, JHEP [**0909**]{} (2009) 092 \[arXiv:0903.4158 \[astro-ph.CO\]\]. T. S. Koivisto and N. J. Nunes, Phys. Lett. B [**685**]{}, 105 (2010) \[arXiv:0907.3883 \[astro-ph.CO\]\]. T. S. Koivisto and N. J. Nunes, Phys. Rev. D [**80**]{} (2009) 103509 \[arXiv:0908.0920 \[astro-ph.CO\]\]. D. J. Mulryne, J. Noller and N. J. Nunes, JCAP [**1212**]{} (2012) 016 \[arXiv:1209.2156 \[astro-ph.CO\]\]. A. De Felice, K. Karwan and P. Wongjun, Phys. Rev. D [**85**]{}, 123545 (2012) \[arXiv:1202.0896 \[hep-ph\]\]. T. S. Koivisto and N. J. Nunes, Phys. Rev. D [**88**]{} (2013) 12, 123512 \[arXiv:1212.2541 \[astro-ph.CO\]\]. K. S. Kumar, J. Marto, N. J. Nunes and P. V. Moniz, JCAP [**1406**]{}, 064 (2014) \[arXiv:1404.0211 \[gr-qc\]\]. L. Randall and R. Sundrum, Phys. Rev. Lett.  [**83**]{} (1999) 4690 \[hep-th/9906064\]. P. Brax and C. van de Bruck, Class. Quant. Grav.  [**20**]{} (2003) R201 \[hep-th/0303095\]. D. Langlois, Prog. Theor. Phys. Suppl.  [**148**]{} (2003) 181 \[hep-th/0209261\]. D. Langlois, R. Maartens and D. Wands, Phys. Lett. B [**489**]{} (2000) 259 \[hep-th/0006007\]. R. Maartens, D. Wands, B. A. Bassett and I. Heard, Phys. Rev. D [**62**]{} (2000) 041301 \[hep-ph/9912464\]. N. J. Nunes and E. J. Copeland, Phys. Rev. D [**66**]{} (2002) 043524 \[astro-ph/0204115\]. J. E. Lidsey and N. J. Nunes, Phys. Rev. D [**67**]{} (2003) 103510 \[astro-ph/0303168\]. N. Okada and S. Okada, arXiv:1412.8466 \[hep-ph\]. N. Okada and S. Okada, arXiv:1504.00683 \[hep-ph\]. P. A. R. Ade [*et al.*]{} \[Planck Collaboration\], arXiv:1502.02114 \[astro-ph.CO\]. P. A. R. Ade [*et al.*]{} \[Planck Collaboration\], arXiv:1502.01589 \[astro-ph.CO\]. E. J. Copeland, A. R. Liddle and D. Wands, Phys. Rev. D [**57**]{} (1998) 4686 \[gr-qc/9711068\]. A. De Felice, K. Karwan and P. Wongjun, Phys. Rev. D [**86**]{}, 103526 (2012) \[arXiv:1209.5156 \[astro-ph.CO\]\]. A. R. Liddle, A. Mazumdar and F. E. Schunck, Phys. Rev. D [**58**]{}, 061301 (1998) \[astro-ph/9804177\].
--- author: - 'Yevgeniy Kovchegov[^1]' date: title: 'Orthogonality and probability: beyond nearest neighbor transitions' --- Introduction ============ This paper was influenced by the approaches described in Deift [@deift] and questions considered in Grünbaum [@g]. The Karlin-McGregor diagonalization can be used to answer recurrence/transience questions, as well as those of probability harmonic functions, occupation times and hitting times, and a large number of other quantities obtained by solving various recurrence relations, in the study of Markov chains; see [@km1], [@km2], [@km2a], [@km3], [@karlin], [@szego], [@schoutens], [@dksc], [@kmn]. However, with some exceptions (see [@km4]), these were nearest neighbor Markov chains on the half-line. Grünbaum [@g] mentions two main drawbacks of the method: (a) “typically one cannot get either the polynomials or the measure explicitly", and (b) “the method is restricted to ‘nearest neighbour’ transition probability chains that give rise to tridiagonal matrices and thus to orthogonal polynomials". In this paper we attempt to give possible answers to the second question of Grünbaum [@g] for general reversible Markov chains. In addition, we will consider possible applications of the newer methods in orthogonal polynomials, such as the Riemann-Hilbert approach, see [@deift], [@deift1] and [@ku], and their probabilistic interpretations. In Section 2, we will give an overview of the Karlin-McGregor method from a naive college linear algebra perspective. In 2.3, we will give a Markov chain interpretation to the result of Fokas, Its and Kitaev, connecting orthogonal polynomials and Riemann-Hilbert problems. Section 3 deals with one dimensional random walks with jumps of size $\leq m$, the $2m+1$ diagonal operators. There we consider diagonalizing with orthogonal functions. In 3.2, as an example, we consider a pentadiagonal operator and use the Plemelj formula and a two-sided interval to obtain the respective diagonalization. In Section 4, we use the constructive approach of Deift [@deift] to produce the Karlin-McGregor diagonalization for all irreducible reversible Markov chains. After that, we revisit the example from Section 3. Eigenvectors of probability operators ===================================== Suppose $P$ is a tridiagonal operator of a one-dimensional Markov chain on $\{0,1,\dots\}$ with forward probabilities $p_k$ and backward probabilities $q_k$. Suppose $\lambda$ is an eigenvalue of $P$ and ${\bf q}^T(\lambda)=\left(\begin{array}{c}Q_0 \\ Q_1 \\ Q_2 \\ \vdots \end{array}\right)$ is the corresponding right eigenvector such that $Q_0=1$. So $\lambda {\bf q}^T=P{\bf q}^T$ generates the recurrence relation for $Q_j$. Then each $Q_j(\lambda)$ is a polynomial of degree $j$. The Karlin-McGregor method derives the existence of a probability distribution $\psi$ such that the polynomials $Q_j(\lambda)$ are orthogonal with respect to $\psi$. In other words, if $\pi$ is stationary with $\pi_0=1$ and $<\cdot,\cdot>_{\psi}$ is the inner product in $L^2(d\psi)$, then $$<Q_i,Q_j>_{\psi}={\delta_{i,j} \over \pi_j}$$ Thus $\{\sqrt{\pi_j} Q_j(\lambda)\}_{j=0,1,\dots}$ are orthonormal polynomials, where $\pi_j={p_0 \dots p_{j-1} \over q_1 \dots q_j}$. Also observe from the recurrence relation that the leading coefficient of $Q_j$ is ${1 \over p_0\dots p_{j-1}}$.
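The recurrence is easy to carry out explicitly. The sketch below is only an illustration added here (it assumes the chain holds at state $0$ with probability $1-p_0$, and the sample values of $p_k$, $q_k$ in the usage example are not taken from the text):

```python
import numpy as np
from numpy.polynomial import Polynomial as Poly

def km_polynomials(p, q, n):
    """First n+1 Karlin-McGregor polynomials Q_0,...,Q_n of a birth-death chain on
    {0,1,...} with forward probabilities p[k] and backward probabilities q[k]
    (q[0] is unused; the chain holds at k with probability 1 - p[k] - q[k])."""
    lam = Poly([0.0, 1.0])                  # the variable "lambda"
    Q = [Poly([1.0])]                       # Q_0 = 1
    Q.append((lam - (1.0 - p[0])) / p[0])   # row 0:  lam Q_0 = (1-p_0) Q_0 + p_0 Q_1
    for k in range(1, n):
        # row k:  lam Q_k = q_k Q_{k-1} + (1-q_k-p_k) Q_k + p_k Q_{k+1}
        Q.append(((lam - (1.0 - q[k] - p[k])) * Q[k] - q[k] * Q[k - 1]) / p[k])
    return Q

# Example: p_0 = 1 and p_k = q_k = 1/2 reproduces the Chebyshev polynomials T_j.
n = 5
p, q = [1.0] + [0.5] * n, [0.0] + [0.5] * n
for j, Qj in enumerate(km_polynomials(p, q, n)):
    print(j, Qj.coef)   # the leading coefficient is 1/(p_0 ... p_{j-1})
```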
Now, $\lambda^t {\bf q}^T=P^t{\bf q}^T$ implies $\lambda^t Q_i =(P^t {\bf q}^T)_i$ for each $i$, and $$<\lambda^t Q_i, Q_j>_{\psi}=<(P^t {\bf q}^T)_i,Q_j>_{\psi}={p_t(i,j) \over \pi_j}$$ Therefore $$p_t(i,j)=\pi_j <\lambda^t Q_i, Q_j>_{\psi}$$ Since the spectrum of $P$ lies entirely inside the interval $[-1,1]$, so does the support of $\psi$. Hence, for $|z|>1$, the generating function $$G_{i,j}(z)=\sum_{t=0}^{+\infty} z^{-t} p_t(i,j)=-z \pi_j <{Q_i \over \lambda-z}, Q_j>_{\psi} =-z \pi_j \int {Q_i(\lambda) Q_j(\lambda) \over \lambda-z} d\psi(\lambda)$$ Converting to a Jacobi operator ------------------------------- Let $b_k=\sqrt{\pi_k \over \pi_{k+1}}p_k$; then $b_k=\sqrt{\pi_{k+1} \over \pi_k}q_{k+1}$ due to the reversibility condition. Thus the recurrence relation for ${\bf q}$, $$\lambda \sqrt{\pi_k} Q_k=q_k \sqrt{\pi_k} Q_{k-1}+(1-q_k-p_k) \sqrt{\pi_k} Q_k+p_k \sqrt{\pi_k} Q_{k+1}~,$$ can be rewritten as $$\lambda \sqrt{\pi_k} Q_k=b_{k-1} \sqrt{\pi_{k-1}} Q_{k-1}+a_k \sqrt{\pi_k} Q_k+b_k \sqrt{\pi_{k+1}} Q_{k+1},$$ where $a_k= 1-q_k-p_k$. Therefore ${\bf \widetilde{q}}=(\sqrt{\pi_0} Q_0, \sqrt{\pi_1} Q_1, \dots )$ solves $\widetilde{P} {\bf \widetilde{q}} = \lambda {\bf \widetilde{q}}$, where $$\widetilde{P}=\left(\begin{array}{cccc}a_0 & b_0 & 0 & \dots \\b_0 & a_1 & b_1 & \ddots \\0 & b_1 & a_2 & \ddots \\\vdots & \ddots & \ddots & \ddots\end{array}\right)$$ is a Jacobi (symmetric tridiagonal with $b_k>0$) operator. Observe that $\widetilde{P}$ is self-adjoint. The above approach extends to all reversible Markov chains. Thus every reversible Markov operator is equivalent to a self-adjoint operator, and therefore has a purely real spectrum. Karlin-McGregor: a simple picture --------------------------------- It is a basic fact from linear algebra that if $\lambda_1, \dots, \lambda_n$ are distinct real eigenvalues of an $n \times n$ matrix $A$, and $u_1,\dots,u_n$ and $v_1,\dots, v_n$ are the corresponding left and right eigenvectors, then $A$ diagonalizes as follows $$A^t=\sum_{j} {\lambda_j^t v^T_j u_j \over u_j v^T_j}=\int_{\sigma(A)} \lambda^t v^T(\lambda) u(\lambda) d \psi(\lambda)~,$$ where $u(\lambda_j)=u_j$, $v(\lambda_j)=v_j$, the spectrum $\sigma(A)=\{\lambda_1, \dots, \lambda_n \}$, and $$\psi(\lambda)=\sum_{j} { 1 \over u(\lambda) v^T(\lambda)} \delta_{\lambda_j} (\lambda)={n \over u(\lambda) v^T(\lambda)} U_{\sigma(A)}(\lambda)$$ Here $U_{\sigma(A)}(\lambda)$ is the uniform distribution over the spectrum $\sigma(A)$. It is [**important**]{} to observe that the above integral representation is only possible if $u(\lambda)$ and $v(\lambda)$ are well defined, i.e. each eigenvalue has multiplicity one (all eigenvalues are distinct and real). As we will see later, this will become crucial for the Karlin-McGregor diagonalization of reversible Markov chains. The operator for a reversible Markov chain is bounded and is equivalent to a self-adjoint operator, and as such has a real bounded spectrum. However, the eigenvalue multiplicity will determine whether the operator’s diagonalization can be expressed in the form of a spectral integral. Since the spectra satisfy $\sigma(P)=\sigma(P^*)$, we will extend the above diagonalization identity to the operator $P$ in the separable Hilbert space $l^2(\mathbb{R})$. First, observe that ${\bf u}(\lambda)=(\pi_0 Q_0, \pi_1 Q_1, \dots)$ satisfies $${\bf u}P =\lambda {\bf u}$$ due to reversibility.
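Indeed (a one-line verification added for completeness), the reversibility relation $\pi_i P_{ij}=\pi_j P_{ji}$ gives $$({\bf u}P)_j=\sum_i \pi_i Q_i(\lambda) P_{ij}=\pi_j \sum_i P_{ji} Q_i(\lambda)=\pi_j \left(P{\bf q}^T\right)_j=\lambda\, \pi_j Q_j(\lambda),$$ which is $\lambda$ times the $j$-th entry of ${\bf u}(\lambda)$.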
Hence, extending from the finite case to the infinite dimensional space $l^2(\mathbb{R})$, we obtain $$P^t=\int \lambda^t {\bf q}^T(\lambda) {\bf u}(\lambda) d \psi(\lambda) =\int \lambda^t \left(\begin{array}{ccc}\pi_0 Q_0 Q_0 & \pi_1 Q_0 Q_1 & \cdots \\\pi_0 Q_1 Q_0 & \pi_1 Q_1 Q_1 & \cdots \\\vdots & \vdots & \ddots\end{array}\right) d \psi(\lambda)~,$$ where $$\psi(\lambda)= \lim_{n \rightarrow +\infty} \psi_n(\lambda)$$ The above is the weak limit of $$\psi_n(\lambda)={n \over {\bf u}(\lambda) {\bf q}^T(\lambda)} U_{\sigma(A_n)}(\lambda)~,$$ where $A_n$ is the restriction of $P$ to the first $n$ coordinates, $<e_0,\dots,e_{n-1}>$: $$A_n= \left(\begin{array}{ccccc}1-p_0 & p_0 & 0 & \cdots & 0 \\q_1 & 1-q_1-p_1 & p_1 & \ddots & \vdots \\0 & q_2 & 1-q_2-p_2 & \ddots & 0 \\\vdots & \ddots & \ddots & \ddots & p_{n-2} \\0 & \cdots & 0 & q_{n-1} & 1-q_{n-1}-p_{n-1}\end{array}\right)$$ Observe that if $Q_n(\lambda)=0$ then $(Q_0(\lambda),\dots,Q_{n-1}(\lambda))^T$ is the corresponding right eigenvector of $A_n$. Thus the spectrum $\sigma(A_n)$ consists of the roots of $$Q_n(\lambda)=0$$ So $$\psi_n(\lambda)={n \over {\bf u}(\lambda) {\bf q}^T(\lambda)} U_{Q_n=0}(\lambda) ={n \over \sum_{k=0}^{n-1} \pi_k Q_k^2(\lambda)} U_{Q_n=0}(\lambda)~.$$ The orthogonality follows if we plug in $t=0$. Since $\pi_0 Q_0 Q_0 =1$, $\psi$ should integrate to one. [**Example.**]{} [*Simple random walk and Chebyshev polynomials.*]{} The Chebyshev polynomials of the first kind are the ones characterizing a one dimensional simple random walk on the half-line, i.e. the ones with generator $$P_{ch}=\left(\begin{array}{ccccc}0 & 1 & 0 & 0 & \cdots \\{1 \over 2} & 0 & {1 \over 2} & 0 & \cdots \\0 & {1 \over 2} & 0 & {1 \over 2} & \ddots \\0 & 0 & {1 \over 2} & 0 & \ddots \\\vdots & \vdots & \ddots & \ddots & \ddots\end{array}\right)$$ So, $T_0(\lambda)=1$, $T_1(\lambda)=\lambda$ and $T_{k+1}(\lambda)=2 \lambda T_k(\lambda)-T_{k-1}(\lambda)$ for $k=1,2, \dots$. The Chebyshev polynomials satisfy the following trigonometric identity: $$T_k(\lambda)=\cos(k \cos^{-1}(\lambda))$$ Now, $$\psi_n(\lambda)={n \over \sum_{k=0}^{n-1} \pi_k T_k^2(\lambda)} U_{\{\cos(n \cos^{-1}(\lambda))=0\}}(\lambda)~,$$ where $\pi(0)=1$ and $\pi(1)=\pi(2)=\dots=2$. Here $$U_{\{\cos(n \cos^{-1}(\lambda))=0\}}(\lambda)=U_{\{\cos^{-1}(\lambda)={\pi \over 2n}+{\pi k \over n}, ~k=0,1,\dots,n-1\}}(\lambda)$$ Thus if $X_n \sim U_{\{\cos(n \cos^{-1}(\lambda))=0\}}$, then $Y_n=\cos^{-1}(X_n) \sim U_{\{{\pi \over 2n}+{\pi k \over n}, ~k=0,1,\dots,n-1\}}$ and $Y_n$ converges weakly to $Y \sim U_{[0,\pi]}$. Hence $X_n$ converges weakly to $$X=\cos(Y) \sim {1 \over \pi \sqrt{1 -\lambda^2}} \chi_{[-1,1]}(\lambda) d\lambda~,$$ i.e. $$U_{\{\cos(n \cos^{-1}(\lambda))=0\}}(\lambda) \rightarrow {1 \over \pi \sqrt{1 -\lambda^2}} \chi_{[-1,1]}(\lambda) d\lambda$$ Also observe that if $\lambda=\cos(x)$, then $$\sum_{k=0}^{n-1} \pi_k T_k^2(\lambda)=-1+2\sum_{k=0}^{n-1} \cos^2(kx)=n-{1 \over 2}+{\sin((2n-1)x) \over 2\sin(x)}$$ Thus $$d\psi_n(\lambda) \rightarrow d \psi(\lambda) ={1 \over \pi \sqrt{1 -\lambda^2}} \chi_{[-1,1]}(\lambda) d\lambda$$ Riemann-Hilbert problem and a generating function of $p_t(i,j)$ --------------------------------------------------------------- Let us write $\sqrt{\pi_j} Q_j(\lambda)=k_j P_j(\lambda)$, where $k_j={1 \over \sqrt{p_0 \dots p_{j-1}} \sqrt{q_1 \dots q_j}}$ is the leading coefficient of $\sqrt{\pi_j} Q_j(\lambda)$, and $P_j(\lambda)$ is therefore a [*monic*]{} polynomial.
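For instance, in the Chebyshev example above (an explicit check of these normalizations, added here), $p_0=1$ and $p_k=q_k={1 \over 2}$ for $k\geq 1$ give $k_j=2^{j}/\sqrt{2}$, and since $\sqrt{\pi_j}\, Q_j=\sqrt{2}\, T_j$ for $j\geq 1$, the corresponding monic polynomials are $P_j(\lambda)=T_j(\lambda)/2^{j-1}$.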
In preparation for the next step, let $w(\lambda)$ be the probability density function associated with the spectral measure $\psi$: $d \psi(\lambda)=w(\lambda) d \lambda$ on the compact support, $supp(\psi) \subset [-1,1]=\Sigma$. Also let $$C(f)(z)={1 \over 2\pi i}\int_{\Sigma} {f(\lambda) \over \lambda -z} d \psi(\lambda)$$ denote the Cauchy transform w.r.t. measure $\psi$. First let us quote the following theorem. [**\[Fokas, Its and Kitaev, 1990\]**]{} Let $$v(x)=\left(\begin{array}{cc}1 & w(x) \\0 & 1\end{array}\right)$$ be the jump matrix. Then, for any $n \in \{0,1,2,\dots \}$, $$m^{(n)}(z)=\left(\begin{array}{cc}P_n(z) & C(P_n w)(z) \\-2\pi i k^2_{n-1}P_{n-1}(z) & -2\pi i k^2_{n-1}C(P_{n-1} w)(z) \end{array}\right),~\text{ for all } z \in \mathbb{C} \setminus \Sigma,$$ is the unique solution to the Riemann-Hilbert problem with the above jump matrix $v(x)$ and $\Sigma$ that satisfies the following condition $$\label{RHcondition} m^{(n)}(z) \left(\begin{array}{cc}z^{-n} & 0 \\0 & z^n\end{array}\right) \rightarrow I~\text{ as }~z \rightarrow \infty~.$$ The Riemann-Hilbert problem, for an oriented smooth curve $\Sigma$, is the problem of finding $m(z)$, analytic in $\mathbb{C} \setminus \Sigma$ such that $$m_+(z)=m_-(z)v(z),~~~\text{ for all }z \in \Sigma,$$ where $m_+$ and $m_-$ denote respectively the limit from the left and the limit from the right, for the function $m$, as we approach a point on $\Sigma$. Suppose we are given the weight function $w(\lambda)$ for the Karlin-McGregor orthogonal polynomials ${\bf q}$. If $m^{(n)}(z)$ is the solution of the Riemann-Hilbert problem as in the above theorem, then for $|z|>1$, $$m^{(n)}(z)=\left(\begin{array}{cc}{1 \over k_n\sqrt{\pi_n}} Q_n(z) & -{1 \over 2\pi i k_n \sqrt{\pi_n} z^{n+1}}G_{0,n} \\-2\pi i{ k_{n-1} \over \sqrt{\pi_{n-1}}}Q_{n-1}(z) & { k_{n-1} \over \sqrt{\pi_{n-1}}z^n} G_{0,n-1} (z) \end{array}\right)$$ $$=\left(\begin{array}{cc}q_1\dots q_nQ_n(z) & -{q_1\dots q_n \over 2\pi i z^{n+1}}G_{0,n} \\{-2\pi i \over p_0\dots p_{n-2}}Q_{n-1}(z) & { 1 \over p_0\dots p_{n-2} z^n} G_{0,n-1} (z) \end{array}\right)$$ Beyond nearest neighbor transitions =================================== Observe that the Chebyshev polynomials were used to diagonalize a simple one dimensional random walk reflecting at the origin. Let us consider a random walk where jumps of sizes one and two are equiprobable $$P=\left(\begin{array}{ccccccc}0 & {1 \over 2} & {1 \over 2} & 0 & 0 & 0 & \dots \\{1 \over 4} & {1 \over 4} & {1 \over 4} & {1 \over 4} & 0 & 0 & \dots \\{1 \over 4} & {1 \over 4} & 0 & {1 \over 4} & {1 \over 4} & 0 & \dots \\0 & {1 \over 4} & {1 \over 4} & 0 & {1 \over 4} & {1 \over 4} & \ddots \\0 & 0 & {1 \over 4} & {1 \over 4} & 0 & {1 \over 4} & \ddots \\0 & 0 & 0 & {1 \over 4} & {1 \over 4} & 0 & \ddots \\\cdots & \cdots & \cdots & \ddots & \ddots & \ddots & \ddots\end{array}\right)$$ The above random walk with the reflector at the origin is reversible with $\pi(0)=1$ and $\pi(1)=\pi(2)=\dots=2$. The Karlin-McGregor representation with orthogonal polynomials will not automatically extend to this case. However this does not rule out obtaining a Karlin-McGregor diagonalization with orthogonal functions. In the case of the above pentadiagonal Chebyshev operator, some eigenvalues will be of geometric multiplicity two as $$P=P^2_{ch}+{1 \over 2}P_{ch}-{1 \over 2}I~,$$ where $P_{ch}$ is the original tridiagonal Chebyshev operator. $2m+1$ diagonal operators ------------------------- Consider a $2m+1$ diagonal reversible probability operator $P$. 
Suppose it is Karlin-McGregor diagonalizable. Then for a given $\lambda \in \sigma(P)$, let ${\bf q}^T(\lambda)=\left(\begin{array}{c}Q_0 \\ Q_1 \\ Q_2 \\ \vdots \end{array}\right)$ once again denote the corresponding right eigenvector such that $Q_0=1$. Since the operator is more than tridiagonal, we encounter the problem of finding the next $m-1$ functions, $Q_1(\lambda)=\mu_1(\lambda)$, $Q_2(\lambda)=\mu_2(\lambda), \dots, Q_{m-1}(\lambda)=\mu_{m-1}(\lambda)$. Observe that ${\bf q}={\bf q_0}+{\bf q_1} \mu_1+\dots+{\bf q_{m-1}} \mu_{m-1}$, where each ${\bf q}_j^T(\lambda)=\left(\begin{array}{c}Q_{0,j} \\ Q_{1,j} \\ Q_{2,j} \\ \vdots \end{array}\right)$ solves the $P{\bf q}_j^T=\lambda {\bf q}_j^T$ recurrence relation with the initial conditions $$Q_{0,j}(\lambda)=0,~~ \dots,~~ Q_{j-1,j}(\lambda)=0,~~ Q_{j,j}(\lambda)=1,~~ Q_{j+1,j}(\lambda)=0,~~ \dots,~~ Q_{m-1,j}(\lambda)=0$$ In other words, ${\bf q}^T(\lambda)={\bf Q}(\lambda) \mu^T~,$ where ${\bf Q}(\lambda)=\left[\begin{array}{cccc} & & & \\| & | & & | \\{\bf q}_0^T & {\bf q}_1^T & \cdots & {\bf q}_{m-1}^T \\| & | & & | \\ & & & \end{array}\right]$ and $\mu^T=\left(\begin{array}{c}1 \\\mu_1(\lambda) \\\vdots \\\mu_{m-1}(\lambda)\end{array}\right)$ is such that ${\bf q}(\lambda) \in l^2(\mathbb{R})$ for each $\lambda \in \sigma(P)$. Let again $A_n$ denote the restriction of $P$ to the first $n$ coordinates, $<e_0,\dots,e_{n-1}>$. Observe that if $Q_n(\lambda)=\dots=Q_{n+m-1}(\lambda)=0$ then $(Q_0(\lambda),\dots,Q_{n-1}(\lambda))^T$ is the corresponding right eigenvector of $A_n$. Thus the spectrum $\sigma(A_n)$ consists of the roots of $$\det\left(\begin{array}{cccc}Q_{n,0}(\lambda) & Q_{n,1}(\lambda) & & Q_{n,m-1}(\lambda) \\Q_{n+1,0}(\lambda) & Q_{n+1,1}(\lambda) & & Q_{n+1,m-1}(\lambda) \\\vdots & \vdots & \cdots & \vdots \\Q_{n+m-1,0}(\lambda) & Q_{n+m-1,1}(\lambda) & & Q_{n+m-1,m-1}(\lambda)\end{array}\right)=0$$ Chebyshev operators ------------------- Let us now return to the example generalizing the simple random walk reflecting at the origin. There, one step and two step jumps were equally likely. The characteristic equation $z^4+z^3-4\lambda z^2+z+1=0$ for the recurrence relation $$c_{n+2}+c_{n+1}-4\lambda c_n +c_{n-1}+c_{n-2}=0$$ can be easily solved by observing that if $z$ is a solution then so are $\bar{z}$ and ${1 \over z}$. The solution in radicals is expressed as $~z_{1,2}=r_1 \pm i \sqrt{1-r_1^2}~$ and $~z_{3,4}=r_2 \pm i \sqrt{1-r_2^2}~$, where $r_1={-1 + \sqrt{9+16\lambda} \over 4}$ and $ r_2={-1 - \sqrt{9+16\lambda} \over 4}$. Observe that $r_1$ and $r_2$ are the two roots of $s(x)=\lambda$, where $s(x)=x^2+{1 \over 2}x-{1 \over 2}$ is the polynomial for which $$P=s(P_{ch})$$ In general, the following is true for all operators $P$ that represent symmetric random walks reflecting at the origin, and that allow jumps of size up to $m$: there is a polynomial $s(x)$ such that $P=s(P_{ch})$ and the roots $z_j$ of the characteristic relation in $\lambda {\bf c} = P {\bf c}$ will lie on the unit circle with their real parts $Re(z_j)$ solving $s(x)=\lambda$. The reason for the latter is the symmetry of the corresponding characteristic equation of order $2m$, implying ${1 \over z_j}=\bar{z_j}$, and therefore the characteristic equation for $\lambda {\bf c} = P {\bf c}$ can be rewritten as $$s\left( {1 \over 2}\left[z+{1 \over z} \right]\right)=\lambda~,$$ where ${1 \over 2}\left[z+{1 \over z} \right]$ is the Zhukovskiy function.
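The relation $P=s(P_{ch})$ is also easy to confirm numerically on finite truncations. The sketch below is only an added illustration (truncating to $n$ states distorts the last rows, so the comparison is restricted to the rows unaffected by the cut-off):

```python
import numpy as np

def cheb_operator(n):
    """Truncation to n states of P_ch, the simple random walk reflecting at the origin."""
    P = np.zeros((n, n))
    P[0, 1] = 1.0
    for k in range(1, n):
        P[k, k - 1] = 0.5
        if k + 1 < n:
            P[k, k + 1] = 0.5
    return P

def penta_operator(n):
    """Truncation to n states of the pentadiagonal operator displayed above."""
    P = np.zeros((n, n))
    P[0, 1] = P[0, 2] = 0.5
    P[1, 0] = P[1, 1] = P[1, 2] = P[1, 3] = 0.25
    for k in range(2, n):
        for j in (k - 2, k - 1, k + 1, k + 2):
            if j < n:
                P[k, j] = 0.25
    return P

n = 50
Pch, P = cheb_operator(n), penta_operator(n)
S = Pch @ Pch + 0.5 * Pch - 0.5 * np.eye(n)   # s(P_ch) with s(x) = x^2 + x/2 - 1/2
print(np.max(np.abs((S - P)[: n - 2])))       # prints 0.0: the truncations agree away from the edge
```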
In our case, $s(x)=\left(x+{1 \over 4}\right)^2-{9 \over 16}$, and for $\lambda \in \left(-{9 \over 16},0 \right]$, there will be two candidates for $\mu_1(\lambda)$, $$\mu_+(\lambda)=r_1={-1 + \sqrt{9+16\lambda} \over 4}~~~\text{ and }~~\mu_-(\lambda)=r_2={-1 - \sqrt{9+16\lambda} \over 4}$$ Taking $0 \leq \arg{z}<2\pi$ branch of the logarithm $\log{z}$, and applying Plemelj formula, one would obtain $$\mu_1(z)=-{1 \over 4}+z^{1 \over 2} \exp\left\{{1 \over 2} \int_{-{9 \over 16}}^0 {ds \over s-z}\right\}~,$$ where $\mu_+(\lambda)=\lim_{z \rightarrow \lambda,~Im(z)>0} \mu_1(z)$ and $\mu_-(\lambda)=\lim_{z \rightarrow \lambda,~Im(z)<0} \mu_1(z)$. Now, as we defined $\mu_1(z)$, we can propose the limits of integration to be a contour in $\mathbb{C}$ consisting of $\left[-{9 \over 16},0\right)_+=\lim_{\varepsilon \downarrow 0} \left\{z=x+i\varepsilon~:~x \in \left[-{9 \over 16},0\right) \right\}$, and $\left[-{9 \over 16},0\right)_-=\lim_{\varepsilon \downarrow 0} \left\{z=x-i\varepsilon~:~x \in \left[-{9 \over 16},0\right) \right\}$, and the $[0,1]$ segment. Then $$P^t=\int_{\left[-{9 \over 16},0\right)_- \cup \left[-{9 \over 16},0\right)_+ \cup [0.1]} \lambda^t {\bf q}^T(\lambda) {\bf u}(\lambda) d \psi(\lambda),$$ where ${\bf u}(\lambda)$ is defined as before, and $$d\psi(\lambda)={1 \over 2\pi \sqrt{\lambda+{9 \over 16}}}\left( {\chi_{[-{9 \over 16},0)_-}(\lambda) \over \sqrt{1-\left(\sqrt{\lambda+{9 \over 16}}+{1 \over 4}\right)^2}} +{\chi_{[-{9 \over 16},0)_+}(\lambda) +\chi_{[0,1]}(\lambda) \over \sqrt{1-\left(\sqrt{\lambda+{9 \over 16}}-{1 \over 4}\right)^2}}\right)d\lambda$$ Let us summarize this section as follows. If the structure of the spectrum does not allow Karlin-McGregor diagonalization with orthogonal functions over $[-1,1]$, say when there are two values of $\mu^T(\lambda)$ for some $\lambda$, then one may use Plemelj formula to obtain an integral diagonalization of $P$ over the corresponding two sided interval. Spectral Theorem and why orthogonal polynomials work ==================================================== The constructive proofs in the second chapter of Deift [@deift] suggest the reason why Karlin-McGregor theory of diagonalizing with orthogonal polynomials works for all time reversible Markov chains. Using the same logical steps as in [@deift], we can construct a map ${\cal M}$ which assigns a probability measure $d\psi$ to a reversible transition operator $P$ on a countable state space $\{0,1,2,\dots\}$. W.l.o.g. we can assume $P$ is symmetric as one can instead consider $$\left(\begin{array}{ccc}\sqrt{\pi_0} & 0 & \cdots \\0 & \sqrt{\pi_1} & \ddots \\\vdots & \ddots & \ddots\end{array}\right) P\left(\begin{array}{ccc}{1 \over \sqrt{\pi_0}} & 0 & \cdots \\0 & {1 \over \sqrt{\pi_1}} & \ddots \\\vdots & \ddots & \ddots\end{array}\right)$$ which is symmetric, and its spectrum coinciding with spectrum $\sigma(P) \subset [-1,1]$. Now, for $z \in \mathbb{C} \setminus \mathbb{R}$ let $G(z)=(e_0, (P-zI)^{-1}e_0)$. Then $$ImG(z)={1 \over 2i}\left[(e_0, (P-zI)^{-1}e_0)-(e_0, (P-\bar{z}I)^{-1}e_0)\right]=(Im(z))|(P-zI)^{-1}e_0|^2$$ and therefore $G(z)$ is a [**Herglotz function**]{}, i.e. 
$G(z)$ is an analytic map from $\{Im(z)>0\}$ into $\{Im(z)>0\}$, and as all such functions, it can be represented as $$G(z)=az+b+\int_{-\infty}^{+\infty} \left({1 \over s-z} -{s \over s^2+1} \right) d\psi(s),~~Im(z)>0$$ In the above representation $a \geq 0$ and $b$ are real constants and $d\psi$ is a Borel measure such that $$\int_{-\infty}^{+\infty} {1 \over s^2+1} d\psi(s) < \infty$$ Deift [@deift] uses $G(z)=(e_0, (P-zI)^{-1}e_0)=-{1 \over z}+O(z^{-2})$ to show $a=0$ in our case, and $$b=\int_{-\infty}^{+\infty} {s \over s^2+1} d\psi(s)$$ as well as the uniqueness of $d\psi$. Hence $$G(z)=\int{d\psi(s) \over s-z},~~Im(z)>0$$ The point of all these is to construct the spectral map $$\mathcal{M}: \{\text{reversible Markov operators P} \} \rightarrow \{\text{probability measures }\psi\text{ on }[-1,1]\text{ with compact }supp(\psi)\}$$ The asymptotic evaluation of both sides in $$(e_0, (P-zI)^{-1}e_0)=\int{d\psi(s) \over s-z},~~Im(z)>0$$ implies $$(e_0, P^k e_0)=\int s^k d\psi(s)$$ Until now we were reapplying the logical steps in Deift [@deift] for the case of reversible Markov chains. However, in the original, the second chapter of Deift [@deift] gives a constructive proof of the following spectral theorem, that summarizes as $$\mathcal{U}: \{\text{bounded Jacobi operators on }l^2 \} \rightleftharpoons \{\text{probability measures }\psi\text{ on }\mathbb{R}\text{ with compact }supp(\psi)\},$$ where $\mathcal{U}$ is one-to-one onto. \[spectralJacobi\][**\[Spectral Theorem\]**]{} For every bounded Jacobi operator $\mathcal{A}$ there exists a unique probability measure $\psi$ with compact support such that $$G(z)=\left(e_0, (\mathcal{A}-zI)^{-1}e_0 \right)=\int_{-\infty}^{+\infty} {d \psi(x) \over x-z}$$ The spectral map $\mathcal{U}:\mathcal{A} \rightarrow d\psi$ is one-to-one onto, and for every $f \in L^2(d\psi)$, $$(\mathcal{UAU}^{-1} f)(s)=sf(s)$$ in the following sense $$(e_0, \mathcal{A} f(\mathcal{A})e_0)=\int s f(s) d\psi(s)$$ So suppose $P$ is a reversible Markov chain, then $$\mathcal{M}:P \rightarrow d\psi~~~\text{and}~~~\mathcal{U}^{-1}: d\psi \rightarrow P_{\triangle}~,$$ where $P_{\triangle}$ is a unique Jacobi operator such that $$(e_0,P^k e_0)=\int s^k d\psi(s)=(e_0,P_{\triangle}^k e_0)$$ Now, if $Q_j(\lambda)$ are the orthogonal polynomials w.r.t. $d\psi$ associated with $P_{\triangle}$, then $Q_j(P_{\triangle})e_0=e_j$ and $$\delta_{i,j}=(e_i,e_j)=(Q_i(P_{\triangle})e_0,Q_j(P_{\triangle})e_0)=(Q_i(P)e_0,Q_j(P)e_0)$$ Thus, if $P$ is irreducible, then $f_j=Q_j(P)e_0$ is an orthonormal basis for Karlin-McGregor diagonalization. If we let $F=\left[\begin{array}{ccc}| & | & \\ f^T_0 & f^T_1 & \cdots \\| & | & \end{array}\right]$, then $$P^t=\left(\begin{array}{ccc} & & \\ & (P^t e_i,e_j) & \\ & & \end{array}\right)=F\left(\begin{array}{ccc} & & \\ & \int_{-1}^1 s^t Q_i(s)Q_j(s) d\psi(s) & \\ & & \end{array}\right)F^{T},$$ where $F^T=F^{-1}$. 
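A small computational illustration of this correspondence (an added sketch: it is just the classical Lanczos orthogonalization of $e_0, Pe_0, P^2e_0,\dots$ applied to a finite truncation of the symmetrized operator, not code taken from [@deift]); it recovers the Jacobi coefficients $a_k$, $b_k$ of $P_{\triangle}$ directly from $P$:

```python
import numpy as np

def jacobi_coefficients(P, m):
    """Orthonormalize e_0, P e_0, P^2 e_0, ... (Lanczos) for a symmetric matrix P and
    return the first m Jacobi coefficients a_k, b_k of the equivalent operator P_triangle."""
    n = P.shape[0]
    f_prev = np.zeros(n)
    f = np.zeros(n); f[0] = 1.0                  # f_0 = e_0 = Q_0(P) e_0
    a, b = [], []
    for k in range(m):
        v = P @ f
        a.append(f @ v)                          # a_k = (f_k, P f_k)
        v -= a[k] * f + (b[k - 1] * f_prev if k > 0 else 0.0)
        b.append(np.linalg.norm(v))              # b_k = |P f_k - a_k f_k - b_{k-1} f_{k-1}|
        f_prev, f = f, v / b[k]                  # f_{k+1} = Q_{k+1}(P) e_0
    return a, b

# Truncated pentadiagonal operator of Section 3, symmetrized with pi = (1, 2, 2, ...).
n = 60
P = np.zeros((n, n))
P[0, 1] = P[0, 2] = 0.5
P[1, 0] = P[1, 1] = P[1, 2] = P[1, 3] = 0.25
for k in range(2, n):
    for j in (k - 2, k - 1, k + 1, k + 2):
        if j < n:
            P[k, j] = 0.25
pi = np.array([1.0] + [2.0] * (n - 1))
S = np.diag(np.sqrt(pi)) @ P @ np.diag(1.0 / np.sqrt(pi))
a, b = jacobi_coefficients(S, 4)
print(a[:2], b[:2])    # approximately [0, 3/8] and [1/2, sqrt(11)/8]
```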
Also Deift [@deift] provides a way for constructing $$\mathcal{U}^{-1} \mathcal{M}: P \rightarrow P_{\triangle}$$ Since $P_{\triangle}$ is a Jacobi operator, it can be represented as $$P_{\triangle}=\left(\begin{array}{cccc}a_0 & b_0 & 0 & \cdots \\b_0 & a_1 & b_1 & \ddots \\0 & b_1 & a_2 & \ddots \\\vdots & \ddots & \ddots & \ddots\end{array}\right)~~~~b_j>0$$ Now, $$(e_0,Pe_0)=(e_0,P_{\triangle}e_0)=a_0,~~~(e_0,P^2e_0)=(e_0,P_{\triangle}^2e_0)=a_0^2+b_0^2$$ $$(e_0,P^3e_0)=(e_0,P^3_{\triangle}e_0)=(a_0^2+b_0^2)a_0+(a_0+a_1)b_0^2$$ $$\text{ and }(e_0,P^4e_0)=(e_0,P^4_{\triangle}e_0)=(a_0^2+b_0^2)^2+(a_0+a_1)^2b_0^2+b_0^2b_1^2$$ thus providing us with the coefficients of the Jacobi operator, $a_0$, $b_0$, $a_1, \dots$, and therefore the orthogonal polynomials $Q_j$. 0.3 in [**Example.**]{} [*Pentadiagonal Chebyshev operator.*]{} For the pentadiagonal $P$ that represents the symmetric random walk with equiprobable jumps of sizes one and two, $$(e_0,Pe_0)=0,~~(e_0,P^2e_0)={1 \over 4},~~(e_0,P^3e_0)={3 \over 32},~~(e_0,P^4e_0)={9 \over 64},~~ \dots$$ Thus $$a_0=0,~~b_0={1 \over 2},~~a_1={3 \over 8},~~b_1={\sqrt{11} \over 8}, \text{ etc. }$$ So $$P_{\triangle}=\left(\begin{array}{cccc} 0 & {1 \over 2} & 0 & \cdots \\{1 \over 2} & {3 \over 8} & {\sqrt{11} \over 8} & \ddots \\0 & {\sqrt{11} \over 8} & \ddots & \ddots \\\vdots & \ddots & \ddots & \ddots\end{array}\right)$$ and $$Q_0(\lambda)=1,~~~Q_1(\lambda)=2 \lambda,~~~ Q_2(\lambda)={32 \over \sqrt{11}} \lambda^2-{6 \over \sqrt{11}}\lambda-{4 \over \sqrt{11}}, ~~\dots$$ Then applying classical Fourier analysis, one would obtain $$\left(e_0, (P-zI)^{-1}e_0 \right)={1 \over 2\pi} \int_0^{2\pi} {d\theta \over {1 \over 2}[\cos(\theta)+\cos(2\theta)]-z} =\int_{-{9 \over 16}}^1 {d\psi(s) \over s-z}~,$$ where $$d\psi(s)={1 \over 2\pi \sqrt{s+{9 \over 16}}}\left( {\chi_{[-{9 \over 16},1]}(s) \over \sqrt{1-\left(\sqrt{s+{9 \over 16}}-{1 \over 4}\right)^2}} +{\chi_{[-{9 \over 16},0)}(s) \over \sqrt{1-\left(\sqrt{s+{9 \over 16}}+{1 \over 4}\right)^2}}\right)ds$$ To obtain the above expression for $d\psi$ we used the fact that $\left(e_0, (P-zI)^{-1}e_0 \right)$ would be the same if there were no reflector at zero. Applications of Karlin-McGregor diagonalization ----------------------------------------------- Let us list some of the possible applications of the diagonalization. - One can extract a sharp rate of convergence to a stationary probability distribution, if there is one, see Diaconis et. al. [@dksc]. - The generator $$G(z)=\left(\begin{array}{ccc} & & \\ & G_{i,j}(z) & \\ & & \end{array}\right)=F\left(\begin{array}{ccc} & & \\ & -z \int_{-1}^1 {Q_i(\lambda) Q_j(\lambda) \over \lambda-z} d\psi(\lambda) & \\ & & \end{array}\right)F^{T}$$ - One can use the Fokas, Its and Kitaev results, and benefit from the connection between orthogonal polynomials and Riemann-Hilbert problems. - One can interpret random walks in random environment as a random spectral measure. [99]{} P.Deift, [Orthogonal Polynomials and Random Matrices: A Riemann-Hilbert Approach.]{} Amer. Math. Soc., Providance, RI, (2000) P.Deift, [Riemann-Hilbert Methods in the Theory of Orthogonal Polynomials]{} Spectral Theory and Mathematical Physics, Vol. [76]{}, Amer. Math. Soc., Providance, RI, (2006) pp.715-740 P.Diaconis, K.Khare and L.Saloff-Coste [Gibbs Sampling, Exponential Families and Orthogonal Polynomials]{} Statistical Science, Vol. 
[23]{}, No.2, (2008), pp.151-178 H.Dym and H.P.McKean, [Gaussian processes, function theory, and the inverse spectral problem]{} Probability and Mathematical Statistics, [31]{}, Academic, New York - London (1976) F.A. Grünbaum, [Random walks and orthogonal polynomials: some challenges]{} Probability, Geometry and Integrable Systems - MSRI Publications, Vol. [55]{}, (2007), pp.241-260. S.Karlin, [Total Positivity]{} Stanford University Press, Stanford, CA (1968) S.Karlin and J.L.McGregor, [The differential equations of birth and death processes, and the Stieltjes moment problem]{} Transactions of AMS, [85]{}, (1957), pp.489-546. S.Karlin and J.L.McGregor, [The classification of birth and death processes]{} Transactions of AMS, [86]{}, (1957), pp.366-400. S.Karlin and J.L.McGregor, [Random Walks]{} Illinois Journal of Math., [3]{}, No. 1, (1959), pp.417-431. S.Karlin and J.L.McGregor, [Occupation time laws for birth and death processes]{} Proc. 4th Berkeley Symp. Math. Statist. Prob., [2]{}, (1962), pp.249-272. S.Karlin and J.L.McGregor, [Linear Growth Models with Many Types and Multidimensional Hahn Polynomials]{} In: R.A. Askey, Editor, Theory and Applications of Special Functions, Academic Press, New York (1975), pp. 261Ð288. Y.V.Kovchegov, N.Meredith and E.Nir [Occupation times via Bessel functions]{} preprint A.B.J.Kuijlaarsr, [Riemann-Hilbert Analysis for Orthogonal Polynomials]{} Orthogonal Polynomials and Special Functions (Springer-Verlag), Vol. [1817]{}, (2003) W.Schoutens, [Stochastic Processes and Orthogonal Polynomials.]{} Lecture notes in statistics (Springer-Verlag), Vol. [146]{}, (2000) G.Szegö, [Orthogonal Polynomials.]{} Fourth edition. AMS Colloquium Publications, Vol. [23]{}, (1975) [^1]: Department of Mathematics, Oregon State University, Corvallis, OR 97331-4605, USA `[email protected]`
--- abstract: | We study the complexity of some algorithmic problems on directed hypergraphs and their strongly connected components ([<span style="font-variant:small-caps;">Scc</span>]{}s). The main contribution is an almost linear time algorithm computing the terminal strongly connected components ([*i.e.*]{} [<span style="font-variant:small-caps;">Scc</span>]{}s which do not reach any components but themselves). *Almost linear* here means that the complexity of the algorithm is linear in the size of the hypergraph up to a factor $\alpha(n)$, where $\alpha$ is the inverse of Ackermann function, and $n$ is the number of vertices. Our motivation to study this problem arises from a recent application of directed hypergraphs to computational tropical geometry. We also discuss the problem of computing all [<span style="font-variant:small-caps;">Scc</span>]{}s. We establish a superlinear lower bound on the size of the transitive reduction of the reachability relation in directed hypergraphs, showing that it is combinatorially more complex than in directed graphs. Besides, we prove a linear time reduction from the well-studied problem of finding all minimal sets among a given family to the problem of computing the [<span style="font-variant:small-caps;">Scc</span>]{}s. Only subquadratic time algorithms are known for the former problem. These results strongly suggest that the problem of computing the [<span style="font-variant:small-caps;">Scc</span>]{}s is harder in directed hypergraphs than in directed graphs. address: 'INRIA Saclay – Ile-de-France and CMAP, Ecole Polytechnique, France' author: - Xavier Allamigeon title: On the complexity of strongly connected components in directed hypergraphs --- Introduction {#sec:introduction} ============ Directed hypergraphs consist in a generalization of directed graphs, in which the tail and the head of the arcs are sets of vertices. Directed hypergraphs have a very large number of applications, since hyperarcs naturally provide a representation of implication dependencies. Among others, they are used to solve several problems related to satisfiability in propositional logic, in particular on Horn formulas, see for instance [@Ausiello91; @Ausiello97; @Gallo95; @Gallo98; @Pretolani03]. They also appear in problems relative to network routing [@Pretolani00], functional dependencies in database theory [@AusielloJACM83], model checking [@Liu98], chemical reaction networks [@Ozturan08], transportation networks [@Nguyen89; @Nguyen98], and more recently, tropical convex geometry [@AllamigeonGaubertGoubaultDCG2013; @AllamigeonGaubertGoubaultSTACS10]. Many algorithmic aspects of directed hypergraphs have been studied, in particular optimization related ones, such as determining shortest paths [@Nguyen89; @NielsenORL06], maximum flows, minimum cardinality cuts, or minimum weighted hyperpaths (we refer to the surveys of Ausiello [*et al.*]{} [@Ausiello01] and of Gallo [*et al.*]{} [@GalloDAM93] for a comprehensive list of contributions). Naturally, some problems raised by the reachability relation in directed hypergraphs have also been studied. 
For instance, determining the set of the vertices reachable from a given vertex is known to be solvable in linear time in the size of the directed hypergraph (see for instance [@GalloDAM93]).[^1] In directed graphs, many other problems can be solved in linear time, such as testing acyclicity or strong connectivity, computing the strongly connected components ([<span style="font-variant:small-caps;">Scc</span>]{}s), determining a topological sorting over them, [*etc*]{}. Surprisingly, the analogues of these elementary problems in directed hypergraphs have not received any particular attention (as far as we know). Unfortunately, none of the direct graph algorithms can be straightforwardly extended to directed hypergraphs. The main reason is that the reachability relation of hypergraphs does not have the same structure: for instance, establishing that a given vertex $u$ reaches another vertex $v$ generally involves vertices which do not reach $v$. Moreover, as shown by Ausiello [*et al.*]{} in [@AusielloISCO12], the vertices of a hypercycle do not necessarily belong to a same strongly connected component. Naturally, the aforementioned problems can be solved by determining the whole graph of the reachability relation, calling a linear time reachability algorithm on every vertex of the directed hypergraph. This naive approach is obviously not optimal, in particular when the hypergraph coincides with a directed graph. #### Contributions We first present in Section \[sec:maxscc\] an algorithm able to determine the terminal strongly connected components of a directed hypergraph in time complexity $O(N \alpha(n))$, where $N$ is the size of the hypergraph, $n$ the number of vertices, and $\alpha$ is the inverse of the Ackermann function. An [<span style="font-variant:small-caps;">Scc</span>]{} is said to be *terminal* when no other [<span style="font-variant:small-caps;">Scc</span>]{} is reachable from it. The time complexity is said to be *almost linear* because $\alpha(n) \leq 4$ for any practical value of $n$. As a by-product, the following two properties: is a directed hypergraph strongly connected? does a hypergraph admit a sink ([*i.e.*]{} a vertex reachable from all vertices)? can be determined in almost linear time. Problems involving terminal [<span style="font-variant:small-caps;">Scc</span>]{}s have important applications in computational tropical geometry. In particular, the algorithm presented here is the cornerstone of an analog of the double description method in tropical algebra [@AllamigeonGaubertGoubaultDCG2013]. We refer to Section \[subsec:other\_properties\] for further details, where other applications to Horn formulas and nonlinear spectral theory are also discussed. The contributions presented in Section \[sec:combinatorics\] indicate that the problem of computing the complete set of [<span style="font-variant:small-caps;">Scc</span>]{}s is very likely to be harder in directed hypergraphs than in directed graphs. In Section \[subsec:transitive\_reduction\], we establish a lower bound result which shows that the size of the transitive reduction of the reachability relation may be superlinear in the size of the directed hypergraph (whereas it is linearly upper bounded in the setting of directed graphs). An important consequence is that any algorithm computing the [<span style="font-variant:small-caps;">Scc</span>]{}s in directed hypergraphs by exploring the entire reachability relation, or at least a transitive reduction, has a superlinear complexity. 
In Section \[subsec:set\_pb\_reduction\], we prove a linear time reduction from the minimal set problem to the problem of computing the strongly connected components. Given a family ${\mathcal{F}}$ of sets over a certain domain, the minimal set problem consists in determining all the sets of ${\mathcal{F}}$ which are minimal for the inclusion. While it has received much attention (see Section \[subsec:set\_pb\_reduction\] and the references therein), the best known algorithms are only subquadratic time. #### Related Work Reachability in directed hypergraphs has been defined in different ways in the literature, depending on the context and the applications. The reachability relation which is discussed here is basically the same as in [@AusielloTCS90; @Ausiello91; @Ausiello01], but is referred to as *$B$-reachability* in [@GalloDAM93; @Gallo95]. It precisely captures the logical implication dependencies in Horn propositional logic, and also the functional dependencies in the context of relational databases. Some variants of this reachability relation have been introduced, for instance with the additional requirement that every hyperpath has to be provided with a linear ordering over the alternating sequence of its vertices and hyperarcs [@ThakurTripathiTCS09]. These variants are beyond the scope of the paper. As mentioned above, determining the set of the reachable vertices from a given vertex has been thoroughly studied. Gallo [*et al.*]{}  provide a linear time algorithm in [@GalloDAM93]. In a series of works [@AusielloTCS90; @Ausiello91; @Ausiello97], Ausiello [*et al.*]{}  introduce online algorithms maintaining the set of reachable vertices, or hyperpaths between vertices, under hyperarc insertions/deletions. Computing the transitive closure and reduction of a directed hypergraph has also been studied by Ausiello [*et al.*]{}  in [@Ausiello86]. In their work, reachability relations between sets of vertices are also taken into account, in contrast with our present contribution in which we restrict to reachability relations between vertices. The notion of transitive reduction in [@Ausiello86] is also different from the one discussed here (Section \[subsec:transitive\_reduction\]). More precisely, the transitive reduction of [@Ausiello86] rather corresponds to minimal hypergraphs having the same transitive closure (several minimality properties are studied, including minimal size, minimal number of hyperarcs, [*etc*]{}). In contrast, we discuss here the transitive reduction of the reachability relation (as a binary relation over vertices) and not of the hypergraph itself. Preliminary definitions and notations {#sec:preliminaries} ===================================== A *directed hypergraph* is a pair $({\mathcal{V}},A)$, where ${\mathcal{V}}$ is a set of vertices, and $A$ a set of hyperarcs. A *hyperarc* $a$ is itself a pair $(T,H)$, where $T$ and $H$ are both non-empty subsets of ${\mathcal{V}}$. They respectively represent the *tail* and the *head* of $a$, and are also denoted by $T(a)$ and $H(a)$. Note that throughout this paper, the term *hypergraph(s)* will always refer to directed hypergraph(s). The size of a directed hypergraph ${\mathcal{H}}= ({\mathcal{V}},A)$ is defined as $\operatorname{\mathsf{size}}({\mathcal{H}}) = \card{{\mathcal{V}}} + \sum_{(T,H) \in A} (\card{T} + \card{H})$ (where $\card{S}$ denotes the cardinality of any set $S$). 
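For concreteness, the following minimal Python sketch shows one possible in-memory representation of a directed hypergraph together with the size function defined above. It is only an illustration of the definitions; the class and field names (`Hypergraph`, `vertices`, `arcs`) are our own choices, not notations from the literature.

```python
from dataclasses import dataclass, field
from typing import FrozenSet, List, Tuple

Hyperarc = Tuple[FrozenSet, FrozenSet]          # (tail T(a), head H(a))

@dataclass
class Hypergraph:
    vertices: FrozenSet
    arcs: List[Hyperarc] = field(default_factory=list)

    def size(self) -> int:
        # size(H) = |V| + sum over hyperarcs of (|T| + |H|)
        return len(self.vertices) + sum(len(t) + len(h) for t, h in self.arcs)

# A hypergraph with hyperarcs ({u}, {v}) and ({u, v}, {w}): size = 3 + (2 + 3) = 8
H = Hypergraph(frozenset("uvw"), [(frozenset("u"), frozenset("v")),
                                  (frozenset("uv"), frozenset("w"))])
assert H.size() == 8
```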
Given a directed hypergraph ${\mathcal{H}}= ({\mathcal{V}},A)$ and $u,v \in {\mathcal{V}}$, the vertex $v$ is said to be *reachable* from the vertex $u$ in ${\mathcal{H}}$, which is denoted $u {\rightsquigarrow}_{\mathcal{H}}v$, if $u = v$, or there exists a hyperarc $(T,H)$ such that $v \in H$ and all the elements of $T$ are reachable from $u$. This also leads to a notion of hyperpaths: a *hyperpath* from $u$ to $v$ in ${\mathcal{H}}$ is a sequence of $p$ hyperarcs $(T_1,H_1),\dots,(T_p,H_p) \in A$ satisfying $T_i \subseteq \cup_{j = 0}^{i-1} H_j$ for all $i = 1, \dots, p+1$, with the conventions $H_0 = \{u\}$ and $T_{p+1} = \{v\}$. The hyperpath is said to be *minimal* if none of its subsequences is a hyperpath from $u$ to $v$.

The *strongly connected components* ([<span style="font-variant:small-caps;">Scc</span>]{}s for short) of a directed hypergraph ${\mathcal{H}}$ are the equivalence classes of the relation ${\equiv}_{\mathcal{H}}$, defined by $u {\equiv}_{\mathcal{H}}v$ if $u {{\rightsquigarrow}}_{\mathcal{H}}v$ and $v {{\rightsquigarrow}}_{\mathcal{H}}u$. A component $C$ is said to be *terminal* if for any $u \in C$ and $v \in {\mathcal{V}}$, $u {{\rightsquigarrow}}_{\mathcal{H}}v$ implies $v \in C$. If $f$ is a function from ${\mathcal{V}}$ to an arbitrary set, the image of the directed hypergraph ${\mathcal{H}}$ by $f$ is the hypergraph, denoted $f({\mathcal{H}})$, consisting of the vertices $f(v)$ ($v \in {\mathcal{V}}$) and the hyperarcs $(f(T(a)),f(H(a)))$ ($a \in A$), where $f(S) := \{f(x) \mid x \in S\}$.

(Figure \[fig:hypergraph\]: a directed hypergraph with vertices $u$, $v$, $w$, $x$, $y$, $t$ and hyperarcs $a_1,\dots,a_5$.)

Consider the directed hypergraph depicted in Figure \[fig:hypergraph\]. Its vertices are $u, v, w, x, y, t$, and its hyperarcs $a_1 =(\{u\}, \{v\})$, $a_2 = (\{v\}, \{w\})$, $a_3 = (\{w\}, \{u\})$, $a_4 = (\{v,w\}, \{x,y\})$, and $a_5 = (\{w,y\}, \{t\})$. A hyperarc is represented as a bundle of arcs, and is decorated with a solid disk portion when its tail contains several vertices. Applying the recursive definition of reachability from the vertex $u$ discovers the vertices $v$, then $w$, which leads to the two vertices $x$ and $y$ through the hyperarc $a_4$, and finally $t$ through $a_5$. The vertex $t$ is reachable from $u$ through the hyperpath $a_1,a_2,a_4,a_5$ (which is minimal). As mentioned in Section \[sec:introduction\], some vertices play the role of “auxiliary” vertices when determining reachability. In our example, establishing that $t$ is reachable from $u$ requires to establish that $y$ is reachable from $u$, while $y$ does not reach $t$. Such a situation cannot occur in directed graphs.

Observe that all the notions presented in this section are generalizations of their analogues on directed graphs. Indeed, any digraph $G = ({\mathcal{V}},A)$ ($A \subseteq {\mathcal{V}}\times {\mathcal{V}}$) can be equivalently seen as a directed hypergraph ${\mathcal{H}}= \bigl({\mathcal{V}},\bigl\{(\{u\},\{v\}) \mid (u,v) \in A \bigr\}\bigr)$. The reachability relations on $G$ and ${\mathcal{H}}$ coincide, and $G$ and ${\mathcal{H}}$ both have the same size. The notations introduced here will be consequently used for directed graphs as well.
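The recursive definition of reachability can be evaluated by forward chaining: a hyperarc is fired as soon as all its tail vertices have been reached, in the spirit of the linear-time visit of [@GalloDAM93]. The sketch below illustrates this on the running example; the representation (hyperarcs as pairs of sets) and all names are our own assumptions, not the paper's code.

```python
from collections import deque

def reachable_from(vertices, arcs, source):
    """Vertices reachable from `source`, by forward chaining on hyperarcs."""
    arcs_of = {v: [] for v in vertices}   # arcs_of[v] = indices of arcs with v in their tail
    missing = []                          # missing[i] = #tail vertices of arc i not reached yet
    for i, (tail, head) in enumerate(arcs):
        missing.append(len(tail))
        for v in tail:
            arcs_of[v].append(i)

    reached = {source}
    queue = deque([source])
    while queue:
        v = queue.popleft()
        for i in arcs_of[v]:
            missing[i] -= 1
            if missing[i] == 0:           # the whole tail is reachable: the arc fires
                for w in arcs[i][1]:
                    if w not in reached:
                        reached.add(w)
                        queue.append(w)
    return reached

# Running example of Figure [fig:hypergraph]:
V = set("uvwxyt")
A = [({"u"}, {"v"}), ({"v"}, {"w"}), ({"w"}, {"u"}),
     ({"v", "w"}, {"x", "y"}), ({"w", "y"}, {"t"})]
assert reachable_from(V, A, "u") == set("uvwxyt")
assert reachable_from(V, A, "y") == {"y"}   # y does not reach t, as noted above
```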
Computing the terminal [<span style="font-variant:small-caps;">Scc</span>]{}s in almost linear time {#sec:maxscc}
===================================================================================================

Principle of the algorithm {#subsec:maxscc_principle}
--------------------------

Given a directed hypergraph ${\mathcal{H}}= ({\mathcal{V}}, A)$, a hyperarc $a \in A$ is said to be *simple* when $\card{T(a)} = 1$. Such hyperarcs generate a directed graph, denoted by ${\mathsf{graph}}({\mathcal{H}})$, defined as the pair $({\mathcal{V}}, A')$ where $A' = \{ (t,h) \mid (\{ t \},H) \in A \text{ and } h \in H \}$. We first point out a remarkable special case in which the terminal [<span style="font-variant:small-caps;">Scc</span>]{}s of the directed hypergraph ${\mathcal{H}}$ and the digraph ${\mathsf{graph}}({\mathcal{H}})$ are equal.

\[prop:terminal\_scc\] Let ${\mathcal{H}}$ be a directed hypergraph. Every terminal strongly connected component of ${\mathsf{graph}}({\mathcal{H}})$ reduced to a singleton is a terminal strongly connected component of ${\mathcal{H}}$. Besides, if all terminal strongly connected components of ${\mathsf{graph}}({\mathcal{H}})$ are singletons, then ${\mathcal{H}}$ and ${\mathsf{graph}}({\mathcal{H}})$ have the same terminal strongly connected components.

Assume ${\mathcal{H}}= ({\mathcal{V}},A)$. Let $\{ u \}$ be a terminal [<span style="font-variant:small-caps;">Scc</span>]{} of ${\mathsf{graph}}({\mathcal{H}})$. Suppose that there exists $v \neq u$ such that $u {{\rightsquigarrow}}_{\mathcal{H}}v$. There is necessarily a hyperarc $(T, H) \in A$ such that $T = \{ u \}$ and $H \neq \{ u \}$: otherwise, no vertex other than $u$ could be reachable from $u$ in ${\mathcal{H}}$. Let $w \in H \setminus \{ u \}$. Then $(u,w)$ is an arc of ${\mathsf{graph}}({\mathcal{H}})$. Since $\{ u \}$ is a terminal [<span style="font-variant:small-caps;">Scc</span>]{} of ${\mathsf{graph}}({\mathcal{H}})$, this enforces $w = u$, which is a contradiction. Hence $\{ u \}$ is a terminal [<span style="font-variant:small-caps;">Scc</span>]{} of the hypergraph ${\mathcal{H}}$.

Now assume that every terminal [<span style="font-variant:small-caps;">Scc</span>]{} of ${\mathsf{graph}}({\mathcal{H}})$ is a singleton. Let $C$ be a terminal [<span style="font-variant:small-caps;">Scc</span>]{} of ${\mathcal{H}}$, and $u \in C$. Consider a terminal [<span style="font-variant:small-caps;">Scc</span>]{} $\{v\}$ of ${\mathsf{graph}}({\mathcal{H}})$ such that $u {{\rightsquigarrow}}_{{\mathsf{graph}}({\mathcal{H}})} v$. Using the first part of the proof, $\{v\}$ is a terminal [<span style="font-variant:small-caps;">Scc</span>]{} of ${\mathcal{H}}$. Besides, $\{v\}$ is reachable from $C$ in ${\mathcal{H}}$. We conclude that $C = \{v\}$.

The following proposition ensures that, in a directed hypergraph, merging two vertices of a same [<span style="font-variant:small-caps;">Scc</span>]{} does not alter the reachability relation.

\[prop:collapse\] Let ${\mathcal{H}}= ({\mathcal{V}},A)$ be a directed hypergraph, and let $x,y \in {\mathcal{V}}$ such that $x {\equiv}_{\mathcal{H}}y$. Consider the function $f$ mapping any vertex distinct from $x$ and $y$ to itself, and both $x$ and $y$ to a same vertex $z$ (with $z \not \in {\mathcal{V}}$). Then $u {{\rightsquigarrow}}_{\mathcal{H}}v$ if, and only if, $f(u) {{\rightsquigarrow}}_{f({\mathcal{H}})} f(v)$.

Let ${\mathcal{H}}' = f({\mathcal{H}})$. First assume that $u {{\rightsquigarrow}}_{\mathcal{H}}v$, and let us show by induction that $f(u) {{\rightsquigarrow}}_{f({\mathcal{H}})} f(v)$. The case $u = v$ is trivial.
If there exists $(T, H) \in A$ such that $v \in H$ and for all $w \in T$, $u {{\rightsquigarrow}}_{\mathcal{H}}w$, then $f(u) {{\rightsquigarrow}}_{f({\mathcal{H}})} f(w)$ by induction, which proves that $f(v)$ is reachable from $f(u)$ in $f({\mathcal{H}})$. Conversely, suppose $f(u) {{\rightsquigarrow}}_{f({\mathcal{H}})} f(v)$. If $f(u) = f(v)$, then either $u = v$, or the two vertices $u$ and $v$ belong to $\{x,y\}$. In both cases, $v$ is reachable from $u$ in ${\mathcal{H}}$. Now suppose that there exists a hyperarc $(f(T), f(H))$ in $f({\mathcal{H}})$ such that $f(v) \in f(H)$, and for all $w \in T$, $f(u) {{\rightsquigarrow}}_{f({\mathcal{H}})} f(w)$. By induction hypothesis, we know that $u {{\rightsquigarrow}}_{\mathcal{H}}w$. If $v \in H$, we obtain the expected result. If not, $v$ necessarily belongs to $\{x, y\}$. If, for instance, $v = x$, then $y \in H$. Thus $y$ is reachable from $u$ in ${\mathcal{H}}$, and we conclude by $x {\equiv}_{\mathcal{H}}y$.

It follows that the terminal [<span style="font-variant:small-caps;">Scc</span>]{}s of ${\mathcal{H}}$ and $f({\mathcal{H}})$ are in one-to-one correspondence. This property can be straightforwardly extended to the operation of merging several vertices of a same [<span style="font-variant:small-caps;">Scc</span>]{} simultaneously.

Using Propositions \[prop:terminal\_scc\] and \[prop:collapse\], we now sketch a method which computes the terminal [<span style="font-variant:small-caps;">Scc</span>]{}s in a directed hypergraph ${\mathcal{H}}= ({\mathcal{V}},A)$. It performs several transformations on a hypergraph ${\mathcal{H}}_{\mathit{cur}}$ whose vertices are labelled by subsets of ${\mathcal{V}}$. Roughly, starting from ${\mathcal{H}}$ itself (each vertex being labelled by the corresponding singleton), the method alternates two kinds of steps: a step which searches for a terminal [<span style="font-variant:small-caps;">Scc</span>]{} of the digraph ${\mathsf{graph}}({\mathcal{H}}_{\mathit{cur}})$, and a *vertex merging step*, which replaces the vertices of such an [<span style="font-variant:small-caps;">Scc</span>]{} by a single vertex labelled by the union of their labels, as allowed by Proposition \[prop:collapse\].

Each time the vertex merging step is executed, new arcs may appear in the directed graph ${\mathsf{graph}}({\mathcal{H}}_{\mathit{cur}})$. This case is illustrated in Figure \[fig:merging\]. On both sides, the arcs of ${\mathsf{graph}}({\mathcal{H}}_{\mathit{cur}})$ are depicted in solid, and the non-simple arcs of ${\mathcal{H}}_{\mathit{cur}}$ in dotted lines. Note that the vertices of ${\mathcal{H}}_{\mathit{cur}}$ contain subsets of ${\mathcal{V}}$, but enclosing braces are omitted for readability. Applying the search step from vertex $u$ (left side) discovers a terminal [<span style="font-variant:small-caps;">Scc</span>]{} formed by $u$, $v$, and $w$ in the directed graph ${\mathsf{graph}}({\mathcal{H}}_{\mathit{cur}})$. At the merging step (right side), the vertices are merged, and the hyperarc $a_4$ is transformed into two graph arcs leaving the new vertex $\{u,v,w\}$. The termination of this method is ensured by the fact that the number of vertices in ${\mathcal{H}}_{\mathit{cur}}$ is strictly decreased each time the vertex merging step is applied. When the method is terminated, the terminal [<span style="font-variant:small-caps;">Scc</span>]{}s of ${\mathcal{H}}_{\mathit{cur}}$ are all reduced to single vertices, each of them labelled by a subset of ${\mathcal{V}}$. Propositions \[prop:terminal\_scc\] and \[prop:collapse\] prove that these subsets are precisely the terminal [<span style="font-variant:small-caps;">Scc</span>]{}s of ${\mathcal{H}}$.
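The two elementary operations used by this sketch — extracting the digraph ${\mathsf{graph}}({\mathcal{H}}_{\mathit{cur}})$ of simple hyperarcs, and merging the vertices of an [<span style="font-variant:small-caps;">Scc</span>]{} as in Proposition \[prop:collapse\] — can be illustrated as follows. This is a rough sketch with our own representation (hyperarcs as pairs of sets), not the optimized algorithm of the next section.

```python
def underlying_digraph(arcs):
    """Arcs of graph(H): one arc (t, h) per simple hyperarc ({t}, H) and per h in H."""
    graph_arcs = set()
    for tail, head in arcs:
        if len(tail) == 1:                       # simple hyperarc
            (t,) = tuple(tail)
            graph_arcs.update((t, h) for h in head)
    return graph_arcs

def merge_component(vertices, arcs, component, representative):
    """Image f(H) of the hypergraph under the map collapsing `component`
    onto `representative` (cf. Proposition [prop:collapse])."""
    f = lambda v: representative if v in component else v
    new_vertices = {f(v) for v in vertices}
    new_arcs = [({f(v) for v in tail}, {f(v) for v in head}) for tail, head in arcs]
    return new_vertices, new_arcs

# Running example: merging the terminal Scc {u, v, w} of graph(H).
V = set("uvwxyt")
A = [({"u"}, {"v"}), ({"v"}, {"w"}), ({"w"}, {"u"}),
     ({"v", "w"}, {"x", "y"}), ({"w", "y"}, {"t"})]
assert underlying_digraph(A) == {("u", "v"), ("v", "w"), ("w", "u")}
V2, A2 = merge_component(V, A, {"u", "v", "w"}, "u")
# a4 now has the singleton tail {u}: it yields new simple arcs leaving the
# merged vertex, which is the situation depicted in Figure [fig:merging].
assert ({"u"}, {"x", "y"}) in A2
```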
(Figure \[fig:merging\]: on the left, the hypergraph ${\mathcal{H}}_{\mathit{cur}}$ before the merging step, with visit indices $0$, $1$, $2$ attached to $u$, $v$, $w$ and auxiliary data $r_{a_4} = v$, $c_{a_4} = 2$ and $r_{a_5} = w$, $c_{a_5} = 1$; on the right, the hypergraph after $u$, $v$, $w$ have been merged, where the hyperarc $a_4$ now yields two graph arcs leaving the merged vertex towards $x$ and $y$.)

Optimized algorithm {#subsec:optimized_algorithm}
-------------------

The sketch given in Section \[subsec:maxscc\_principle\] is naturally not optimal (each vertex can be visited $O(\card{{\mathcal{V}}})$ times). We propose to incorporate the vertex merging step directly into an algorithm determining the terminal [<span style="font-variant:small-caps;">Scc</span>]{}s in directed graphs, in order to gain efficiency. The resulting algorithm on directed hypergraphs is given in Figure \[fig:maxscc\]. We suppose that the directed hypergraph ${\mathcal{H}}$ is provided with the lists $A_u$ of hyperarcs $a$ such that $u \in T(a)$, for each $u \in {\mathcal{V}}$ (these lists can be built in linear time in a preprocessing step). The algorithm consists of a main function which initializes data, and then iteratively calls the function $\Call{Visit}{}$ on the vertices which have not been visited yet. Following the sketch given in Section \[subsec:maxscc\_principle\], the function $\Call{Visit}{}$ repeats the following three tasks: it recursively searches a terminal [<span style="font-variant:small-caps;">Scc</span>]{} in the underlying directed graph ${\mathsf{graph}}({\mathcal{H}}_{\mathit{cur}})$, starting from the vertex $u$; once a terminal [<span style="font-variant:small-caps;">Scc</span>]{} is found, it performs a vertex merging step on it; and finally, it discovers the new graph arcs (if any) arising from the merging step.

Before discussing each of these three operations, we explain how the directed hypergraph ${\mathcal{H}}_{\mathit{cur}}$ is manipulated by the algorithm. Observe that the vertices of the hypergraph ${\mathcal{H}}_{\mathit{cur}}$ always form a partition of the initial set ${\mathcal{V}}$ of vertices. Instead of referring to them as subsets of ${\mathcal{V}}$, we use a union-find structure, which consists of three functions $\Call{MakeSet}{}$, $\Call{Find}{}$, and $\Call{Merge}{}$ (see [@Cormen01 Chapter 21] for instance). A call to $\Call{Find}{u}$ returns, for each original vertex $u \in {\mathcal{V}}$, the unique vertex of the hypergraph ${\mathcal{H}}_{\mathit{cur}}$ containing $u$. Two vertices $U$ and $V$ of ${\mathcal{H}}_{\mathit{cur}}$ can be merged by a call to $\Call{Merge}{U, V}$, which returns the new vertex. Finally, the “singleton” vertices $\{ u \}$ of the initial instance of the hypergraph ${\mathcal{H}}_{\mathit{cur}}$ are created by the function $\Call{MakeSet}{}$.
In practice, each vertex of ${\mathcal{H}}_{\mathit{cur}}$ is encoded as a representative element $u \in {\mathcal{V}}$, in which case the vertex corresponds to the subset $\{ v \in {\mathcal{V}}\mid \Call{Find}{v} = u \}$. In other words, the hypergraph ${\mathcal{H}}_{\mathit{cur}}$ is precisely the image of ${\mathcal{H}}$ by the function $\Call{Find}{}$. To avoid confusion, we denote the vertices of the hypergraph ${\mathcal{H}}$ by lower case letters, and the vertices of ${\mathcal{H}}_{\mathit{cur}}$ (and subsequently ${\mathsf{graph}}({\mathcal{H}}_{\mathit{cur}})$) by capital ones. By convention, if $u \in {\mathcal{V}}$, ${\Call{Find}}{u}$ will correspond to the associated capital letter $U$.

#### Discovering terminal [<span style="font-variant:small-caps;">Scc</span>]{}s in the directed graph ${\mathsf{graph}}({\mathcal{H}}_{{\mathit{cur}}})$

This task is performed by the parts of the algorithm which are not shaded in gray. Similarly to Tarjan’s algorithm [@Tarjan72], it uses a stack $S$ and two arrays indexed by vertices, ${\mathit{index}}$ and ${\mathit{low}}$. The stack $S$ stores the vertices $U$ of ${\mathsf{graph}}({\mathcal{H}}_{\mathit{cur}})$ which are currently visited by $\Call{Visit}{}$. The array ${\mathit{index}}$ tracks the order in which the vertices are visited, [*i.e.*]{} ${\mathit{index}}[U] < {\mathit{index}}[V]$ if, and only if, $U$ has been visited by $\Call{Visit}{}$ before $V$. The value ${\mathit{low}}[U]$ is used to determine the minimal index of the visited vertices which are reachable from $U$ (see Line [[\[scc:min\]]{}]{}). A (not necessarily terminal) strongly connected component $C$ of ${\mathsf{graph}}({\mathcal{H}}_{\mathit{cur}})$ is discovered when a vertex $U$ satisfies ${\mathit{low}}[U] = {\mathit{index}}[U]$ (Line [[\[scc:root\]]{}]{}). Then $C$ consists of all the vertices stored in the stack $S$ above $U$, together with $U$ itself. The vertex $U$ is the element of the [<span style="font-variant:small-caps;">Scc</span>]{} which has been visited first, and is called its *root*. Once the visit of the [<span style="font-variant:small-caps;">Scc</span>]{} is terminated, its vertices are collected in a set ${\mathit{Finished}}$ (Line [[\[scc:finished\]]{}]{}). Additionally, the algorithm uses an array ${\mathit{is\_term}}$ of booleans, which tracks whether an [<span style="font-variant:small-caps;">Scc</span>]{} of ${\mathsf{graph}}({\mathcal{H}}_{\mathit{cur}})$ is terminal. An [<span style="font-variant:small-caps;">Scc</span>]{} is terminal if, and only if, its root $U$ satisfies ${\mathit{is\_term}}[U] = \True$. In particular, the boolean ${\mathit{is\_term}}[U]$ is set to $\False$ as soon as $U$ is connected to a vertex $W$ located in a distinct [<span style="font-variant:small-caps;">Scc</span>]{} (Line [[\[scc:not\_max1\]]{}]{}) or satisfying ${\mathit{is\_term}}[W] = \False$ (Line [[\[scc:not\_max2\]]{}]{}).
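As an illustration of the union-find structure assumed above, here is a standard disjoint-set forest with union by rank and path compression (following [@Cormen01 Chapter 21]). The method names mirror $\Call{MakeSet}{}$, $\Call{Find}{}$ and $\Call{Merge}{}$, but the code is only a sketch of ours, not the implementation used by the paper.

```python
class UnionFind:
    """Disjoint-set forest with union by rank and path compression."""

    def __init__(self):
        self.parent = {}
        self.rank = {}

    def make_set(self, u):
        self.parent[u] = u
        self.rank[u] = 0

    def find(self, u):
        # path compression by halving: every visited node points to its grandparent
        while self.parent[u] != u:
            self.parent[u] = self.parent[self.parent[u]]
            u = self.parent[u]
        return u

    def merge(self, u, v):
        ru, rv = self.find(u), self.find(v)
        if ru == rv:
            return ru
        if self.rank[ru] < self.rank[rv]:
            ru, rv = rv, ru
        self.parent[rv] = ru                  # union by rank
        if self.rank[ru] == self.rank[rv]:
            self.rank[ru] += 1
        return ru                             # representative of the merged vertex
```

With this structure, each vertex of ${\mathcal{H}}_{\mathit{cur}}$ is encoded by its representative, and a sequence of $p$ operations runs in $O(p\,\alpha(\card{{\mathcal{V}}}))$ time, which is the bound used in the complexity analysis below.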
The main function of the algorithm of Figure \[fig:maxscc\] initializes the data structures and then calls $\Call{Visit}{}$ on every vertex which has not been visited yet:

$n \gets 0$, $S \gets {[\,]}$, ${\mathit{Finished}}\gets \emptyset$
$r_a \gets \Nil$, $c_a \gets 0$
${\mathit{index}}[u] \gets \Nil$
${\mathit{low}}[u] \gets \Nil$
$F_u \gets {[\,]}$, \[scc:makeset\]
\[scc:end\_init\]
\[scc:begin\_main\_loop\]
\[scc:visit\_call\]
\[scc:end\_main\_loop\]

The function $\Call{Visit}{u}$ consists of the following instructions:

local $U \gets \Call{Find}{u}$\[scc:find1\], local $F \gets {[\,]}$\[scc:begin\]
${\mathit{index}}[U] \gets n$, ${\mathit{low}}[U] \gets n$\[scc:troot\_def\]
$n \gets n+1$
${\mathit{is\_term}}[U] \gets \True$
push $U$ on the stack $S$
\[scc:begin\_node\_loop\]
push $a$ on $F$\[scc:simple\_edge\]
$r_a \gets u$\[scc:root\_def\]
local $R_a \gets \Call{Find}{r_a}$\[scc:find2\]
\[scc:root\_reach\]
$c_a \gets c_a + 1$\[scc:counter\_increment\]
\[scc:counter\_reach\]
push $a$ on stack $F_{R_a}$\[scc:stack\_edge\]
\[scc:end\_aux\]
\[scc:end\_node\_loop\]
\[scc:begin\_edge\_loop\]
pop $a$ from $F$
\[scc:begin\_edge\_loop2\]
local $W \gets \Call{Find}{w}$\[scc:find3\]
\[scc:membership\]
${\mathit{is\_term}}[U] \gets \False$ \[scc:not\_max1\]
${\mathit{low}}[U] \gets \min({\mathit{low}}[U],{\mathit{low}}[W])$ \[scc:min\]
${\mathit{is\_term}}[U] \gets {\mathit{is\_term}}[U] \And {\mathit{is\_term}}[W]$ \[scc:not\_max2\]
\[scc:end\_edge\_loop2\]
\[scc:end\_edge\_loop\]
\[scc:root\]
\[scc:begin2\]
local $i \gets {\mathit{index}}[U]$\[scc:begin\_node\_merging\]
pop each $a$ from $F_U$ and push it on $F$\[scc:push\_on\_fprime1\]
pop $V$ from $S$ \[scc:begin\_node\_merging\_loop\]
pop each $a$ from $F_V$ and push it on $F$ \[scc:push\_on\_fprime2\]
$U \gets \Call{Merge}{U, V}$\[scc:merge\]
pop $V$ from $S$ \[scc:end\_node\_merging\_loop\]\[scc:end\_node\_merging\]
${\mathit{index}}[U] \gets i$, push $U$ on $S$\[scc:index\_redef\]
go to Line [[\[scc:begin\_edge\_loop\]]{}]{}\[scc:end\_node\_merging2\]\[scc:goto\]
\[scc:begin\_non\_max\_scc\_loop\]
pop $V$ from $S$, add $V$ to ${\mathit{Finished}}$ \[scc:finished\]
\[scc:end\_non\_max\_scc\_loop\]
\[scc:end2\]
\[scc:end\]

In Figure \[fig:maxscc\], the shaded parts of $\Call{Visit}{}$ correspond to the *auxiliary data update step* and to the *vertex merging step* discussed below.
#### Vertex merging step

This step is performed from Lines [[\[scc:begin\_node\_merging\]]{}]{} to [[\[scc:end\_node\_merging2\]]{}]{}, when it is discovered that the vertex $U = {\Call{Find}}{u}$ is the root of a terminal [<span style="font-variant:small-caps;">Scc</span>]{} in the digraph ${\mathsf{graph}}({\mathcal{H}}_{\mathit{cur}})$. All vertices $V$ which have been collected in that [<span style="font-variant:small-caps;">Scc</span>]{} are merged into $U$ (Line [[\[scc:merge\]]{}]{}). Let ${\mathcal{H}}_{\mathit{new}}$ be the resulting hypergraph. At Line [[\[scc:end\_node\_merging2\]]{}]{}, the stack $F$ is expected to contain the new arcs of the directed graph ${\mathsf{graph}}({\mathcal{H}}_{\mathit{new}})$ leaving the newly “big” vertex $U$ (this point will be explained in the next paragraph). If $F$ is empty, the singleton $\{U\}$ constitutes a terminal [<span style="font-variant:small-caps;">Scc</span>]{} of ${\mathsf{graph}}({\mathcal{H}}_{\mathit{new}})$, hence also of ${\mathcal{H}}_{\mathit{new}}$ (Proposition \[prop:terminal\_scc\]). Otherwise, we go back to Line [[\[scc:begin\_edge\_loop\]]{}]{} to discover terminal [<span style="font-variant:small-caps;">Scc</span>]{}s from the new vertex $U$ in the digraph ${\mathsf{graph}}({\mathcal{H}}_{\mathit{new}})$.

#### Discovering the new graph arcs

In this paragraph, we explain informally how the new graph arcs arising after a vertex merging step can be efficiently discovered without examining all the hyperarcs. The formal proof of this technique is provided in Appendix \[sec:correctness\_proof\]. During the execution of $\Call{Visit}{u}$, the local stack $F$ is used to collect the hyperarcs which represent arcs leaving the vertex ${\Call{Find}}{u}$ in ${\mathsf{graph}}({\mathcal{H}}_{\mathit{cur}})$. Initially, when $\Call{Visit}{u}$ is called, the vertex ${\Call{Find}}{u}$ is still equal to $u$. Then, the loop from Lines [[\[scc:begin\_node\_loop\]]{}]{} to [[\[scc:end\_node\_loop\]]{}]{} iterates over the set $A_u$ of the hyperarcs $a \in A$ such that $u \in T(a)$. At the end of the loop, it can be verified that $F$ is indeed filled with all the simple hyperarcs leaving $u = {\Call{Find}}{u}$ in ${\mathcal{H}}_{\mathit{cur}}$, as expected (see Line [[\[scc:simple\_edge\]]{}]{}).

The main difficulty is to collect in $F$ the arcs which are added to the digraph ${\mathsf{graph}}({\mathcal{H}}_{\mathit{cur}})$ after a vertex merging step. To this aim, each non-simple hyperarc $a \in A$ is provided with two auxiliary data:

- a vertex $r_a$, called the *root* of the hyperarc $a$, which is defined as the first vertex of the tail $T(a)$ to be visited by a call to $\Call{Visit}{}$,

- a counter $c_a \geq 0$, which counts the vertices $x \in T(a)$ which have been visited and such that $\Call{Find}{x}$ is reachable from $\Call{Find}{r_a}$ in the current digraph ${\mathsf{graph}}({\mathcal{H}}_{\mathit{cur}})$.

These auxiliary data are maintained in the *auxiliary data update step*, located from Lines [[\[scc:root\_def\]]{}]{} to [[\[scc:end\_aux\]]{}]{}. Initially, the root $r_a$ of any hyperarc $a$ is set to the special value $\Nil$. The first time a vertex $u$ such that $a \in A_u$ is visited, $r_a$ is assigned to $u$ (Line [[\[scc:root\_def\]]{}]{}).
Besides, at each call to $\Call{Visit}{u}$, the counter $c_a$ of each non-simple hyperarc $a \in A_u$ is incremented, but only when $R_a = {\Call{Find}}{r_a}$ belongs to the stack $S$ (Line [[\[scc:counter\_increment\]]{}]{}). This is indeed a necessary and sufficient condition for ${\Call{Find}}{u}$ to be reachable from ${\Call{Find}}{r_a}$ in the digraph ${\mathsf{graph}}({\mathcal{H}}_{\mathit{cur}})$ (see Invariant \[inv:call\_to\_visit3\] in Appendix \[sec:correctness\_proof\]). It follows from these invariants that, when the counter $c_a$ reaches the threshold value $\card{T(a)}$, all the vertices $X = {\Call{Find}}{x}$, for $x \in T(a)$, are reachable from $R_a$ in the digraph ${\mathsf{graph}}({\mathcal{H}}_{\mathit{cur}})$. Now suppose that, later, it is discovered that $R_a$ belongs to a terminal strongly connected component $C$ of ${\mathsf{graph}}({\mathcal{H}}_{\mathit{cur}})$. Then the aforementioned vertices $X$ must all stand in the component $C$ (since it is terminal). Therefore, when the vertex merging step is applied on this [<span style="font-variant:small-caps;">Scc</span>]{}, the vertices $X$ are merged into a single vertex $U$. Hence, the hyperarc $a$ necessarily generates new simple arcs leaving $U$ in the new version of the digraph ${\mathsf{graph}}({\mathcal{H}}_{\mathit{cur}})$.

Let us verify that in this case, $a$ is correctly placed into $F$ by our algorithm. As soon as $c_a$ reaches the value $\card{T(a)}$, the hyperarc $a$ is placed into a temporary stack $F_{R_a}$ associated to the vertex $R_a$ (Line [[\[scc:stack\_edge\]]{}]{}). This stack is then emptied into $F$ during the vertex merging step, at Lines [[\[scc:push\_on\_fprime1\]]{}]{} or [[\[scc:push\_on\_fprime2\]]{}]{}. On the left side of Figure \[fig:merging\], the execution of the loop from Lines [[\[scc:begin\_node\_loop\]]{}]{} to [[\[scc:end\_node\_loop\]]{}]{} during the call to $\Call{Visit}{}$ on $v$ sets the root of the hyperarc $a_4$ to the vertex $v$, and $c_{a_4}$ to $1$. Then, during the visit of $w$, $c_{a_4}$ is incremented to $2 = \card{T(a_4)}$. The hyperarc $a_4$ is therefore pushed on the stack $F_{v}$ (because $R_{a_4} = {\Call{Find}}{r_{a_4}} = {\Call{Find}}{v} = v$). Once it is discovered that $u$, $v$, and $w$ form a terminal [<span style="font-variant:small-caps;">Scc</span>]{} of ${\mathsf{graph}}({\mathcal{H}}_{\mathit{cur}})$, $a_4$ is collected into $F$ during the merging step. It then allows the algorithm to visit the vertices $x$ and $y$ from the new vertex (right side of the figure). A fully detailed execution trace is provided in Appendix \[sec:execution\_trace\] below.

#### Correctness and complexity

For the sake of simplicity, we have not included in the algorithm the step returning the terminal [<span style="font-variant:small-caps;">Scc</span>]{}s. However, they can be easily built by examining each vertex (hence in time $O(\card{{\mathcal{V}}})$), as shown below:

\[th:correctness\] Let ${\mathcal{H}}= ({\mathcal{V}},A)$ be a directed hypergraph. After the execution of the algorithm, the terminal strongly connected components of ${\mathcal{H}}$ are precisely the sets $C_U = \{ v \in {\mathcal{V}}\mid \Call{Find}{v} = U \text{ and } {\mathit{is\_term}}[U] = \True \}$.

The proof of Theorem \[th:correctness\], which is too long to be included here, is provided in Appendix \[sec:correctness\_proof\]. It relies on successive transformations of intermediary algorithms into the algorithm of Figure \[fig:maxscc\].
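Returning to the auxiliary data update step described above, the following fragment sketches how the root $r_a$ and the counter $c_a$ of a non-simple hyperarc could be maintained while a tail vertex $u$ is being visited. The bookkeeping structures (`root`, `counter`, `on_stack`, `pending`) are hypothetical names of this sketch, the hyperarc `a` is assumed to be a hashable identifier, and the surrounding depth-first search machinery is omitted.

```python
def update_auxiliary_data(a, tail_size, u, root, counter, find, on_stack, pending):
    """Maintain r_a and c_a for a non-simple hyperarc `a` with u in T(a),
    in the spirit of Lines [scc:root_def]-[scc:stack_edge]."""
    if root.get(a) is None:
        root[a] = u                              # r_a: first visited tail vertex of a
    R = find(root[a])                            # R_a = Find(r_a)
    if R in on_stack:                            # iff Find(u) is reachable from R_a
        counter[a] = counter.get(a, 0) + 1       # c_a <- c_a + 1
        if counter[a] == tail_size[a]:           # all Find(x), x in T(a), reachable from R_a
            pending.setdefault(R, []).append(a)  # a joins the stack F_{R_a}
```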
When using disjoint-set forests with union by rank and path compression as union-find structure (see [@Cormen01 Chapter 21]), the time complexity of any sequence of $p$ operations $\Call{MakeSet}{}$, $\Call{Find}{}$, or $\Call{Merge}{}$ is known to be $O(p \times \alpha(\card{{\mathcal{V}}}))$, where $\alpha$ is the very slowly growing inverse of the Ackermann function. The following result states that the algorithm is also almost linear time:

\[th:complexity\] Let ${\mathcal{H}}= ({\mathcal{V}},A)$ be a directed hypergraph. Then the algorithm terminates in time $O(\operatorname{\mathsf{size}}({\mathcal{H}}) \times \alpha(\card{{\mathcal{V}}}))$, and has linear space complexity.

The analysis of the time complexity depends on the kind of instructions. We distinguish the operations on the global stacks $F_u$ and on the local stacks $F$, the calls to the functions $\Call{MakeSet}{}$, $\Call{Find}{}$, and $\Call{Merge}{}$, and the other operations, referred to as *usual operations* (by extension, their time complexity will be referred to as *usual complexity*). The complexity of each kind of operation is respectively described in the following three paragraphs.

Each operation on a stack (pop or push) is performed in $O(1)$. A hyperarc $a$ is pushed on a stack of the form $F_u$ at most once during the whole execution of the algorithm (when the counter $c_a$ reaches the value $\card{T(a)}$). Once it is popped from it, it will never be pushed on a stack of the form $F_v$ again. Similarly, a hyperarc is pushed on a local stack $F$ at most once, and after it is popped from it, it will never be pushed on any local stack $F'$ afterwards. Therefore, the total number of stack operations on the local and global stacks $F$ and $F_u$ is bounded by $4 \card{A}$. It follows that the corresponding complexity is bounded by $O(\operatorname{\mathsf{size}}({\mathcal{H}}))$. The same argument proves that the total number of iterations of the loop from Lines [[\[scc:begin\_edge\_loop2\]]{}]{} to [[\[scc:end\_edge\_loop2\]]{}]{} occurring in a complete execution of the algorithm is bounded by $\sum_{a \in A} \card{H(a)}$.

During the execution of the algorithm, the function $\Call{Find}{}$ is called exactly $\card{{\mathcal{V}}}$ times at Line [[\[scc:find1\]]{}]{}, at most $\sum_{u \in {\mathcal{V}}} \card{A_u} = \sum_{a \in A} \card{T(a)}$ times at Line [[\[scc:find2\]]{}]{}, and at most $\sum_{a \in A} \card{H(a)}$ times at Line [[\[scc:find3\]]{}]{} (see above). Hence it is called at most $\operatorname{\mathsf{size}}({\mathcal{H}})$ times. The function $\Call{Merge}{}$ is always called to merge two distinct vertices. Let $C_1,\dots,C_p$ ($p \leq \card{{\mathcal{V}}}$) be the equivalence classes formed by the elements of ${\mathcal{V}}$ at the end of the execution of the algorithm. Then $\Call{Merge}{}$ is called at most $\sum_{i = 1}^p (\card{C_i}-1)$ times. Since $\sum_i \card{C_i} = \card{{\mathcal{V}}}$, $\Call{Merge}{}$ is executed at most $\card{{\mathcal{V}}}-1$ times. Finally, $\Call{MakeSet}{}$ is called exactly $\card{{\mathcal{V}}}$ times. It follows that the total time complexity of the operations $\Call{MakeSet}{}$, $\Call{Find}{}$, and $\Call{Merge}{}$ is $O(\operatorname{\mathsf{size}}({\mathcal{H}}) \times \alpha(\card{{\mathcal{V}}}))$.

The analysis of the usual operations is split into several parts:

- the usual complexity of the main function, without the calls to $\Call{Visit}{}$, is clearly $O(\card{{\mathcal{V}}} + \card{A})$.

- during the execution of $\Call{Visit}{u}$, the usual complexity of the block from Lines [[\[scc:begin\]]{}]{} to [[\[scc:end\_node\_loop\]]{}]{} is $O(1) + O(\card{A_u})$.
  Indeed, we suppose that the test at Line [[\[scc:root\_reach\]]{}]{} can be performed in $O(1)$ by assuming that the stack $S$ is provided with an auxiliary array of booleans which determines, for each element of ${\mathcal{V}}$, whether it is stored in $S$ (obviously, the push and pop operations are still in $O(1)$ under this assumption). Then the total usual complexity between Lines [[\[scc:begin\]]{}]{} and [[\[scc:end\_node\_loop\]]{}]{} is $O(\operatorname{\mathsf{size}}({\mathcal{H}}))$ for a complete execution of the algorithm.

- the usual complexity of the loop body from Lines [[\[scc:begin\_edge\_loop2\]]{}]{} to [[\[scc:end\_edge\_loop2\]]{}]{}, without the recursive calls to $\Call{Visit}{}$, is clearly $O(1)$ (the membership test at Line [[\[scc:membership\]]{}]{} is supposed to be in $O(1)$, encoding the set ${\mathit{Finished}}$ as an array of $\card{{\mathcal{V}}}$ booleans). This inner loop is iterated $\card{H(a)}$ times during each iteration of the outer loop from Lines [[\[scc:begin\_edge\_loop\]]{}]{} to [[\[scc:end\_edge\_loop\]]{}]{}. Since a hyperarc is placed in a local stack $F$ at most once, the total usual complexity of the loop from Lines [[\[scc:begin\_edge\_loop\]]{}]{} to [[\[scc:end\_edge\_loop\]]{}]{} (without the recursive calls to $\Call{Visit}{}$) is bounded by $O(\operatorname{\mathsf{size}}({\mathcal{H}}))$.

- the usual complexity of the loop between Lines [[\[scc:begin\_node\_merging\_loop\]]{}]{} and [[\[scc:end\_node\_merging\_loop\]]{}]{} for a complete execution of the algorithm is $O(\card{{\mathcal{V}}})$, since in total, it is iterated exactly the number of times the function $\Call{Merge}{}$ is called.

- the usual complexity of the loop between Lines [[\[scc:begin\_non\_max\_scc\_loop\]]{}]{} and [[\[scc:end\_non\_max\_scc\_loop\]]{}]{} for a whole execution of the algorithm is $O(\card{{\mathcal{V}}})$, because a given element is placed at most once into the set ${\mathit{Finished}}$ (adding an element in ${\mathit{Finished}}$ is in $O(1)$).

- if the previous two loops are not considered, less than $10$ usual operations are executed in the block from Lines [[\[scc:begin2\]]{}]{} to [[\[scc:end\]]{}]{}, all of complexity $O(1)$. The execution of this block either follows a call to $\Call{Visit}{}$ or the execution of the goto statement (at Line [[\[scc:goto\]]{}]{}). The latter is executed only if the stack $F$ is not empty. Since each hyperarc can be pushed on a local stack $F$ and then popped from it only once, it happens at most $\card{A}$ times during the whole execution of the algorithm. It follows that the usual complexity of the block from Lines [[\[scc:begin2\]]{}]{} to [[\[scc:end\]]{}]{} is $O(\card{{\mathcal{V}}} + \card{A})$ in total (excluding the loops previously discussed).

Summing all the complexities above proves that the time complexity of the algorithm is $O(\operatorname{\mathsf{size}}({\mathcal{H}}) \times \alpha(\card{{\mathcal{V}}}))$. The space complexity is obviously linear in $\operatorname{\mathsf{size}}({\mathcal{H}})$.

An implementation is provided in the library [[TPLib]{}]{} [@tplib], in the module `Hypergraph`.[^2]

The algorithm is not able to determine all strongly connected components in directed hypergraphs. Consider the following example: a hypergraph with three vertices $u$, $v$, $w$, two simple hyperarcs $(\{u\},\{w\})$ and $(\{v\},\{u\})$, and a non-simple hyperarc $a = (\{u,w\},\{v\})$. Our algorithm determines the unique terminal [<span style="font-variant:small-caps;">Scc</span>]{}, which is reduced to the vertex $w$.
However, the non-terminal [<span style="font-variant:small-caps;">Scc</span>]{} formed by $u$ and $v$ is not discovered. Indeed, the non-simple hyperarc $a$, which allows to reach $v$ from $u$, cannot be transformed into a simple arc, since $u$ and $v$ do not belong to a same [<span style="font-variant:small-caps;">Scc</span>]{} of the underlying digraph. Determining other properties in almost linear time, and applications {#subsec:other_properties} -------------------------------------------------------------------- Some properties can be directly determined from the terminal [<span style="font-variant:small-caps;">Scc</span>]{}s. Indeed, a directed hypergraph ${\mathcal{H}}$ admits a sink ([*i.e.*]{} a vertex reachable from all vertices) if, and only if, it contains a unique terminal [<span style="font-variant:small-caps;">Scc</span>]{}. Besides, strong connectivity amounts to the existence of a terminal [<span style="font-variant:small-caps;">Scc</span>]{} containing all the vertices. \[cor:strongly\_connectivity\_and\_sink\] Given a directed hypergraph ${\mathcal{H}}$, the following problems can be solved in almost linear time in $\operatorname{\mathsf{size}}({\mathcal{H}})$: \[item:i\] is there a sink in ${\mathcal{H}}$? \[item:ii\] is ${\mathcal{H}}$ strongly connected? We now discuss some applications of these results. #### Tropical geometry Tropical polyhedra are the analogues of convex polyhedra in *tropical algebra*, [*i.e.*]{} the semiring $\mathbb{R} \cup \{-\infty\}$ endowed with the operations $\max$ and $+$ as addition and multiplication. A *tropical polyhedron* is the set of the solutions $x \in (\mathbb{R} \cup \{-\infty\})^n$ of finitely many tropical affine inequalities, of the form: $$\max(a_0, a_1 + x_1, \dots, a_n + x_n) \leq \max(b_0, b_1 + x_1, \dots, b_n + x_n) \enspace,$$ where $a_i, b_i \in \mathbb{R} \cup \{-\infty\}$. Analogously to classical convex polyhedra, any tropical polyhedron can be equivalently expressed as the convex hull (in the tropical sense) of a set of vertices and extreme rays. This yields the problem of computing the vertices of a tropical polyhedron, which can be seen as the “tropical counterpart” of the well-studied vertex enumeration problem in computational geometry. This problem has various applications in computer science and control theory, among others, in the analysis of discrete event systems [@katz05], software verification [@AllamigeonGaubertGoubaultSAS08], and verification of real-time systems [@LuJLAP2011]. Directed hypergraphs and their terminal [<span style="font-variant:small-caps;">Scc</span>]{}s arise in the characterization of the vertices of a tropical polyhedron. In a joint work of the author with Gaubert and Goubault [@AllamigeonGaubertGoubaultDCG2013], it has been proved that a point $x \in (\mathbb{R} \cup \{-\infty\})^n$ of a tropical polyhedron $\mathcal{P}$ is a vertex if, and only if, a certain directed hypergraph associated to $x$ and built from the inequalities defining $\mathcal{P}$, admits a sink. This combinatorial criterion plays a crucial role in the tropical vertex enumeration problem. It is indeed involved in an algorithm called *tropical double description method* [@AllamigeonGaubertGoubaultDCG2013; @AllamigeonGaubertGoubaultSTACS10], in order to eliminate points which are not vertices, among a set of candidates. This set can be very large (exponential in the dimension $n$), so the efficiency of the elimination step is critical. 
The almost linear time algorithm consequently leads to a significant improvement over the state-of-the-art, both in theory and in practice (see [@AllamigeonGaubertGoubaultDCG2013 Section 6]). It also allows to show the surprising result that it is easier to determine whether a point is a vertex in a tropical polyhedron than in a classical one (if $p$ is the number of inequalities defining the polyhedron, the latter problem can be solved in $O(n^2 p)$ while the former in $O(n p \alpha(n))$). #### Nonlinear spectral theory Problem  appears in a generalization of the Perron-Frobenius theorem to homogeneous and monotone functions studied by Gaubert and Gunawardena in [@GaubertGunawardena04]. Recall that a function $f : (\mathbb{R}^*_+)^n \mapsto (\mathbb{R}^*_+)^n$ is said to be *monotone* when $f(x) \leq f(y)$ for any $x, y \in (\mathbb{R}^*_+)^n$ such that $x \leq y$ (the relation $\leq$ being understood entrywise), and that it is *homogeneous* when $f(\lambda x) = \lambda f(x)$ for all $\lambda \in \mathbb{R}^*_+$ and $x \in (\mathbb{R}^*_+)^n$. A central problem is to give conditions under which $f$ admits an eigenvector in the cone $(\mathbb{R}^*_+)^n$, [*i.e.*]{} a vector $x \in (\mathbb{R}^*_+)^n$ such that $f(x) = \lambda x$ for some $\lambda > 0$. Gaubert and Gunawardena establish a sufficient combinatorial condition [@GaubertGunawardena04 Theorem 6] expressed as the strong connectivity of a directed graph obtained as the limit of a sequence of graphs. This sequence is identical to the one arising during the execution of the method sketched in Section \[subsec:maxscc\_principle\]. It follows that the sufficient condition is equivalent to the strong connectivity of a directed hypergraph ${\mathcal{H}}(f)$ constructed from $f$. The hypergraph ${\mathcal{H}}(f)$ consists of the vertices $1, \dots, n$ and the hyperarcs $(I, \{j\})$ such that $\lim_{\mu \rightarrow +\infty} f_j(\mu e_I) = +\infty$ ($e_I$ denotes the vector whose $i$-th entry is equal to $1$ when $i \in I$, and $0$ otherwise). #### Horn propositional logic As mentioned in Section \[sec:introduction\], directed hypergraphs can be used to encode Horn formulas. Recall that a *Horn formula* $F$ over the propositional variables $X_1, \dots, X_n$ is a conjunction of *Horn clauses*, [*i.e.*]{}  either an implication $X_{i_1} \wedge \dots \wedge X_{i_p} \Rightarrow X_i$, or a fact $X_i$, or a goal $\neg X_{i_1} \vee \dots \vee \neg X_{i_p}$. Given a propositional formula $F$, an assignment $\sigma : \{ X_1, \dots, X_n \} \rightarrow \{\True, \False\}$ is a *model* of $F$ if replacing each $X_i$ by its associated truth value $\sigma(X_i)$ yields a true assertion. If $F_1, F_2$ are two propositional formulas, $F_1$ is said to *entail* $F_2$, which is denoted by $F_1 \models F_2$, if every model of $F_1$ is a model of $F_2$. A directed hypergraph ${\mathcal{H}}(F)$ can be associated to any Horn formula $F$ so as to decide entailment of implications over its variables. We use the construction developed by Ausiello and Italiano [@Ausiello91]. The hypergraph ${\mathcal{H}}(F)$ consists of the vertices ${\mathbf{t}}, {\mathbf{f}}, 1, \dots, n$ and the following hyperarcs: $(\{i_1, \dots, i_p\}, \{i\})$ for every implication $X_{i_1} \wedge \dots \wedge X_{i_p} \Rightarrow X_i$ in $F$, $(\{{\mathbf{t}}\}, \{i\})$ for every fact $X_i$, $(\{i_1, \dots, i_p\}, \{{\mathbf{f}}\})$ for every goal $\neg X_{i_1} \vee \dots \vee \neg X_{i_p}$, $(\{{\mathbf{f}}\}, \{1, \dots, n\})$, and $(\{i\},\{{\mathbf{t}}\})$ for all $i = 1, \dots, n$. 
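As an illustration, the construction of ${\mathcal{H}}(F)$ recalled above can be sketched as follows. The input format (implications, facts and goals given as index sets) and the function name are assumptions of this sketch only.

```python
def horn_hypergraph(n, implications, facts, goals):
    """Hypergraph H(F) associated with a Horn formula over X_1, ..., X_n
    (construction of Ausiello and Italiano recalled above).

    implications -- list of (premises, conclusion), premises being a set of indices
    facts        -- list of indices i such that X_i is a fact
    goals        -- list of index sets {i_1, ..., i_p} for goals ~X_{i_1} v ... v ~X_{i_p}
    """
    t, f = "t", "f"                                    # the two special vertices
    vertices = {t, f} | set(range(1, n + 1))
    arcs = [(set(prem), {concl}) for prem, concl in implications]
    arcs += [({t}, {i}) for i in facts]                # one hyperarc per fact
    arcs += [(set(g), {f}) for g in goals]             # one hyperarc per goal
    arcs.append(({f}, set(range(1, n + 1))))
    arcs += [({i}, {t}) for i in range(1, n + 1)]
    return vertices, arcs

# Formula (X_1 and X_2 => X_3) and X_1 and (not X_3):
V, A = horn_hypergraph(3, [({1, 2}, 3)], [1], [{3}])
```

By Lemma \[lemma:horn\] below, entailment of an implication $X_i \Rightarrow X_j$ then amounts to a reachability query from $i$ to $j$ in this hypergraph.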
Observe that the size of ${\mathcal{H}}(F)$ is linear in the size of the formula $F$, [*i.e.*]{} the number of atoms in its clauses (without loss of generality, it is assumed that every variable occurs in $F$). \[lemma:horn\] Let $F$ be a Horn formula over the variables $X_1, \dots, X_n$. Then $F \models X_i \Rightarrow X_j$ if, and only if, $j$ is reachable from $i$ in the directed hypergraph ${\mathcal{H}}(F)$. The “if” part can be shown by induction. If $i = j$, this is obvious. Otherwise, there exists a hyperarc $(T,H)$ such that $j \in H$ and every element of $T$ is reachable from $i$. Three cases can be distinguished: - if $T$ is not equal to $\{{\mathbf{t}}\}$ or $\{{\mathbf{f}}\}$, then by construction, $F$ contains the implication $\wedge_{k \in T} X_k \Rightarrow X_j$. For all $k \in T$, $F \models X_i \Rightarrow X_k$ by induction, so that $F \models X_i \Rightarrow X_j$. - if $T = \{{\mathbf{t}}\}$, then $X_j$ is a fact, and $F \models X_i \Rightarrow X_j$ trivially holds. - if $T$ is reduced to $\{{\mathbf{f}}\}$, there is a goal $\neg X_{k_1} \vee \dots \vee \neg X_{k_p}$ in $F$ such that each $k_l$ is reachable from $i$, hence $F \models X_i \Rightarrow X_{k_l}$. As $F$ also entails the implication $X_{k_1} \wedge \dots \wedge X_{k_p} \Rightarrow X_j$, we conclude that $F \models X_i \Rightarrow X_j$. For the “only if” part, let $R$ be the set of reachable vertices from $i$ in ${\mathcal{H}}(F)$, and assume that $j \not \in R$. Let $\sigma$ be the assignment defined by $\sigma(X_k) = \True$ if $k \in R$, $\False$ otherwise. We claim that $\sigma$ models $F$. Consider an implication $X_{k_1} \wedge \dots \wedge X_{k_p} \Rightarrow X_k$ in $F$. If $\sigma(X_{k_l}) = \True$ for all $l = 1, \dots, p$, then $k$ is reachable from $i$ in ${\mathcal{H}}(F)$, hence $\sigma(X_k) = \True$, and the implication is valid on $\sigma$. Similarly, for each fact $X_k$ in $F$, $k$ obviously belongs to $R$, which ensures that $\sigma(X_k) = \True$. Finally, if $F$ contains a goal $\neg X_{k_1} \vee \dots \vee \neg X_{k_p}$ such that $\sigma(X_{k_l}) = \True$ for all $l = 1, \dots, p$, then every vertex of the hypergraph is reachable from $i$ (through the vertex ${\mathbf{f}}$), which is impossible ($j \not \in R$). This completes the proof. Corollary \[cor:strongly\_connectivity\_and\_sink\] and Lemma \[lemma:horn\] consequently prove that the two following decision problems over Horn formulas can be solved in almost linear time: whether a variable of a Horn formula is implied by all the others, whether all variables of a Horn formula are equivalent. Contributions on the complexity of computing all [<span style="font-variant:small-caps;">Scc</span>]{}s {#sec:combinatorics} ======================================================================================================= A lower bound on the size of the transitive reduction of the reachability relation {#subsec:transitive_reduction} ---------------------------------------------------------------------------------- Given a directed graph or a directed hypergraph, the reachability relation can be represented by the set of the couples $(x,y)$ such that $x$ reaches $y$. This is however a particularly redundant representation because of transitivity. In order to get a better idea of the intrinsic complexity of the reachability relation, we should rather consider transitive reductions, which are defined as minimal binary relations having the same transitive closure. 
In directed graphs, Aho [*et al.*]{} have shown in [@AhoGareyUllmanSICOMP72] that all transitive reductions of the reachability relation have the same size (the size of a binary relation ${\mathcal{R}}$ is the number of pairs $(x,y)$ such that $x \mathbin{{\mathcal{R}}} y$). This size is bounded by the size of the digraph. Furthermore, a canonical transitive reduction can be defined by choosing a total ordering over the vertices. In directed hypergraphs, the existence of a canonical transitive reduction of the reachability relation can be similarly established, because reachability is still reflexive and transitive.[^3] However, we are going to show that its size is superlinear in $\operatorname{\mathsf{size}}({\mathcal{H}})$ for some directed hypergraphs ${\mathcal{H}}$.

These hypergraphs arise from the subset partial order. More specifically, given a family ${\mathcal{F}}$ of distinct sets over a finite domain $D$, the partial order induced by the relation $\subseteq$ on ${\mathcal{F}}$ is called *the subset partial order* over ${\mathcal{F}}$. Without loss of generality, we assume that every set $S$ of ${\mathcal{F}}$ satisfies $\card{S} > 1$ (up to adding two fixed elements $x,y \not \in D$ to all sets, which does not change the partial order over ${\mathcal{F}}$). From this family, we build a corresponding directed hypergraph ${\mathcal{H}}({\mathcal{F}},D)$. Each of its vertices is either associated to a set $S \in {\mathcal{F}}$ or to a domain element $x \in D$, and is denoted by $v[S]$ or $v[x]$ respectively. Besides, each set $S$ is associated to two hyperarcs $a[S]$ and $a'[S]$. The hyperarc $a[S]$ leaves the singleton $\{v[S]\}$ and enters the set of the vertices $v[x]$ such that $x \in S$. The hyperarc $a'[S]$ is defined inversely, leaving the latter set and entering $\{v[S]\}$. An example is given in Figure \[fig:subset\_hypergraph\].

(Figure \[fig:subset\_hypergraph\]: the hypergraph ${\mathcal{H}}({\mathcal{F}},D)$ for a domain $D = \{x_1,\dots,x_4\}$ and a family ${\mathcal{F}}= \{S_1,S_2,S_3\}$; each vertex $v[S_i]$ is linked to the vertices $v[x]$, $x \in S_i$, by the two hyperarcs $a[S_i]$ and $a'[S_i]$.)

\[lemma:reachable\] Given $S \in {\mathcal{F}}$, $v$ is reachable from $v[S]$ in ${\mathcal{H}}({\mathcal{F}},D)$ if, and only if, $v = v[S']$ for some $S' \in {\mathcal{F}}$ such that $S' \subseteq S$, or $v = v[x]$ for some $x \in S$.

Clearly, any vertex $v[x]$ with $x \in S$ is reachable from $v[S]$ through the hyperarc $a[S]$. Besides, assuming $S \supseteq S'$, then $v[S]$ reaches $v[S']$ through the hyperpath formed by the hyperarcs $a[S]$ and $a'[S']$. Now, let us prove by induction that these are the only vertices reachable from $v[S]$. Let $u$ be reachable from $v[S]$. If $u = v[S]$, then this is obvious. Otherwise, there exists a hyperarc $a = (T,H)$ such that $u \in H$ and $T = \{u_1, \dots, u_q\}$ with each $u_i$ being reachable from $v[S]$. We can distinguish two cases: (i) either $a$ is of the form $a[S']$ for some $S' \in {\mathcal{F}}$, in which case the tail is reduced to the vertex $v[S']$, which is reachable from $v[S]$. By induction, we know that $S \supseteq S'$. Since $u = v[x]$ for some $x \in S'$, it follows that $x \in S$. (ii) or $a$ is of the form $a'[S']$ for some $S' \in {\mathcal{F}}$.
Then its tail is the set of the $v[x]$ for $x \in S'$, and its head consists of the single vertex $v[S']$. Thus $x \in S$ for all $x \in S'$ by induction, which ensures that $u = v[S']$ with $S' \subseteq S$. \[prop:transitive\_reduction\_lower\_bound\] The size of the transitive reduction of the reachability relation of ${\mathcal{H}}({\mathcal{F}},D)$ is lower bounded by the size of the transitive reduction of the subset partial order over the family ${\mathcal{F}}$. We claim that for any couple $(S,S')$ in the transitive reduction of the subset partial order over the family ${\mathcal{F}}$, $(v[S'],v[S])$ belongs to the transitive reduction of the relation ${{\rightsquigarrow}}_{{\mathcal{H}}({\mathcal{F}},D)}$. Suppose that the pair $(v[S'],v[S])$ is not in transitive reduction of ${{\rightsquigarrow}}_{{\mathcal{H}}({\mathcal{F}},D)}$, and that $S \subseteq S'$ (the case $S \not \subseteq S'$ is obvious). By Lemma \[lemma:reachable\], $v[S]$ is reachable from $v[S']$. Besides, there exists a vertex $u$ of ${\mathcal{H}}({\mathcal{F}},D)$ distinct from $v[S]$ and $v[S']$ such that $v[S'] {{\rightsquigarrow}}_{{\mathcal{H}}({\mathcal{F}},D)} u {{\rightsquigarrow}}_{{\mathcal{H}}({\mathcal{F}},D)} v[S]$. Observe that any vertex reaching a vertex of the form $v[T]$ ($T \in {\mathcal{F}}$) is necessarily of the form $v[T']$ for some $T' \in {\mathcal{F}}$ (because of the assumption $\card{T} > 1$ which ensures that no vertex of the form $v[x]$ for $x \in D$ can reach $v[T]$). Consequently, there exists a set $S'' \in {\mathcal{F}}$ (distinct from $S$ and $S'$) such that $u = v[S'']$. Following Lemma \[lemma:reachable\], this shows that $S' \supsetneq S'' \supsetneq S$. Thus $(S,S')$ cannot belong to the transitive reduction of the subset partial order over ${\mathcal{F}}$. The subset partial order has been well studied in the literature [@YellinIPL93; @PritchardIPL95; @PritchardAlg99; @PritchardJAlg99; @ElmasryIPL09]. It has been proved in [@YellinIPL93; @ElmasryIPL09] that the size of the transitive reduction of the subset partial order can be superlinear in the size of the input $({\mathcal{F}},D)$ (defined as $\card{D} + \sum_{S \in {\mathcal{F}}} \card{S}$). Combining this with Proposition \[prop:transitive\_reduction\_lower\_bound\] provides the following result: \[th:transitive\_reduction\_lower\_bound\] There is a directed hypergraph ${\mathcal{H}}$ such that the size of the transitive reduction of the reachability relation is in $\Omega(\operatorname{\mathsf{size}}({\mathcal{H}})^2 / \log^2 (\operatorname{\mathsf{size}}({\mathcal{H}})))$. We use the construction given in [@ElmasryIPL09] in which ${\mathcal{F}}$ consists of two disjoint families ${\mathcal{F}}_1$ and ${\mathcal{F}}_2$ of sets over the domain $D = \{x_1, \dots, x_n\}$ (where $n$ is supposed to be divisible by 4). The first family consists of the subsets having $n/4$ elements among $x_1, \dots, x_{n/2}$. The second family is formed by the subsets containing all the elements $x_1, \dots, x_{n/2}$, and precisely $n/4$ elements among $x_{n/2+1}, \dots, x_n$. The transitive reduction of the subset partial order over ${\mathcal{F}}$ coincides with the cartesian product ${\mathcal{F}}_1 \times {\mathcal{F}}_2$. Each ${\mathcal{F}}_i$ precisely contains $\binom{n/2}{n/4} = \Theta(2^{n/2}/\sqrt{n})$ sets, so that the size of the transitive reduction of the subset partial order is $\Theta(2^n/n)$. 
Proposition \[prop:transitive\_reduction\_lower\_bound\] shows that the size of the transitive reduction of ${{\rightsquigarrow}}_{{\mathcal{H}}({\mathcal{F}},D)}$ is in $\Omega(2^n/n)$. Now, the size of the directed hypergraph ${\mathcal{H}}({\mathcal{F}},D)$ is equal to: $$\operatorname{\mathsf{size}}({\mathcal{H}}({\mathcal{F}},D)) = n+2\binom{n/2}{n/4} + 2\frac{3n}{4}\binom{n/2}{n/4} + 2\frac{n}{4}\binom{n/2}{n/4},$$ so that $\operatorname{\mathsf{size}}({\mathcal{H}}({\mathcal{F}},D)) = \Theta(\sqrt{n} 2^{n/2})$. This provides the expected result.

The size of the transitive reduction of the reachability relation can be seen as a partial measure of the complexity of the [<span style="font-variant:small-caps;">Scc</span>]{} computation problem. It is indeed natural to think of algorithms computing the [<span style="font-variant:small-caps;">Scc</span>]{}s by following the reachability relation between them, for instance by a depth-first search, hence by exploring at least a transitive reduction of the reachability relation. In fact, most of the algorithms determining [<span style="font-variant:small-caps;">Scc</span>]{}s of directed graphs, for instance the ones due to Tarjan [@Tarjan72], Cheriyan and Mehlhorn [@CheriyanMehlhorn96], or Gabow [@Gabow00], perform a depth-first search on the entire graph, and thus follow this approach. Theorem \[th:transitive\_reduction\_lower\_bound\] shows that this class of algorithms cannot have a linear complexity on directed hypergraphs:

\[cor:scc\_lower\_bound\] Any algorithm computing the strongly connected components of directed hypergraphs by traversing an entire transitive reduction of the reachability relation has a worst-case complexity in $\Omega(N^2 / \log^2 N)$, where $N$ is the size of the input.

Consequently, the reachability relation must be sufficiently explored to identify the [<span style="font-variant:small-caps;">Scc</span>]{}s, but it cannot be totally explored unless sacrificing the time complexity. Note that the algorithm of Section \[sec:maxscc\] relies on a certain trade-off to discover terminal [<span style="font-variant:small-caps;">Scc</span>]{}s: it only traverses hyperarcs $(T,H)$ such that $T$ is contained in an [<span style="font-variant:small-caps;">Scc</span>]{}, whereas hyperarcs in which the tail vertices belong to distinct [<span style="font-variant:small-caps;">Scc</span>]{}s are ignored.

Reduction from the minimal set problem {#subsec:set_pb_reduction}
--------------------------------------

Given a family ${\mathcal{F}}$ of distinct sets over a domain $D$ as above, the *minimal set problem* consists in finding all minimal sets $S \in {\mathcal{F}}$ for the subset partial order. This problem has received much attention [@PritchardActInf91; @YellinSODA92; @YellinIPL93; @PritchardIPL95; @PritchardJAlg99; @ElmasryIPL09; @BayardoSDM11]. It has important applications in propositional logic [@PritchardActInf91] or data mining [@BayardoSDM11]. It can also be seen as a Boolean case of the problem of finding maximal vectors among a given family [@KungJACM75; @KirkpatrickSOCG85; @GodfreyVLDB05]. Surprisingly, the most efficient algorithms addressing the minimal set problem compute the whole subset partial order [@YellinIPL93; @ElmasryIPL09]. The best known methods to compute the subset partial order in the general case are due to Pritchard [@PritchardIPL95; @PritchardJAlg99]. Their complexity is in $O(N^2/\log N)$, where $N$ is the size of the input $({\mathcal{F}},D)$.
In the dense case, [*i.e.*]{} when the size of the family is in $\Theta(\card{D} \cdot \card{{\mathcal{F}}})$, Elmasry defined a method with a complexity in $O(N^2/\log^2 N)$ [@ElmasryIPL09]. This matches the lower bound provided in Corollary \[cor:scc\_lower\_bound\]. In this section, we establish a linear time reduction from the minimal set problem to the problem of computing the [<span style="font-variant:small-caps;">Scc</span>]{}s in directed hypergraphs. To obtain it, we build a directed hypergraph ${\overline{\mathcal{H}}}({\mathcal{F}},D)$ starting from the hypergraph ${\mathcal{H}}({\mathcal{F}},D)$. On top of the vertices of the latter, ${\overline{\mathcal{H}}}({\mathcal{F}},D)$ has the following vertices: for each $S \in {\mathcal{F}}$, an additional vertex $w[S]$, $(\card{D}+1)$ vertices labelled by $c_0,\dots,c_{\card{D}}$, and a special vertex labelled by ${\mathit{superset}}$. Besides, we add the following hyperarcs:

- for each $S \in {\mathcal{F}}$, a hyperarc leaving $\{v[S]\}$ and entering the singleton $\{c_{\card{S}-1}\}$;
- for every $0 \leq i \leq \card{D}$, a hyperarc leaving $\{c_i\}$ and entering the set of the vertices $w[S]$ such that $\card{S} = i$;
- for each $i > 0$, a hyperarc from $\{c_i\}$ to $\{c_{i-1}\}$;
- for each $S \in {\mathcal{F}}$, a hyperarc leaving the set $\{v[S],w[S]\}$ and entering the singleton $\{{\mathit{superset}}\}$;
- for every $S \in {\mathcal{F}}$, a hyperarc from $\{{\mathit{superset}}\}$ to $\{v[S]\}$.

This construction is illustrated in Figure \[fig:minimal\_subset\_hypergraph\].

(Figure \[fig:minimal\_subset\_hypergraph\]: an example of the construction ${\overline{\mathcal{H}}}({\mathcal{F}},D)$ for a family of three sets $S_1$, $S_2$, $S_3$ over the domain $\{x_1,\dots,x_4\}$, showing the vertices $v[x_i]$, $v[S_j]$, $w[S_j]$, $c_0,\dots,c_4$, and ${\mathit{superset}}$.)

\[prop:equivalence\] For any $S \in {\mathcal{F}}$, $S$ is not minimal in ${\mathcal{F}}$ if, and only if, the vertex ${\mathit{superset}}$ is reachable from $v[S]$ in ${\overline{\mathcal{H}}}({\mathcal{F}},D)$. Assume that $S$ is not minimal in ${\mathcal{F}}$, and let $S' \in {\mathcal{F}}$ be a set satisfying $S' \subsetneq S$. By Lemma \[lemma:reachable\], $v[S']$ is reachable from $v[S]$ in ${\mathcal{H}}({\mathcal{F}},D)$, and hence in ${\overline{\mathcal{H}}}({\mathcal{F}},D)$. Since $\card{S'} = j < \card{S} = i$, $w[S']$ is reachable from $v[S]$ through the hyperpath traversing the vertices $c_{i-1},c_{i-2},\dots,c_j$. Finally, the vertex ${\mathit{superset}}$ is reachable through the hyperarc from $\{v[S'],w[S']\}$. Conversely, suppose that $v[S]$ reaches ${\mathit{superset}}$ in ${\overline{\mathcal{H}}}({\mathcal{F}},D)$. Consider a minimal hyperpath $a_1, \dots, a_p$ from $v[S]$ to ${\mathit{superset}}$. Necessarily, $a_p$ is a hyperarc of the form $(\{v[S'], w[S']\},\{{\mathit{superset}}\})$ for some $S' \in {\mathcal{F}}$.
Consequently, both vertices $v[S']$ and $w[S']$ are reachable from $v[S]$. Besides, for each of the two vertices, there exists a hyperpath to it from $v[S]$, which is a subsequence of $a_1, \dots, a_{p-1}$, and which consequently does not contain the vertex ${\mathit{superset}}$ (meaning that the latter does not appear in any tail or head of the hyperarcs of the hyperpath). Let $a'_1, \dots, a'_q$ be a minimal hyperpath from $v[S]$ to $v[S']$ not containing ${\mathit{superset}}$. The hyperpath cannot contain any hyperarc of the form $(\{v[T], w[T]\},\{{\mathit{superset}}\})$ (where $T \in {\mathcal{F}}$). As a result, no vertex of the form $w[T]$ can occur in the hyperpath (by minimality). Similarly, no vertex of the form $c_i$ belongs to the hyperpath (otherwise, it would also contain a vertex of the form $w[T]$). It follows that the hyperpath $a'_1, \dots, a'_q$ is also a hyperpath in the hypergraph ${\mathcal{H}}({\mathcal{F}},D)$. Applying Lemma \[lemma:reachable\] then shows that $S' \subseteq S$. It remains to show that the latter inclusion is strict. Similarly, let $a''_1, \dots, a''_r$ be a minimal hyperpath from $v[S]$ to $w[S']$ not containing ${\mathit{superset}}$. Then the tail of $a''_r$ is necessarily reduced to the vertex $c_i$, where $i = \card{S'}$, and its head is $\{w[S']\}$. It follows that $a''_1, \dots, a''_{r-1}$ is a hyperpath from $v[S]$ to $c_i$ not containing ${\mathit{superset}}$. Now suppose that $i \geq \card{S}$. Let $j \geq i$ be the greatest integer such that $c_j$ appears in the hyperpath $a''_1, \dots, a''_{r-1}$. Necessarily, one of the hyperarcs in the hyperpath is of the form $(\{v[T]\},\{c_j\})$, so that $v[T]$ is reachable from $v[S]$ through a hyperpath not passing through the vertex ${\mathit{superset}}$. It follows from the previous discussion that $T \subseteq S$. But $\card{T} = j+1 > i \geq \card{S}$, which is a contradiction. This shows that $i = \card{S'} < \card{S}$, hence $S' \subsetneq S$. Since every vertex of the form $v[S]$ is reachable from ${\mathit{superset}}$, the minimal sets of the family ${\mathcal{F}}$ are precisely the sets $S$ such that $v[S]$ does not belong to the [<span style="font-variant:small-caps;">Scc</span>]{} of the vertex ${\mathit{superset}}$. This proves the following complexity reduction: \[th:minimal\_set\_pb\_reduction\] The minimal set problem can be reduced in linear time to the problem of determining the strongly connected components in a directed hypergraph. We assume the existence of an oracle providing the [<span style="font-variant:small-caps;">Scc</span>]{}s of any directed hypergraph. Consider an instance $({\mathcal{F}},D)$ of the minimal set problem. The hypergraph ${\overline{\mathcal{H}}}({\mathcal{F}},D)$ can be built in linear time in the size of the input. Calling the oracle on ${\overline{\mathcal{H}}}({\mathcal{F}},D)$ yields its [<span style="font-variant:small-caps;">Scc</span>]{}s. Then, by examining each [<span style="font-variant:small-caps;">Scc</span>]{} and its content, we collect the sets $S \in {\mathcal{F}}$ such that $v[S]$ does not belong to the same component as the vertex ${\mathit{superset}}$. We finally return these sets. By Proposition \[prop:equivalence\], they are precisely the minimal sets in the family ${\mathcal{F}}$. Another interesting combinatorial problem is to decide whether a collection of sets is a Sperner family, [*i.e.*]{} a family whose sets are pairwise incomparable.
As a consequence of Theorem \[th:minimal\_set\_pb\_reduction\], it can be shown that the problem of deciding whether a collection of sets is a Sperner family can be reduced in linear time to the problem of determining the [<span style="font-variant:small-caps;">Scc</span>]{}s in a directed hypergraph. The Sperner family problem can indeed be reduced in linear time to the minimal set problem, by examining whether the number of minimal sets of ${\mathcal{F}}$ is equal to the cardinality of ${\mathcal{F}}$. In a similar way, we can also exhibit a linear time reduction from the problem of determining a linear extension of the subset partial order over a family of sets, to the problem of topologically sorting the vertices of an acyclic directed hypergraph. The *topological sort* of an acyclic directed hypergraph ${\mathcal{H}}$ refers to a total ordering $\preceq$ of the vertices such that $u \preceq v$ as soon as $u {{\rightsquigarrow}}_{\mathcal{H}}v$. The idea is to use the directed hypergraph ${\mathcal{H}}({\mathcal{F}},D)$ (which can be built in linear time in the size of $({\mathcal{F}},D)$). This hypergraph can be shown to be acyclic (under the assumption $\card{S} > 1$ for all $S \in {\mathcal{F}}$). By Lemma \[lemma:reachable\], it is straightforward that inverting and restricting a topological ordering over the vertices of the form $v[S]$ provides a linear extension of the partial order over ${\mathcal{F}}$. To our knowledge, the problem of determining a linear extension of the subset partial order has not been particularly studied. It is probably difficult to solve this problem without examining a significant part of the subset partial order (or at least of a sparse representation such as its transitive reduction).

Conclusion {#subsec:complexity}
==========

In this paper, we have proved that all terminal [<span style="font-variant:small-caps;">Scc</span>]{}s can be determined in only almost linear time (Theorems \[th:correctness\] and \[th:complexity\]). As a consequence, two other problems, testing strong connectivity and the existence of a sink, can be solved in almost linear time. The problem of computing all [<span style="font-variant:small-caps;">Scc</span>]{}s appears to be much harder. We conclude with the following questions: \[op1\] Is it possible to compute the strongly connected components in directed hypergraphs with the same time and space complexity as in directed graphs? \[op2\] Is it possible to “break” the partial lower bound $\Omega(N^2/\log^2 N)$ provided by Corollary \[cor:scc\_lower\_bound\]? The results established in Section \[sec:combinatorics\] on the size of the transitive reduction of the reachability relation in hypergraphs (Theorem \[th:transitive\_reduction\_lower\_bound\]), and on the reduction from the minimal set problem (Theorem \[th:minimal\_set\_pb\_reduction\]), show that the answer to Question \[op1\] is likely to be “No” (at least considering “reasonable” models of computation, like the RAM model). Corollary \[cor:scc\_lower\_bound\] indicates that solving Question \[op2\] would require designing an algorithm capturing only a part of the reachability relation (or a transitive reduction). This part should, however, be sufficiently large to correctly identify the [<span style="font-variant:small-caps;">Scc</span>]{}s. In any case, the directed hypergraphs ${\mathcal{H}}({\mathcal{F}},D)$ and ${\overline{\mathcal{H}}}({\mathcal{F}},D)$ constructed in Section \[sec:combinatorics\] provide useful examples to study the problem.
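To make the constructions of Section \[sec:combinatorics\] concrete, the following sketch assembles the additional hyperarcs of ${\overline{\mathcal{H}}}({\mathcal{F}},D)$ on top of those of ${\mathcal{H}}({\mathcal{F}},D)$ (which are assumed to be given, since their definition appears earlier in the paper) and applies the reduction of Theorem \[th:minimal\_set\_pb\_reduction\]. All names are illustrative: `scc_oracle` is a placeholder for any procedure returning, for each vertex, the identifier of its [<span style="font-variant:small-caps;">Scc</span>]{} in a directed hypergraph, and hyperarcs are represented as (tail, head) pairs of frozensets for readability only.

```python
def extend_to_hbar(h_arcs, family, domain_size):
    """Add to the hyperarcs of H(F, D) (assumed given) the extra hyperarcs of
    H-bar(F, D); a hyperarc is a pair (tail, head) of frozensets of vertices."""
    family = [frozenset(s) for s in family]
    v = lambda s: ("v", tuple(sorted(s)))
    w = lambda s: ("w", tuple(sorted(s)))
    c = lambda i: ("c", i)
    arcs = list(h_arcs)
    for s in family:
        arcs.append((frozenset([v(s)]), frozenset([c(len(s) - 1)])))     # {v[S]} -> {c_{|S|-1}}
        arcs.append((frozenset([v(s), w(s)]), frozenset(["superset"])))  # {v[S], w[S]} -> {superset}
        arcs.append((frozenset(["superset"]), frozenset([v(s)])))        # {superset} -> {v[S]}
    for i in range(domain_size + 1):
        heads = frozenset(w(s) for s in family if len(s) == i)
        if heads:                                                        # degenerate empty heads skipped
            arcs.append((frozenset([c(i)]), heads))                      # {c_i} -> {w[S] : |S| = i}
        if i > 0:
            arcs.append((frozenset([c(i)]), frozenset([c(i - 1)])))      # {c_i} -> {c_{i-1}}
    return arcs

def minimal_sets_via_scc(h_arcs, family, domain_size, scc_oracle):
    """Reduction of Theorem [th:minimal_set_pb_reduction]: S is minimal in F
    if and only if v[S] is not in the Scc of the vertex 'superset'."""
    component_of = scc_oracle(extend_to_hbar(h_arcs, family, domain_size))
    v = lambda s: ("v", tuple(sorted(s)))
    return [set(s) for s in family
            if component_of[v(s)] != component_of["superset"]]
```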
[LMM[[$^{+}$]{}]{}12]{} Giorgio Ausiello, Alessandro D’Atri, and Domenico Saccà, *Graph algorithms for functional dependency manipulation*, J. ACM **30** (1983), 752–766. G Ausiello, A D’Atri, and D Saccá, *Minimal representation of directed hypergraphs*, SIAM J. Comput. **15** (1986), 418–431. Giorgio Ausiello, Paolo Giulio Franciosa, and Daniele Frigioni, *Directed hypergraphs: Problems, algorithmic results, and a novel decremental approach*, Theoretical Computer Science, 7th Italian Conference, ICTCS 2001, Proceedings (Antonio Restivo, Simona Ronchi Della Rocca, and Luca Roversi, eds.), Lecture Notes in Computer Science, vol. 2202, Springer, 2001, pp. 312–327. Giorgio Ausiello, Paolo Giulio Franciosa, Daniele Frigioni, and Roberto Giaccio, *Decremental maintenance of reachability in hypergraphs and minimum models of horn formulae*, Algorithms and Computation, 8th International Symposium, ISAAC ’97, Singapore, December 17-19, 1997, Proceedings (Hon Wai Leong, Hiroshi Imai, and Sanjay Jain, eds.), Lecture Notes in Computer Science, vol. 1350, Springer, 1997, pp. 122–131. X. Allamigeon, S. Gaubert, and E. Goubault, *Inferring min and max invariants using max-plus polyhedra*, Proceedings of the 15th International Static Analysis Symposium (SAS’08), Lecture Notes in Comput. Sci., vol. 5079, Springer, Valencia, Spain, 2008, pp. 189–204. [to3em]{}, *The tropical double description method*, Proceedings of the 27th International Symposium on Theoretical Aspects of Computer Science (STACS 2010) (Dagstuhl, Germany) (J.-Y. Marion and Th. Schwentick, eds.), Leibniz International Proceedings in Informatics (LIPIcs), vol. 5, Schloss Dagstuhl–Leibniz-Zentrum fuer Informatik, 2010, pp. 47–58. [to3em]{}, *Computing the vertices of tropical polyhedra using directed hypergraphs*, Discrete & Computational Geometry **49** (2013), no. 2, 247–279. Alfred V. Aho, M. R. Garey, and Jeffrey D. Ullman, *The transitive reduction of a directed graph*, SIAM Journal on Computing **1** (1972), no. 2, 131–137. Giorgio Ausiello and Giuseppe F. Italiano, *On-line algorithms for polynomially solvable satisfiability problems*, J. Log. Program. **10** (1991), no. 1/2/3[&]{}4, 69–90. Giorgio Ausiello, Giuseppe Italiano, Luigi Laura, Umberto Nanni, and Fabiano Sarracco, *Structure theorems for optimum hyperpaths in directed hypergraphs*, Combinatorial Optimization (A. Mahjoub, Vangelis Markakis, Ioannis Milis, and Vangelis Paschos, eds.), Lecture Notes in Computer Science, vol. 7422, Springer Berlin / Heidelberg, 2012, pp. 1–14. Xavier Allamigeon, *[TPLib]{}: Tropical polyhedra library*, 2009, Distributed under LGPL, available at <https://gforge.inria.fr/projects/tplib>. Giorgio Ausiello, Umberto Nanni, and Giuseppe F. Italiano, *Dynamic maintenance of directed hypergraphs*, Theoretical Computer Science **72** (1990), no. 2-3, 97 – 117. Roberto J. Bayardo and Biswanath Panda, *Fast algorithms for finding extremal sets*, Proceedings of the Eleventh SIAM International Conference on Data Mining, SDM 2011, April 28-30, 2011, Mesa, Arizona, USA, SIAM / Omnipress, 2011, pp. 25–34. J. Cheriyan and K. Mehlhorn, *Algorithms for dense graphs and networks on the random access computer*, Algorithmica **15** (1996), 521–549. Thomas H. Cormen, Clifford Stein, Ronald L. Rivest, and Charles E. Leiserson, *Introduction to algorithms*, McGraw-Hill Higher Education, 2001. Amr Elmasry, *Computing the subset partial order for dense families of sets*, Information Processing Letters **109** (2009), no. 18, 1082 – 1086. Harold N. 
Gabow, *Path-based depth-first search for strong and biconnected components*, Inf. Process. Lett. **74** (2000), no. 3-4, 107–114. S. Gaubert and J. Gunawardena, *The [P]{}erron-[F]{}robenius theorem for homogeneous, monotone functions*, Trans. of AMS **356** (2004), no. 12, 4931–4950. Giorgio Gallo, Claudio Gentile, Daniele Pretolani, and Gabriella Rago, *Max horn sat and the minimum cut problem in directed hypergraphs*, Math. Program. **80** (1998), 213–237. Giorgio Gallo, Giustino Longo, Stefano Pallottino, and Sang Nguyen, *Directed hypergraphs and applications*, Discrete Appl. Math. **42** (1993), no. 2-3, 177–201. Giorgio Gallo and Daniele Pretolani, *A new algorithm for the propositional satisfiability problem*, Discrete Applied Mathematics **60** (1995), no. 1-3, 159–179. Parke Godfrey, Ryan Shipley, and Jarek Gryz, *Maximal vector computation in large data sets*, Proceedings of the 31st international conference on Very large data bases, VLDB ’05, VLDB Endowment, 2005, pp. 229–240. R. D. Katz, *Max-plus [$(A,B)$]{}-invariant spaces and control of timed discrete event systems*, IEEE Trans. Aut. Control **52** (2007), no. 2, 229–241. H. T. Kung, F. Luccio, and F. P. Preparata, *On finding the maxima of a set of vectors*, J. ACM **22** (1975), 469–476. David G. Kirkpatrick and Raimund Seidel, *Output-size sensitive algorithms for finding maximal vectors*, Proceedings of the first annual symposium on Computational geometry (New York, NY, USA), SCG ’85, ACM, 1985, pp. 89–96. Qi Lu, Michael Madsen, Martin Milata, S[ø]{}ren Ravn, Uli Fahrenberg, and Kim G. Larsen, *Reachability analysis for timed automata using max-plus algebra*, The Journal of Logic and Algebraic Programming **81** (2012), no. 3, 298–313. Xinxin Liu and Scott A. Smolka, *Simple linear-time algorithms for minimal fixed points (extended abstract)*, Automata, Languages and Programming, 25th International Colloquium, ICALP’98, Proceedings (Kim Guldstrand Larsen, Sven Skyum, and Glynn Winskel, eds.), Lecture Notes in Computer Science, vol. 1443, Springer, 1998, pp. 53–66. S. Nguyen and S. Pallottino, *Hyperpaths and shortest hyperpaths*, COMO ’86: Lectures given at the third session of the Centro Internazionale Matematico Estivo (C.I.M.E.) on Combinatorial optimization (New York, NY, USA), Lectures Notes in Mathematics, Springer-Verlag New York, Inc., 1989, pp. 258–271. Lars Relund Nielsen, Daniele Pretolani, and Kim Allan Andersen, *Finding the k shortest hyperpaths using reoptimization*, Operations Research Letters **34** (2006), no. 2, 155 – 164. Sang Nguyen, Stefano Pallottino, and Michel Gendreau, *Implicit enumeration of hyperpaths in a logit model for transit networks*, Transportation Science **32** (1998), no. 1, 54–64. Can C. [Ö]{}zturan, *On finding hypercycles in chemical reaction networks*, Appl. Math. Lett. **21** (2008), no. 9, 881–884. Daniele Pretolani, *A directed hypergraph model for random time dependent shortest paths*, European Journal of Operational Research **123** (2000), no. 2, 315–324. Daniele Pretolani, *Hypergraph reductions and satisfiability problems*, Theory and Applications of Satisfiability Testing, 6th International Conference, SAT 2003 (Enrico Giunchiglia and Armando Tacchella, eds.), Lecture Notes in Computer Science, vol. 2919, Springer, 2003, pp. 383–397. Paul Pritchard, *Opportunistic algorithms for eliminating supersets*, Acta Informatica **28** (1991), 733–754. 
Paul Pritchard, *A simple sub-quadratic algorithm for computing the subset partial order*, Information Processing Letters **56** (1995), no. 6, 337–341. Paul Pritchard, *A fast bit-parallel algorithm for computing the subset partial order*, Algorithmica **24** (1999), 76–86. Paul Pritchard, *On computing the subset graph of a collection of sets*, Journal of Algorithms **33** (1999), no. 2, 187–203. Robert Tarjan, *Depth-first search and linear graph algorithms*, SIAM Journal on Computing **1** (1972), no. 2, 146–160. Mayur Thakur and Rahul Tripathi, *Linear connectivity problems in directed hypergraphs*, Theor. Comput. Sci. **410** (2009), 2592–2618. Daniel M. Yellin, *Algorithms for subset testing and finding maximal sets*, Proceedings of the third annual ACM-SIAM symposium on Discrete algorithms (Philadelphia, PA, USA), SODA ’92, Society for Industrial and Applied Mathematics, 1992, pp. 386–392. Daniel M. Yellin and Charanjit S. Jutla, *Finding extremal sets in less than quadratic time*, Information Processing Letters **48** (1993), no. 1, 29–34.

An Example of Complete Execution Trace of the Algorithm of Section \[sec:maxscc\] {#sec:execution\_trace}
=================================================================================

We give the main steps of the execution of the algorithm on the directed hypergraph depicted in Figure \[fig:hypergraph\]:

(Figure \[fig:hypergraph\]: a directed hypergraph over the vertices $u,v,w,x,y,t$, with the simple hyperarcs $a_1 = (\{u\},\{v\})$, $a_2 = (\{v\},\{w\})$, $a_3 = (\{w\},\{u\})$, and the hyperarcs $a_4$, with $T(a_4) = \{v,w\}$ and $H(a_4) = \{x,y\}$, and $a_5$, with $T(a_5) = \{w,y\}$.)

Vertices are depicted by solid circles if their index is defined, and by dashed circles otherwise. Once a vertex is placed into ${\mathit{Finished}}$, it is depicted in gray. Similarly, a hyperarc which has never been placed into a local stack $F$ is represented by dotted lines. Once it is pushed into $F$, it becomes solid, and when it is popped from $F$, it is colored in gray (note that for the sake of readability, gray hyperarcs mapped to trivial cycles after a vertex merging step will not be represented). The stack $F$ which is mentioned always corresponds to the stack local to the last non-terminated call of the function $\Call{Visit}{}$. Initially, ${\Call{Find}}{z} = z$ for all $z \in \{ u,v,w,x,y,t \}$. We suppose that $\Call{Visit}{u}$ is called first. After the execution of the block from Lines [[\[scc:begin\]]{}]{} to [[\[scc:end\_node\_loop\]]{}]{}, the current state is:

*State:* ${\mathit{index}}[u] = {\mathit{low}}[u] = 0$, ${\mathit{is\_term}}[u] = \True$; $S = [u]$, $n = 1$, $F = [a_1]$.

Following the hyperarc $a_1$, $\Call{Visit}{v}$ is called during the execution of the block from Lines [[\[scc:begin\_edge\_loop\]]{}]{} to [[\[scc:end\_edge\_loop\]]{}]{} of $\Call{Visit}{u}$.
After Line [[\[scc:end\_node\_loop\]]{}]{} in $\Call{Visit}{v}$, the root of the hyperarc $a_4$ is set to $v$, and the counter $c_{a_4}$ is incremented to $1$ since $v \in S$. The state is:

*State:* ${\mathit{index}}[u] = {\mathit{low}}[u] = 0$, ${\mathit{index}}[v] = {\mathit{low}}[v] = 1$, ${\mathit{is\_term}}[u] = {\mathit{is\_term}}[v] = \True$; $r_{a_4} = v$, $c_{a_4} = 1$; $S = [v; u]$, $n = 2$, $F = [a_2]$.

Similarly, the function $\Call{Visit}{w}$ is called during the execution of the loop from Lines [[\[scc:begin\_edge\_loop\]]{}]{} to [[\[scc:end\_edge\_loop\]]{}]{} in $\Call{Visit}{v}$. After Line [[\[scc:end\_node\_loop\]]{}]{} in $\Call{Visit}{w}$, the root of the hyperarc $a_5$ is set to $w$, and the counter $c_{a_5}$ is incremented to $1$ since $w \in S$. Besides, $c_{a_4}$ is incremented to $2 = \card{T(a_4)}$ since ${\Call{Find}}{r_{a_4}} = {\Call{Find}}{v} = v \in S$, so that $a_4$ is pushed on the stack $F_v$. The state is:

*State:* ${\mathit{index}}[u] = {\mathit{low}}[u] = 0$, ${\mathit{index}}[v] = {\mathit{low}}[v] = 1$, ${\mathit{index}}[w] = {\mathit{low}}[w] = 2$, ${\mathit{is\_term}}[u] = {\mathit{is\_term}}[v] = {\mathit{is\_term}}[w] = \True$; $r_{a_4} = v$, $c_{a_4} = 2$; $r_{a_5} = w$, $c_{a_5} = 1$; $S = [w; v; u]$, $n = 3$, $F = [a_3]$, $F_{v} = [a_4]$.

The execution of the loop from Lines [[\[scc:begin\_edge\_loop\]]{}]{} to [[\[scc:end\_edge\_loop\]]{}]{} of $\Call{Visit}{w}$ discovers that ${\mathit{index}}[u]$ is defined but $u \not \in {\mathit{Finished}}$, so that ${\mathit{low}}[w]$ is set to $\min({\mathit{low}}[w],{\mathit{low}}[u]) = 0$ and ${\mathit{is\_term}}[w]$ to ${\mathit{is\_term}}[w] \And {\mathit{is\_term}}[u] = \True$.
At the end of the loop, the state is therefore:

*State:* ${\mathit{index}}[u] = {\mathit{low}}[u] = 0$, ${\mathit{index}}[v] = {\mathit{low}}[v] = 1$, ${\mathit{index}}[w] = 2$, ${\mathit{low}}[w] = 0$, ${\mathit{is\_term}}[u] = {\mathit{is\_term}}[v] = {\mathit{is\_term}}[w] = \True$; $r_{a_4} = v$, $c_{a_4} = 2$; $r_{a_5} = w$, $c_{a_5} = 1$; $S = [w; v; u]$, $n = 3$, $F = {[\,]}$, $F_{v} = [a_4]$.

Since ${\mathit{low}}[w] \neq {\mathit{index}}[w]$, the block from Lines [[\[scc:begin2\]]{}]{} to [[\[scc:end2\]]{}]{} is not executed, and $\Call{Visit}{w}$ terminates. Back to the loop from Lines [[\[scc:begin\_edge\_loop\]]{}]{} to [[\[scc:end\_edge\_loop\]]{}]{} in $\Call{Visit}{v}$, ${\mathit{low}}[v]$ is assigned to the value $\min({\mathit{low}}[v],{\mathit{low}}[w]) = 0$, and ${\mathit{is\_term}}[v]$ to ${\mathit{is\_term}}[v] \And {\mathit{is\_term}}[w] = \True$:

*State:* ${\mathit{index}}[u] = {\mathit{low}}[u] = 0$, ${\mathit{index}}[v] = 1$, ${\mathit{low}}[v] = 0$, ${\mathit{index}}[w] = 2$, ${\mathit{low}}[w] = 0$, ${\mathit{is\_term}}[u] = {\mathit{is\_term}}[v] = {\mathit{is\_term}}[w] = \True$; $r_{a_4} = v$, $c_{a_4} = 2$; $r_{a_5} = w$, $c_{a_5} = 1$; $S = [w; v; u]$, $n = 3$, $F = {[\,]}$, $F_{v} = [a_4]$.

Since ${\mathit{low}}[v] \neq {\mathit{index}}[v]$, the block from Lines [[\[scc:begin2\]]{}]{} to [[\[scc:end2\]]{}]{} is not executed, and $\Call{Visit}{v}$ terminates. Back to the loop from Lines [[\[scc:begin\_edge\_loop\]]{}]{} to [[\[scc:end\_edge\_loop\]]{}]{} in $\Call{Visit}{u}$, ${\mathit{low}}[u]$ is assigned to the value $\min({\mathit{low}}[u],{\mathit{low}}[v]) = 0$, and ${\mathit{is\_term}}[u]$ to ${\mathit{is\_term}}[u] \And {\mathit{is\_term}}[v] = \True$. Therefore, at Line [[\[scc:begin2\]]{}]{}, the conditions ${\mathit{low}}[u] = {\mathit{index}}[u]$ and ${\mathit{is\_term}}[u] = \True$ hold, so that a vertex merging step is executed. At that point, the stack $F$ is empty.
After that, $i$ is set to ${\mathit{index}}[u] = 0$ (Line [[\[scc:begin\_node\_merging\]]{}]{}), and $F_u = {[\,]}$ is emptied into $F$ (Line [[\[scc:push\_on\_fprime1\]]{}]{}), so that $F$ is still empty. Then $w$ is popped from $S$, and since ${\mathit{index}}[w] = 2 > i = 0$, the loop from Lines [[\[scc:begin\_node\_merging\_loop\]]{}]{} to [[\[scc:end\_node\_merging\_loop\]]{}]{} is iterated. Then the stack $F_w = {[\,]}$ is emptied into $F$. At Line [[\[scc:merge\]]{}]{}, the function $\Call{Merge}{}$ is called. The result is denoted by $U$ (in practice, either $U = u$ or $U = w$). The state is:

*State:* ${\mathit{index}}[v] = 1$, ${\mathit{low}}[v] = 0$, ${\mathit{is\_term}}[v] = \True$; ${\mathit{index}}[U] = 0$ or $2$, ${\mathit{low}}[U] = 0$, ${\mathit{is\_term}}[U] = \True$; $r_{a_4} = v$, $c_{a_4} = 2$; $r_{a_5} = w$, $c_{a_5} = 1$; $S = [v; u]$, $n = 3$, $F_{v} = [a_4]$, $i = 0$, $F = {[\,]}$, $U = {\Call{Find}}{u} = {\Call{Find}}{w}$.

Then $v$ is popped from $S$, and since ${\mathit{index}}[v] = 1 > i = 0$, the loop from Lines [[\[scc:begin\_node\_merging\_loop\]]{}]{} to [[\[scc:end\_node\_merging\_loop\]]{}]{} is iterated again. Then the stack $F_v = [a_4]$ is emptied into $F$. At Line [[\[scc:merge\]]{}]{}, the function $\Call{Merge}{}$ is called again. The result is set to $U$ (in practice, $U$ is one of the vertices $u$, $v$, $w$). The state is:

*State:* ${\mathit{index}}[U] = 0$, $1$, or $2$, ${\mathit{low}}[U] = 0$, ${\mathit{is\_term}}[U] = \True$; $r_{a_5} = w$, $c_{a_5} = 1$; $S = [u]$, $n = 3$, $F_{v} = {[\,]}$, $i = 0$, $F = [a_4]$, $U = {\Call{Find}}{u} = {\Call{Find}}{v} = {\Call{Find}}{w}$.

After that, $u$ is popped from $S$, and as ${\mathit{index}}[u] = 0 = i$, the loop is terminated. At Line [[\[scc:index\_redef\]]{}]{}, ${\mathit{index}}[U]$ is set to $i$, and $U$ is pushed on $S$.
Since $F \neq \emptyset$, we go back to Line [[\[scc:begin\_edge\_loop\]]{}]{}, in the state:

*State:* ${\mathit{index}}[U] = {\mathit{low}}[U] = 0$, ${\mathit{is\_term}}[U] = \True$; $r_{a_5} = w$, $c_{a_5} = 1$; $S = [U]$, $n = 3$, $F = [a_4]$, $U = {\Call{Find}}{u} = {\Call{Find}}{v} = {\Call{Find}}{w}$.

Then $a_4$ is popped from $F$, and the loop from Lines [[\[scc:begin\_edge\_loop2\]]{}]{} to [[\[scc:end\_edge\_loop2\]]{}]{} iterates over $H(a_4) = \{x,y\}$. Suppose that $x$ is treated first. Then $\Call{Visit}{x}$ is called. During its execution, at Line [[\[scc:end\_node\_loop\]]{}]{}, the state is:

*State:* ${\mathit{index}}[U] = {\mathit{low}}[U] = 0$, ${\mathit{is\_term}}[U] = \True$; ${\mathit{index}}[x] = {\mathit{low}}[x] = 3$, ${\mathit{is\_term}}[x] = \True$; $r_{a_5} = w$, $c_{a_5} = 1$; $S = [x;U]$, $n = 4$, $F = {[\,]}$, $U = {\Call{Find}}{u} = {\Call{Find}}{v} = {\Call{Find}}{w}$.

Since $F$ is empty, the loop from Lines [[\[scc:begin\_edge\_loop\]]{}]{} to [[\[scc:end\_edge\_loop\]]{}]{} is not executed. At Line [[\[scc:begin2\]]{}]{}, ${\mathit{low}}[x] = {\mathit{index}}[x]$ and ${\mathit{is\_term}}[x] = \True$, so that a trivial vertex merging step is performed, only on $x$, since it is the top element of $S$. After Line [[\[scc:index\_redef\]]{}]{}, it can be verified that $S = [x; U]$, ${\mathit{index}}[x] = 3$ and $F = {[\,]}$. Therefore, the goto statement at Line [[\[scc:goto\]]{}]{} is not executed.
It follows that the loop from Lines [[\[scc:begin\_non\_max\_scc\_loop\]]{}]{} to [[\[scc:end\_non\_max\_scc\_loop\]]{}]{} is executed, and after that, the state is:

*State:* ${\mathit{index}}[U] = {\mathit{low}}[U] = 0$, ${\mathit{is\_term}}[U] = \True$; ${\mathit{index}}[x] = {\mathit{low}}[x] = 3$, ${\mathit{is\_term}}[x] = \True$; $r_{a_5} = w$, $c_{a_5} = 1$; $S = [U]$, $n = 4$, $F = {[\,]}$, $U = {\Call{Find}}{u} = {\Call{Find}}{v} = {\Call{Find}}{w}$, ${\mathit{Finished}}= \{ x \}$.

After the termination of $\Call{Visit}{x}$, since $x \in {\mathit{Finished}}$, ${\mathit{is\_term}}[U]$ is set to $\False$. After that, $\Call{Visit}{y}$ is called, and at Line [[\[scc:end\_node\_loop\]]{}]{}, it can be checked that $c_{a_5}$ has been incremented to $2 = \card{T(a_5)}$ because $R_{a_5} = {\Call{Find}}{r_{a_5}} = {\Call{Find}}{w} = U$ and $U \in S$. Therefore, $a_5$ is pushed to $F_U$, and the state is:

*State:* ${\mathit{index}}[U] = {\mathit{low}}[U] = 0$, ${\mathit{is\_term}}[U] = \False$; ${\mathit{index}}[x] = {\mathit{low}}[x] = 3$, ${\mathit{is\_term}}[x] = \True$; ${\mathit{index}}[y] = {\mathit{low}}[y] = 4$, ${\mathit{is\_term}}[y] = \True$; $r_{a_5} = w$, $c_{a_5} = 2$; $S = [y;U]$, $n = 5$, $F = {[\,]}$, $F_U = [a_5]$, $U = {\Call{Find}}{u} = {\Call{Find}}{v} = {\Call{Find}}{w}$, ${\mathit{Finished}}= \{ x \}$.

As for the vertex $x$, $\Call{Visit}{y}$ terminates by popping $y$ from $S$ and adding it to ${\mathit{Finished}}$.
Back to the execution of $\Call{Visit}{U}$, at Line [[\[scc:begin2\]]{}]{}, the state is:

*State:* ${\mathit{index}}[U] = {\mathit{low}}[U] = 0$, ${\mathit{is\_term}}[U] = \False$; ${\mathit{index}}[x] = {\mathit{low}}[x] = 3$, ${\mathit{is\_term}}[x] = \True$; ${\mathit{index}}[y] = {\mathit{low}}[y] = 4$, ${\mathit{is\_term}}[y] = \True$; $r_{a_5} = w$, $c_{a_5} = 2$; $S = [U]$, $n = 5$, $F = {[\,]}$, $F_U = [a_5]$, $U = {\Call{Find}}{u} = {\Call{Find}}{v} = {\Call{Find}}{w}$, ${\mathit{Finished}}= \{ y, x \}$.

Although ${\mathit{low}}[U] = {\mathit{index}}[U]$, ${\mathit{is\_term}}[U]$ is equal to $\False$, so that no vertex merging loop is performed on $U$. Therefore, $a_5$ is not popped from $F_U$. Nevertheless, the loop from Lines [[\[scc:begin\_non\_max\_scc\_loop\]]{}]{} to [[\[scc:end\_non\_max\_scc\_loop\]]{}]{} is executed, and after that, $\Call{Visit}{u}$ is terminated in the state:

*State:* ${\mathit{index}}[U] = {\mathit{low}}[U] = 0$, ${\mathit{is\_term}}[U] = \False$; ${\mathit{index}}[x] = {\mathit{low}}[x] = 3$, ${\mathit{is\_term}}[x] = \True$; ${\mathit{index}}[y] = {\mathit{low}}[y] = 4$, ${\mathit{is\_term}}[y] = \True$; $r_{a_5} = w$, $c_{a_5} = 2$; $S = {[\,]}$, $n = 5$, $F = {[\,]}$, $F_U = [a_5]$, $U = {\Call{Find}}{u} = {\Call{Find}}{v} = {\Call{Find}}{w}$, ${\mathit{Finished}}= \{ U, y, x \}$.

Finally, $\Call{Visit}{t}$ is called at Line [[\[scc:visit\_call\]]{}]{}. It can be verified that a trivial vertex merging loop is performed on $t$ only. After that, $t$ is placed into ${\mathit{Finished}}$.
Therefore, the final state of the algorithm is:

*State:* ${\mathit{index}}[U] = {\mathit{low}}[U] = 0$, ${\mathit{is\_term}}[U] = \False$; ${\mathit{index}}[x] = {\mathit{low}}[x] = 3$, ${\mathit{is\_term}}[x] = \True$; ${\mathit{index}}[y] = {\mathit{low}}[y] = 4$, ${\mathit{is\_term}}[y] = \True$; ${\mathit{index}}[t] = {\mathit{low}}[t] = 5$, ${\mathit{is\_term}}[t] = \True$; $r_{a_5} = w$, $c_{a_5} = 2$; $S = {[\,]}$, $n = 6$, $F_U = [a_5]$, $U = {\Call{Find}}{u} = {\Call{Find}}{v} = {\Call{Find}}{w}$, ${\mathit{Finished}}= \{ t, U, y, x \}$.

As ${\mathit{is\_term}}[x] = {\mathit{is\_term}}[y] = {\mathit{is\_term}}[t] = \True$ and ${\mathit{is\_term}}[{\Call{Find}}{z}] = \False$ for $z = u, v, w$, there are three terminal [<span style="font-variant:small-caps;">Scc</span>]{}s, given by the sets: $$\begin{aligned} \{ z \mid {\Call{Find}}{z} = x \} & = \{x\}, \\ \{ z \mid {\Call{Find}}{z} = y \} & = \{y\}, \\ \{ z \mid {\Call{Find}}{z} = t \} & = \{t\}.\end{aligned}$$

Proof of Theorem \[th:correctness\] {#sec:correctness\_proof}
===================================

The correctness proof of the algorithm turns out to be harder than for algorithms on directed graphs such as Tarjan’s [@Tarjan72], due to the complexity of the invariants which arise in the former algorithm. That is why we propose to show the correctness of two intermediary algorithms, named $\Call{TerminalScc2}{}$ (Figure \[fig:maxscc2\]) and $\Call{TerminalScc3}{}$ (Figure \[fig:maxscc3\]), and then to prove that they are equivalent to the original algorithm of Section \[sec:maxscc\].
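Both the execution trace above and the proofs below manipulate merged vertices only through the two operations $\Call{Find}{}$ and $\Call{Merge}{}$. The paper does not prescribe a particular implementation of this union-find structure; the following sketch (path compression and union by rank, one standard choice compatible with the almost-linear complexity bound, under illustrative names) merely fixes the interface used in the trace.

```python
class UnionFind:
    """Disjoint-set structure offering the Find/Merge interface used in the trace."""
    def __init__(self, vertices):
        self.parent = {v: v for v in vertices}
        self.rank = {v: 0 for v in vertices}

    def find(self, v):
        # Path compression: hang v and its ancestors directly under the root.
        root = v
        while self.parent[root] != root:
            root = self.parent[root]
        while self.parent[v] != root:
            self.parent[v], v = root, self.parent[v]
        return root

    def merge(self, u, v):
        # Union by rank; returns the representative of the merged class,
        # mirroring "U <- Merge(U, V)" in the pseudo-code.
        ru, rv = self.find(u), self.find(v)
        if ru == rv:
            return ru
        if self.rank[ru] < self.rank[rv]:
            ru, rv = rv, ru
        self.parent[rv] = ru
        if self.rank[ru] == self.rank[rv]:
            self.rank[ru] += 1
        return ru

# Merging u, w and then v, as in the trace:
uf = UnionFind(["u", "v", "w", "x", "y", "t"])
U = uf.merge("u", "w")
U = uf.merge(U, "v")
assert uf.find("u") == uf.find("v") == uf.find("w") == U
```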
$n \gets 0$, $S \gets {[\,]}$, ${\mathit{Finished}}\gets \emptyset$ ${\mathit{collected}}_a \gets \False$ ${\mathit{index}}[u] \gets \Nil$ ${\mathit{low}}[u] \gets \Nil$ local $U \gets \Call{Find}{u}$\[scc2:find1\], local $F \gets \emptyset$\[scc2:begin\_atom1\] ${\mathit{index}}[U] \gets n$, ${\mathit{low}}[U] \gets n$ $n \gets n+1$ ${\mathit{is\_term}}[U] \gets \True$ push $U$ on the stack $S$\[scc2:push1\] local ${\mathit{no\_merge}}\gets \True$ $F \gets \{ a \in A \mid T(a) = \{ u \} \}$\[scc2:f\_assign\] ${\mathit{collected}}_a \gets \True$ \[scc2:end\_atom1\] \[scc2:begin\_edge\_loop\] pop $a$ from $F$ local $W \gets \Call{Find}{w}$\[scc2:find3\] \[scc2:rec\_call\] ${\mathit{is\_term}}[U] \gets \False$ ${\mathit{low}}[U] \gets \min({\mathit{low}}[U],{\mathit{low}}[W])$ ${\mathit{is\_term}}[U] \gets {\mathit{is\_term}}[U] \And {\mathit{is\_term}}[W]$ \[scc2:end\_edge\_loop\] \[scc2:is\_root\] local $i \gets {\mathit{index}}[U]$\[scc2:begin\_node\_merging\]\[scc2:begin\_atom2\] pop $V$ from $S$\[scc2:pop1\] \[scc2:begin\_node\_merging\_loop\] ${\mathit{no\_merge}}\gets \False$ $U \gets \Call{Merge}{U, V}$\[scc2:merge\] pop $V$ from $S$\[scc2:pop2\] \[scc2:end\_node\_merging\_loop\] push $U$ on $S$\[scc2:push2\] $F \gets \biggl\{ a \in A \Bigm| \begin{aligned} & {\mathit{collected}}_a = \False, \\[-1ex] & \forall x \in T(a),\Call{Find}{x} = U \end{aligned} \biggr\}$ \[scc2:f\_assign2\] ${\mathit{collected}}_a \gets \True$\[scc2:end\_atom2\]\[scc2:end\_node\_merging\] \[scc2:begin\_not\_single\] $n \gets i$, ${\mathit{index}}[U] \gets n$, $n \gets n+1$ \[scc2:nredefined\] ${\mathit{no\_merge}}\gets \True$, go to Line [[\[scc2:begin\_edge\_loop\]]{}]{}\[scc2:goto\] \[scc2:end\_not\_single\] \[scc2:begin\_non\_max\_scc\_loop\]\[scc2:begin\_atom3\] pop $V$ from $S$, add $V$ to ${\mathit{Finished}}$\[scc2:pop3\] \[scc2:end\_non\_max\_scc\_loop\]\[scc2:end\_atom3\] The main difference between the first intermediary form and is that it does not use auxiliary data associated to the hyperarcs to determine which ones are added to the digraph ${\mathsf{graph}}({\mathcal{H}}_{\mathit{cur}})$ after a vertex merging step. Instead, the stack $F$ is directly filled with the right hyperarcs (Lines [[\[scc2:f\_assign\]]{}]{} and [[\[scc2:f\_assign2\]]{}]{}). Besides, a boolean ${\mathit{no\_merge}}$ is used to determine whether a vertex merging step has been executed. The notion of *vertex merging step* is refined: it now refers to the execution of the instructions between Lines [[\[scc2:begin\_node\_merging\]]{}]{} and [[\[scc2:end\_node\_merging\]]{}]{} in which the boolean ${\mathit{no\_merge}}$ is set to $\False$. For the sake of simplicity, we will suppose that sequences of assignment or stack manipulations are executed atomically. For instance, the sequences of instructions located in the blocks from Lines [[\[scc2:begin\_atom1\]]{}]{} and [[\[scc2:end\_atom1\]]{}]{}, or from Lines [[\[scc2:begin\_atom2\]]{}]{} and [[\[scc2:end\_atom2\]]{}]{}, and at from Lines [[\[scc2:begin\_atom3\]]{}]{} to [[\[scc2:end\_atom3\]]{}]{}, are considered as elementary instructions. Under this assumption, intermediate complex invariants do not have to be considered. We first begin with very simple invariants: \[inv:tindex\] Let $U$ be a vertex of the current hypergraph ${\mathcal{H}}_{\mathit{cur}}$. Then ${\mathit{index}}[U]$ is defined if, and only if, ${\mathit{index}}[u]$ is defined for all $u \in {\mathcal{V}}$ such that ${\Call{Find}}{u} = U$. 
It can be shown by induction on the number of vertex merging steps which has been performed on $U$. In the basis case, there is a unique element $u \in {\mathcal{V}}$ such that ${\Call{Find}}{u} = U$. Besides, $U = u$, so that the statement is trivial. After a merging step yielding the vertex $U$, we necessarily have ${\mathit{index}}[U] \neq \Nil$. Moreover, all the vertices $V$ which has been merged into $U$ satisfied ${\mathit{index}}[V] \neq \Nil$ because they were stored in the stack $S$. Applying the induction hypothesis terminates the proof. \[inv:comporstack\] Let $u \in {\mathcal{V}}$. When ${\mathit{index}}[u]$ is defined, then $\Call{Find}{u}$ belongs either to the stack $S$, or to the set ${\mathit{Finished}}$ (both cases cannot happen simultaneously). Initially, ${\Call{Find}}{u} = u$, and once ${\mathit{index}}[u]$ is defined, ${\Call{Find}}{u}$ is pushed on $S$ (Line [[\[scc2:push1\]]{}]{}). Naturally, $u \not \in {\mathit{Finished}}$, because otherwise, ${\mathit{index}}[u]$ would have been defined before (see the condition Line [[\[scc2:end\_non\_max\_scc\_loop\]]{}]{}). After that, $U = {\Call{Find}}{u}$ can be popped from $S$ at three possible locations: - at Lines [[\[scc2:pop1\]]{}]{} or [[\[scc2:pop2\]]{}]{}, in which case $U$ is transformed into a vertex $U'$ which is immediately pushed on the stack $S$ at Line [[\[scc2:push2\]]{}]{}. Since after that, ${\Call{Find}}{u} = U'$, the property ${\Call{Find}}{u} \in S$ still holds. - at Line [[\[scc2:pop3\]]{}]{}, in which case it is directly appended to the set ${\mathit{Finished}}$. \[inv:incomp\] The set ${\mathit{Finished}}$ is always growing. Once an element is added to ${\mathit{Finished}}$, it is never removed from it nor merged into another vertex (the function is always called on elements immediately popped from the stack $S$). \[prop:maxscc2\] After the algorithm $\Call{TerminalScc2}{{\mathcal{H}}}$ terminates, the sets $\{ v \in {\mathcal{V}}\mid \Call{Find}{v} = U \text{ and } {\mathit{is\_term}}[U] = \True \}$ are precisely the terminal [<span style="font-variant:small-caps;">Scc</span>]{}s of ${\mathcal{H}}$. We prove the whole statement by induction on the number of vertex merging steps. *Basis Case.* First, suppose that the hypergraph ${\mathcal{H}}$ is such that no vertices are merged during the execution of , [*i.e.*]{} the vertex merging loop (from Lines [[\[scc2:begin\_node\_merging\_loop\]]{}]{} to [[\[scc2:end\_node\_merging\_loop\]]{}]{}) is never executed. Then the boolean ${\mathit{no\_merge}}$ is always set to $\True$, so that $n$ is never redefined to $i+1$ (Line [[\[scc2:nredefined\]]{}]{}), and there is no back edge to Line [[\[scc2:begin\_edge\_loop\]]{}]{} in the control-flow graph. It follows that removing all the lines between Lines [[\[scc2:begin\_node\_merging\]]{}]{} to [[\[scc2:end\_not\_single\]]{}]{} does not change the behavior of the algorithm. Besides, since the function is never called, $\Call{Find}{u}$ always coincides with $u$. Finally, at Line [[\[scc2:f\_assign\]]{}]{}, $F$ is precisely assigned to the set of simple hyperarcs leaving $u$ in ${\mathcal{H}}$, so that the loop from Lines [[\[scc2:begin\_edge\_loop\]]{}]{} to [[\[scc2:end\_edge\_loop\]]{}]{} iterates on the successors of $u$ in ${\mathsf{graph}}({\mathcal{H}})$. As a consequence, the algorithm behaves exactly like . 
Moreover, under our assumption, the terminal [<span style="font-variant:small-caps;">Scc</span>]{}s of ${\mathsf{graph}}({\mathcal{H}})$ are all reduced to singletons (otherwise, the loop from Lines [[\[scc2:begin\_node\_merging\_loop\]]{}]{} to [[\[scc2:end\_node\_merging\_loop\]]{}]{} would be executed, and some vertices would be merged). Therefore, by Proposition \[prop:terminal\_scc\], the statement in Proposition \[prop:maxscc2\] holds. *Inductive Case.* Suppose that the vertex merging loop is executed at least once, and that its first execution happens during the execution of, say, . Consider the state of the algorithm at Line [[\[scc2:begin\_node\_merging\]]{}]{} just before the execution of the first occurrence of the vertex merging step. Until that point, ${\Call{Find}}{v}$ is still equal to $v$ for all vertices $v \in {\mathcal{V}}$, so that the execution of coincides with the execution of . Consequently, if $C$ is the set formed by the vertices $y$ located above $x$ in the stack $S$ (including $x$), $C$ forms a terminal [<span style="font-variant:small-caps;">Scc</span>]{} of ${\mathsf{graph}}({\mathcal{H}})$. In particular, the elements of $C$ are located in a same [<span style="font-variant:small-caps;">Scc</span>]{} of the hypergraph ${\mathcal{H}}$. Consider the hypergraph ${\mathcal{H}}'$ obtained by merging the elements of $C$ in the hypergraph $({\mathcal{V}},A \setminus \{ a \mid \exists y \in C \text{ s.t. } T(a) = \{ y \} \})$, and let $X$ be the resulting vertex. For now, we may add a hypergraph as last argument of the functions , , [*etc*]{}, to distinguish their execution in the context of the call to or . We make the following observations: - the vertex $x$ is the first element of the component $C$ to be visited during the execution of . It follows that the execution of until the call to coincides with the execution of until the call to . - besides, during the execution of , the execution of the loop from Lines [[\[scc2:begin\_edge\_loop\]]{}]{} to [[\[scc2:end\_edge\_loop\]]{}]{} only has a local impact, [*i.e.*]{} on the ${\mathit{is\_term}}[y]$, ${\mathit{index}}[y]$, or ${\mathit{low}}[y]$ for $y \in C$, and not on any information relative to other vertices. Indeed, we claim that the set of the vertices $y$ on which is called during the execution of the loop is exactly $C \setminus \{ x \}$. First, for all $y \in C \setminus \{ x \}$, has necessarily been executed *after* Line [[\[scc2:begin\_edge\_loop\]]{}]{} (otherwise, by Invariant \[inv:comporstack\], $y$ would be either below $x$ in the stack $S$, or in ${\mathit{Finished}}$). Conversely, suppose that after Line [[\[scc2:begin\_edge\_loop\]]{}]{}, there is a call to with $t \not \in C$. By Invariant \[inv:comporstack\], $t$ belongs to ${\mathit{Finished}}$, so that for one of the vertices $w$ examined in the loop, either $w \in {\mathit{Finished}}$ or ${\mathit{is\_term}}[w] = \False$ after the call to . Hence ${\mathit{is\_term}}[x]$ should be $\False$, which contradicts our assumptions. - finally, from the execution of Line [[\[scc2:goto\]]{}]{} during the call to , our algorithm behaves exactly as from the execution of Line [[\[scc2:begin\_edge\_loop\]]{}]{} in . Indeed, ${\mathit{index}}[X]$ is equal to $i$, and the latter is equal to $n-1$. Similarly, for all $y \in C$, ${\mathit{low}}[y] = i$ and ${\mathit{is\_term}}[y] = \True$. The vertex $X$ being equal to one of the $y \in C$, we also have ${\mathit{low}}[X] = i$ and ${\mathit{is\_term}}[X] = \True$. Moreover, $X$ is the top element of $S$. 
Furthermore, it can be verified that at Line [[\[scc2:f\_assign2\]]{}]{}, the set $F$ contains exactly all the hyperarcs of $A$ which generate the simple hyperarcs leaving $X$ in ${\mathcal{H}}'$: they are exactly characterized by $$\begin{aligned} & {\Call{Find}}{z,{\mathcal{H}}} = X \text{ for all } z \in T(a), \text{ and }T(a) \neq \{y\} \text{ for all } y \in C \\ \Longleftrightarrow{} & {\Call{Find}}{z,{\mathcal{H}}} = X \text{ for all } z \in T(a), \text{ and } {\mathit{collected}}_a = \False \end{aligned}$$ since at Line [[\[scc2:f\_assign2\]]{}]{}, a hyperarc $a$ satisfies ${\mathit{collected}}_a = \True$ if, and only if, $T(a)$ is reduced to a singleton $\{t\}$ such that ${\mathit{index}}[t]$ is defined. Finally, for all vertices $y \in C$, $\Call{Find}{y,{\mathcal{H}}}$ can be equivalently replaced by $\Call{Find}{X,{\mathcal{H}}'}$. As a consequence, both executions return the same result. Both executions perform the same union-find operations, except for the first vertex merging step executed on $C$. Let $f$ be the function which maps all vertices $y \in C$ to $X$, and any other vertex to itself. We claim that ${\mathcal{H}}'$ and $f({\mathcal{H}})$ have the same reachability graph, [*i.e.*]{} ${{\rightsquigarrow}}_{{\mathcal{H}}'}$ and ${{\rightsquigarrow}}_{f({\mathcal{H}})}$ are identical relations. Indeed, the two hypergraphs only differ on the images of the hyperarcs $a \in A$ such that $T(a) = \{y\}$ for some $y \in C$. For such hyperarcs, we have $H(a) \subseteq C$, because otherwise, ${\mathit{is\_term}}[x]$ would have been set to $\False$ ([*i.e.*]{} the component $C$ would not be terminal). It follows that they are mapped to the cycle $(\{X\},\{X\})$ by $f$, so that ${\mathcal{H}}'$ and $f({\mathcal{H}})$ clearly have the same reachability graph. In particular, they have the same terminal [<span style="font-variant:small-caps;">Scc</span>]{}s. Finally, since the elements of $C$ are in the same [<span style="font-variant:small-caps;">Scc</span>]{} of ${\mathcal{H}}$, Proposition \[prop:collapse\] shows that the function $f$ induces a one-to-one correspondence between the [<span style="font-variant:small-caps;">Scc</span>]{}s of ${\mathcal{H}}$ and the [<span style="font-variant:small-caps;">Scc</span>]{}s of $f({\mathcal{H}})$: $$\begin{aligned} D & \longmapsto f(D) \\ (D' \setminus \{ X \}) \cup C & \longmapsfrom D' && \text{if }X \in D' \\ D' & \longmapsfrom D' && \text{otherwise}.\end{aligned}$$ The action of the function $f$ exactly corresponds to the vertex merging step performed on $C$. Since, by the induction hypothesis, the algorithm determines the terminal [<span style="font-variant:small-caps;">Scc</span>]{}s in $f({\mathcal{H}})$, it follows that Proposition \[prop:maxscc2\] holds.
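The collapsing map $f$ used in the inductive case simply renames every vertex of $C$ into $X$ inside each tail and each head. A small sketch of this operation on a (tail, head) representation of hyperarcs is given below; it only illustrates the construction (the name `collapse` and the representation are ours) and is not meant to reflect the data structures of the actual algorithm.

```python
def collapse(arcs, C, X):
    """Image f(H) of a hypergraph under the map sending every vertex of C to X.
    Hyperarcs are (tail, head) pairs of frozensets; hyperarcs lying entirely
    inside C become the trivial cycle ({X}, {X})."""
    rename = lambda z: X if z in C else z
    return [(frozenset(map(rename, tail)), frozenset(map(rename, head)))
            for tail, head in arcs]

# Collapsing C = {u, v, w} into a single vertex U, as in the first vertex
# merging step of the trace:
arcs = [(frozenset({"u"}), frozenset({"v"})),
        (frozenset({"v"}), frozenset({"w"})),
        (frozenset({"v", "w"}), frozenset({"x", "y"}))]
print(collapse(arcs, {"u", "v", "w"}, "U"))
# -> [({U}, {U}), ({U}, {U}), ({U}, {x, y})]
```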
$n \gets 0$, $S \gets {[\,]}$, ${\mathit{Finished}}\gets \emptyset$ $r_a \gets \Nil$, $c_a \gets 0$ ${\mathit{collected}}_a \gets \False$\[scc3:collected\_init\] ${\mathit{index}}[u] \gets \Nil$ ${\mathit{low}}[u] \gets \Nil$ , $F_u \gets {[\,]}$ \[scc3:init\_call\] local $U \gets \Call{Find}{u}$\[scc3:find1\], local $F \gets {[\,]}$\[scc3:begin\_atom1\] ${\mathit{index}}[U] \gets n$, ${\mathit{low}}[U] \gets n$ $n \gets n+1$\[scc3:troot\_def\] ${\mathit{is\_term}}[U] \gets \True$ push $U$ on the stack $S$ \[scc3:begin\_node\_loop\] push $a$ on $F$\[scc3:f\_push\] $r_a \gets u$ local $R_a \gets \Call{Find}{r_a}$\[scc3:find2\] \[scc3:root\_reach\] $c_a \gets c_a + 1$\[scc3:counter\_increment\] \[scc3:counter\_reach\] push $a$ on the stack $F_{R_a}$\[scc3:stack\_edge\]\[scc3:root\_def\] \[scc3:collected1\_begin\] ${\mathit{collected}}_a \gets \True$ \[scc3:collected1\_end\]\[scc3:end\_atom1\] \[scc3:begin\_edge\_loop\] pop $a$ from $F$ local $W \gets \Call{Find}{w}$\[scc3:find3\] \[scc3:rec\_call\] ${\mathit{is\_term}}[U] \gets \False$ ${\mathit{low}}[U] \gets \min({\mathit{low}}[U],{\mathit{low}}[W])$ ${\mathit{is\_term}}[U] \!\! \gets \!\! {\mathit{is\_term}}[U] \! \And \! {\mathit{is\_term}}[W]$ \[scc3:end\_edge\_loop\] local $i \gets {\mathit{index}}[U]$\[scc3:begin\_node\_merging\] pop each $a \in F_U$ and push it on $F$\[scc3:push\_on\_fprime1\]\[scc3:begin\_atom2\] pop $V$ from $S$ \[scc3:begin\_node\_merging\_loop\] pop each $a \in F_V$ and push it on $F$\[scc3:push\_on\_fprime2\] $U \gets \Call{Merge}{U, V}$\[scc3:merge\] pop $V$ from $S$ \[scc3:end\_node\_merging\_loop\]\[scc3:end\_node\_merging\] ${\mathit{index}}[U] \gets i$, push $U$ on $S$ $F \gets \biggl\{\! a \in A \!\Bigm|\! \begin{aligned} & {\mathit{collected}}_a = \False, \\[-1ex] & \forall x \in T(a),\Call{Find}{x} = U \end{aligned} \!\biggr\}$ \[scc3:f\_assign2\] ${\mathit{collected}}_a \gets \True$\[scc3:end\_atom2\]\[scc3:collected2\] go to Line [[\[scc3:begin\_edge\_loop\]]{}]{}\[scc3:goto\]\[scc3:begin\_not\_single\] \[scc3:begin\_non\_max\_scc\_loop\]\[scc3:begin\_atom3\] pop $V$ from $S$, add $V$ to ${\mathit{Finished}}$ \[scc3:end\_non\_max\_scc\_loop\] \[scc3:end\_atom3\] \(t) ++ (0ex,2ex) coordinate (t); (b) ++ (0ex,-0.5ex) coordinate (b); (l) ++ (-0.5ex,0ex) coordinate (l); (r) ++ (0.5ex,0ex) coordinate (r); let 1 = (l), 2 = (t) in coordinate (lt) at (1,2); let 1 = (l), 2 = (b) in coordinate (lb) at (1,2); let 1 = (r), 2 = (t) in coordinate (rt) at (1,2); let 1 = (r), 2 = (b) in coordinate (rb) at (1,2); (lt) – (rt) – (rb) – (lb) – cycle; (t2) ++ (0ex,-0.5ex) coordinate (t2); (b2) ++ (0ex,-0.5ex) coordinate (b2); (l2) ++ (-0.5ex,0ex) coordinate (l2); (r2) ++ (0.5ex,0ex) coordinate (r2); let 1 = (l2), 2 = (t2) in coordinate (lt2) at (1,2); let 1 = (l2), 2 = (b2) in coordinate (lb2) at (1,2); let 1 = (r2), 2 = (t2) in coordinate (rt2) at (1,2); let 1 = (r2), 2 = (b2) in coordinate (rb2) at (1,2); (lt2) – (rt2) – (rb2) – (lb2) – cycle; (t3) ++ (0ex,-0.5ex) coordinate (t3); (b3) ++ (0ex,-0.5ex) coordinate (b3); (l3) ++ (-0.5ex,0ex) coordinate (l3); (r3) ++ (0.5ex,0ex) coordinate (r3); let 1 = (l3), 3 = (t3) in coordinate (lt3) at (1,3); let 1 = (l3), 3 = (b3) in coordinate (lb3) at (1,3); let 1 = (r3), 3 = (t3) in coordinate (rt3) at (1,3); let 1 = (r3), 3 = (b3) in coordinate (rb3) at (1,3); (lt3) – (rt3) – (rb3) – (lb3) – cycle; (t4) ++ (0ex,-0.5ex) coordinate (t4); (b4) ++ (0ex,-0.5ex) coordinate (b4); (l4) ++ (-0.5ex,0ex) coordinate (l4); (r4) ++ (0.5ex,0ex) coordinate (r4); let 1 = (l4), 3 = (t4) in 
coordinate (lt4) at (1,3); let 1 = (l4), 3 = (b4) in coordinate (lb4) at (1,3); let 1 = (r4), 3 = (t4) in coordinate (rt4) at (1,3); let 1 = (r4), 3 = (b4) in coordinate (rb4) at (1,3); (lt4) – (rt4) – (rb4) – (lb4) – cycle; The second intermediary version of our algorithm, , is based on the first one, but it performs the same computations on the auxiliary data $r_a$ and $c_a$ as in . However, the latter are never used, because at Line [[\[scc3:f\_assign2\]]{}]{}, $F$ is re-assigned to the value provided in . It follows that for now, the parts in gray can be ignored. The following lemma states that and are equivalent: \[prop:maxscc3\] Let ${\mathcal{H}}$ be a directed hypergraph. After the execution of the algorithm $\Call{TerminalScc3}{{\mathcal{H}}}$, the sets $\{ v \in {\mathcal{V}}\mid \Call{Find}{v} = U \text{ and } {\mathit{is\_term}}[U] = \True\}$ precisely correspond to the terminal [<span style="font-variant:small-caps;">Scc</span>]{}s of ${\mathcal{H}}$. When is executed, the local stack $F$ is not directly assigned to the set $\{ a \in A \mid T(a) = \{ u \} \}$ (see Line [[\[scc2:f\_assign\]]{}]{} in Figure \[fig:maxscc2\]), but built by several iterations on the set $A_u$ (Line [[\[scc3:f\_push\]]{}]{}). Since $u \in T(a)$ and $\card{T(a)} = 1$ holds if, and only if, $T(a)$ is reduced to $\{ u \}$, initially fills $F$ with the same hyperarcs as . Besides, the condition ${\mathit{no\_merge}}= \False$ in (Line [[\[scc2:begin\_not\_single\]]{}]{}) is replaced by $F \neq \emptyset$ (Line [[\[scc3:begin\_not\_single\]]{}]{}). We claim that the condition $F \neq \emptyset$ can be safely used in as well. Indeed, in , $F \neq \emptyset$ implies ${\mathit{no\_merge}}= \False$. Conversely, suppose that in , ${\mathit{no\_merge}}= \False$ and $F = \emptyset$, so that the algorithm goes back to Line [[\[scc2:goto\]]{}]{} after having ${\mathit{no\_merge}}$ to $\True$. The loop from Lines [[\[scc2:begin\_edge\_loop\]]{}]{} to [[\[scc2:end\_edge\_loop\]]{}]{} is not executed since $F = \emptyset$, and it directly leads to a new execution of Lines [[\[scc2:is\_root\]]{}]{} to [[\[scc2:begin\_not\_single\]]{}]{} with ${\mathit{no\_merge}}= \True$. Therefore, going back to Line [[\[scc2:goto\]]{}]{} was useless. Finally, during the vertex merging step in , $n$ keeps its value, which is greater than or equal to $i+1$, but is not necessarily equal to $i+1$ like in (just after Line [[\[scc2:nredefined\]]{}]{}). This is safe because the whole algorithm only need that $n$ take increasing values, and not necessarily consecutive ones. We conclude by applying Proposition \[prop:maxscc2\]. We make similar assumptions on the atomicity of the sequences of instructions. Note that Invariant \[inv:tindex\], \[inv:comporstack\], and \[inv:incomp\] still holds in . \[inv:re\] Let $a \in A$ such that $\card{T(a)} > 1$. If for all $x \in T(a)$, ${\mathit{index}}[x]$ is defined, then the root $r_a$ is defined. For all $x \in T(a)$, $\Call{Visit3}{x}$ has been called. The root $r_a$ has necessarily been defined at the first of these calls (remember that the block from Lines [[\[scc3:begin\_atom1\]]{}]{} to [[\[scc3:end\_atom1\]]{}]{} is supposed to be executed atomically). \[inv:incomp2\] Consider a state ${\mathit{cur}}$ of the algorithm in which $U \in {\mathit{Finished}}$. Then any vertex reachable from $U$ in ${\mathsf{graph}}({\mathcal{H}}_{\mathit{cur}})$ is also in ${\mathit{Finished}}$. The invariant clearly holds when $U$ is placed in ${\mathit{Finished}}$. 
Using the atomicity assumptions, the call to $\Call{Visit3}{u}$ is necessarily terminated. Let ${\mathit{old}}$ be the state of the algorithm at that point, and ${\mathcal{H}}_{\mathit{old}}$ and ${\mathit{Finished}}_{\mathit{old}}$ the corresponding hypergraph and set of terminated vertices at that state respectively. Since $\Call{Visit3}{u}$ has performed a depth-first search from the vertex $U$ in ${\mathsf{graph}}({\mathcal{H}}_{\mathit{old}})$, all the vertices reachable from $U$ in ${\mathcal{H}}_{\mathit{old}}$ stand in ${\mathit{Finished}}_{\mathit{old}}$. We claim that the invariant is then preserved by the following vertex merging steps. The graph arcs which may be added by the latter leave vertices in $S$, and consequently not from elements in ${\mathit{Finished}}$ (by Invariant \[inv:comporstack\]). It follows that the set of reachable vertices from elements of ${\mathit{Finished}}_{\mathit{old}}$ is not changed by future vertex merging steps. As a result, *all the vertices reachable from $U$ in ${\mathsf{graph}}({\mathcal{H}}_{\mathit{cur}})$ are elements of ${\mathit{Finished}}_{\mathit{old}}$*. Since by Invariant \[inv:incomp2\], ${\mathit{Finished}}_{\mathit{old}}\subseteq {\mathit{Finished}}$, this proves the whole invariant in the state ${\mathit{cur}}$. \[inv:call\_to\_visit3\] In the digraph ${\mathsf{graph}}({\mathcal{H}}_{\mathit{cur}})$, at the call to $\Call{Visit3}{u}$, $u$ is reachable from a vertex $W$ such that ${\mathit{index}}[W]$ is defined if, and only if, $W$ belongs to the stack $S$. The “if” part can be shown by induction. When the function $\Call{Visit3}{u}$ is called from Line [[\[scc3:init\_call\]]{}]{}, the stack $S$ is empty, so that this is obvious. Otherwise, it is called from Line [[\[scc3:rec\_call\]]{}]{} during the execution of $\Call{Visit3}{x}$. Then $X = \Call{Find}{x}$ is reachable from any vertex in the stack, since $x$ was itself reachable from any vertex in the stack at the call to $\Call{Find}{X}$ (inductive hypothesis) and that this reachability property is preserved by potential vertex merging steps (Proposition \[prop:collapse\]). As $u$ is obviously reachable from $X$, this shows the statement. Conversely, suppose that ${\mathit{index}}[W]$ is defined, and $W$ is not in the stack. According to Invariant \[inv:comporstack\], $W$ is necessarily an element of ${\mathit{Finished}}$. Hence $u$ also belongs to ${\mathit{Finished}}$ by Invariant \[inv:incomp2\], which is a contradiction since this cannot hold at the call to $\Call{Visit}{u}$. \[inv:ce\] Let $a \in A$ such that $\card{T(a)} > 1$. Consider a state ${\mathit{cur}}$ of the algorithm in which $r_a$ is defined. Then $c_a$ is equal to the number of elements $x \in T(a)$ such that ${\mathit{index}}[x]$ is defined and ${\Call{Find}}{x}$ is reachable from ${\Call{Find}}{r_a}$ in ${\mathsf{graph}}({\mathcal{H}}_{\mathit{cur}})$. Since at Line [[\[scc3:counter\_increment\]]{}]{}, $c_a$ is incremented only if $R_a = {\Call{Find}}{r_a}$ belongs to $S$, we already know using Invariant \[inv:call\_to\_visit3\] that $c_a$ is equal to the number of elements $x \in T(a)$ such that, at the call to $\Call{Visit3}{x}$, $x$ was reachable from ${\Call{Find}}{r_a}$. Now, let $x \in {\mathcal{V}}$, and consider a state ${\mathit{cur}}$ of the algorithm in which $r_a$ and ${\mathit{index}}[x]$ are both defined, and ${\Call{Find}}{r_a}$ appears in the stack $S$. Since ${\mathit{index}}[x]$ is defined, has been called on $x$, and let ${\mathit{old}}$ be the state of the algorithm at that point. 
Let us denote by ${\mathcal{H}}_{\mathit{old}}$ and ${\mathcal{H}}_{\mathit{cur}}$ the current hypergraphs at the states ${\mathit{old}}$ and ${\mathit{cur}}$ respectively. Like previously, we may add a hypergraph as last argument of the function $\Call{Find}{}$ to distinguish its execution in the states ${\mathit{old}}$ and ${\mathit{cur}}$. We claim that ${\Call{Find}}{r_a,{\mathcal{H}}_{\mathit{cur}}} {{\rightsquigarrow}}_{{\mathsf{graph}}({\mathcal{H}}_{\mathit{cur}})} {\Call{Find}}{x,{\mathcal{H}}_{\mathit{cur}}}$ if, and only if, ${\Call{Find}}{r_a,{\mathcal{H}}_{\mathit{old}}} {{\rightsquigarrow}}_{{\mathsf{graph}}({\mathcal{H}}_{\mathit{old}})} x$. The “if” part is due to the fact that reachability in ${\mathsf{graph}}({\mathcal{H}}_{\mathit{old}})$ is not altered by the vertex merging steps (Proposition \[prop:collapse\]). Conversely, if $x$ is not reachable from ${\Call{Find}}{r_a,{\mathcal{H}}_{\mathit{old}}}$ in ${\mathcal{H}}_{\mathit{old}}$, then ${\Call{Find}}{r_a,{\mathcal{H}}_{\mathit{old}}}$ is not in the call stack $S_{\mathit{old}}$ (Invariant \[inv:call\_to\_visit3\]), so that it is an element of ${\mathit{Finished}}_{\mathit{old}}$. But ${\mathit{Finished}}_{\mathit{old}}\subseteq {\mathit{Finished}}_{\mathit{cur}}$, which contradicts our assumption since by Invariant \[inv:comporstack\], an element cannot be stored in ${\mathit{Finished}}_{\mathit{cur}}$ and $S_{\mathit{cur}}$ at the same time. It follows that if $r_a$ is defined and ${\Call{Find}}{r_a}$ appears in the stack $S$, $c_a$ is equal to the number of elements $x \in T(a)$ such that ${\mathit{index}}[x]$ is defined and ${\Call{Find}}{r_a} {{\rightsquigarrow}}_{{\mathsf{graph}}({\mathcal{H}}_{\mathit{cur}})} {\Call{Find}}{x}$. Let ${\mathit{cur}}$ be the state of the algorithm when ${\Call{Find}}{r_a}$ is moved from $S$ to ${\mathit{Finished}}$. The invariant still holds. Besides, in the future states ${\mathit{new}}$, $c_a$ is not incremented because ${\Call{Find}}{r_a, {\mathcal{H}}_{\mathit{cur}}} \in {\mathit{Finished}}_{\mathit{cur}}\subseteq {\mathit{Finished}}_{\mathit{new}}$ (Invariant \[inv:incomp\]), so that ${\Call{Find}}{r_a,{\mathcal{H}}_{\mathit{new}}} = {\Call{Find}}{r_a, {\mathcal{H}}_{\mathit{cur}}}$, and the latter cannot appear in the stack $S_{\mathit{new}}$ (Invariant \[inv:comporstack\]). Furthermore, any vertex reachable from $R_a = {\Call{Find}}{r_a,{\mathcal{H}}_{\mathit{new}}}$ in ${\mathsf{graph}}({\mathcal{H}}_{\mathit{new}})$ belongs to ${\mathit{Finished}}_{\mathit{new}}$ (Invariant \[inv:incomp2\]). It even belongs to ${\mathit{Finished}}_{\mathit{cur}}$, as shown in the second part of the proof of Invariant \[inv:incomp2\] (emphasized sentence). It follows that the number of reachable vertices from ${\Call{Find}}{r_a}$ has not changed between states ${\mathit{cur}}$ and ${\mathit{new}}$. Therefore, the invariant on $c_a$ will be preserved, which completes the proof. \[prop:visit3\] In , the assignment at Line [[\[scc3:f\_assign2\]]{}]{} does not change the value of $F$. It can be shown by strong induction on the number $p$ of times that this line has been executed. Suppose that we are currently at Line [[\[scc3:begin\_node\_merging\]]{}]{}, and let $X_1,\dots,X_q$ be the elements of the stack located above the root $U = X_1$ of the terminal [<span style="font-variant:small-caps;">Scc</span>]{} of ${\mathsf{graph}}({\mathcal{H}}_{\mathit{cur}})$. 
Any arc $a$ which will be transferred to $F$ from Line [[\[scc3:begin\_node\_merging\]]{}]{} to Line [[\[scc3:end\_node\_merging\]]{}]{} satisfies $c_a = \card{T(a)} > 1$ and ${\Call{Find}}{r_a} = X_i$ for some $1 \leq i \leq q$ (since at [[\[scc3:begin\_node\_merging\]]{}]{}, $F$ is initially empty). Invariant \[inv:ce\] implies that for all elements $x \in T(a)$, ${\Call{Find}}{x}$ is reachable from $X_i$ in ${\mathsf{graph}}({\mathcal{H}}_{\mathit{cur}})$, so that by terminality of the [<span style="font-variant:small-caps;">Scc</span>]{} $C = \{X_1,\dots,X_q\}$, ${\Call{Find}}{x}$ belongs to $C$, [*i.e.*]{} there exists $j$ such that ${\Call{Find}}{x} = X_j$. It follows that at Line [[\[scc3:end\_node\_merging\]]{}]{}, ${\Call{Find}}{x} = U$ for all $x \in T(a)$. Then, we claim that ${\mathit{collected}}_a = \False$ at Line [[\[scc3:end\_node\_merging\]]{}]{}. Indeed, $a' \in A$ satisfies ${\mathit{collected}}_{a'} = \True$ if, and only if: - either it has been copied to $F$ at Line [[\[scc3:f\_push\]]{}]{}, in which case $\card{T(a')} = 1$, - or it has been copied to $F$ at the $r$-th execution of Line [[\[scc3:f\_assign2\]]{}]{}, with $r < p$. By induction hypothesis, this means that $a'$ has been pushed on a stack $F_X$ and then popped from it strictly before the $r$-th execution of Line [[\[scc3:f\_assign2\]]{}]{}. Observe that a given hyperarc can be popped from a stack $F_x$ at most once during the whole execution of . Here, $a$ has been popped from $F_{X_i}$ after the $p$-th execution of Line [[\[scc3:f\_assign2\]]{}]{}, and $\card{T(a)} > 1$. It follows that ${\mathit{collected}}_a = \False$. Conversely, suppose that, at Line [[\[scc3:f\_assign2\]]{}]{}, ${\mathit{collected}}_a = \False$, and all the $x \in T(a)$ satisfy ${\Call{Find}}{x} = U$. Clearly, $\card{T(a)} > 1$ (otherwise, $a$ would have been placed into $F$ at Line [[\[scc3:f\_push\]]{}]{} and ${\mathit{collected}}_a$ would be equal to $\True$). A few steps before, at Line [[\[scc3:begin\_node\_merging\]]{}]{}, ${\Call{Find}}{x}$ is equal to one of $X_j$, $1 \leq j \leq q$. Since ${\mathit{index}}[X_j]$ is defined ($X_j$ is an element of the stack $S$), by Invariant \[inv:tindex\], ${\mathit{index}}[x]$ is also defined for all $x \in T(a)$, hence, the root $r_a$ is defined by Invariant \[inv:re\]. Besides, ${\Call{Find}}{r_a}$ is equal to one of the $X_j$, say $X_k$ (since $r_a \in T(a)$). As all the ${\Call{Find}}{x}$ are reachable from ${\Call{Find}}{r_a}$ in ${\mathsf{graph}}({\mathcal{H}}_{\mathit{cur}})$, then $c_a = \card{T(a)}$ using Invariant \[inv:ce\]. It follows that $a$ has been pushed on the stack $F_{R_a}$, where $R_a = {\Call{Find}}{r_a,{\mathcal{H}}_{\mathit{old}}}$ in a previous state ${\mathit{old}}$ of the algorithm. As ${\mathit{collected}}_a = \False$, $a$ has not been popped from $F_{R_a}$, and consequently, the vertex $R_a$ of ${\mathcal{H}}_{\mathit{old}}$ has not been involved in a vertex merging step. Therefore, $R_a$ is still equal to ${\Call{Find}}{r_a,{\mathcal{H}}_{\mathit{cur}}} = X_k$. It follows that at Line [[\[scc3:begin\_node\_merging\]]{}]{}, $a$ is stored in $F_{X_k}$, and thus it is copied to $F$ between Lines [[\[scc3:begin\_node\_merging\]]{}]{} and [[\[scc3:end\_node\_merging\]]{}]{}. This completes the proof. We can now prove the correctness of . By Proposition \[prop:visit3\], Line [[\[scc3:f\_assign2\]]{}]{} can be safely removed in .
It follows that the booleans ${\mathit{collected}}_a$ are now useless, so that Line [[\[scc3:collected\_init\]]{}]{}, the loop from Lines [[\[scc3:collected1\_begin\]]{}]{} to [[\[scc3:collected1\_end\]]{}]{}, and Line [[\[scc3:collected2\]]{}]{} can be also removed. After that, we precisely obtain the algorithm . Proposition \[prop:maxscc3\] completes the proof. [^1]: In the sequel, the underlying model of computation is the Random Access Machine. [^2]: The module can be used independently of the rest of the library. Note that in the source code, terminal [<span style="font-variant:small-caps;">Scc</span>]{}s are referred to as *maximal* [<span style="font-variant:small-caps;">Scc</span>]{}s. [^3]: Any finite reflexive and transitive relation ${\mathcal{R}}$ can be seen as the reachability relation of a directed graph $G$, whose arcs are the couples $(x,y)$ such that $x \mathbin{{\mathcal{R}}} y$, $x \neq y$. Then the transitive reduction of ${\mathcal{R}}$ is defined as in [@AhoGareyUllmanSICOMP72].
{ "pile_set_name": "ArXiv" }
--- abstract: 'Fashion is an inherently visual concept and computer vision and artificial intelligence (AI) are playing an increasingly important role in shaping the future of this domain. Many research has been done on recommending fashion products based on the learned user preferences. However, in addition to recommending single items, AI can also help users create stylish outfits from items they already have, or purchase additional items that go well with their current wardrobe. Compatibility is the key factor in creating stylish outfits from single items. Previous studies have mostly focused on modeling pair-wise compatibility. There are a few approaches that consider an entire outfit, but these approaches have limitations such as requiring rich semantic information, category labels, and fixed order of items. Thus, they fail to effectively determine compatibility when such information is not available. In this work, we adopt a Relation Network (RN) to develop new compatibility learning models, Fashion RN and FashionRN-VSE, that addresses the limitations of existing approaches. FashionRN learns the compatibility of an entire outfit, with an arbitrary number of items, in an arbitrary order. We evaluated our model using a large dataset of 49,740 outfits that we collected from Polyvore website. Quantitatively, our experimental results demonstrate state of the art performance compared with alternative methods in the literature in both compatibility prediction and fill-in-the-blank test. Qualitatively, we also show that the item embedding learned by FashionRN indicate the compatibility among fashion items.' author: - Maryam Moosaei - Yusan Lin - Hao Yang bibliography: - 'ref.bib' title: Fashion Recommendation and Compatibility Prediction Using Relational Network --- &lt;ccs2012&gt; &lt;concept&gt; &lt;concept\_id&gt;10002951.10003227.10003233&lt;/concept\_id&gt; &lt;concept\_desc&gt;Information systems Collaborative and social computing systems and tools&lt;/concept\_desc&gt; &lt;concept\_significance&gt;300&lt;/concept\_significance&gt; &lt;/concept&gt; &lt;concept&gt; &lt;concept\_id&gt;10010147.10010178&lt;/concept\_id&gt; &lt;concept\_desc&gt;Computing methodologies Artificial intelligence&lt;/concept\_desc&gt; &lt;concept\_significance&gt;300&lt;/concept\_significance&gt; &lt;/concept&gt; &lt;concept&gt; &lt;concept\_id&gt;10010405.10010469&lt;/concept\_id&gt; &lt;concept\_desc&gt;Applied computing Arts and humanities&lt;/concept\_desc&gt; &lt;concept\_significance&gt;300&lt;/concept\_significance&gt; &lt;/concept&gt; &lt;/ccs2012&gt; Introduction {#sec:Introduction} ============ Fashion plays an important role in the society. People use fashion as a way of expressing individuality, style, culture, wealth, and status [@chao2009framework]. E-commerce fashion industry is expected to rise worldwide from \$481 billion USD revenue market in 2018 to \$712 billion USD by 2022[^1]. This shows the increasing demands for online apparel shopping and motivates businesses to build more advanced recommendation systems. Many online retailers also started incorporating advanced recommendation systems to tackle the sophisticated fashion recommendation problem, such as StitchFix[^2], asos[^3] and Amazon Fashion[^4]. This enormous e-commerce market has attracted researchers’ attention in the artificial intelligence, computer vision, multimedia, and recommendation system communities [@mckinsey]. 
Much research has been done using computational techniques to solve problems in fashion, e-commerce in particular. One of the most common lines of research concerns recommending single fashion items to consumers based on their purchase or browsing history. A notable work is that of Kang et al., who develop a neural network that learns users’ preferences towards fashion products based on visual information through Bayesian Personalized Ranking [@kang2017visually].[^5] ![Examples of compatible and incompatible outfits.[]{data-label="compatible"}](fig/outfits_v2.pdf){width="45.00000%"} However, fashion recommendation is unique compared to other domains not only due to its heavily visual nature, but also because the concept of compatibility is more crucial than for other types of products. People are often interested in purchasing items that match well together and compose a stylish outfit. Traditionally, fashion recommendation systems rely on co-purchase and co-click histories and recommend items based on similarity and user reviews. This requires going beyond retrieving similar items to developing a model that understands the notion of “compatibility” [@DBLP:conf/cikm/WanWLBM18]. Modeling compatibility is challenging because the semantics that determine what is compatible and stylish are extremely complex, and many factors such as color, cut, pattern, texture, style, culture, and personal taste play a role in what people perceive as compatible and fashionable. Developing an artificial intelligence algorithm that can learn the compatibility of items would considerably improve the quality of many fashion recommendation systems. Such systems will help customers decide what to wear every day, alleviating a tedious task for non-fashion experts. Nonetheless, recommending compatible items that form a fashion outfit involves several challenges. First, the item co-occurrence relationships are extremely sparse, so a purely collaborative approach is difficult given the nature of the data; leveraging the contents of items effectively is preferred. Second, compatibility is a very different concept from *similarity*. Simply retrieving items that are similar to each other is hardly enough to form fashion outfits: often, two items fit perfectly together in an outfit even though, taken individually, they look visually very different. Third, the number of items in fashion outfits varies, whereas most models assume a fixed number of objects or a fixed input dimensionality; a model that handles a varying number of items is desired. While visual features are commonly used in fashion recommendation, existing works on compatibility prediction have three main limitations: 1. They can only determine the compatibility of a pair of items and fail to work on outfits with an arbitrary number of items. Example methods with this limitation are [@vasileva2018learning; @song2017neurostylist; @veit2015learning]. 2. They need category labels (e.g., shirt, shoes) and rich attributes (e.g., floral, casual) in order to determine compatibility and will not work if such information is not available [@hsiao2017creating; @li2017mining; @han2017learning]. 3. They require a fixed order or fixed number of items to determine the compatibility of an outfit. For example, Han et al. [@han2017learning] proposed a method for compatibility learning that requires the items in all outfits to be carefully ordered from top to bottom and then accessories.
These limitations narrow the applicability of current methods. For example, many online retailers may lack detailed descriptions or have noisy labels for fashion items. In addition, for items showcased at brick-and-mortar stores, detailed descriptions are often not written on item tags. To address the above challenges and limitations, in this paper we use the visual information of items to model fashion compatibility and thereby optimize content-based learning. We then, based on the concept of Relational Networks [@santoro2017simple], build neural networks, FashionRN and FashionRN-VSE, that learn the relation between every pair of items and take in a varying number of items as input. Both FashionRN and FashionRN-VSE are designed to learn visual/textual relations between the items of an outfit and use these relations to determine the compatibility of the outfit. The intuition behind using Relational Networks is that we can consider an outfit as a scene and the items of the outfit as objects in the scene. We are interested in learning a particular type of relation, namely compatibility. To show the effectiveness of the proposed FashionRN and FashionRN-VSE, we evaluate our models on a collected Polyvore dataset consisting of 49,740 unique fashion outfits. Through empirical experiments, we show that our proposed models perform well both quantitatively and qualitatively. We design two evaluation tasks, fashion outfit compatibility prediction and fill-in-the-blank, and compare FashionRN and FashionRN-VSE’s performances with other state-of-the-art models. We show that FashionRN-VSE achieves an Area Under Curve (AUC) of 88% in the compatibility prediction task, compared to the second best method, Bi-LSTM with VSE, which achieves 72%. Furthermore, FashionRN-VSE achieves an accuracy of 58% in the fill-in-the-blank task, compared to 35% for the second best, SiameseNet. Besides learning the compatibility of a given outfit, FashionRN and FashionRN-VSE can also generate item embeddings from their hidden layers. Through visualization of the learned item embedding, we show that items that make sense to be put together in an outfit are closer to each other in the FashionRN embedding space. When comparing the visualization of the same items on the embedding generated by a state-of-the-art CNN model, DenseNet, we see that the DenseNet embedding places items that are visually similar (e.g., in colors and shapes) but not necessarily compatible close to each other in the embedding space. This shows that the embedding learned by FashionRN captures the underlying compatibility in addition to visual similarity. Our contributions are: 1. We developed FashionRN and FashionRN-VSE, a new compatibility learning framework based on Relational Networks [@santoro2017simple]. Our approach is independent of the number of items and their order, and does not need semantic information or category labels. 2. We compared FashionRN and FashionRN-VSE to other state-of-the-art methods on the compatibility prediction and Fill In The Blank (FITB) tasks. We show that FashionRN outperforms the second best by 112.5% and 148.5% in the two tasks, respectively, while FashionRN-VSE outperforms the second best by 122.2% and 165.7%, respectively. 3. Through visualization, we find that the item embedding learned by FashionRN captures well the underlying compatibility among fashion items, compared to CNN models such as DenseNet that focus on visual similarity. The remainder of this paper is organized as follows.
Section \[relatedwork\] reviews the related work. Section \[sec:methodology\] describes our methodology. Section \[sec:experimental\] presents our quantitative experimental results followed by our qualitative results in Section \[sec:qualitative\]. We finally conclude this work in Section \[sec:conclusion\]. Related Work {#relatedwork} ============ In this section, we review the literature that are related to this work, which are fashion recommendation and relational networks. Fashion Recommendation ---------------------- There is a growing body of literature on fashion recommendation. Most of the available fashion recommendation systems use keyword search [@vaccaro2016elements], purchased histories [@wang2011utilizing], and user ratings [@kang2017visually; @qian2014personalized] to recommend items. These methods do not consider visual appearance of items which is a key feature in fashion. To address this limitation, several research groups have worked on incorporating visual information in fashion recommendation systems, mainly with the purpose of recommending similar items to an image query [@jing2015visual; @he2016fashionista; @chao2009framework; @tautkute2018deepstyle; @DBLP:conf/iccv/RenSLMF17], and recommending aesthetics based on personal preferences [@DBLP:conf/cikm/DengCFNY17; @DBLP:conf/cikm/SkopalPKGL17]. Similarity based fashion recommendation systems are useful for finding substitutes for an item (e.g., finding a shirt with the same style but different brand or price) or matching street images to online products [@chao2009framework; @hadi2015buy]. However, many times users are interested in searching for different category of items which are compatible and are in harmony. This requires going beyond similarity based methods and modeling more complex concepts such as compatibility and aesthetics. Many humans are expert in detecting whether an outfit looks compatible or something is “off” by simply looking at its appearance. For example, even though compatibility is a subjective concept, most people would agree that the outfits shown in Figure \[compatible\] are all well composed and stylish. Research has shown that computer vision and artificial intelligence algorithms are also able to some extent learn the notion of compatibility [@song2017neurostylist; @veit2015learning; @han2017learning; @he2018fashionnet]. For example, Iwata et al. used a topic model to find matching tops (e.g., shirt) for bottoms (e.g., jeans) using a small human annotated dataset collected from magazines [@iwata2011fashion]. Veit et al. [@veit2015learning] used images of co-purchased items from an Amazon dataset to train a Siamese neural network [@hadsell2006dimensionality] for predicting compatibility between pairs of items. Song et al. showed that integrating visual and contextual information can improve compatibility prediction [@song2017neurostylist]. To exploit the pair-wise compatibility between tops and bottoms they learned a latent compatibility space by employing a dual autoencoder network [@ngiam2011multimodal] and a Bayesian Personalized Ranking (BPR) framework [@rendle2009bpr]. Lin et al. developed a model that is not only capable of matching tops with bottoms, but also is able to generate a sentence for each recommendation to explain why they match [@lin2018explainable]. 
Instead of a dual auto-encoder network, they used a mutual attention mechanism to model compatibility and a cross-modality attention module to learn the transformation between the visual and textual space for generating a sentence as a comment. Vasileva et al. [@vasileva2018learning] extended state-of-the-art in compatibility learning by answering novel queries such as finding a set of tops that can substitute a particular top in an outfit (high compatibility), while they are very different (low similarity). To do this, they jointly learned two embedding spaces, one for item similarity and the other for item compatibility. All of the aforementioned methods are pair-wise and focus on learning compatibility between “tops” and “bottoms”. These methods fail to consider an entire outfit with an arbitrary number of items. To address this limitation, Han et al. [@han2017learning] and Jiang et al. [@jiang2018ask] considered an outfit as a sequence (from top to bottom and then accessories) and each item in the outfit as a time step [@han2017learning]. They trained a bidirectional LSTM (Bi-LSTM) model to sequentially predict the next item conditioned on previous ones, learning their compatibility. They used attribute and category information as a regularization for training their model. Treating outfits as a sequence and using an LSTM-based model does not respect the fact that sets are order invariant. Consequently, it requires carefully sorting of items in all outfits in a consistent order based on their category labels. Otherwise, a compatible top-bottom may be detected as incompatible if one changes their order to bottom-top. Li et al. developed a model that considers outfits as order-less sets. Given a collection of fashion items, their method can predict popularity of a set by incorporating images, titles, and category labels [@li2017mining]. In a recent work, Hsiao and Grauman [@hsiao2017creating] proposed an unsupervised compatibility learning framework which uses textual attributes of items. The researchers employed a Correlated Topic Model (CTM) [@blei2006correlated] from text analysis to learn compatibility. They considered an outfit as a document, visual attributes (e.g., floral, chiffon) as words, and style as a topic. Their model learns the composition of attributes that characterizes a style. For example, a formal blazer is more likely to be combined with a pair of jeans than a floral legging. While a fair number of studies are available on compatibility prediction, existing methods are mostly pair-wise and a few studies which consider an entire outfit [@hsiao2017creating; @han2017learning], are either not order invariant with respects to the items in an outfit [@han2017learning], or require rich contextual data including explicit category labels, whether extracted from item descriptions or human annotated [@hsiao2017creating]. Hence, in our work, we explored a new visual compatibility learning framework that would consider an entire outfit with an arbitrary number of items with an arbitrary order. Our model can work without category labels or semantic attributes. ![image](fig/fashionRN.pdf){width="90.00000%"} Relational Networks ------------------- Many factors such as style, texture, material, and color contribute to compatibility and the relation between these factors is non-linear. In this work, we develop Fashion RN by modifying a Relational Network (RN) to learn a non-linear space that can predict the compatibility of an outfit. 
Previous findings suggest that relational reasoning is “baked” into RNs, similar to learning sequential dependencies which is built in recurrent neural networks [@santoro2017simple]. Different variations of RNs have been successfully applied to answering semantic based questions about dynamic physical systems. For example, Santoro et al. modified an RN architecture and showed that given an image of a scene, an RN combined with an LSTM can answer relational questions such as “*Are there any rubber things with the same size of the yellow cylinder?*” The input to an RN is a set of objects, but the definition of an object is flexible and not specified. For example, Santoro et al. used a CNN network to convert images of physical systems into $k$ feature maps of size $d*d$ [@santoro2017simple]. They then considered each row of the feature map as an object. Therefore, in their work, an object could be a part of the background, a particular physical object, or a texture. The object-object relation in their work was question dependent. Thus, their RN architecture was conditioned on a question embedded by an LSTM. Each pair of objects was concatenated with the question embedding before going into the RN. The intuition behind our approach is that humans do not need to know the textual description of items in an outfit (see Figure \[compatible\]) and their category labels in order to know if it looks compatible. Humans can detect compatibility in a visual scene by looking at it. In fact, many of the textual attributes (e.g., floral, shirt, casual) can be implicitly learned from visual information. Moreover, sets are order invariant. For example, humans do not need to see the items of an outfit in a specific order (e.g., always seeing pants before seeing shirts) in order to detect their compatibility. Therefore, in this work we try to model similar intelligence by developing a compatibility learning method that is based on visual information and does not require labeling clothing attributes or feeding items in a fixed order. Our network, Fashion RN, is based on Relational Networks (RNs) which are architected for relational reasoning [@raposo2017discovering]. Santoro et al. successfully applied RNs to text-based question answering about scenes [@santoro2017simple]. We considered compatibility as a particular type of relation and explored developing an RN inspired architecture that can learn the compatibility between items in an outfit. Compatibility Learning with Relational Network {#sec:methodology} ============================================== In this section, we propose our model, FashionRN, that learns the compatibility among fashion items in a fashion outfit. We also propose its variant, FashionRN-VSE. For the ease of understanding, we summarize the symbols used in this paper in Table \[table:symbol\_definition\]. **Symbol** **Definition** --------------- ---------------------------------- $\mathcal{D}$ Dataset $\mathcal{I}$ Item set $\Phi$ CNN model $x$ High-dimensional visual features $v$ Low-dimensional visual features $d$ Textual embedding $h$ Relation embedding $f, g$ Fully-connected layers : Symbol definition.[]{data-label="table:symbol_definition"} Problem Formulation and Model Intuition --------------------------------------- We assume the compatibility of a fashion outfit to be based on the relation among all of the items included in an outfit. To learn the compatibility of fashion outfits, we formulate our problem into a binary classification problem. 
Let $S = \lbrace i_1,i_2,…i_n \rbrace$ be a fashion outfit, where each $i \in \mathcal{I}$ is an item in this set. The dataset $\mathcal{D} = \lbrace S \rbrace$. Given an $S$, predict whether it is a compatible fashion outfit or not. The learning of fashion outfit compatibility can be thought of as follows. For a fashion outfit, we measure the compatibility of each pair of items in the outfit, and eventually aggregate all of the pairs’ compatibility scores to obtain the overall outfit compatibility score. To achieve this, we propose two models: FashionRN and FashionRN-VSE, which we describe in detail in the following. ![image](fig/fashionRN_VSE.pdf){width="90.00000%"} FashionRN {#sec:compatibility learning} --------- We design FashionRN based on the concept of the relational network architecture. In our model, an outfit is treated like a scene and its items are treated like the objects in the scene. Therefore, as opposed to Santoro et al. who consider rows of a CNN feature map, extracted from the entire scene, as objects; we consider images of items in an outfit as objects and use a DenseNet to transform them into feature vectors. Additionally, we are interested in learning one specific type of visual relation, compatibility, which is not question dependent and therefore we do not need any LSTM model. Our FashionRN consists of two parts, as shown in Figure \[diagram\]. The first part, *relation construction*, learns the non-linear relation between each pair of items in an outfit and the second part *compatibility scoring*, combines all the pair-wise relations to learn the compatibility of the entire outfit. ### Relation Construction First, the images of items are passed through a pre-trained CNN model of choice $\Phi$ (e.g., DenseNet) to produce high-dimensional visual feature vectors, $\mathbf{x}$. $\mathbf{x}$ is then passed through a fully connected (FC) layer, which serves two purposes. It down sizes the feature vectors and learns a latent space from dimensions that correspond to fashion styles and contribute to compatibility. The reduced-dimensional features are denoted as $v$. After generating the lower-dimensional visual features $v$, the relation between each pair of items in $S$ is constructed as follows. For each pair of items $(i,j) \in S$, we concatenate their visual features and passed through a FC layer $g$ to generate relation embedding $h$. $$\begin{aligned} h_{(i,j)} = g([ v_i || v_j ])\end{aligned}$$ ### Compatibility Scoring After the relation construction, we model the compatibility among all the pairs of items in $S$ as follows. $$\begin{aligned} \label{eq:compatibility_score} m_s = f \Big( \frac{1}{\binom{n}{2}}\sum_{i,j}h_{(i,j)} \Big) \end{aligned}$$ where $m_s$ is the compatibility score of outfit $S$. Both $f$ and $g$ are based on multiple non-linear functions with parameters $\theta_f$ and $\theta_g$. In our work, $ f_{\theta_f} $ and $ g_{\theta_g} $ are multi-layer perceptrons (MLPs) and we want to learn the parameters $\theta = \lbrace \theta_f, \theta_g \rbrace$ such that they can predict the compatibility between fashion items. The output of $g_\theta$ is the “relation” [@santoro2017simple]. Thus, $g_\theta$ learns the relation between the visual appearances of $v_i$ and $v_j$. 
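To make the two stages above concrete, here is a minimal sketch of the relation construction ($g$) and compatibility scoring ($f$). It is an illustration only: it is written in PyTorch rather than the TensorFlow implementation described later, uses simplified layer sizes and a sigmoid output instead of the exact architecture reported in the experiment settings, and assumes the DenseNet features have already been projected to the low-dimensional vectors $v$. The class name `FashionRNHead` is ours.

```python
import itertools
import torch
import torch.nn as nn

class FashionRNHead(nn.Module):
    """Sketch of FashionRN's relation construction (g) and scoring (f)."""

    def __init__(self, feat_dim=1000, rel_dim=256):
        super().__init__()
        # g: relation MLP applied to each concatenated item pair.
        self.g = nn.Sequential(
            nn.Linear(2 * feat_dim, 512), nn.ReLU(),
            nn.Linear(512, rel_dim), nn.ReLU(),
        )
        # f: compatibility MLP applied to the averaged relation embedding.
        self.f = nn.Sequential(
            nn.Linear(rel_dim, 128), nn.ReLU(),
            nn.Linear(128, 1),
        )

    def forward(self, items):
        # items: tensor of shape (n_items, feat_dim); n_items may vary.
        pairs = [torch.cat([items[i], items[j]])
                 for i, j in itertools.combinations(range(len(items)), 2)]
        h = self.g(torch.stack(pairs))   # one embedding per item pair
        m = self.f(h.mean(dim=0))        # average relations, then score
        return torch.sigmoid(m)          # compatibility score in [0, 1]
```

Because the pairs are enumerated as an unordered set and averaged, the score is invariant to the order in which the items are fed in, which is the property emphasized above.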
FashionRN-VSE ------------- While some studies learn compatibility using visual information [@vasileva2018learning; @veit2015learning], others have suggested combining textual data with visual data can improve the performance of compatibility prediction [@han2017learning; @li2017mining; @hsiao2017creating; @song2017neurostylist]. We hence propose a variant of FashionRN, which combines the concept of Visual Semantic Embedding (VSE) proposed by Han et al. [@han2017learning] We name this model FashionRN-VSE. The diagram of this method is presented in Figure \[joint\]. VSE produces image embedding ($v_i$) and description embedding ($d_i$) for an item $i$. $v_i$ is produced by passing through a CNN model of choice $\Phi$ as in FashionRN, while $d_i$ is produced by encoding each word in the outfit description to a one-hot encoding. $v_i$ and $d_i$ for each item in an outfit are concatenated and fed into FashionRN-VSE. The compatibility stays the same as Eq. (\[eq:compatibility\_score\]), while the relation embedding for FashionRN-VSE is reformulated as follows. $$\begin{aligned} h_{(i,j)} = g \big( (v_i || d_i) || (v_j || d_j) \big)\end{aligned}$$ With the consideration of textual information, FashionRN-VSE not only considers the visuals of fashion items, but also more detail information beyond what can be observed from the images. These information include: brands, texture, material, and even price point, etc. We believe through capturing these information, FashionRN-VSE can better learn the compatibility of fashion items in a fashion outfit. Design Options and Time Complexity ---------------------------------- Our proposed models enable various design and usage options. First of all, depends on one’s data richeness, one can choose to use FashionRN if only visual information is available, and FashionRN-VSE is both visual and textual information are available. Secondly, RNs are order invariant. Therefore, it does not matter in which order the outfit items are passed to the network. Although to detect the compatibility of an outfit we consider the relation between all of its items, using RNs gives the flexibility to consider only some of the item pairs, or to put greater weights on some of them. For example, if an item (e.g., a handbag) is the center piece of an outfit and one likes to compose an outfit that magnifies this piece, we can put greater weights on the relations that involve this item. Besides the flexibility, our proposal is also efficient time-complexity-wise. Our compatibility learning framework can be applied to outfits with an arbitrary number of items. The time complexity of calculating the compatibility of an outfit with $n$ items is $O(\binom{n}{2})$ with respects to the number of the items in the outfit[^6]. However, considering that outfits have limited number of items (less than 12 in our dataset), this time complexity will remain linear $O(n)$. Also, developing a compatibility framework based on RNs eliminates the need of passing item category labels as input to the network, as the network itself is able to implicitly learn such information. 
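As a concrete illustration of the flexibility discussed above, the pair enumeration can be restricted or re-weighted before aggregation. The snippet below is a hypothetical variant of the averaging step in which pairs involving a chosen "center piece" item receive a larger weight; the weighting scheme and the function name are our own example, not part of FashionRN.

```python
import itertools
import torch

def weighted_relation_mean(items, g, center_idx=None, center_weight=2.0):
    """Average pair-wise relation embeddings, optionally emphasizing one item."""
    h_list, w_list = [], []
    for i, j in itertools.combinations(range(len(items)), 2):
        h_list.append(g(torch.cat([items[i], items[j]])))
        w_list.append(center_weight if center_idx in (i, j) else 1.0)
    w = torch.tensor(w_list).unsqueeze(1)   # shape (n_pairs, 1)
    h = torch.stack(h_list)                 # shape (n_pairs, rel_dim)
    return (w * h).sum(dim=0) / w.sum()     # weighted average of relations
```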
Parameter Learning {#sec:Implementation} ------------------ The parameters $\theta_g$ and $\theta_f$ are learned through backpropagation using a cross-entropy loss function as follows: $$\begin{aligned} \label{eq:30} \mathcal{L} (\theta_g, \theta_f) = -\sum_{i=1}^{|\mathcal{B}|} (y_i \log(p_i)+(1-y_i) \log(1-p_i))\end{aligned}$$ where $\mathcal{B}$ is one training batch, $y_i$ is the ground truth label and $p_i$ is the predicted label ($m_s$) of the $i$^th^ outfit. To learn the parameters, all of the outfits in the dataset are viewed as positive samples, with $y$ expected to be 1. To create negative samples, we randomly select a number of items to create artificial outfits, and set their labels $y$ to 0. Evaluation {#sec:experimental} ========== To examine the effectiveness of FashionRN and FashionRN-VSE, in this section we empirically test their performance on two prediction tasks, compatibility prediction and the fill-in-the-blank test, on a large fashion outfit dataset, and compare with other state-of-the-art methods. Dataset {#sec:Dataset} ------- Learning the compatibility of fashion outfits requires a rich source of data, which can be collected from online fashion communities such as Polyvore, Chictopia[^7], and Shoplook[^8]. On these websites, users can create stylish outfits and look at millions of outfits created by others. Such rich fashion data can be used to train neural networks to learn different fashion concepts and automatically create stylish outfits. Polyvore is a great source of data especially for our work because it has images of items with a clear background and descriptions. Researchers have used data from Polyvore for various studies [@vaccaro2016elements; @li2017mining; @lee2017style2vec; @vasileva2018learning]. However, some of their datasets are not open source (e.g., [@li2017mining]) or have a small size (e.g., [@han2017learning; @song2017neurostylist]). Thus, we collected our own dataset from Polyvore. To ensure quality, we collected outfits from users who are highly popular on Polyvore and have at least 100K followers. For each item we saved a 150 x 150 image and the item description. We cleaned the dataset by excluding items that are not clothing (e.g., furniture) using their metadata. Then, we removed any outfit that was left with only one item. The remaining dataset had 49,740 outfits and 256,004 items. The collected outfits have an arbitrary number of items ranging from 2 to 12, but on average each outfit has five items. We used 70% of our data for training (34,818 sets), 15% for validation (7,461 sets) and 15% for testing (7,461 sets). The data collected from Polyvore includes compatible outfits (positive class). Following the methodology of [@han2017learning], we created our negative class by randomly picking items from different outfits. While these outfits are not guaranteed to be incompatible, they have a lower probability of compatibility compared to outfits that have been created by fashion experts on Polyvore, and therefore our network should assign lower compatibility scores to these randomly composed outfits. We created one incompatible outfit for each positive outfit. This resulted in overall 69,636 sets for training (positive and negative), 14,922 sets for validation and 14,922 sets for testing. Experiment Setting ------------------ We choose DenseNet as our CNN model $\Phi$ since, at the time of writing, it is the state of the art. DenseNet generates image features $x$ of dimension 94,080.
We design the FC layer $f$ to output 1000-dimensional features, so that $v\in R^{1000}$. In our work, $f$ and $g$ are both multi-layer perceptrons (MLPs). $g$ has four layers with size 512, 512, 256, 256 and $f$ has three layers with size 128, 128, 32. Therefore, $\Theta _g \in R^{2000*256}$ and $\Theta _f \in R^{256*32}$. At the end we used a softmax layer for classification. We used layer normalization and ReLU activation for all the layers of $f$ and $g$. We set dropout rate to 0.35 for all the layers except the last layer of $f$. We set the learning rate to 0.001 and the batch size to 64. Therefore, each mini batch included 64 fashion sets. Finally, we trained our model until the validation loss stabilized which took 19 epochs. Our model is implemented using Tensorflow, and Adam optimizer is used to learn the parameters. All our experiments are run on GPU Tesla P100-PCIE-16GB. Prediction Tasks ---------------- An effective compatibility model, given an unseen fashion outfit, should accurately score the outfit based on how the items included match with each other. Besides, given an incomplete fashion outfit, it should also be able to provide suggestion on fashion item to fill in. With such objectives in mind, we design two prediction tasks to evaluate the effectiveness of FashionRN and FashionRN-VSE. We evaluated our method using the large dataset we collected from Polyvore (Section \[sec:Dataset\]). We performed two tests: - **Compatibility prediction test**: predict the compatibility score of a given fashion outfit. This test is a binary classification task, where the model should answer true if the given outfit is compatible, and false otherwise. - **Fill in the blank (FITB) test**: given an outfit and a number of candidate items, find the item that matches best with the existing items in the outfit. This test is a retrieval task, where given an incomplete fashion outfit, and a list of candidate fashion items, the model aims to score all of the candidate items and return the item with the highest compatibility score with the incomplete fashion outfit. These two tests are commonly used in the fashion recommendation literature for evaluating compatibility learning methods [@han2017learning; @hsiao2017creating; @vasileva2018learning]. Comparing Methods ----------------- To demonstrate the effectiveness of our proposed method, we compared our results with the following approaches and demonstrate our results in Table \[table1\] and Table \[table2\]. We evaluated these methods on the dataset described in Section \[sec:Dataset\]. For each method, we used the authors’ codes and their reported set of parameters. We have considered compatibility prediction as a binary classification task and have calculated Area Under Curve (AUC) score to compare these methods. - **Bi-LSTM + VSE** [@han2017learning]: A fashion outfit is considered as a sequence from top to bottom and a Bi-LSTM model is jointly trained with a visual-semantic embedding (VSE) model to learn compatibility. - **SiameseNet** [@veit2015learning]: SiameseNet uses a Siamese CNN to transform images into an embedding space in which compatible items are close to each other and are far away from incompatible items. After training the network uses a contrastive loss, the distance between item embeddings is used for estimating their compatibility. To compare with this network, we created compatible pairs by selecting items from the same outfit. Incompatible pairs were created by selecting items from different outfits. 
To measure the compatibility of an outfit using SiameseNet, we averaged the compatibility scores of all of the item pairs in that outfit. - **BPR-DAE** [@song2017neurostylist]: A latent compatibility space is learned by employing a dual autoencoder (DAE) network and a Bayesian Personalized Ranking (BPR) framework. We trained BPR-DAE similar to SiameseNet and considered the average compatibility score of all the item pairs in an outfit as its compatibility score. - **RAW-V**: The compatibility score of an outfit $S$ is measured based on the raw visual features of its items as: $$\begin{aligned} \label{eq4} m_s = \frac{1}{\binom{n}{2}}\sum_{i,j} d(v_i, v_j)\end{aligned}$$ $v_i$ and $v_j$ are the visual feature representations of items $i$ and $j$, extracted from a fined-tuned DenseNet [@huang2017densely] and $d(v_i , v_j) = v_i \cdot v_j$ is the cosine similarity between items $i$ and $j$. The compatibility of an outfit is obtained by averaging pair-wise compatibilities of all the pairs in the outfit. - **VSE**: We learned the joint visual semantic embedding proposed by Han et al. [@han2017learning] and measured compatibility similar to RAW-V. - **FashionRN**: Our proposed model considers a fashion outfit as a scene, and items in the outfit as objects in the scene. It then learns the compatibility of an outfit with arbitrary number of items using a Relational Network. - **FashionRN-VSE**: Our proposed model that builds on top of FashionRN, and adds in the component of VSE. The first four methods are popular in the literature for learning compatibility and the rest are for understanding how different components contribute to compatibility. As our method is mainly based on visual information, we did not compare our method with approaches which only rely on semantic information for learning compatibility [@hsiao2017creating]. ![Example test outfits in our compatibility prediction task and their scores.[]{data-label="compat"}](fig/compatibility_test.jpg){width="0.9\linewidth"} Compatibility Prediction {#sec:compatibility prediction} ------------------------ In this task, a number of items are given as input and we aim to find their compatibility score. For items that are compatible with each other, the model should answer yes, and false otherwise. This enables a recommendation system to recommend items based on their compatibility with a query or with items in a shopping cart. In addition, users can create their own outfits and know their compatibility. Approaches AUC ---------------------------------- ---------- Bi-LSTM + VSE [@han2017learning] 0.72 SiameseNet [@veit2015learning] 0.48 BPR-DAE [@song2017neurostylist] 0.53 RAW-V 0.61 VSE 0.45 Fashion RN **0.81** Fashion RN + VSE **0.88** : Performance of different approaches on the compatibility prediction test.[]{data-label="table1"} Table \[table1\] shows the performance comparison among different approaches for compatibility prediction task. This table shows that both of our models, FashionRN and FashionRN-VSE, achieve the best performance among the comparing methods, including the Bi-LSTM method which requires both visual and semantic information. This is because our Relational Network based model is inherently able to learn a variety of relations between items including their categories without requiring to have access to explicit semantic attributes and category labels. This is specially useful when semantic information is not available or is very noisy. We also observe that Bi-LSTM performance decreases on our dataset. 
This is likely due to our test dataset size (14,922 outfits) which is much larger than the test dataset (3,076 outfits) used by the authors [@han2017learning]. Table \[table1\] shows that our method performs better than the two comparing pair-wise methods (SiameseNet and BPR-DAE). This finding suggests that pair-wise methods fail to work well on learning the compatibility of outfits with more than two items. This is because a linear combination of pair-wise compatibility scores (e.g., averaging all the pair-wise scores) fails to capture the compatibility of an entire outfit. In our work, although we start by learning the relation between item pairs, we combine the pair-wise relations and pass them through multiple nonlinear layers to learn more powerful feature representations from an entire outfit. This can determine the compatibility of an outfit more accurately than simply averaging all the pair-wise compatibility scores. Methods Accuracy ---------------------------------- ---------- Bi-LSTM + VSE [@han2017learning] 0.34 SiameseNet [@veit2015learning] 0.35 BPR-DAE [@song2017neurostylist] 0.20 RAW-V 0.35 VSE 0.33 Fashion RN **0.52** Fashion RN + VSE **0.58** : Performance of different approaches on FITB test.[]{data-label="table2"} Figure \[compat\] shows qualitative results of our model for compatibility prediction. Compatible outfits have two or more non redundant items that have well-matching colors and share similar style. From Figure \[compat\], we can observe that our method can effectively predict if a set of items make a compatible outfit. For example, items in the first row, are all black/green and share a casual/sportive style and therefore they have a high compatibility score; Items in the second row have chic/formal style and are all cream or dark blue which together create a stylish contrast and therefore have a high compatibility score. Items of incompatible outfits may have inconsistent styles or colors. Incompatible outfits may also have redundant items such as two shirts. We can observe that our method is able to capture such concepts from visual information. In contrast to Bi-LSTM model, we do not need to feed any category labels or attributes (e.g., men, women, shirt, shoes), to our model to explicitly teach it that for example a men’s shirt is incompatible with a woman’s skirt, or an outfit with redundant items is incompatible. Our model is able to implicitly learn such information. For example, items in the fourth row do not have compatible colors/patterns and therefore have received a low compatibility score; items in the fifth row have compatible colors, but a man’s shirt and a pair of men’s jeans do not match with women’s heels. Thus, this outfit has also received a low score; finally, the outfit in the last row has two bottoms (skirt and leggings) and our network has given a low compatibility score to this outfit. ![Example results from the FITB task using Fashion RN model. Items in each row are ranked based on their output scores and held out items are highlighted in rectangles.[]{data-label="fitb"}](fig/fitb.jpg){width="0.9\linewidth"} Fill In The Blank (FITB) Test {#sec:compatibility prediction} ----------------------------- In this task an outfit and a number of candidate items are given and the goal is to find the item that best matches with the existing items in the outfit. This is useful when a user has a set of items (e.g., a shirt and a pair of shoes) and wishes to find another item (e.g., a handbag) that best matches with the rest of the outfit. 
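Operationally, this retrieval amounts to scoring each candidate appended to the partial outfit and returning the highest-scoring one. Below is a minimal sketch, assuming a trained scoring model with the interface of the `FashionRNHead` sketch above; the function name is illustrative.

```python
import torch

def fill_in_the_blank(model, partial_outfit, candidates):
    """Return the index of the candidate that best completes the outfit.

    partial_outfit: tensor (n_items, feat_dim) with the given items' features.
    candidates:     tensor (n_cand, feat_dim) with the candidates' features.
    """
    scores = []
    with torch.no_grad():
        for cand in candidates:
            # Append the candidate to the partial outfit and score the result.
            outfit = torch.cat([partial_outfit, cand.unsqueeze(0)], dim=0)
            scores.append(model(outfit).item())
    return max(range(len(scores)), key=scores.__getitem__)
```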
To run this test, we created a FITB dataset using our positive test set from section \[sec:Dataset\]. In each test outfit, we randomly held-out one item. We then randomly selected three items from other outfits that have the same category as the held-out item. For example, if the held-out item was a shirt, all there randomly selected items were shirts. This is to ensure that the network cannot easily filter out items that already exists in the outfit without needing to understand compatibility. We then found the item among the four candidates that maximizes the compatibility score of the entire outfit. Table \[table2\] shows the results of FITB test for all the comparing methods. Similar to the compatibility prediction task, we observed that our model outperforms all the baselines. The performance of this task is also improved through utilizing joint visual-semantic embeddings in our model (Fashion RN + VSE). The reason for this improvement in the FITB test is that in many cases there is more than one compatible item among the candidates. While the Fashion RN model is able to rely on visual information to find items that are compatible with the given outfit, adding semantic information can improve ranking among the compatible candidates. For example, in the last row of Figure \[fitb\] the held-out item is correctly detected compatible (score = 0.98) by the Fashion RN model. However, the first shirt is also compatible with the outfit and has received a higher score. We observed that adding the semantic information (Fashion RN + VSE) improved ranking of the compatible candidates in this example and resulted in choosing the right shirt in the last row of Figure \[fitb\]. Similar to the compatibility prediction task, we observe that Bi-LSTM method performs poorly on our dataset. As others have noted [@hsiao2017creating] this is probably because the FITB test set provided by Han et al. [@han2017learning] contains poor choices of negatives. In their FITB dataset, negative items may not be the same type as the held-out item. For example, the test outfit may have a shirt, a pair of jeans, and be missing a handbag. If some of the candidates are shirts, which already exists in the outfit, the network can easily eliminate them based on their category without needing to infer compatibility. Thus, to enforce the model to reason based on compatibility, we ensured that all the candidates are the same type as the missing item. Figure \[fitb\] shows successful and unsuccessful examples of this test using our model. In most of the test outfits the held-out item is among the top two items in the ranked list and shows a high compatibility score. Fashion Compatibility Embedding {#sec:qualitative} =============================== Besides the capability of predicting compatibility given a complete fashion outfit and fill-in-the-blank given an incomplete outfit, FashionRN and FashionRN-VSE are also able to learn item and outfit embedding through hidden layers. More specifically, $v$ learned in FashionRN can be viewed as the items’ *compatibility features*. As discussed previously, the concept of compatibility is fundamentally different from similarity, since items that are visually similar to each other are not necessarily compatible in fashion outfits, and vice versa. To demonstrate the learned compatibility embedding of fashion items, we take the learned embedding, transformed them into two-dimensional embedding by using TSNE algorithm [@maaten2008visualizing]. 
To show FashionRN’s capability of learning compatibility beyond visual similarity, we compare this visualization with one obtained from DenseNet features for the same set of 1000 randomly chosen fashion items. The results are shown in Figure \[fig:visualization\]. As shown in Figure \[fig:visualization\], at first glance the scatter plots of the same 1000 fashion items created by the DenseNet embedding and the FashionRN embedding are very different. A closer look shows that items with similar colors and shapes are closer to each other in the DenseNet embedding space, while items that make sense to go together in an outfit are closer to each other in the FashionRN embedding space. This shows that FashionRN captures the underlying item compatibility in addition to visual similarity. Conclusion {#sec:conclusion} ========== In this paper, we proposed a method for learning fashion compatibility. We considered an outfit as a scene and its items as objects in the scene, and developed FashionRN and FashionRN-VSE, RN-based models, to learn the visual relations between items and determine their compatibility. We collected a large dataset from Polyvore and conducted different experiments to demonstrate the effectiveness of our method. In addition to addressing some of the limitations of existing models, our model showed state-of-the-art performance in both the compatibility prediction task and the fill-in-the-blank test. Besides their capability in the above prediction tasks, FashionRN and FashionRN-VSE are also able to learn item and outfit embeddings that carry the underlying compatibility. To showcase such results, we visualized the learned embeddings of the same items using both DenseNet and FashionRN. Through this visualization, we find that FashionRN captures the compatibility among items better than DenseNet does. [^1]: http://www.shopify.com/enterprise/ecommerce-fashion-industry, (accessed on 2018-07-17) [^2]: http://www.stitchfix.com/ [^3]: http://www.asos.com/ [^4]: http://www.amazon.com/amazon-fashion [^5]: http://wwd.com/business-news/business-features/jill-standish-think-tank-1202941433/ [^6]: We empirically found that the order of objects in each pair does not impact the accuracy and thus our time complexity is $O(\binom{n}{2})$ and not $O(n^2)$ [^7]: http://www.chictopia.com [^8]: https://www.shoplook.io
{ "pile_set_name": "ArXiv" }
--- abstract: 'We present a simple model of earthquakes on a pre-existing hierarchical fault network. The system self-organizes on long time scales in a stationary state with a power law Gutenberg-Richter distribution of earthquake sizes. The largest fault carries irregular great earthquakes preceded by precursors developing over long time scales and followed by aftershocks obeying an Omori’s law. The cumulative energy released by precursors follows a time-to-failure power law with log-periodic structures, qualifying a large event as an effective dynamical (depinning) critical point. Down the hierarchy, smaller earthquakes exhibit the same phenomenology, albeit with increasing irregularities.' address: - '$^1$ Department of Physics, University of Southern California, Los-Angeles, CA 90089-0484.' - '$^2$ Department of Earth Sciences, University of Southern California, Los-Angeles, CA 90089-0740.' - | $^3$ Department of Earth and Space Sciences and Institute of Geophysics and Planetary Physics\ University of California, Los Angeles, California 90095-1567 - | $^4$ Laboratoire de Physique de la Matière Condensée, CNRS URA 190\ Université des Sciences, B. P. 70, Parc Valrose, 06108 Nice Cedex 2, France author: - 'Y. Huang$^1$, H. Saleur$^1$, C. G. Sammis$^2$, D. Sornette$^{3,4}$' title: 'Precursors, aftershocks, criticality and self-organized criticality' --- \#1 \#1\#2\#3\#4 [ =\#2 ]{} \#1\#2\#3\#4 [ =\#2 =to \#2 ]{} rotate 0.2cm Seismologists model dynamical rupture propagation with complex friction laws and barriers [@Madariaga] and attempt to ascribe the earthquake complexity to nonlinear processes and/or heterogeneities. From a more global point of view, it has been suggested that earthquakes are somewhat similar to critical points [@Allegre], and can be addressed using tools of the renormalization group. A very broad, but quite ill-defined, perspective is also available with the concept of self-organized criticality [@Bak; @SS]. These various points of view model different properties at different time scales; it is hard to see how they relate to each other, and whether they are part of a unique meaningful “theory of earthquakes”. A closely related puzzle is whether criticality and self organized criticality are compatible. The present work attempts to unify a significant fraction of this earthquake phenomenology, and to answer this puzzle: we define a simple model that exhibits the self-critical organization of the crust at large time scales, the critical nature of large earthquakes and the short-time rupture dynamic properties. We start with the following ingredients. $(i)$ The faults are organized in a hierarchical geometrical structure [@hierarchy; @Andrews]. We do not address the problem of the construction of the fault patterns themselves which involves much larger times scale ($10^{5-6}$ years) compared to the time scales we describe ($10^{0-5}$ years). $(ii)$ The tectonic plate is driven at a slow average uniform rate and we take into account its heterogeneities and the existence of relaxation processes by allowing for fluctuations in the local rate of loading. $(iii)$ When a threshold is reached, a redistribution occurs on adjacent faults, with amplitude controlled by the size of the faults. Our model is an hybridization of the sandpile model [@Bak] and of the fractal automaton [@Barriere]. As in [@Barriere], the cell sizes are arranged as a discrete fractal lattice. Each cell can be viewed as representing the region which is elastically unloaded when a fault fails. 
The fractal distribution of cell sizes then represents a fractal distribution of fault lengths [@hierarchy]. The number of fractal generations does not appear to be a crucial parameter. Most simulations were carried out with $8$ generations, but some runs with $12$ generations did not exhibit major differences. We load the system by dropping particles at regular intervals (which we use as a clock) onto the grid at random sites. The addition of a particle is analogous to energy loading. The probability that a particle is added to a particular cell is proportional to the area $A$ of this cell. Although the exact cell (fault domain) to receive the next increment in stress is random, the entire grid is loaded uniformly at a uniform rate over the long-term. This represents the long-term uniform strain at the boundaries between moving tectonic plates. The short term random heterogeneity in loading represents heterogeneity in crustal structure or in upper mantle flow and the associated relaxation processes. Each cell becomes unstable when it contains $n \times 4A$ particles, $n$ being a parameter. It then breaks and redistributes $4A$ of its particles to its immediate neighbors. The number of particles redistributed to an adjacent cell is proportional to the linear dimension of the cell. The $(n-1)\times 4A$ particles that are not redistributed are considered as lost, like the particles which are redistributed outside the grid on the plate border. Such energy losses we call “cooling”. Since the particles represent energy, the model assumes that a fault fails when the stored energy reaches a critical threshold. The key difference with [@Bak] is that the energy must reach the critical level over the entire area of a cell before it is allowed to break. Due to the fractal structure, cells of widely different sizes are thus coupled together, mimicking the multi-scale interactions between faults. The clock is defined by the particle drops. Cascades are triggered by the addition of a single additional particle, [*i.e.*]{} they occur instantaneously (a delay can also be introduced in the aftershock sequences, see below). At variance with the rules of [@Barriere] and in accord with the standard sandpile model [@Bak], the size of an earthquake is determined by the size of the cascade, and is proportional to the total number of particles cascading. We thus identify the cascade as complexity in the mainshock, e.g. the linking of fault strands and segments or even the linking of adjacent faults such as occured in the 1994 Landers earthquake [@Landers]. We define precursors as those regional cascades which precede a mainshock and the aftershocks as the cascades which follow it. We then use the time scale defined by the particle drops to explore the temporal structure of both the foreshock and aftershock sequences. The cooling, [*i.e.*]{} the disappearence of particles during an event, represents a loss of stored elastic energy due to the earthquake. Consider for instance an elastic medium under constant applied shear strain, which suddenly undergoes a rupture in the form of a dislocation or a crack. The stress field is redistributed with enhancement at the crack tip as well as screening at some other places, while both the total shear stress and total elastic energy decrease. The amount of loss is a function of the nature of the rupture and of geometry. For instance, in the Griffith problem, a crack of length $2c$ is introduced into a rod of length $L$ and section $\sim L^2$. 
Under constant strain, the relative energy loss ${\delta E \over E}$ incurred by the introduction of the crack is ${5\alpha c^2 \over L^2}/(1+ {5\alpha c^2 \over L^2})$ [@Scholz] with $\alpha$ depending on the geometry and of the order of unity. If we take $c$ between $L/4$ and $L/2$, we get ${\delta E \over E}$ anywhere between about $0.2-0.6$. This loss corresponds to friction on the fault plane, the creation of surface energy in the extensive crushing which occurs in the fault zone [@Andrews], and in the radiation of elastic waves [@Radiation] whose energy is ultimately lost as heat. Comparing with ${\delta E \over E} = {n-1 \over n}$ in our model, we see that $n\approx 2$ is not unreasonable. We take this value for most of our simulations below. These large uncertainties in cooling are actually not critical - in fact, provided $n$ is not too close to 1, the results are largely independent of $n$. The choice $n=1$ of [@Barriere] leading to internal conservation (except at the boundaries) does not seem relevant for earthquakes. As in [@Barriere], we identify an event cascading through the largest cell on the grid as a main shock for our system, [*i.e.*]{}, the largest regional event. After a transient depending on the initial conditions, the system self-organizes in a stationary state with a power law Gutenberg-Richter distribution of earthquake sizes $P(E) dE \sim E^{-(1.8 \pm 0.1)} dE$. This can be called a self-organized critical state, measured by a statistics encompassing many times the largest time intervals between the largest earthquakes. We generated about $100$ main shocks on an $8^{th}$ order fractal structure by adding a total of about $4.6 \cdot 10^7$ particles to the system. The average number of time units (particle drops) between main shocks is $T=4.6 \cdot 10^5$, with fluctuations of this interval of the order of $10\%$. This quasi-periodicity occurs because our main shocks are characteristic earthquakes [@Schwartz] which completely control the energy redistribution at large scales. Their existence is not in contradiction with the self-organized critical state: they simply arise because of finite size effects. For the purpose of comparison with real earthquakes, we choose our units of time so that $T=100$ years, [*i.e.*]{} roughly $10^4$ time units correspond to two years. For most main shocks, precursory built-up of activity is clearly visible, and lasts $1-2 \cdot 10^5$ time units, or about $20-40$ years. A decaying activity posterior to the main shocks is also visible with a lifetime of about $0.1-.2 \cdot 10^5$ time units, [*i.e.*]{} about $2-4$ years. Quite often, the interval between two successive main shocks will not be as quiet, but is interrupted by the breaking of one of the second largest cells. The time interval between such smaller shocks is much less regular than the time interval between main shocks. We start by discussing aftershocks. The common wisdom holds that aftershocks involve a time delay between the application of the stress and the subsequent rupture. This delay presumably involves an intermediate relaxation time, whose effect could resemble that of diffusion or visco-elastic processes. In the present model, the spatial heterogeneity of the loading rate already reflects the existence of delay mechanisms. 
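Before turning to the statistics of fore- and aftershocks, the loading and cascade rules described above can be condensed into a short sketch. This is a deliberately simplified illustration, not the original code: the hierarchical arrangement of cells is assumed to be supplied through the `neighbors` and `areas` tables, boundary losses are handled only implicitly, and weighting the redistribution by the neighbors' linear size ($\sqrt{A}$) is one possible reading of the rule stated above.

```python
import random
from collections import deque

def load_and_cascade(cells, neighbors, areas, n=2, steps=10_000, rng=random):
    """Toy version of the loading/cascade rules described in the text.

    cells     : list of cell labels (the hierarchical geometry is assumed given)
    neighbors : dict cell -> list of adjacent cells
    areas     : dict cell -> area A of the cell
    A cell topples when it holds n*4*A particles; 4*A particles are then passed
    to its neighbours (shared here in proportion to their linear size sqrt(A)),
    while the remaining (n-1)*4*A particles are lost ("cooling").
    Returns the list of cascade sizes (particles moved per event).
    """
    load = {c: 0.0 for c in cells}
    total_area = sum(areas[c] for c in cells)
    events = []
    for _ in range(steps):
        # drop one particle on a cell chosen with probability proportional to its area
        r, acc = rng.random() * total_area, 0.0
        for c in cells:
            acc += areas[c]
            if r <= acc:
                load[c] += 1.0
                break
        # relax all unstable cells: the cascade is instantaneous on the drop clock
        queue = deque(c for c in cells if load[c] >= n * 4 * areas[c])
        moved = 0.0
        while queue:
            c = queue.popleft()
            if load[c] < n * 4 * areas[c]:
                continue
            load[c] -= n * 4 * areas[c]          # the cell resets completely
            share = 4 * areas[c]                 # only this part is redistributed
            moved += share
            weights = [areas[nb] ** 0.5 for nb in neighbors[c]]
            for nb, w in zip(neighbors[c], weights):
                load[nb] += share * w / sum(weights)
                if load[nb] >= n * 4 * areas[nb]:
                    queue.append(nb)
        if moved:
            events.append(moved)
    return events
```

Running the loop for many drops and histogramming `events` gives the Gutenberg-Richter-like size statistics discussed above.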
Inspired by the analysis in terms of critical phenomena (see below), we then plot the cumulative number of events as a function of time [@foot] starting from an initial date of about $10^4$ time units posterior to $t_c$ and going backwards in time. If Omori’s Law ${dn(t)\over dt}\propto {1\over (t-t_c)^p}$ holds, this gives $n(t) \sim (t-t_c)^{1-p}$ (the theoretical divergence at long times for $p<1$ is truncated by the existence of background seismicity). Fits performed for $15$ of our model aftershock sequences gave a distribution of the exponent $p$ centered around $p=.9$, with small fluctuations, $p \in [0.85,1.05]$, in good agreement with the exponent measured for earthquakes. These results are independent of the value of the dissipation parameter $n$, provided $n$ is not too close to $1$. As $n \to 1$, in particular when $n=1$, as in [@Barriere], the exponent $p$ is often near $1$, although it has large fluctuations, and can be found as low as $p=0.7$. Simulations on other fractals indicate that the exponent $p$ is also near unity for different geometry. The model can be improved by the addition of a delay mechanism with a characteristic time $\tau$ for the avalanche associated with the main shock (this has no qualitative effect on the statistics of events, nor on the analysis of precursors). This had the drawback of introducing one additional parameter, but allows a more natural analysis of the data. Fortunately, results were found totally insensitive to the value of $\tau$ over a wide range ($\tau\leq 10^4$) and in perfect agreement with Omori’s law again. We now turn to precursors. While virtually all large shallow earthquakes have easily recognizable aftershock sequences, the same can not be said for foreshocks. In fact, if foreshocks are defined to occur with the same time and space clustering as aftershocks, then most large events do not have a recognizable foreshock sequence [@Jones] It is only when the time scale is extended to tens of years and the space to hundreds of kilometers that precursor sequences can be recognized [@Sykes; @Knopoff; @logperiodic]. We thus use the term “precursor” to distinguish the two definitions. We analyze data in the same spirit as in [@logperiodic; @logperiodic2]: we plot the cumulative Benioff strain $\epsilon(t)$ (square root of the energy release in an event) as a function of time, starting back in time within a range $1-3 \cdot 10^5$ ($20-60$ years) prior to the main shock. We use a lower cut-off such as to exclude the crowd of small background events and test for various cut-off values. The conclusion is, in most cases, a reasonably clear evidence for a power law behavior, decorated by log periodic oscillations [@logperiodic; @logperiodic2] $$\epsilon(t)=A - B (t_c-t)^m\left[1+C\cos\omega\ln(t_c-t)\right]. \label{benioff}$$ The fit by this equation of the cumulative foreshock time series for the biggest main shock is shown in Figure 3. This example is quite typical. Much better fits are sometimes obtained. Bad fits were rare but sometimes occured. The improvment of the $\chi^2$ when using (\[benioff\]) compared with a pure power law fit ($C=0$) was always greater than two. The power law in (\[benioff\]) is the signature of a critical dynamical behavior, reminiscent of depinning transition. The log-periodic correction in (\[benioff\]) reflects the discrete scale invariance (DSI) of the hierarchical fault network on which the events occur [@logperiodic2] (we observed similar oscillations decorating the Gutenberg-Richter law). 
Intriguingly, the amplitude $C$ of these oscillations is much bigger than for equilibrium statistical mechanics models [@Ising]: the threshold dynamics in the earthquake model seems much more sensitive to DSI. We discuss below how the value of $\omega$ is related to our fractal structure. Values of $m$ and $\omega$ fluctuate more than the exponent $p$ for aftershocks. In $80\%$ of the cases however, $m$ was found in the interval $m\in [0.2,0.6]$, in good agreement with experimental data [@logperiodic; @logperiodic2]. $\omega$ was found in the interval $\omega\in [6,12]$ corresponding to $1.7 \leq \lambda \leq 2.8$ where $\omega\equiv 2\pi/\log\lambda$. These results are not sensitive to the dissipation parameter $n$. Only the time scale is modified. In the extreme case of $n=1$ (without cooling), the seismic activity is much more random, and the rate of events looks much more constant between main shocks than it does with cooling: only about $10^4$ time units ($2$ years) before the main shock does the cumulative Benioff strain develop a power law behavior on the approach to the main shock, with properties that are then similar to the case $n=2$. Beside the Benioff strain (\[benioff\]), we studied the correlation length $\xi$ defined as the maximum spreading distance of the cascades. To get more stable numerical results, we calculated its integrated value, which does exhibit a power law singularity described by $$\begin{aligned} \xi &\propto &(t_c-t)^{-\nu_1},\ t<t_c,\ \nu_1\in [0.6,0.8]\nonumber \\ \xi &\propto & (t-t_c)^{-\nu_2},\ t>t_c,\ \nu_2\approx 1. \label{rtf}\end{aligned}$$ $\nu_1$ and $\nu_2$ have smaller fluctuations than the exponent $m$ of the Benioff strain. Observe that they are different on both sides of the critical point, a situation which is known to be possible in disordered systems in particular. The divergence of $\xi$ in (\[rtf\]) confirms the critical point picture. Moreover, the exponent provides a relation between DSI in time and DSI in space. From our fractal geometry, the latter is characterized by $x\to 2x$. Substituting $\xi \to 2\xi$ in (\[rtf\]) implies $|t-t_c|\to (\lambda)^{-1} |t-t_c|$ with $\lambda\in [2.4-3.2]$, [*i.e*]{} $\omega\in [5.4,7.2]$, in reasonable agreement with what we directly observed. We now use the time-to-failure law (\[benioff\]) to try to forecast the main shock by fitting it to the “experimental” data up to a cutoff time prior to the main shock, as proposed in [@logperiodic]. Overall, we find that $95\%$ of the main shocks can be predicted with an uncertainty less than a year, four years in advance. This must be compared with the typical fluctuations of about $10$ years of the time intervals between two main consecutive shocks. There are however cases where the predictability is much higher, and also extreme cases where the precursory activity is essentially nonexistent. We also studied events occuring on the second largest cells. The analysis is slightly more complicated because two such events can be close in time but well-separated in space. We therefore restricted our attention to well isolated cases. The analysis of precursors and aftershocks gives similar results as for the foregoing events on largest cells. The only difference is that the time scales involved are shorter (roughly by a factor of $10$), and the fluctuations in exponents somewhat bigger. 
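In practice, a fit of Eq. (\[benioff\]) to a (simulated or observed) precursory sequence can be set up along the following lines. This is only a sketch: the function names, initial guesses, and parameter bounds are arbitrary choices, `tc_guess` must exceed the last data time, and a constant phase inside the cosine is often added as an extra fit parameter.

```python
import numpy as np
from scipy.optimize import curve_fit

def benioff(t, A, B, m, C, omega, tc):
    # Eq. (benioff): cumulative Benioff strain with a log-periodic correction
    dt = tc - t
    return A - B * dt**m * (1.0 + C * np.cos(omega * np.log(dt)))

def power_law(t, A, B, m, tc):
    # pure power law (C = 0), used for the chi^2 comparison quoted in the text
    return A - B * (tc - t)**m

def fit_precursors(times, cum_strain, tc_guess):
    """Least-squares fit of the precursory cumulative Benioff strain;
    all entries of `times` must precede the (fitted) failure time tc."""
    t = np.asarray(times, dtype=float)
    s = np.asarray(cum_strain, dtype=float)
    p0 = (s[-1], 1.0, 0.4, 0.1, 8.0, tc_guess)
    lower = (-np.inf, 0.0, 0.0, -1.0, 0.0, t[-1] + 1e-9)
    upper = (np.inf, np.inf, 1.0, 1.0, 20.0, np.inf)
    popt, _ = curve_fit(benioff, t, s, p0=p0, bounds=(lower, upper))
    return popt   # (A, B, m, C, omega, tc)
```

The fitted value of $t_c$ is then the predicted failure time used in the forecasting test described next.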
This is presumably due to the fact that the relative size of the fluctuations of the “energy” field with respect to that of these earthquakes is larger for smaller events due to the influence of the earthquakes at the upper levels. Our results thus indicate that reasonable intermediate-time earthquake prediction may be achievable, as proposed in [@logperiodic; @logperiodic2]. From the simulations, there is clear evidence that the predictabilty depends on the “temperature” of the system - the larger the loss of energy after a main shock, the better is the prediction of the next one. This is particularly clear for the first main shocks obtained when initiating the empty system, for which predictabilty is very high. The astonishing accuracy observed in [@logperiodic] for the Loma Prieta example might thus be due to the existence, in that case, of a very “cool” seismic system. In addition, we have demonstrated for the first time the possible coexistence of self-organized criticality and criticality. Up to now, they were considered as dual (mutually exclusive) modes of behavior: critical depinning occurs when the applied force reaches a critical value beyond which the system moves globally, while self-organized criticality needs a slow driving velocity and describes the jerky steady-state of the system. The critical nature of our large cascades emerges from the interplay between the long-range stress-stress correlations of the self-organized critical state and the hierarchical geometrical structure: a given level of the hierarchical rupture is like a critical point to all the lower levels, albeit with a finite size. The finite size effects are thus intrinsic to the process. A. Cochard and R. Madariaga R., Pure and Applied Geophysics [**142**]{}, 419 (1994); J.M. Carlson, J.S. Langer and B.E. Shaw, Rev. Mod. Phys. [**66**]{}, 657 (1994); Y. Benzion and J. Rice, J. Geophys. Res.[**100**]{}, 12959 (1995). C.J. Allègre, J.L. Le Mouel and A. Provost, Nature [**297**]{}, 47 (1982); T.L. Chelidze, Phys. Earth Planet.Int. [**28**]{}, 93 (1982); A.Sornette and D.Sornette, Tectonophysics [**179**]{}, 327 (1990); A.G. Tumarkin and M.G. Shnirman, Computational seismology [**25**]{}, 63 (1992). P. Bak P. and C. Tang, J.Geophys.Res.[**94**]{}, 15635 (1989). A.Sornette and D.Sornette, Europhys.Lett. [**9**]{}, 197 (1989). G.C.P. King, Pageoph [**124**]{}, 567 (1983); [**123**]{}, 806 (1985). D.J. Andrews, J.Geophys.Res. [**94**]{}, 9389 (1989). B. Barriere and D.L. Turcotte, Phys. Rev. E [**49**]{}, 1151 (1994). Y.G. Li, J.E. Vidale, K. Aki, C.J. Marone et al., Science [**265**]{}, 367 (1994). C.H. Scholz, The mechanics of earthquake and faulting (Cambridge University Press, Cambridge, UK, 1990). H. Kanamori, J. Mori, E. Hauksson, T.H. Heaton, L.K. Hutton and L.M. Jones, Bull. Seismol. Soc. Amer. [**83**]{}, 330 (1993). D.P. Schwartz and K.J. Coopersmith, J.Geophys.Res. [**89**]{}, 5681(1984) When computing this cumulative number, every breakdown in a cascade is counted as one event - this amounts to putting a delay of one time unit. D.C. Agnew and L.M. Jones, J. Geophys. Res. [**96**]{}, 11959 (1991). L.R. Sykes and S. Jaumé, Nature [**348**]{}, 595 (1990). L. Knopoff, T. Levshina, V.I. Keilis-Borok and C. Mattoni, J. Geophys. Res. [**101**]{}, 5779 (1996). D. Sornette and C.G. Sammis, J.Phys.I France [**5**]{}, 607 (1995) H. Saleur, C.G. Sammis and D. Sornette, J.Geophys.Res. [**101**]{}, 17661 (1996); D.J. Varnes and C.G. Bufe, Geophys. J. Int. [**124**]{}, 149 (1996). B. Derrida, C. Itzykson, J. M. 
Luck, Comm. Math. Phys. [**94**]{}, 115 (1984).
{ "pile_set_name": "ArXiv" }
--- abstract: 'We study a supersymmetric deconstructed gauge theory in which a warp factor emerges dynamically, driven by Fayet-Iliopoulos terms. The model is peculiar in that it possesses a global supersymmetry that remains unbroken despite nonvanishing D-term vacuum expectation values. Inclusion of gravity and/or additional messenger fields leads to the collective breaking of supersymmetry and to an unusual phenomenology.' author: - 'Christopher D. Carone' title: A Peculiar Dynamically Warped Theory Space --- [ address=[Particle Theory Group, Department of Physics, College of William and Mary, Williamsburg, VA 23187-8795]{} ]{} Introduction ============ In this talk[^1], we consider a four-dimensional, linear “moose" model that deconstructs [@decon] a slice of five-dimensional Anti-de Sitter (AdS) space [@deconads]. The profile of link field vacuum expectation values (vevs) along the moose can be chosen to replicate the effects of a warp factor in the higher-dimensional theory. The question we study is whether the necessary profile can be generated dynamically and naturally. In Ref. [@thepaper], we present a number of examples where a monotonically varying warp factor is obtained by assuming a translational symmetry along the moose and specific choices for boundary conditions at its ends. Here, we focus on a supersymmetric example in which a warp factor is driven by Fayet-Iliopoulos (FI) terms. The model is peculiar in that it possesses a global supersymmetry that remains unbroken despite nonvanishing D-term vevs. Inclusion of gravity and/or additional messenger fields leads to the collective breaking of supersymmetry, with interesting consequences. Other applications of deconstruction in model building can be found in Refs. [@more1; @more2]. The Model ========= The model we consider is a 4D ${\cal N}$=1 SUSY U(1)$^n$ moose theory. The link fields consist of chiral multiplets $\phi_i$ with charges $(q_i,q_{i+1})=(+1,-1)$, where $i$ labels the gauge group factor. Conjugate superfields $\overline{\phi}_i$ are included to cancel anomalies. The scalar potential for the link fields is given by $$V_D=\sum_{i=1}^n D_i^2, \label{eq:pot}$$ where $$D_i = g\left( |\phi_{i}|^2 - |\phi_{i-1}|^2 - |\overline{\phi}_{i}|^2 + |\overline{\phi}_{i-1}|^2 + \xi_{i}\right) \,\,\,, \label{eq:D-terms}$$ and where we define $\phi_0=\phi_{n}=0$. Here $g$ is the common gauge coupling and $\xi_i$ is the FI term for the $i^{th}$ group. This potential is minimized when $$\left<\phi_i\right>\,\left(\left<D_i\right>-\left<D_{i+1}\right>\right)=0\ ,$$ which implies that the vacua of interest generically have equal $D$-terms, $$\left<D_i\right>=\frac{\sum_j g\xi_j}{n}\equiv D \,\,. \label{eq:dterms}$$ The scalar vevs $v_i$ and $\overline{v}_i$ satisfy the recursion relation $$(|v_{i+2}|^2-|\overline{v}_{i+2}|^2) - 2(|v_{i+1}|^2-|\overline{v}_{i+1}|^2)+ (|v_i|^2 - |\overline{v}_i|^2) = (\xi_{i+1}-\xi_{i+2}) \,\,\, ,$$ which is a discretized form of $$\frac{\partial^2|\phi(y)|^2}{\partial y^2} = -\frac{\xi'(y)}{a},$$ where $a=1/(g v_1)$ is the lattice spacing. Integrating this result and expressing the link profile $\phi(y)$ in terms of the warp factor, one finds $$\frac{\partial e^{-{f}(y)}}{\partial y}=(-g^2 \xi(y)+gD)a\,,\,\, \,\,\,\mbox{ where } \,\,\,\,D/g=\int_0^R dy \,\xi(y)/R \,.$$ Notice that any desired warp factor can be obtained by setting $$\xi(y)=\widetilde{\xi}(y)+D/g = \widetilde{\xi}(y) +\int_0^R dy\, \xi(y)/R \label{eq:desire}$$ and choosing an appropriate function $\widetilde{\xi}(y)$. However, Eq. 
(\[eq:desire\]) is self-consistent only if $$\int_0^R dy\,\widetilde{\xi}(y)=0 \,\,\,.$$ A monotonically varying warp factor is possible provided that $\widetilde{\xi}(y)$ receives an opposite sign contribution at the boundary. The Spectrum ============ The Kaluza-Klein spectrum of this model is surprising, given the non-vanishing $\langle D_i \rangle$ in Eq. (\[eq:dterms\]). The masses of the vector and chiral multiplets originate from the kinetic terms $${\cal L}\supset\int d^4\theta\, \sum_i \Phi_i^\dagger\,\exp\left[g(V_{i}-V_{i+1})\right]\,\Phi_i + \overline{\Phi}_i^\dagger\, \exp\left[g(-V_{i}+V_{i+1})\right]\,\overline{\Phi}_i.$$ One finds that the gauge boson mass matrix is $$m_{\rm gauge}^2 = 2 g^2 \left(\begin{array}{ccccccc} v_1^2 && -v_1^2 && && \\ -v_1^2 && v_1^2+v_2^2 && -v_2^2 && \\ && -v_2^2 && v_2^2+v_3^2 &-v_3^2 & \\ &&&& \ddots & \ddots & \\ && &&& -v_{n-1}^2 &\ v_{n-1}^2 \end{array} \right) \,\,\,.$$ The mass matrix for the link field fermions and the gauginos is such that $$M^2_{\rm fermions}=2 g^2 \left(\begin{array}{cc} \Theta \Theta^\dagger & \\ & \Theta^\dagger \Theta \end{array} \right)\ \,\,\,,$$ where the $n\times(n-1)$ dimensional matrix $\Theta$ is given by $$\Theta=\left(\begin{array}{cccc} v_1 & & & \\ -v_1 & v_2 & & \\ & \ddots & \ddots & \\ & & -v_{n-2} & v_{n-1} \\ & & & -v_{n-1} \end{array} \right)\ .$$ Clearly, $2 g^2 \Theta \Theta^\dagger \equiv m_{\rm gauge}^2$, so that half of the fermion spectrum coincides with the gauge boson spectrum. The scalar spectrum, on the other hand, may be obtained by expanding Eq. (\[eq:pot\]) about the its minimum. One finds $${\cal L}\supset\frac{1}{2}(\varphi_i^\dagger \ |\ \varphi_i)\,g^2\left(\begin{array}{c|c}\Theta^\dagger \Theta \ & \ \Theta^\dagger \Theta \\ \hline \Theta^\dagger \Theta\ & \ \Theta^\dagger \Theta \end{array}\right) \left(\begin{array}{c}\varphi_i \\ \hline \varphi_i^\dagger \end{array}\right) \,\,\,.$$ The imaginary modes $(\varphi_i-\varphi_i^\dagger)/\sqrt{2}$ have vanishing masses, and correspond to the would-be Goldstone bosons of the spontaneous symmetry breaking U(1)$^n \rightarrow U(1)$. The real modes $(\varphi_i+\varphi_i^\dagger)/\sqrt{2}$ have the mass matrix $$M^2_{scalars} = 2 g^2 \Theta^\dagger \Theta \,\,\,$$ which coincides precisely with the remaining massive fermion modes. Finally, the $\bar{\phi}$ scalars and their fermionic partners remain massless. Although $n$ FI terms are present, we conclude that the KK spectrum remains exactly supersymmetric. This peculiar result can be understood by considering a simpler theory: a 4D ${\cal N}=1$ SUSY U(1) gauge theory with no matter, plus an FI term. This theory also has an exactly supersymmetric spectrum. The sole effect of the FI term is to introduce a cosmological constant, which is irrelevant if gravity is not included. Precisely the same is true in our model. One can show that the potential Eq. (\[eq:pot\]) has a non-vanishing vacuum energy density $(\sum_i \xi_i)^2/n$. The effects of SUSY breaking reappear in the particle spectrum if the model is coupled to another sector. Imagine that we introduce a vector-like pair of chiral superfields that are charged only under the first U(1) factor. The nonvanishing $D_1$ vev will split the squared masses of their scalar components by $\pm 2\left<D_1\right>$. If these fields are also charged under the gauge groups of the minimal supersymmetric standard model (MSSM), then SUSY-breaking effects will be gauge mediated to the observable sector. 
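Returning for a moment to the spectrum discussed above, the statement that the massive spectrum remains supersymmetric can be checked numerically in a few lines. The sketch below, with arbitrary illustrative values for the link vevs $v_i$ and $g=1$, builds $\Theta$ and verifies that $2g^2\Theta\Theta^{T}$ (gauge bosons, and half of the fermions) and $2g^2\Theta^{T}\Theta$ (the real scalar modes and the remaining massive fermions) share the same nonzero eigenvalues.

```python
import numpy as np

def theta_matrix(v):
    """Build the n x (n-1) matrix Theta of the text from the link vevs v_1..v_{n-1}."""
    v = np.asarray(v, dtype=float)
    n = len(v) + 1
    Theta = np.zeros((n, n - 1))
    for i, vi in enumerate(v):
        Theta[i, i] = vi
        Theta[i + 1, i] = -vi
    return Theta

g = 1.0
Theta = theta_matrix([1.0, 1.3, 1.7, 2.2])        # arbitrary illustrative vevs
m2_gauge  = 2 * g**2 * Theta @ Theta.T            # n x n, tridiagonal as in the text
m2_scalar = 2 * g**2 * Theta.T @ Theta            # (n-1) x (n-1)

print(np.sort(np.linalg.eigvalsh(m2_gauge)))      # one zero mode plus a massive tower
print(np.sort(np.linalg.eigvalsh(m2_scalar)))     # the same massive tower
```

The coincidence of the two massive towers is just the statement that $\Theta\Theta^{T}$ and $\Theta^{T}\Theta$ share their nonzero eigenvalues; the extra zero eigenvalue of $m^2_{\rm gauge}$ corresponds to the unbroken U(1).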
Interestingly, the scale of SUSY breaking that is relevant to gauge mediation is determined by a single D-term vev, $D_1$, while the scale relevant to gravity-mediation is set by all $n$ non-vanishing D-terms. Gravity-mediated SUSY-breaking effects therefore scale with the size of the moose. It is possible in such a model to have competing effects from the gauge and gravity mediation of supersymmetry breaking and a heavier gravitino than in other D-term supersymmetry breaking scenarios. Conclusions =========== We have presented a supersymmetric U(1) gauge theory that deconstructs a warped extra dimension and dynamically generates a warp factor. The warping is accomplished via Fayet-Iliopoulos D-terms that force the squares of the link field vevs to grow by an additive factor as one moves along the moose. In its simplest form, the model has the peculiar feature that supersymmetry breaking appears only via the generation of a cosmological constant, while the spectra of the physical gauge and link states remains supersymmetric. In the case where the moose is allowed to couple to additional matter, the delocalization of supersymmetry breaking implies that fields localized at a single site experience a source of supersymmetry-breaking, $D_i^2$, that is $1/n$ as strong as the full amount available for gravity mediation leading, for example, to a heavy gravitino. In addition, supersymmetry breaking is supersoft [@ss] in this scenario. These features may make our U(1)$^n$ model distinctive if it is applied as a secluded supersymmetry-breaking sector for the minimal supersymmetric standard model. [9]{} N. Arkani-Hamed, A. G. Cohen and H. Georgi, Phys. Rev. Lett.  [**86**]{}, 4757 (2001); C. T. Hill, S. Pokorski and J. Wang, Phys. Rev. D [**64**]{}, 105005 (2001) \[arXiv:hep-th/0104035\]. L. Randall, Y. Shadmi and N. Weiner, JHEP [**0301**]{}, 055 (2003) \[arXiv:hep-th/0208120\]; A. Katz and Y. Shadmi, JHEP [**0411**]{}, 060 (2004) \[arXiv:hep-th/0409223\]. C. D. Carone, J. Erlich and B. Glover, JHEP [**0510**]{}, 042 (2005) \[arXiv:hep-ph/0509002\]. C. D. Carone, Phys. Rev. D [**71**]{}, 075013 (2005) \[arXiv:hep-ph/0503069\]. C. Csaki, [*et al.*]{}, Phys. Rev. D [**65**]{}, 015003 (2002); H. C. Cheng, K. T. Matchev and J. Wang, Phys. Lett. B [**521**]{}, 308 (2001); P. H. Chankowski, A. Falkowski and S. Pokorski, JHEP [**0208**]{}, 003 (2002); C. Csaki, G. D. Kribs and J. Terning, Phys. Rev. D [**65**]{}, 015004 (2002). P. J. Fox, A. E. Nelson and N. Weiner, JHEP [**0208**]{}, 035 (2002). [^1]: Presented at SUSY06, the 14th International Conference on Supersymmetry and the Unification of Fundamental Interactions, Irvine, California, USA 12-17 June 2006. WM-06-109.
{ "pile_set_name": "ArXiv" }
--- abstract: 'We review the derivation of the effective Dirac equation for ultracold atoms in one-dimensional bichromatic optical lattices, following the proposal by \[Witthaut *et al.* Phys. Rev. A **84**, 033601 (2011)\]. We discuss how such a derivation – based on a suitable *rotation* of the Bloch basis and on a *coarse graining* approximation – is affected by the choice of the Wannier functions entering the coarsening procedure. We show that in general the Wannier functions obtained by rotating the maximally localized Wannier functions for the original Bloch bands can be sufficiently localized for justifying the coarse graining approximation. We also comment on the relation between the rotation needed to achieve the Dirac form and the standard Foldy-Wouthuysen transformation. Our results provide a solid ground for the interpretation of the experimental results by \[Salger *et al.* Phys. Rev. Lett. **107**, 240401 (2011)\] in terms of an effective Dirac dynamics.' author: - 'Xabier Lopez-Gonzalez' - Jacopo Sisti - Giulio Pettini - Michele Modugno title: | On the effective Dirac equation for ultracold atoms in optical lattices:\ role of the localization properties of the Wannier functions --- Introduction ============ The analog of Klein tunneling – the penetration of relativistic-like particles through a potential barrier – has been recently observed in a proof-of-principle experiment with ultracold atoms in a one-dimensional optical lattice [@salger]. This experiment follows a theoretical proposal by Witthaut *et al.* [@witthaut] for simulating the $1+1$ Dirac equation in bichromatic optical lattices in the presence of a Dirac point, that is when the energy dispersion for a set of two Bloch bands takes the relativistic form $E_{\pm}(q)=\pm\sqrt{m^{2} c^{4} + c^{2}q^{2}}$. In fact, in this case it is possible to transform the original Schrödinger equation into a Dirac equation, by means of a suitable mixing (*rotation*) of the two bands and of a *coarse graining* procedure via a projection over a basis of Wannier functions [@witthaut]. A crucial point that guarantees the validity of this reduction is the existence of a set of Wannier functions sufficiently localized within each lattice cell, in the *rotated* basis [@note1]. In order to clarify this point, in this paper we present a detailed derivation of the Dirac effective equation, that allows to highlight the role played by the Wannier functions (that are not uniquely defined, owing to the arbitrariness of the phase of the Bloch functions [@marzari; @modugno]), analyzing the specific cases discussed in Refs. [@salger; @witthaut]. We show that even the Wannier functions obtained simply by rotating the maximally localized Wannier functions (MLWFs) for the original bands can be a reasonable choice. Our results provide a solid justification of the good agreement between the experimental results of Salger *et al.* [@salger] (or the numerical results of Ref. [@witthaut]) with the effective Dirac equation proposed by Witthaut *et al.* [@witthaut]. The article is organized as follows. In Sect. \[sec:effectivedirac\] we discuss the derivation of the Dirac effective equation and the role played by the Wannier functions in the rotated basis. Then, in Sect. \[sec:fw\] we discuss the relation between the rotation of the Bloch bands and the Foldy-Wouthuysen transformation for the Dirac equation. In Sect. 
\[sec:mlwfs\] we briefly review the concept of maximally localized Wannier functions, and discuss its relevance to the original and rotated Bloch basis. In Sect. \[sec:results\] we explicitly compute the MLWFs for the original Bloch bands, and discuss how the rotation affects their localization properties. There we consider explicitly both cases of the theoretical proposal by Witthaut *et al.* [@witthaut] and of the experimental realization by Salger *et al.* [@salger]. The implications for the sub-leading term in the expansion of the “slowly-varying” potential describing the potential barrier are examined in Sect. \[sec:potential\]. Concluding remarks are drawn in Sect. \[sec:conclusions\]. Effective Dirac dynamics {#sec:effectivedirac} ======================== Let us start from the single particle Schrödinger equation in the presence of a periodic potential $V_L({x})$ (of period $d$) and a slowly varying external potential $V({x})$ $$i\hbar\partial_t\Psi({x},t)=\left[ H_L({x})+V({x})\right] \Psi({x},t), \label{eq:schrod}$$ where $H_L=-(\hbar^2/2M)\nabla^2+V_L$ is the unperturbed lattice Hamiltonian, whose eigenvectors are Bloch functions $\psi_n({k},{x})={\rm e}^{i{k}{x}}u_n({k},{x}) \equiv \langle{x}|n,{k}\rangle$. Then, Eq. (\[eq:schrod\]) can be mapped to quasimomentum space as (see e.g. [@callaway; @morandi]) $$\begin{aligned} i\hbar\partial_t\varphi_n({k},t)&=E_n({k})\varphi_n({k},t) \nonumber\\ &\qquad+ \sum_{n'}\int_{k'}\langle n,{k}|V|n',{k}'\rangle\varphi_{n'}({k}',t)\end{aligned}$$ where $\varphi_n({k},t)$ represent the expansion coefficients of a generic wave-packet $\Psi({x},t)$ on the Bloch basis, namely $\Psi({x},t)= \sum_n\int_{{k}}\varphi_n({k},t)\psi_n({k},{x})$, and ${k}$ runs over the first Brillouin zone (the dependence on $t$ will be omitted in the following). The above equation can be written in vectorial form as $$i\hbar\partial_t\underline{\varphi}({k})=H_{L}({k})\underline{\varphi}({k})+ \int_{k'}\tilde{V}({k},{k}')\underline{\varphi}({k}'), \label{eq:vec}$$ with $H_{L}({k})=E_{n}({k})\delta_{nn'}$, $\tilde{V}({k},{k}')= \langle n,{k}|V|n',{k}'\rangle$. Let us now consider a subset of two bands, and assume that around $k=0$ the dispersion relation can be approximated as $E_{\pm}(k)=\pm\sqrt{m^{2} c^{4} + c^{2}(\hbar k)^{2}}$ (modulo an irrelevant constant), as considered in Ref. [@witthaut]. Then, in order to put the above expression in the form of a Dirac equation, it is convenient to make use of a $SO(2)$ rotation $R(\theta(k))$ [@witthaut], with $$R(\theta(k))= \begin{pmatrix} \cos\theta(k) & -\sin\theta(k) \\ \sin\theta(k) & \cos\theta(k) \end{pmatrix} \label{eq:mixing}$$ and $$\tan\theta(k)=-\frac{m c^{2}}{c\hbar k + \sqrt{m^{2} c^{4} + c^{2}(\hbar k)^{2}}}. \label{eq:theta}$$ We notice that this rotation is related to an inverse Foldy-Wouthuysen transformation [@fw]; we shall come back to this point later on. Then, Eq. (\[eq:vec\]) can be written as $$\begin{aligned} i\hbar\partial_t \underline{\varphi}'({k})&=& H_{L}'({k})\underline{\varphi}'({k}) \nonumber\\ && +\int_{k'}R({k})\tilde{V}({k},{k}') R^{T}({k}')\underline{\varphi}'({k}') \label{eq:rotated}\end{aligned}$$ with $\underline{\varphi}'=R\underline{\varphi}$, and $$H_{L}'({k})=R({k})H_{L}({k})R^{T}({k}) = \begin{pmatrix} c\hbar k & -m c^{2} \\ -m c^{2} & - c\hbar k \end{pmatrix}. \label{eq:rotatedH}$$ Equation (\[eq:rotated\]) can be transformed back in coordinate space by projection on a basis of Wannier functions, as discussed in the following. 
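Before doing so, it may be useful to spell out why the angle (\[eq:theta\]) casts the band Hamiltonian in the form (\[eq:rotatedH\]); the short check below is added here for completeness and uses only the definitions given above, ordering the two bands as ${\rm diag}(E_{+},E_{-})=(E,-E)$ with $E\equiv\sqrt{m^{2}c^{4}+c^{2}\hbar^{2}k^{2}}$. A direct computation gives $$R\,{\rm diag}(E,-E)\,R^{T}=E\begin{pmatrix} \cos2\theta & \sin2\theta \\ \sin2\theta & -\cos2\theta \end{pmatrix},$$ while Eq. (\[eq:theta\]) together with $E^{2}=m^{2}c^{4}+c^{2}\hbar^{2}k^{2}$ yields $$\cos2\theta=\frac{1-\tan^{2}\theta}{1+\tan^{2}\theta}=\frac{c\hbar k}{E}\,,\qquad \sin2\theta=\frac{2\tan\theta}{1+\tan^{2}\theta}=-\frac{mc^{2}}{E}\,,$$ so that $E\cos2\theta=c\hbar k$ and $E\sin2\theta=-mc^{2}$, which is precisely Eq. (\[eq:rotatedH\]).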
We recall that the Wannier functions are obtained from the Bloch functions as [@callaway] $$w_{n}({x}\!-\!{R}_i)=\sqrt{\frac{d}{2\pi}} \int_k{\rm e}^{-i{k}{R}_i}\psi_n({k},{x}), \label{eq:blochwannier}$$ and that they are not uniquely defined due to the arbitrariness of the Bloch functions’ phase (that, in general, depends on $k$). A generic wave packet $\Psi({x})$ can be expanded as $\Psi({x})=\sum_{n,i} \chi_n({R}_i)w_{n}({x}\!-\!{R}_i)$, where the amplitudes $\chi_n({R}_i)$ can be obtained from the Bloch coefficients by a simple Fourier transform $$\label{eq:def_env_f} \chi_n({R}_i)=\sqrt{\frac{d}{2\pi}} \int_k\varphi_n({k}){\rm e}^{i{k}{R}_i}.$$ The same relation holds in the rotated basis [@modugno]. When the Wannier functions (in the present case, those in the *rotated* basis) are sufficiently localized in each cell, the rotated amplitudes $\chi'_n({R}_i)$ play the role of envelope functions (associated to the site ${R}_i$, not just to the state $|w'_{n}(R_{i})\rangle$), corresponding to a *corse graining* on the scale of a single cell [@adams; @morandi]. Then, following the *coarse graining* approximation, the coefficients $\underline{\chi}'({R}_i)$ can be supposed to be differentiable functions of ${R}_i$, the latter being considered as a continuous variable (eventually, $R_{i}\to x$). This holds when $\underline{\chi}'({R}_i)$ is slowly varying on the scale of the lattice period (“smooth” wave packet). Under this approximation, and thanks to the properties of the Fourier transform [@adams; @morandi], the Hamiltonian $H_{L}'$ in coordinate space can be obtained by the replacement ${k}\to -i{\nabla}_{{R}_i}$, so that Eq. (\[eq:rotated\]) can be mapped in coordinate space as $$\begin{aligned} &&i\hbar\partial_t \underline{\chi}'({R}_i)=H_{L}'(-i{\nabla}_{{R}_i})\underline{\chi}'({R}_i) \\&&+ \sum_{j}\int_{k}\int_{k'}e^{i{k}\cdot{R}_i}R({k})\tilde{V}({k},{k}')R^{T}({k}')e^{-i{k}'\cdot{R}_j} \nonumber\underline{\chi}'({R}_j). \label{eq:chiprime}\end{aligned}$$ In addition, it is easy to show that $$\begin{aligned} &&\left.\int_{k}\int_{k'}R({k})\tilde{V}({k},{k}')R^{T}({k}')e^{i{k}\cdot{R}_i}e^{-i{k}'\cdot{R}_j}\right|_{nn'} \nonumber\\ && =\int_{x}{w'_{n}}^{*}({x}-{R}_{i})V({x})w'_{n'}({x}-{R}_{j})\equiv \langle V\rangle^{ij}_{nn'}, \label{eq:10}\end{aligned}$$ yielding $$i\hbar\partial_t \underline{\chi}'({R}_i)=H_{L}'(-i{\nabla})\underline{\chi}'({R}_i)+ \sum_{j}\langle V\rangle^{ij}\underline{\chi}'({R}_j). \label{eq:fulleq}$$ When the Wannier functions are well localized inside each lattice cell, and the potential $V(x)$ is slowly varying on that scale, we can write $$\langle V\rangle^{ij}_{nn'} \approx V({R}_{i})\delta_{nn'}\delta_{ij}, \label{eq:V-approx}$$ so that we finally arrive at $$i\hbar\partial_t \underline{\chi}'(x)=\left[ H_{L}'(-i{\nabla})+V(x)\right] \underline{\chi}'(x).$$ Then, the application of the $U(2)$ transformation [@witthaut] $$U=\frac{1}{\sqrt{2}}\begin{pmatrix} 1 & -1 \\ 1 & 1 \end{pmatrix} \label{eq:U}$$ yields ($\underline{\psi}\equiv U\underline{\chi}'$) $$i\hbar\partial_t \underline{\psi}(x)= \begin{pmatrix} V(x) + m c^{2} & c \hat{p} \\ c \hat{p} & V(x) - m c^{2} \end{pmatrix}\underline{\psi}(x) \label{eq:simildirac}$$ that corresponds to the Dirac equation in $1+1$ dimensions, in the presence of a scalar potential $V(x)$. This is the same equation as obtained in [@witthaut]. We remark that in principle the transformation (\[eq:U\]) could be applied before the coarse graining. 
This would affect the local behavior of the Wannier functions and therefore the subleading terms of the potential in Eq. (\[eq:fulleq\]), but not the leading diagonal term $V(x)\mathbb{1}_{2\times2}$. Summarizing, the present derivation shows that the mapping onto a Dirac equation is justified when there exists a set of sufficiently localized Wannier functions, in the *rotated* basis [@note1]. Though they do not appear explicitly in the final expression (\[eq:simildirac\]), they are needed to warrant both the coarse graining procedure and the expansion (\[eq:V-approx\]) of the slowly varying potential [@note2]. The existence of Wannier functions with such properties will be discussed in the Sections \[sec:mlwfs\] and \[sec:results\]. The Foldy-Wouthuysen transformation {#sec:fw} =================================== As anticipated in the previous section, the rotation (\[eq:mixing\]) with the angle given by Eq. (\[eq:theta\]) (also combined with the constant transformation $U$ in Eq. (\[eq:U\])), corresponds to the inverse free-particle Foldy-Wouthuysen (FW) transformation in the momentum representation (that is, acting on the eigenstates of the Dirac equation) [@fw]. We recall that the FW transformation is used to put the canonical *free* Dirac equation into a convenient diagonal form, $H_{D}=\textrm{diag}(\sqrt{m^{2} c^{4} + c^{2}\hat{p}^{2}},-\sqrt{m^{2} c^{4} + c^{2}\hat{p}^{2}})$. This explains why here it is just the inverse FW transformation that allows to cast the original diagonal $2\times2$ Bloch Hamiltonian into the Dirac form. Owing to the above point, we notice that when the potential $V(x)$ is vanishing one could even apply the coarse graining procedure directly in the original Bloch basis – without rotation – then claiming the equivalence with the Dirac equation via the inverse FW cited above (though, for practical purposes it might still be convenient to work with the canonical form, that is linear in the momentum operator $\hat{p}$.). However, the presence of the scalar potential $V(x)$ dramatically changes the situation. In fact, though one has still to use the same inverse free FW transformation to reshape the original Bloch Hamiltonian in quasimomentum space, the corresponding transformation after the coarse graining would not lead to the Dirac equation, as the potential $V(x)$ does not commute with the momentum operator $\hat{p}$. In this case the exact FW transformation is not known. Usually, it is customary to perform an expansion in $1/m$, leading to the well known spin-orbit and *Zitterbewegung* terms of the non relativistic limit [@fw]. However, such an expansion is not useful in the present case, as here one is not interested in a the non-relativistic limit, but rather the opposite (that is, simulating relativistic effects close to $m=0$). Indeed, the direct transformation that leads to Eq. (\[eq:simildirac\]) preserves all the relativistic contributions. Maximally localized Wannier functions {#sec:mlwfs} ===================================== Among all possible choices, there exists a special class of Wannier functions, the so-called maximally localized Wannier functions (MLWFs) introduced by Marzari and Vanderbilt [@marzari]. These functions, obtained by means of a suitable unitary gauge transformation of the Bloch eigenfunctions, are defined as those with the minimal spread, and can be constructed for both single or composite bands. Their application to bichromatic optical lattices has been recently discussed in [@modugno]. 
Let us consider explicitly the case of two almost degenerate bands, that is relevant to the present discussion. The single band MLWFs are obtained via a diagonal unitary transformation of the form $\textrm{diag}(e^{i\phi_{1}(k)},e^{i\phi_{2}(k)})$, and correspond to the exponentially decaying Wannier functions discussed by Kohn [@kohn]. In general, it is convenient to define also a set of *generalized* MLWFs for composite bands, by means of a suitable gauge transformation obtained by parametrizing the most general $2\times2$ unitary matrix [@modugno]. In the present case, the situation is complicated by the presence of the constraint (\[eq:theta\]) that fixes the mixing angle $\theta(k)$. In this case, the only freedom left is in the choice of the phases of the original Bloch functions (before the rotation). Indeed, it is easy to prove that any other choice would spoil the form of the Hamiltonian (\[eq:rotatedH\]), introducing a different dependence on $k$. Then, in order to define a set of *generalized* MLWFs that satisfy the constraint (\[eq:theta\]), one could proceed as follows. Given an initial set of Bloch functions $u_{n}(k,x)$ (that we suppose to be smooth functions of $k$ [@modugno]), the full transformation that minimizes the Wannier functions by preserving the Dirac form is $$U(k)=R(\theta(k))\times\textrm{diag}(e^{i\phi_{1}(k)},e^{i\phi_{2}(k)}) \label{eq:gauge}$$ (in principle, one could also include the constant $U(2)$ transformation, see Sect \[sec:effectivedirac\]). Following Ref. [@modugno], the gauge dependent part of the Wannier spread, $\Omega_{U}$, can be expressed in terms of the generalized Berry vector potentials $A_{nm}(k)=i({2\pi}/{d}){\langle u_{nk} |}\partial_{k}{| u_{mk} \rangle}$ as $$\Omega_{U}=\sum_{n=1,2}\langle \left(A_{nn}(k)-\langle A_{nn}\rangle_{\cal{B}}\right)^{2}\rangle_{\cal{B}}+2\langle |A_{12}|^{2}\rangle_{\cal{B}}.$$ Generally, the two contributions in the above expression can be minimized either simultaneously or independently, and in one dimension they can be made strictly vanishing, in the so-called *parallel transport* gauge [@modugno]. However, since we want to preserve the form of the Hamiltonian (\[eq:rotatedH\]), in the present case we can only require $\Omega_{U}$ to be minimum under the transformation (\[eq:gauge\]), namely for $$A_{nm}\rightarrow{\tilde A}_{nm}=i\sum_{l}U^{*}_{nl}{\partial_{k} U_{ml} }+\sum_{l,l'}U^{*}_{nl}U_{ml'}A_{ll'}.$$ Notice that the diagonal gauge transformation in Eq. (\[eq:gauge\]) only affects the diagonal term containing $A_{nn}$, leaving unchanged the off diagonal term $|A_{12}|$ [@modugno]. The latter is therefore fixed by the rotation $R(\theta)$. As a matter of fact, ${\tilde \Omega}_{U}$ results in a complicated integro-differential expression, whose solution is very tough, even numerically. Therefore, we shall adopt a different approach, as discussed in the following. The case of Refs. [@salger; @witthaut] {#sec:results} ====================================== As anticipated before, the coarse graining procedure is justified when the rotated Wannier functions are sufficiently localized on the scale of the lattice spacing, not necessarily those maximally localized. So, a sufficient condition is to start with the single band MLWFs for the original Bloch bands as considered by Witthaut *et al.* [@note3], and verify that the rotation (\[eq:mixing\]) does not affect substantially their localization properties. 
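Although the quantitative analysis is presented below, it may help to sketch how the Bloch bands of the bichromatic lattice, and with them the Dirac parameters $mc^{2}$ and $c$ entering the rotation, can be obtained in practice by a plane-wave (Fourier) decomposition. The following is only a rough sketch under stated assumptions (the plane-wave cutoff, the identification of the first and second excited bands at $k=0$, and a simple two-point estimate of $c$ in place of a full fit), not the authors' code.

```python
import numpy as np

def bloch_bands(k, V1=5.0, V2=1.6, phi=0.0, mmax=10):
    """Bloch energies (units of E_R) of V_L = V1/2 cos(2 k_L x) + V2/2 cos(4 k_L x + phi),
    for quasimomentum k in units of k_L, in the plane-wave basis exp[i(k+2m)k_L x]."""
    m = np.arange(-mmax, mmax + 1)
    H = np.diag((k + 2.0 * m) ** 2).astype(complex)
    for i in range(len(m) - 1):                  # 2 k_L Fourier component
        H[i, i + 1] = H[i + 1, i] = V1 / 4.0
    for i in range(len(m) - 2):                  # 4 k_L Fourier component, with phase
        H[i + 2, i] = V2 / 4.0 * np.exp(1j * phi)
        H[i, i + 2] = V2 / 4.0 * np.exp(-1j * phi)
    return np.linalg.eigvalsh(H)                 # ascending order

def dirac_parameters(V1=5.0, V2=1.6, phi=0.0, kfit=0.3):
    """Rough estimate of mc^2 (in E_R) and c (in E_R / hbar k_L) for the
    first and second excited bands, which touch at k = 0 when the gap closes."""
    E0 = bloch_bands(0.0, V1, V2, phi)
    mc2 = 0.5 * (E0[2] - E0[1])                  # half of the gap at k = 0
    mid = 0.5 * (E0[2] + E0[1])
    Ek = bloch_bands(kfit, V1, V2, phi)
    cp = np.sqrt(max((Ek[2] - mid) ** 2 - mc2 ** 2, 0.0)) / kfit
    cm = np.sqrt(max((Ek[1] - mid) ** 2 - mc2 ** 2, 0.0)) / kfit
    return mc2, 0.5 * (cp + cm)                  # average over the two bands

# for phi = 0 and V2 = 1.6 E_R this should give values close to those quoted
# below in Table [tab:mc] (mc^2 of order 0.7 E_R and c of order 3.8 E_R / hbar k_L)
print(dirac_parameters(V1=5.0, V2=1.6, phi=0.0))
```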
In order to be quantitative, we shall use the normalized participation ratio $$P=\left(d\int dx |w_{n}|^{4}\right)^{-1}$$ as a measure of the extent of the Wannier functions $w_{n}(x)$, in units of the lattice period $d$. Let us consider as examples the specific cases discussed in Refs. [@salger; @witthaut], with the periodic potential taking the form $$V_{L}(x)=\frac{V_{1}}{2}\cos(2k_{L}x)+\frac{V_{2}}{2}\cos(4k_{L}x+\phi).$$ As for the potential amplitudes, here we consider two different sets, namely $V_{1}=5E_{R}$ and $V_{2}=1.6E_{R}$ [@salger] or $V_{2}=1.56E_{R}$ [@witthaut], with $E_{R}=\hbar^{2}k_{L}^{2}/(2M)$ being the lattice recoil energy (whose actual value is irrelevant here). The two values of $V_{2}$ are very close but present subtle differences, as we shall discuss later on. The Bloch spectrum can be computed by a standard Fourier decomposition [@modugno]. Once the Bloch functions have been (numerically) obtained, one can compute the single band MLWFs by determining the phases $\phi_{1}(k)$ and $\phi_{2}(k)$ of the diagonal gauge transformation in Eq. (\[eq:gauge\]), see e.g. Ref. [@modugno], and then using Eq. (\[eq:blochwannier\]). At the same time, one can also evaluate the *rotated* MLWFs by using the full transformation (\[eq:gauge\]), with the parameters $mc^{2}$ and $c$ obtained from a fit of the energy dispersion around $k=0$ [@witthaut]. Regarding this point, we have found that in the present regime of parameters, the Bloch bands cannot be exactly reproduced by the dispersion relation $E_{\pm}(k)=\pm\sqrt{m^{2} c^{4} + c^{2}\hbar^{2}k^{2}}$ (near $k=0$) as the independent fit of the two bands returns two different values for $c$ (the value of $mc^{2}$ is unambiguously fixed by the energy gap at $k=0$). However, in practice, by taking the average value of the effective velocity, $c\equiv (c_{+}+c_{-})/2$ we get a reasonable description of the exact Bloch bands. The actual values that we get for $mc^{2}$ and $c$ for the two cases in Refs. [@salger; @witthaut] are reported in Tab. \[tab:mc\] for different values of the phase (namely $\phi=0, 0.8\pi,\pi$). $\phi=0$ $\phi=0.8\pi$ $\phi=\pi$ -------------- ---------- --------------- ------------ ----------------- $V_{2}=1.56$ $mc^{2}$ 0.68 0.21 $5\cdot10^{-4}$ $c$ 3.79 3.72 3.72 $V_{2}=1.6$ $mc^{2}$ 0.69 0.21 $8\cdot10^{-3}$ $c$ 3.79 3.72 3.72 : Values of the parameters $mc^{2}$ and $c$ for different values of phase $\phi$ and $V_{2}=1.56E_{R},1.6E_{R}$ ($V_{1}=5E_{R}$). The values in the table are in lattice units (namely, energies in units of $E_{R}$, velocities in units of $E_{R}/(\hbar k_{L})$).[]{data-label="tab:mc"} ![(Color online) Density plot of the Wannier functions, for the first and second excited bands (left and right, respectively) and $\phi=0,0.8\pi,\pi$ (from top to bottom). The MLWFs for the original Bloch bands are shown in blue (dotted-dashed line), those rotated in red (solid line). Here we use the set of parameters of Salger *et al.* [@salger]: $V_{1}=5E_{R}$, $V_{2}=1.6E_{R}$. The numbers in the legend correspond to the values of the participation ratio $P$ (see text).[]{data-label="fig:mlwf-s"}](fig1.eps){width="\columnwidth"} ![ (Color online) Density plot of the Wannier functions, for the set of parameters of Witthaut et al. [@witthaut]: $V_{1}=5E_{R}$, $V_{2}=1.56E_{R}$. See Fig. \[fig:mlwf-s\] for comparison (and description).[]{data-label="fig:mlwf-w"}](fig2.eps){width="\columnwidth"} The corresponding Wannier functions are shown in Figs. 
\[fig:mlwf-s\] and \[fig:mlwf-w\] (for the cases of Refs. [@salger; @witthaut] respectively). In particular, there we show the single-band MLWFs (blue dotted-dashed lines), and the corresponding *rotated* MLWFs defined above (red solid lines). A number of comments are in order. Firstly, it is obvious that in the “relativistic” [@salger] case $\phi=\pi$, the mass $m$ is almost vanishing, so that actually there is no rotation (see Eq. (\[eq:theta\])), and the two sets of Wannier functions coincide (panels (c),(f)). Then, it is noteworthy that – despite the similar values of the parameters – the cases of Refs. [@salger; @witthaut] present a different behavior, especially close to $\phi=\pi$ (panels (c),(f)). This is due to the fact that the two values $V_{2}=1.56E_{R}$ and $V_{2}=1.6E_{R}$ lie on different sides with respect to the degeneracy point corresponding to the exact crossing of the two bands, that we numerically locate at $V_{2}\simeq1.5625 E_{R}$. As a consequence, the $p$-like solution centered at the deepest minima of the cell and the $s$-like centered at the tiny minimum on top of the potential exchange their role when crossing the resonance (that is $E_{s}<E_{p}$ above the resonance, in Fig. \[fig:mlwf-s\], and vice versa below the resonance, in Fig. \[fig:mlwf-w\]). Finally, the most important remark regards the localization properties of the Wannier functions. Remarkably, Figs. \[fig:mlwf-s\] and \[fig:mlwf-w\] show that the rotation does not affect dramatically their localization properties, the *rotated* Wannier functions having a behavior similar to the MLWFs for the original Bloch band. As a matter of fact, though the two sets of Wannier functions have a different “microscopic” structure, the corresponding values of the participation ration $P$ are not so different. In order to complete the analysis, one has to check how the approximation (\[eq:V-approx\]) behaves under the rotation. This requires a precise analysis of the sub-leading terms in Eq. (\[eq:fulleq\]), that we shall discuss in the following section. The “slowly varying” potential {#sec:potential} ============================== Let us consider a generic *slowly varying* potential (that used in Refs. [@salger; @witthaut] takes the form $V(x)=V_{0}\exp[-2(x/x_{0})^{2}] -Fx$; the following treatment is valid in general). By performing a series expansion around $x=R_{j}$, the potential term in Eq. (\[eq:10\]) can be written as $$\begin{aligned} \langle V\rangle^{jj'}_{nn'}&=\sum_{s}\frac{1}{s!}\left.\frac{\partial^{s}V}{\partial x^{s}}\right|_{R_{j}}\!\!\!\langle (x-R_{j})^{s}\rangle^{jj'}_{nn'} \nonumber\\ &= V(R_{j}) + \left.\frac{\partial V}{\partial x}\right|_{R_{j}}\!\!\!\langle x-R_{j}\rangle^{jj'}_{nn'}+\dots \nonumber\\ &= V(R_{j}) + \left.\frac{\partial V}{\partial x}\right|_{R_{j}}\!\!\!\left(\langle x\rangle^{jj'}_{nn'}-R_{j}\delta_{nn'}\delta_{jj'}\right)+\dots, \nonumber\end{aligned}$$ so that the corrections to Eq. (\[eq:V-approx\]) read $$\begin{aligned} \delta V_{nn'}^{(\ell)}(R_{j})&\equiv\langle V({x})-V(R_{j})\rangle^{j,j+\ell}_{nn'} \nonumber\\&= \left.\frac{\partial V}{\partial x}\right|_{R_{j}}\!\!\!\left(\langle x\rangle^{(\ell)}_{nn'}-R_{j}\delta_{nn'}\delta_{\ell0}\right)+\dots\end{aligned}$$ with $\langle x\rangle^{(\ell)}_{nn'}$ being independent of $j$ owing to the invariance of the lattice under discrete translations. 
Then, it is convenient to rescale the latter expression by the recoil energy $E_{R}$ and write it as $$\frac{\delta V_{nn'}^{(\ell)}(R_{j})}{E_{R}}\approx\frac{d}{E_{R}}\left.\frac{\partial V}{\partial x}\right|_{R_{j}}\!\!\!\cdot\frac{1}{d}\left(\langle x\rangle^{(\ell)}_{nn'}-R_{j}\delta_{nn'}\delta_{\ell0}\right),$$ where the term $(d/E_{R})(\partial V/\partial x)|_{R_{j}}$ represents the variation of the potential on the scale of the lattice spacing $d$, divided by the characteristic energy scale $E_{R}$ of the lattice. By hypothesis, this term is small under the assumption of a slowly varying potential. Then, in order to verify that the approximation (\[eq:V-approx\]) is justified, one has to check that the remaining term $$\Delta^{(\ell)}_{nn'}\equiv(\langle x\rangle^{(\ell)}_{nn'}-R_{j}\delta_{nn'}\delta_{\ell0})/d$$ is sufficiently smaller than unity (note that $\Delta^{(\ell)}_{nn'}$ is actually independent of $j$ owing to the invariance of the lattice under discrete translations). In principle, one may expect this condition to be satisfied when the Wannier functions are sufficiently localized within each lattice cell. Indeed, we have verified that $|\Delta^{(\ell)}_{nn'}|<0.5$ for $\ell=0,\pm1,\pm2$ for all the cases shown in Figs. \[fig:mlwf-s\],\[fig:mlwf-w\]. Notice also that the actual value of the on-site diagonal term $\Delta^{(0)}_{nn}\equiv(\langle x\rangle^{(0)}_{nn}-R_{j})/d$ (the only one depending on $R_{j}$) is not univocally determined (though smaller than unity, anyway) due to the arbitrariness in choosing the origin of the unit cell.

Conclusions {#sec:conclusions}
===========

We have revisited the derivation of the effective Dirac equation for non-interacting ultracold atoms in optical lattices, discussing in particular the role played by the localization properties of the Wannier functions entering the *coarse graining* approximation. Though, remarkably, the choice of the Wannier functions does not appear explicitly in the final Dirac equation at leading order, the existence of a set of Wannier functions sufficiently localized within each lattice cell is a crucial requirement. We have shown that the Wannier functions must be calculated from the *rotated* Bloch basis, and that a reasonable option can be obtained by rotating the MLWFs for the original Bloch bands. The above results eventually justify the use of the *coarse graining* approach, and account for the good agreement between the experimental results of Salger *et al.* [@salger] (or the direct solution of the Schrödinger equation discussed in Ref. [@witthaut]) and the effective Dirac equation proposed by Witthaut *et al.* [@witthaut].

We acknowledge useful discussions with A. Barducci, A. Bergara, I. Egusquiza, G. Muga. This work has been supported by the UPV/EHU under program UFI 11/55, the Spanish Ministry of Science and Innovation through Grant No. FIS2012-36673-C03-03, and the Basque Government through Grant No. IT-472-10.

T. Salger, C. Grossert, S. Kling, and M. Weitz, Phys. Rev. Lett. **107**, 240401 (2011).

D. Witthaut, T. Salger, S. Kling, C. Grossert, and M. Weitz, Phys. Rev. A **84**, 033601 (2011).

In Ref. [@witthaut], only the properties of the (maximally localized) Wannier function of the original Bloch bands have been discussed.

N. Marzari and D. Vanderbilt, Phys. Rev. B **56**, 12847 (1997).

M. Modugno and G. Pettini, New J. Phys. **14**, 055004 (2012).

J. Callaway, *Energy band theory* (Academic Press, New York and London 1964).

O. Morandi and M. Modugno, Phys. Rev. B **71**, 235331 (2005).
See e.g. J. Bjorken and S. Drell, *Relativistic quantum mechanics* (McGraw-Hill, 1964); W. Greiner, *Relativistic Quantum Mechanics - Wave Equations* (Springer, 2000).

E. N. Adams, Phys. Rev. **85**, 41 (1952).

The Wannier functions do not appear explicitly at the leading order as they are integrated out by the coarse graining procedure; however, they would affect the next-to-leading corrections of the potential.

W. Kohn, Phys. Rev. **115**, 809 (1959).

Though Witthaut *et al.* [@witthaut] do not explicitly mention the concept of MLWFs, it is evident that they are considering the MLWFs associated with the original Bloch bands (see *e.g.* their Fig. 3).
---
abstract: 'We propose a new subgradient method for the constrained minimization of convex functions. Common subgradient algorithms require an exact projection onto the feasible region in every iteration, therefore making such algorithms reasonable only for problems which admit a fast projection. In our method we use inexact projections and only require moving to within certain distances to the exact projections (which decrease in the course of the algorithm). In particular, and in stark contrast to the usual projected subgradient schemes, the iterates in our method can be infeasible throughout the whole procedure and still we are able to provide conditions which ensure convergence to an optimal feasible point under suitable assumptions. Additionally, we briefly sketch an application to finding the minimal $\ell_1$-norm solution to an underdetermined linear system, an important problem in Compressed Sensing, where it is also known as Basis Pursuit.'
author:
- 'Andreas M. Tillmann, Dirk A. Lorenz, Marc E. Pfetsch'
bibliography:
- 'Paper\_ISA\_Convergence.bib'
title: |
    An Infeasible-Point Subgradient Method\
    using Approximate Projections[^1]
---

Introduction {#sect:intro}
============

The projected subgradient method [@S85] is a classical algorithm for the minimization of a nonsmooth convex function $f$ over a convex closed constraint set ${X}$, i.e., the problem $$\label{eq:NLO} \min\, f(x)\quad{{\rm s.\,t.}}\quad x \in {X}.$$ One iteration consists of taking a step of size $\alpha_k$ along the negative direction of an arbitrary subgradient $h^k$ of the objective function $f$ at the current point $x^k$ and then computing the next iterate by projection ${\mathcal{P}}_{X}$ back onto the feasible set ${X}$: $$x^{k+1} = {\mathcal{P}}_{X}(x^k - \alpha_k\, h^k).$$ Over the past decades, numerous extensions and specializations of this scheme have been developed and proven to converge to a minimum (or minimizer). Well known disadvantages of the subgradient method are its slow local convergence and the necessity to extensively tune algorithmic parameters in order to obtain practical convergence. On the positive side, subgradient methods involve fast iterations and are easy to implement. Therefore they have been widely used in applications and (still) form one of the most basic algorithms for nonsmooth convex minimization.

The main effort in each iteration of the projected subgradient algorithm usually lies in the computation of the projection ${\mathcal{P}}_{X}$. Since the projection is the solution of a (smooth) convex program itself, the required time depends on the structure of ${X}$ and corresponding specialized algorithms. Examples admitting a fast projection include the case where ${X}$ is the nonnegative orthant or the $\ell_1$-norm-ball $\{x {\; | \;}{\lVert {x} \rVert}_1 \leq \tau\}$, onto which any $x \in {\bbbr}^n$ can be projected in ${\mathcal{O}}(n)$ time, see [@vdBSFM08]. The projection is more involved if ${X}$ is, for instance, an affine space or a (convex) polyhedron. In these latter cases, it makes sense to replace the exact projection ${\mathcal{P}}_{X}$ by approximate projections ${\mathcal{P}}_{X}^{\varepsilon}$ with the property that ${\lVert {{\mathcal{P}}_{X}^{\varepsilon}(x) - {\mathcal{P}}_{X}(x)} \rVert} \leq \varepsilon$ for every $\varepsilon \geq 0$. The idea is that during the early phases of the algorithm we do not need a highly accurate projection, and ${\mathcal{P}}_{X}^{\varepsilon}(x)$ can be faster to compute if $\varepsilon$ is larger.
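For concreteness, the following minimal sketch (Python/NumPy; the function name and default parameters are ours and purely illustrative) shows the kind of cheap, adaptive projection we have in mind for the affine case ${X}=\{x {\; | \;}Ax = b\}$, anticipating the discussion below: the exact projection $x - A^{\top}(AA^{\top})^{-1}(Ax-b)$ (for $A$ with full row rank) is replaced by a few conjugate gradient (CG) steps on the system $(AA^{\top})y = Ax - b$. The iteration budget and residual tolerance serve only as practical surrogates for the accuracy parameter $\varepsilon$ in the distance requirement just stated; they do not certify that bound exactly.

```python
import numpy as np

def approx_affine_projection(x, A, b, max_iter=3, tol=1e-12):
    """Approximately project x onto {z : A z = b} (a sketch, not the exact
    procedure of this paper): run a few CG iterations for (A A^T) y = A x - b
    and return x - A^T y.  Fewer iterations give a cheaper, less accurate
    projection; more iterations approach the exact one."""
    r = A @ x - b                      # constraint residual
    y = np.zeros(A.shape[0])
    g = r.copy()                       # CG residual (since y = 0 initially)
    d = g.copy()
    for _ in range(max_iter):
        if np.linalg.norm(g) <= tol:
            break
        Md = A @ (A.T @ d)             # apply A A^T without forming it
        step = (g @ g) / (d @ Md)
        y += step * d
        g_new = g - step * Md
        d = g_new + ((g_new @ g_new) / (g @ g)) * d
        g = g_new
    return x - A.T @ y
```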
In the later phases one then strengthens the requirement on the accuracy. One particularly attractive situation in which the approach works is the case when ${X}$ is an affine space, i.e., defined by a linear equation system. Then one can use a truncated iterative method, e.g., a conjugate gradient (CG) approach, to obtain an adaptive approximate projection. We have observed that often only a few steps (2 or 3) of the CG-procedure are needed to obtain a practically convergent method.

In this paper, we focus on the investigation of convergence properties of a general variant of the projected subgradient method which relies on such approximate projections. We study conditions on the step sizes and on the accuracy requirements $\varepsilon_k$ (in each iteration) in order to achieve convergence of the sequence of iterates to an optimal point, or at least convergence of the function values to the optimum. We investigate two variants of the algorithm. In the first one, the sequence $(\alpha_k)$ of step sizes forms a divergent but square-summable series ($\sum \alpha_k = \infty$, $\sum\alpha_k^2 < \infty$), and is given a priori. The second variant uses dynamic step sizes which depend on the difference of the current function value to a constant *target value* that estimates the optimal value.

The main difference of the resulting algorithms to the standard method is the fact that iterates can be infeasible, i.e., are not necessarily contained in ${X}$. We thus call the algorithm of this paper *infeasible-point subgradient algorithm* (ISA). As a consequence, the objective function values of the iterates might be smaller than the optimum, which requires a non-standard analysis; see the proofs in Section \[sect:proofs\] for details.

The work in this paper can be seen as a first step towards the analysis of optimization methods for nonsmooth problems that use adaptive approximate projections. The results provide an explanation for the observed convergence in practice, indicating that projected subgradient methods are in a sense robust to inexact projections.

This paper is organized as follows. We first discuss related approaches in the literature. Then we fix some notation and recall a few basics. In the main part of this paper (Sections \[sect:ISA\] and \[sect:proofs\]), we state our infeasible-point subgradient algorithm (ISA) and provide proofs of convergence. In the subsequent sections we briefly discuss some variants and an application to the problem of finding the minimum $\ell_1$-norm solution of an underdetermined linear equation system, a problem that lately received a lot of attention in the context of *Compressed Sensing* (see, e.g., [@D06; @CRT06; @CSweb]). We finish with some concluding remarks and give pointers to possible extensions and topics of future research.

Related work
------------

The objective function values of the iterates in subgradient algorithms typically do not decrease monotonically. With the right choice of step sizes, the (projected) subgradient method nevertheless guarantees convergence of the objective function values to the minimum, see, e.g., [@S85; @P67; @BS81; @P78]. A typical result of this sort holds for step size sequences $(\alpha_k)$ which are nonsummable ($\sum_{k=0}^{\infty}\alpha_k = \infty$), but square-summable ($\sum_{k=0}^{\infty}\alpha_k^2 < \infty$). Thus, $\alpha_k \to 0$ as $k \to \infty$.
Often, the corresponding sequence of points can also be guaranteed to converge to an optimal solution $x^*$, although this is not necessarily the case; see [@AW09] for a discussion. Another widely used step size rule uses an estimate $\varphi$ of the optimal value $f^*$, a subgradient $h^k$ of the objective function $f$ at the current iterate $x^k$, and relaxation parameters $\lambda_k > 0$: $$\label{eq:alphak} \alpha_k = \lambda_k \frac{f(x^k) - \varphi}{{\lVert {h^k} \rVert}_2^2}.$$ The parameters $\lambda_k > 0$ are constant or required to obey certain conditions needed for convergence proofs. The dynamic rule is a straightforward generalization of the so-called Polyak-type step size rule, which uses $\varphi = f^*$, to the more practical case when $f^*$ is unknown. The convergence results given in [@AHKS87] extend the work of Polyak [@P67; @P69] to $\varphi\geq f^*$ and $\varphi < f^*$ by imposing certain conditions on the sequence $(\lambda_k)$. We will generalize these results further, using not the (exact) Euclidean projection but an inexact projection operator.

Many extensions of the general idea behind subgradient schemes exist, such as variable target value methods (see, e.g., [@KAC91; @LS05; @NB01; @SCT00; @GK99]), using approximate subgradients [@BM73; @AIS98; @LPS96a; @DAF09], or incremental projection schemes [@NDP09; @NB01], to name just a few. The vast majority of methods employs exact projections, though. Notable exceptions are:

- the framework proposed in [@NDP09], where the projection step is replaced by applying a feasibility operator that is required to move a given point closer to *every* feasible point,

- the infeasible bundle method from [@SS05],

- the results in [@Z10], where convergence of a projected subgradient method is established under the presence of computational errors, using slight modifications of standard nonsummable step size sequences (see also [@SZ98]),

- the level set subgradient algorithm in [@K98], which employs inexact projections, although here all iterates are strictly feasible; a related article is [@AT09], where the classical projection is replaced by a non-Euclidean distance-like function.

The algorithm and convergence results we discuss in this paper are – to the best of our knowledge – new. The inexact projection operator we work with only moves a given point to within a certain distance from the corresponding exact projection, which is a much less restrictive requirement than imposed on the feasibility operator in [@NDP09]. Furthermore, we also employ dynamic step sizes of the form (\[eq:alphak\]), so our results are not immediately subsumed in the fairly general framework of [@Z10] (although the notion of “computational errors” used there can be interpreted to cover projection inaccuracies as well).

Notation
--------

In this paper, we consider the convex optimization problem (\[eq:NLO\]) in which we assume that $f: T \to {\bbbr}$ is a convex function (not necessarily differentiable), ${X}\subset T$ is a closed convex set, and $T \subseteq {\bbbr}^n$ is open and convex. It follows that $f$ is continuous on $T$. The set $$\label{eq:sgdef} \partial f(x) {\coloneqq}\{ h \in {\bbbr}^n {\; | \;}f(y) \geq f(x)+h^{\top}(y-x) \quad \forall\, y \in T\,\}$$ is the *subdifferential* of $f$ at a point $x \in T$; its members are the corresponding *subgradients*.
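As a simple standard illustration of this definition (added here for convenience; it is not specific to the constraint sets considered later), take $f(x) = {\lVert {x} \rVert}_1$ on $T = {\bbbr}^n$: every $h$ with $$h_i = \operatorname{sign}(x_i) \;\text{ for } x_i \neq 0, \qquad h_i \in [-1,1] \;\text{ for } x_i = 0,$$ satisfies $h^{\top}x = {\lVert {x} \rVert}_1$ and $h^{\top}y \leq {\lVert {y} \rVert}_1$ for all $y$, hence $f(y) \geq f(x) + h^{\top}(y-x)$, i.e., $h \in \partial f(x)$.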
Throughout this paper, we will assume (\[eq:NLO\]) to have a nonempty set of optima $$\label{eq:Xstar} {X}^* {\coloneqq}\arg\min\{f(x) {\; | \;}x \in {X}\}.$$ An optimal point will be denoted by $x^*$ and its objective function value $f(x^*)$ by $f^*$. For a sequence $(x^k) =(x^0, x^1, x^2, \dots)$ of points, the corresponding sequence of objectives will be abbreviated by $(f_k) = (f(x^k))$.

By ${\lVert {\cdot} \rVert}_p$ we denote the usual $\ell_p$-norm, i.e., for $x \in {\bbbr}^n$, $${\lVert {x} \rVert}_p {\coloneqq}\begin{cases} \big(\sum_{i=1}^{n} {\lvert {x_i} \rvert}^p\big)^{\frac{1}{p}}, & \text{if }1 \leq p < \infty,\\ \displaystyle\max_{i=1, \dots, n}\, {\lvert {x_i} \rvert}, & \text{if }p = \infty. \end{cases}$$ If no confusion can arise, we shall simply write ${\lVert {\cdot} \rVert}$ instead of ${\lVert {\cdot} \rVert}_2$ for the Euclidean ($\ell_2$-)norm. The Euclidean distance of a point $x$ to a set $Y$ is $$\label{eq:dist} {d}_Y(x){\coloneqq}\inf_{y\in Y} {\lVert {x-y} \rVert}_2.$$ For $Y$ closed and convex, (\[eq:dist\]) has a unique minimizer, namely the orthogonal (Euclidean) projection of $x$ onto $Y$, denoted by ${\mathcal{P}}_Y(x)$. All further notation will be introduced where it is needed.

The Infeasible-Point Subgradient Method (ISA) {#sect:ISA}
=============================================

In the projected subgradient algorithm, we replace the exact projection ${\mathcal{P}}_{X}$ by an inexact projection. To obtain convergence to an optimal point or the optimal objective value, one needs a possibility to control the error of the inexact projection. We thus require that for any given accuracy parameter $\varepsilon \geq 0$, the inexact projection ${\mathcal{P}}_{X}^{\varepsilon} : {\bbbr}^n \to T$ satisfies $$\label{eq:IPr} {\lVert {{\mathcal{P}}_{X}^{\varepsilon}(x) - {\mathcal{P}}_{X}(x)} \rVert} \leq \varepsilon\qquad\text{for all }x\in{\bbbr}^n.$$ In particular, for $\varepsilon = 0$, we have ${\mathcal{P}}_{X}^0 = {\mathcal{P}}_{X}$. Note that ${\mathcal{P}}_{X}^\varepsilon(x)$ does not necessarily produce a point that is *closer* to ${\mathcal{P}}_{X}(x)$ (or even to ${X}$) than $x$ itself. In fact, this is only guaranteed for $\varepsilon < {d}_X(x)$. For the special case in which ${X}$ is an affine space, we give a detailed discussion of an inexact projection satisfying the above requirement in Section \[sec:CompressedSensing\].

By replacing the exact by an inexact projection in the projected subgradient algorithm, we obtain the *Infeasible-point Subgradient Algorithm* (ISA), which we will discuss in two variants in the following.

ISA with a predetermined step size sequence
-------------------------------------------

initialize $k {\coloneqq}0$\
choose a subgradient $h^k \in \partial f(x^k)$ of $f$ at $x^k$\
compute the next iterate $x^{k+1} {\coloneqq}{\mathcal{P}}_{X}^{\varepsilon_k}\left(x^k - \alpha_k h^k\right)$\
increment $k {\coloneqq}k+1$

If the step sizes $(\alpha_k)$ and projection accuracies $(\varepsilon_k)$ are *predetermined* (i.e., given a priori), we obtain Algorithm \[alg:APrioriISA\]. Note that $h^k = 0$ might occur, but does not necessarily imply that $x^k$ is optimal, because $x^k$ may be infeasible. In such a case, the projection will change $x^k$ to a different point as soon as $\varepsilon_k$ becomes small enough. The stopping criterion alluded to in the algorithm statement will be ignored for the convergence analysis in the following.
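To fix ideas, a minimal sketch of Algorithm \[alg:APrioriISA\] in Python could look as follows; the function names and the fixed iteration budget are ours (illustrative only), and the subgradient oracle and an approximate projection satisfying (\[eq:IPr\]) are assumed to be supplied by the user.

```python
import numpy as np

def isa_predetermined(x0, subgradient, approx_project, alphas, epsilons, num_iter):
    """Sketch of the ISA with a-priori step sizes and projection accuracies.

    subgradient(x)         -> some h in the subdifferential of f at x
    approx_project(y, eps) -> a point within distance eps of the exact
                              projection of y onto X
    alphas, epsilons       -> predetermined sequences (cf. the conditions
                              in the following theorem)
    num_iter               -> stands in for a practical stopping criterion
    """
    x = np.asarray(x0, dtype=float)
    for k in range(num_iter):
        h = subgradient(x)
        x = approx_project(x - alphas[k] * h, epsilons[k])
    return x
```

Here `num_iter` merely replaces the practical stopping criteria mentioned next.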
In practical implementations, one would stop, e.g., if no significant progress in the objective or feasibility has occurred within a certain number of iterations.

We will now state our main convergence result for this variant of the ISA, using fairly standard step size conditions. The proof is provided in Section \[sect:proofs\].

\[thm:ISA\_conv\_apriori\]\
Let the projection accuracy sequence $(\varepsilon_k)$ be such that $$\label{eq:eps} \varepsilon_k \geq 0,\quad \sum_{k=0}^\infty \varepsilon_k < \infty,$$ let the positive step size sequence $(\alpha_k)$ be such that $$\label{eq:alpha} \sum_{k=0}^\infty \alpha_k = \infty,\quad\sum_{k=0}^\infty \alpha_k^2 < \infty,$$ and let the following relation hold: $$\label{eq:alphageqepsrest} \alpha_k \geq \sum_{j=k}^\infty \varepsilon_j\qquad\forall\, k=0,1,2,\dots$$ Suppose ${\lVert {h^k} \rVert} \leq H < \infty$ for all $k$. Then the sequence of the ISA iterates $(x^k)$ converges to an optimal point.

Relations (\[eq:eps\]), (\[eq:alpha\]), and (\[eq:alphageqepsrest\]) can be ensured, e.g., by the sequences $\varepsilon_k = 1/k^2$ and $\alpha_k = 1/(k-1)$ for $k>1$: $$\sum_{j=k}^\infty \varepsilon_j \leq \int_{k-1}^\infty \frac{1}{x^2}\,dx = \frac{1}{k-1} = \alpha_k.$$

ISA with dynamic step sizes
---------------------------

initialize $k {\coloneqq}0$\
set $f_k {\coloneqq}f(x^k)$\
choose a subgradient $h^k \in \partial f(x^k)$ of $f$ at $x^k$\[step:subgradient\]\
\[step:SubgradZero\] stop (at optimal feasible point $x^k \in {X}^*$)\[step:terminate\_opt\]\
compute the next iterate $x^{k+1} {\coloneqq}{\mathcal{P}}_{X}^{0}(x^k)$\[step:ZeroSubgradProj\]\
compute step size $\alpha_k {\coloneqq}\lambda_k(f(x^k) - \varphi)/{\lVert {h^k} \rVert}^2$\
compute the next iterate $x^{k+1} {\coloneqq}{\mathcal{P}}_{X}^{\varepsilon_k}(x^k - \alpha_k h^k)$\[step:NextIterate\]\
\[step:InfBelowTarget\] set $x^{k+1} {\coloneqq}{\mathcal{P}}_{X}^{0}(x^k - \alpha_k h^k)$\[step:ExactProjection\]\
\[step:BelowTarget\] stop (at feasible point $x^{k+1} \in {X}$ with $f^*\leq f(x^{k+1}) \leq \varphi$)\[step:terminate\]\
increment $k {\coloneqq}k+1$

In order to apply the dynamic step size rule (\[eq:alphak\]), we need several modifications of the basic method and arrive at Algorithm \[alg:DynamicISA\]. This algorithm works with an estimate $\varphi$ of the optimal objective function value $f^*$ and essentially tries to reach a feasible point $x^k$ with $f(x^k) \leq\varphi$. (Note that if $\varphi = f^*$, we would have obtained an optimal point in this case.) The use of the target value requires three changes to the basic method:

1. We need to start with a point $x^0$ with $f(x^0) \geq \varphi$; e.g., any $x^0 \in {X}$ will do (if $f_0<\varphi$, $\varphi$ is too large and should be adjusted accordingly).

2. If during the algorithm we obtain an infeasible point $x^{k+1}$ with $f(x^{k+1}) \leq \varphi$, the next step size would be zero or negative, see (\[eq:alphak\]). In this case, we perform an *exact* projection in Step \[step:ExactProjection\] (note that this step can be replaced by an iterative inexact projection with decreasing $\varepsilon$ until we reach $f(x^{k+1}) > \varphi$ or $\varepsilon = 0$). If the new point $x^{k+1} \in {X}$ still satisfies $f(x^{k+1}) \leq \varphi$, we terminate (Step \[step:terminate\]) with a feasible point showing that $\varphi$ is too large. In this case, one can decrease $\varphi$ and iterate, thus resorting to a kind of variable target value method (see, e.g., [@KAC91; @LS05]).

3. If $h^k = 0$ occurs during the algorithm, the step size (\[eq:alphak\]) is meaningless.
If in this case $x^k$ is feasible, it must be optimal, i.e., we have reached an unconstrained optimum that lies within ${X}$. Otherwise, we perform an exact projection in Step \[step:ZeroSubgradProj\] (or iteratively decrease $\varepsilon$ as mentioned above). The new point $x^{k+1}$ will either yield $h^{k+1} \neq 0$ or an unconstrained optimum. We obtain the following convergence results, depending on whether $\varphi$ over- or underestimates $f^*$. The proofs are deferred to the next section. \[thm:ISA\_conv\_over\] Let the optimal point set $X^*$ be bounded, $\varphi \geq f^*$, $0 < \lambda_k \leq \beta < 2$ for all $k$, and $\sum_{k=0}^{\infty} \lambda_k = \infty$. Let $(\nu_k)$ be a nonnegative sequence with $\sum_{k=0}^{\infty} \nu_k < \infty$, and let $$\begin{aligned} \nonumber \overline{\varepsilon}_k {\coloneqq}& - \left(\frac{\lambda_k(f_k - \varphi)}{{\lVert {h^k} \rVert}} + {d}_{X^*}(x^k)\right)\\ & + \sqrt{\left(\frac{\lambda_k(f_k - \varphi)}{{\lVert {h^k} \rVert}} + {d}_{X^*}(x^k)\right)^2 + \frac{\lambda_k(2 - \lambda_k)(f_k - \varphi)^2}{{\lVert {h^k} \rVert}^2}}.\label{eq:epsISAover} \end{aligned}$$ If the subgradients $h^k$ satisfy $0 < H_\ell \leq {\lVert {h^k} \rVert} \leq H_u < \infty$ and $(\varepsilon_k)$ satisfies $0 \leq \varepsilon_k \leq \min\{\overline{\varepsilon}_k,\,\nu_k\}$ for all $k$, then the following holds. 1. For any given $\delta > 0$ there exists some index $K$ such that $f(x^K) \leq \varphi + \delta$. 2. If additionally $f(x^k) > \varphi$ for all $k$ and if $\lambda_k \to 0$, then $f_k\to\varphi$ for $k\to\infty$. \[rem:ISA\_conv\_over\] 1. The sequence $(\nu_k)$ is a technicality needed in the proof to ensure $\varepsilon_k \to 0$. Note from  that $\overline{\varepsilon}_k > 0$ as long as the ISA keeps iterating, since $f_k > \varphi$ is guaranteed by Steps \[step:InfBelowTarget\]–\[step:terminate\] and $0 < \lambda_k < 2$ holds by assumption. 2. Part (i) of Theorem \[thm:ISA\_conv\_over\] essentially means that after a finite number of iterations, we reach a point $x^k$ with $f^* \leq f(x^k) \leq \varphi + \delta$. Note that this point may still be infeasible (namely, if $\varphi < f(x^k) \leq \varphi + \delta$), but the closer $f(x^k)$ gets to $\varphi$, the smaller $\overline{\varepsilon}_k$ becomes, i.e., the algorithm projects more and more accurately. Thus, one can expect the possible feasibility violation to be reasonably small, depending on the quality of the estimate $\varphi$ (and the value of the constant $\delta$). 3. On the other hand, Part (ii) shows what happens when all function values $f(x^k)$ stay above the overestimate $\varphi$ of $f^*$, and we impose a stronger condition on the relaxation parameters $\lambda_k$: We eventually obtain $f(x^k)$ arbitrarily close to $\varphi$, with vanishing feasibility violation as $k \to \infty$. Then, as well as in case of termination in Step \[step:terminate\], it may be desirable to restart the algorithm using a smaller $\varphi$. 4. The conditions ${\lVert {h^k} \rVert} \geq H_\ell > 0$, for all $k$, in Theorem \[thm:ISA\_conv\_over\] imply that all subgradients used by the algorithm are nonzero. In this case, Steps \[step:SubgradZero\]–\[step:ZeroSubgradProj\] are never executed. These conditions are often automatically guaranteed, for example, if $X$ is compact and no unconstrained optimum of $f$ lies in $X$. In this case, ${\lVert {h} \rVert} \geq H_\ell > 0$ for all $h \in \partial f(x)$ and $x \in X$. Moreover, the same holds for a small enough open neighborhood of $X$. 
Also, the norms of the subgradients are bounded from above. Thus, if we start close enough to $X$ and restrict $\varepsilon_k$ to be small enough, the conditions of Theorem \[thm:ISA\_conv\_over\] are fulfilled. Another example in which the conditions are satisfied appears in Section \[sec:CompressedSensing\]. \[thm:ISA\_conv\_under\] Let the set of optimal points $X^*$ be bounded, $\varphi < f^*$, $0 < \lambda_k \leq \beta < 2$ for all $k$, and $\sum_{k=0}^{\infty} \lambda_k = \infty$. Let $(\nu_k)$ be a nonnegative sequence with $\sum_{k=0}^{\infty} \nu_k < \infty$, let $$\label{eq:LkDef} L_k {\coloneqq}\frac{\lambda_k(2 - \beta)(f_k - \varphi)}{{\lVert {h^k} \rVert}^2}\big(f^* - f_k + \frac{\beta}{2 - \beta}(f^* - \varphi)\big),$$ and let $$\label{eq:epsISAunder} \tilde{\varepsilon}_k {\coloneqq}- \left(\frac{\lambda_k(f_k - \varphi)}{{\lVert {h^k} \rVert}} + {d}_{X^*}(x^k)\right) + \sqrt{\left(\frac{\lambda_k(f_k - \varphi)}{{\lVert {h^k} \rVert}} +{d}_{X^*}(x^k)\right)^2 - L_k}.$$ If the subgradients $h^k$ satisfy $0 < H_\ell \leq {\lVert {h^k} \rVert} \leq H_u < \infty$ and $(\varepsilon_k)$ satisfies $0 \leq \varepsilon_k \leq \min\{{\lvert {\tilde{\varepsilon}_k} \rvert},\,\nu_k\}$ for all $k$, then the following holds. 1. For any given $\delta > 0$, there exists some $K$ such that $f_K \leq f^* + \frac{\beta}{2-\beta}(f^*-\varphi)+\delta$. 2. If additionally $\lambda_k \to 0$, then the sequence of objective function values $(f_k)$ of the ISA iterates $(x^k)$ converges to the optimal value $f^*$. <!-- --> 1. In the case $\varphi < f^*$, if at some point $f(x^{k+1}) \leq\varphi$, Step \[step:ExactProjection\] ensures that $\varphi < f^* \leq f(x^{k+1})$. Thus, the algorithm will never terminate with Step \[step:terminate\]. 2. Moreover, infeasible points $x^k$ with $\varphi < f(x^k) < f^*$ are possible. Hence, the inequality in Theorem \[thm:ISA\_conv\_under\] (i) may be satisfied too soon to provide conclusive information regarding solution quality. Interestingly, part (ii) shows that by letting the parameters $(\lambda_k)$ tend to zero, one can nevertheless establish convergence to the optimal value $f^*$ (and $d_{{X}}(x^k)\leq d_{{{X}}^{*}}(x^k)\to 0$, i.e., asymptotic feasibility). 3. Theoretically, small values of $\beta$ yield smaller errors, while in practice this restricts the method to very small steps (since $\lambda_k \leq \beta$), resulting in slow convergence. This illustrates a typical kind of trade-off between solution accuracy and speed. 4. The use of ${\lvert {\tilde{\varepsilon}_k} \rvert}$ in Theorem \[thm:ISA\_conv\_under\] avoids conflicting bounds on $\varepsilon_k$ in case $L_k > 0$. Because $0 \leq \varepsilon_k \leq \nu_k$ holds notwithstanding, $0\leq \varepsilon_k\to 0$ is maintained. 5. The same statements on lower and upper bounds on ${\lVert {h^k} \rVert}$ as in Remark \[rem:ISA\_conv\_over\] apply in the context of Theorem \[thm:ISA\_conv\_under\]. Convergence of the ISA {#sect:proofs} ====================== From now on, let $(x^k)$ denote the sequence of points with corresponding objective function values $(f_k)$ and subgradients $(h^k)$, $h^k\in\partial f(x^k)$, as generated by the ISA in the respective variant under consideration. Let us consider some basic inequalities which will be essential in establishing our main results. 
The exact Euclidean projection is nonexpansive, therefore $$\label{eq:distPrx} \lVert {\mathcal{P}}_{X}(y)-x\rVert\leq {\lVert {y-x} \rVert} \quad\forall x\in{X}.$$ Hence, for the inexact projection ${\mathcal{P}}_{X}^{\varepsilon}$ we have, by (\[eq:IPr\]) and (\[eq:distPrx\]), for all $x\in{X}$ $$\begin{aligned} \nonumber\lVert{\mathcal{P}}_{X}^{\varepsilon}(y)-x\rVert&=\lVert{\mathcal{P}}_{X}^{\varepsilon}(y)-{\mathcal{P}}_{X}(y)+{\mathcal{P}}_{X}(y)-x\rVert\\ \label{eq:distIPrx}&\leq\lVert{\mathcal{P}}_{X}^{\varepsilon}(y)-{\mathcal{P}}_{X}(y)\rVert+\lVert{\mathcal{P}}_{X}(y)-x\rVert\leq\varepsilon +\lVert y-x\rVert.\end{aligned}$$ At some iteration $k$, let $x^{k+1}$ be produced by the ISA using some step size $\alpha_k$ and write $y^{k}{\coloneqq}x^k-\alpha_k h^k$. We thus obtain for every $x\in{X}$: $$\begin{aligned} \nonumber &\lVert x^{k+1}-x\rVert^2=\lVert{\mathcal{P}}_{X}^{\varepsilon_k}(y^{k})-x\rVert^2\\ \nonumber \leq~&\left(\lVert y^{k}-x\rVert+\varepsilon_k\right)^2=\lVert y^{k}-x\rVert^2+2\,\lVert y^{k}-x\rVert\,\varepsilon_k+\varepsilon_k^2\\ \nonumber =~&\lVert x^k-x\rVert^2-2\,\alpha_k(h^k)^\top(x^k-x)+\alpha_k^2\,{\lVert {h^k} \rVert}^2+2\,\lVert y^{k}-x\rVert\, \varepsilon_k+\varepsilon_k^2\\ \nonumber \leq~&\lVert x^k-x\rVert^2-2\,\alpha_k(f_k-f(x))+\alpha_k^2\,{\lVert {h^k} \rVert}^2+2\lVert x^k-x\rVert\varepsilon_k+2\,\alpha_k\,\varepsilon_k{\lVert {h^k} \rVert}+\varepsilon_k^2\\ \label{eq:distIPrxSq}=~&\lVert x^k-x\rVert^2-2\,\alpha_k(f_k-f(x))+\left(\alpha_k\,{\lVert {h^k} \rVert} +\varepsilon_k\right)^2+2\,\lVert x^k-x\rVert\,\varepsilon_k,\end{aligned}$$ where the second inequality follows from the subgradient definition and the triangle inequality. Note that the above inequalities – hold in particular for every optimal point $x^*\in{X}^*$. ISA with predetermined step size sequence {#sect:ISA_conv_apriori} ----------------------------------------- The proof of the convergence of the ISA iterates $x^k$ is somewhat more involved than for the usual subgradient method as, e.g., in [@S85]. This is due to the additional error terms by inexact projection and the fact that $f_k\geq f^*$ is not guaranteed since the iterates may be infeasible. **Proof of Theorem \[thm:ISA\_conv\_apriori\].**  We rewrite the estimate (\[eq:distIPrxSq\]) with $x=x^*\in X^*$ as $$\label{eq:basic_isa_divergent_series} \lVert x^{k+1}-x^*\rVert^2 \leq \lVert x^k-x^*\rVert^2-2\,\alpha_k\,(f_k-f^*)+\underbrace{\left(\alpha_k\lVert h^k\rVert +\varepsilon_k\right)^2+2\,\lVert x^k-x^*\rVert\,\varepsilon_k}_{=\beta_k}$$ and obtain (by applying (\[eq:basic\_isa\_divergent\_series\]) for $k=0,\dots,m$) $$\begin{aligned} \nonumber \lVert x^{m+1}-x^*\rVert^2~\leq~&\lVert x^0-x^*\rVert^2-2\sum_{k=0}^m (f_k-f^*)\alpha_k+\sum_{k=0}^m \beta_k. \end{aligned}$$ Our first goal is to show that $\sum_k\beta_k$ is a convergent series. 
Using ${\lVert {h^k} \rVert}\leq H$ and denoting $A{\coloneqq}\sum_{k=0}^\infty \alpha_k^2$, we get $$\label{eq:3} \nonumber \sum_{k=0}^m\beta_k~\leq~ AH^2+\sum_{k=0}^m \varepsilon_k^2+2H\sum_{k=0}^m \alpha_k \varepsilon_k +2\sum_{k=0}^m \lVert x^k-x^*\rVert\varepsilon_k.$$ Now consider the last term (without the factor $2$) and denote $D {\coloneqq}\lVert x^0-x^*\rVert$: $$\begin{aligned} \nonumber &\sum_{k=0}^m \lVert x^k-x^*\rVert\varepsilon_k~=~D\,\varepsilon_0+\sum_{k=1}^m\big\lVert{\mathcal{P}}_{X}^{\varepsilon_{k-1}}\left(x^{k-1}-\alpha_{k-1}h^{k-1}\right)-x^*\big\rVert\,\varepsilon_k\\ \nonumber \leq~&D\,\varepsilon_0+\sum_{k=1}^m\big\lVert{\mathcal{P}}_{X}^{\varepsilon_{k-1}}\left(x^{k-1}-\alpha_{k-1}h^{k-1}\right)-{\mathcal{P}}_{X}\left(x^{k-1}-\alpha_{k-1}h^{k-1}\right)\big\rVert\,\varepsilon_k\\ \nonumber &\qquad\hspace*{-2pt}+\sum_{k=1}^m\big\lVert{\mathcal{P}}_{X}\left(x^{k-1}-\alpha_{k-1}h^{k-1}\right)-x^*\big\rVert\,\varepsilon_k\\ \nonumber \leq~&D\,\varepsilon_0+\sum_{k=1}^m\varepsilon_{k-1}\varepsilon_k +\sum_{k=1}^m\big\lVert x^{k-1}-\alpha_{k-1}h^{k-1}-x^*\big\rVert\,\varepsilon_k\\ \nonumber \leq~&D\,\varepsilon_0+\sum_{k=0}^{m-1}\varepsilon_k\varepsilon_{k+1}+\sum_{k=0}^{m-1}\lVert x^k-x^*\rVert\,\varepsilon_{k+1}+\sum_{k=0}^{m-1}{\lVert {h^k} \rVert}\,\alpha_k\,\varepsilon_{k+1}\\ \label{eq:distweg1} \leq~&D\,(\varepsilon_0+\varepsilon_1)+\sum_{k=0}^{m-1}\varepsilon_k\varepsilon_{k+1}+\sum_{k=1}^{m-1}\lVert x^k-x^*\rVert\,\varepsilon_{k+1}+H\sum_{k=0}^{m-1}\alpha_k\,\varepsilon_{k+1}. \end{aligned}$$ Repeating this procedure to eliminate all terms $\lVert x^k-x^*\rVert$ for $k>0$, we obtain $$\begin{aligned} \nonumber \text{(\ref{eq:distweg1})}~\leq~\dots~\leq~&D\sum_{k=0}^m \varepsilon_k+\sum_{j=1}^m\Big(\sum_{k=0}^{m-j}\varepsilon_k \varepsilon_{k+j}+H\sum_{k=0}^{m-j}\alpha_k\varepsilon_{k+j}\Big)\\ \label{eq:distweg2} =~&D\sum_{k=0}^m\varepsilon_k+\sum_{j=1}^{m}\sum_{k=0}^{m-j}(\varepsilon_k+H\alpha_k)\,\varepsilon_{k+j}. \end{aligned}$$ Using the above chain of inequalities, and , and the abbreviation $E{\coloneqq}\sum_{k=0}^\infty \varepsilon_k$, we finally get: $$\begin{aligned} \nonumber &\lVert x^{m+1}-x^*\rVert^2 + 2\sum_{k=0}^m (f_k-f^*)\,\alpha_k \leq D^2 + \sum_{k=0}^m \beta_k\\ \nonumber\leq~&D^2+AH^2+\sum_{k=0}^m\varepsilon_k^2+2\,H\sum_{k=0}^m\alpha_k\varepsilon_k+2\,D\sum_{k=0}^m \varepsilon_k+2\sum_{j=1}^{m}\sum_{k=0}^{m-j}(\varepsilon_k+H\alpha_k)\,\varepsilon_{k+j}\\ \nonumber \leq~&D^2+AH^2+2\,D\sum_{k=0}^m\varepsilon_k+2\sum_{j=0}^m \sum_{k=0}^{m-j}\varepsilon_k\varepsilon_{k+j}+2\,H\sum_{j=0}^m\sum_{k=0}^{m-j}\alpha_k\varepsilon_{k+j}\\ \nonumber =~&D^2+AH^2+2\,D\sum_{k=0}^m\varepsilon_k+2\sum_{j=0}^m\Big(\varepsilon_j\sum_{k=j}^m\varepsilon_k\Big)+2\,H\sum_{j=0}^m\Big(\alpha_j\sum_{k=j}^m\varepsilon_k\Big)\\ \nonumber \leq~&D^2+AH^2+2\,D\sum_{k=0}^m\varepsilon_k+2\sum_{j=0}^m E\, \varepsilon_j+2H\sum_{j=0}^m \alpha_j\, \alpha_j\\ \nonumber \leq~&D^2+AH^2+2\,(D+E)\sum_{k=0}^m\varepsilon_k+2\,H\sum_{k=0}^m\alpha_k^2\\ \label{eq:5} \leq~&(D+E)^2+E^2+(2+H)\,A\,H~=:~R~<~\infty. \end{aligned}$$ Since the iterates $x^k$ may be infeasible, possibly $f_k<f^*$ and hence, the second term on the left hand side of  might be negative. Therefore, we distinguish two cases: 1. If $f_k\geq f^*$ for all but finitely many $k$, we can assume without loss of generality that $f_k\geq f^*$ for all $k$ (by considering only the “later” iterates). 
Now, because $f_k\geq f^*$ for all $k$, $$\sum_{k=0}^{m}(f_k-f^*)\,\alpha_k \geq \sum_{k=0}^{m}\Big(\underbrace{\min_{j=0,\dots,m} f_j}_{\eqqcolon f^*_m}-f^*\Big)\,\alpha_k = (f^*_m-f^*)\sum_{k=0}^{m}\alpha_k.$$ Together with (\[eq:5\]) this yields $$0\leq 2\,(f^*_m-f^*)\sum_{k=0}^{m}\alpha_k\leq R\quad\Longleftrightarrow\quad 0\leq f^*_m-f^*\leq\frac{R}{2\sum_{k=0}^{m}\alpha_k} .$$ Thus, because $\sum_{k=0}^{m}\alpha_k$ diverges, we have $f^*_m\to f^*$ for $m\to \infty$ (and, in particular, $\liminf_{k\to\infty}f_k = f^*$). To show that $f^*$ is in fact the only possible accumulation point (and hence the limit) of $(f_k)$, assume that $(f_k)$ has another accumulation point strictly larger than $f^*$, say $f^*+\eta$ for some $\eta>0$. Then, both cases $f_k< f^*+\tfrac{1}{3}\eta$ and $f_k> f^*+\tfrac{2}{3}\eta$ must occur infinitely often. We can therefore define two index subsequences $(m_{\ell})$ and $(n_{\ell})$ by setting $n_{(-1)}{\coloneqq}-1$ and, for $\ell\geq 0$, $$\begin{aligned} m_{\ell} &{\coloneqq}\min\{\, k {\; | \;}k>n_{\ell-1},~f_k> f^*+\tfrac{2}{3}\eta\,\},\\ n_{\ell} &{\coloneqq}\min\{\, k {\; | \;}k>m_{\ell},~f_k< f^*+\tfrac{1}{3}\eta\,\}. \end{aligned}$$ Figure \[fig:updown\] illustrates this choice of indices. ![The sequences $(m_\ell)$ and $(n_\ell)$.[]{data-label="fig:updown"}](updown){width="90.00000%"} Now observe that for any $\ell$, $$\begin{aligned} \nonumber \tfrac{1}{3}\eta &< f_{m_{\ell}}-f_{n_{\ell}} \leq H\cdot{\lVert {x^{n_{\ell}}-x^{m_{\ell}}} \rVert} \leq H\left({\lVert {x^{n_{\ell}-1}-x^{m_{\ell}}} \rVert}+H\alpha_{n_{\ell}-1}+\varepsilon_{n_{\ell}-1}\right)\\ \label{eq:est_fdiff_eta_i} &\leq \dots \leq H^2 \sum_{j=m_{\ell}}^{n_{\ell}-1}\alpha_j + H\sum_{j=m_{\ell}}^{n_{\ell}-1}\varepsilon_j, \end{aligned}$$ where the second inequality is obtained similar to . For a given $m$, let $\ell_{m}{\coloneqq}\max\{\,\ell {\; | \;}n_{\ell}-1\leq m\,\}$ be the number of blocks of indices between two consecutive indices $m_\ell$ and $n_\ell-1$ until $m$. We obtain: $$\label{eq:est_fdiff_eta_i_sum} \tfrac{1}{3}\sum_{\ell=0}^{\ell_m}\eta\leq H^2\sum_{\ell=0}^{\ell_m}\sum_{j=m_{\ell}}^{n_{\ell}-1}\alpha_j +H\sum_{\ell=0}^{\ell_m}\sum_{j=m_{\ell}}^{n_{\ell}-1}\varepsilon_j \leq H^2\sum_{\ell=0}^{\ell_m}\sum_{j=m_{\ell}}^{n_{\ell}-1}\alpha_j +HE.$$ For $m\to\infty$, the left hand side tends to infinity, and since $HE < \infty$, this implies that $$\sum_{\ell=0}^{\ell_m}\sum_{j=m_{\ell}}^{n_{\ell}-1}\alpha_j\to\infty.$$ Then, since $\alpha_k>0$ and $f_k\geq f^*$ for all $k$, (\[eq:5\]) yields $$\begin{aligned} \infty &>R \geq {\lVert {x^{m+1}-x^*} \rVert}^2+2\sum_{k=0}^{m}(f_k-f^*)\alpha_k \geq 2\sum_{k=0}^{m}(f_k-f^*)\alpha_k\\ &\geq 2\sum_{\ell=0}^{\ell_m}\sum_{j=m_{\ell}}^{n_{\ell}-1}\underbrace{(f_j-f^*)}_{>\tfrac{1}{3}\eta}\alpha_j > \tfrac{2}{3}\eta \sum_{\ell=0}^{\ell_m}\sum_{j=m_{\ell}}^{n_{\ell}-1}\alpha_j. \end{aligned}$$ But for $m\to\infty$, this yields a contradiction since the sum on the right hand side diverges. Hence, there does not exist an accumulation point strictly larger than $f^*$, so we can conclude $f_k\to f^*$ as $k\to\infty$, i.e., the whole sequence $(f_k)$ converges to $f^*$. We now consider convergence of the sequence $(x^k$). From  we conclude that both terms on the left hand side are bounded independently of $m$. In particular this means $(x^k)$ is a bounded sequence. Hence, by the Bolzano-Weierstra[ß]{} Theorem, it has a convergent subsequence $(x^{k_i})$ with $x^{k_i}\to \overline{x}$ (as $i\to\infty$) for some $\overline{x}$. 
To show that the full sequence $(x^k)$ converges to $\overline{x}$, take any $K$ and any $k_i < K$ and observe from  that $${\lVert {x^K - \overline{x}} \rVert}^2 \leq {\lVert {x^{k_i} - \overline{x}} \rVert}^2 + \sum_{j=k_i}^{K-1}\beta_j.$$ Since $\sum_k \beta_k$ is a convergent series (as seen from the second last line of ), the right hand side becomes arbitrarily small for $k_i$ and $K$ large enough. This implies $x^k\to \overline{x}$, and since $\varepsilon_k \to 0$, $f_k \to f^*$, and ${X}^*$ is closed, $\overline{x} \in {X}^*$ must hold. 2. Now consider the case where $f_k<f^*$ occurs infinitely often. We write $(f^-_k)$ for the subsequence of $(f_k)$ with $f_k<f^*$ and $(f^+_k)$ for the subsequence with $f^*\geq f_k$. Clearly $f^-_k\to f^*$. Indeed, the corresponding iterates are asymptotically feasible (since the projection accuracy $\varepsilon_k$ tends to zero), and hence $f^*$ is the only possible accumulation point of $(f^-_k)$. Denoting $M^-_m = \{k\leq m{\; | \;}f_k<f^*\}$ and $M^+_m = \{k\leq m{\; | \;}f_k\geq f^*\}$, we conclude from  that $$\label{eq:est_f_splitted} {\lVert {x^{m+1}-x^*} \rVert}^2 + 2\sum_{k\in M^+_m} (f_k-f^*)\,\alpha_k\leq R + 2\sum_{k\in M^-_m} (f^*-f_k)\,\alpha_k.$$ Note that each summand is non-negative. To see that the right hand side is bounded independently of $m$, let $y^{k-1} = x^{k-1}-\alpha_{k-1}h^{k-1}$, and observe that here ($k\in M^-_m$), due to $f_k< f^*\leq f({\mathcal{P}}_X(y^{k-1}))$, we have $$\begin{aligned} f^*-f_k &\leq f\big({\mathcal{P}}_X(y^{k-1})\big)-f\big({\mathcal{P}}_X^{\varepsilon_{k-1}}(y^{k-1})\big)\\ &\leq (h^{k-1})^\top\big({\mathcal{P}}_X(y^{k-1})-{\mathcal{P}}_X^{\varepsilon_{k-1}}(y^{k-1})\big)\\ &\leq {\lVert {h^{k-1}} \rVert}\cdot \big\lVert{\mathcal{P}}_X(y^{k-1})-{\mathcal{P}}_X^{\varepsilon_{k-1}}(y^{k-1})\big\rVert \leq H\varepsilon_{k-1}, \end{aligned}$$ using the subgradient and Cauchy-Schwarz inequalities as well as property  of ${\mathcal{P}}_X^\varepsilon$ and the boundedness of the subgradient norms. From , using and , we thus obtain $$\begin{aligned} \nonumber &{\lVert {x^{m+1}-x^*} \rVert}^2 + 2\sum_{k\in M^+_m} (f_k-f^*)\alpha_k \leq R + 2\,H\sum_{k\in M^-_m} \alpha_k\, \varepsilon_{k-1}\\ \label{eq:est_f_splitted_var} \leq~ &R + 2\,H\sum_{k\in M^-_m} \alpha_k \alpha_{k-1} \leq R+2H\sum_{k=0}^{\infty}\alpha_k \alpha_{k-1} \leq R+4\,A\,H <\infty. \end{aligned}$$ Similar to case i), we conclude that both the sequence $(x^k)$ and the series $\sum_{k\in M^+_m} (f_k-f^*)\,\alpha_k$ are bounded. It remains to show that $f^+_k \to f^*$. Assume to the contrary that $(f^+_k)$ has an accumulation point $f^*+\eta$ for $\eta>0$. Similar to before, we construct index subsequences $(m_\ell)$ and $(p_\ell)$ as follows: Set $p_{(-1)}{\coloneqq}-1$ and define, for $\ell\geq 0$, $$\begin{aligned} m_\ell &{\coloneqq}\min\{\,k\in M^+_\infty{\; | \;}k>p_{\ell-1},\,f_k>f^*+\tfrac{2}{3}\eta\,\},\\ p_\ell &{\coloneqq}\min\{\,k\in M^-_\infty{\; | \;}k>m_{\ell}\,\}. 
\end{aligned}$$ Then $m_{\ell},\dots,p_{\ell}-1\in M^+_\infty$ for all $\ell$, and we have $$\tfrac{2}{3}\eta < f_{m_{\ell}}-f_{p_{\ell}}\leq H^2\sum_{j=m_{\ell}}^{p_{\ell}-1}\alpha_j +H\sum_{j=m_{\ell}}^{p_{\ell}-1}\varepsilon_j .$$ Therefore, with $\ell_m{\coloneqq}\max\{\,\ell\,\vert\,p_{\ell}-1\leq m\,\}$ for a given $m$, $$\tfrac{2}{3}\sum_{\ell=0}^{\ell_m}\eta \leq H^2\sum_{\ell=0}^{\ell_m}\sum_{j=m_{\ell}}^{p_{\ell}-1}\alpha_j+H\sum_{\ell=0}^{\ell_m}\sum_{j=m_{\ell}}^{p_{\ell}-1}\varepsilon_j \leq H^2\sum_{\ell=0}^{\ell_m}\sum_{j=m_{\ell}}^{p_{\ell}-1}\alpha_j+H\,E.$$ Now the left hand side becomes arbitrarily large as $m\to\infty$, so that also $\sum_{\ell=0}^{\ell_m}\sum_{j=m_{\ell}}^{p_{\ell}-1}\alpha_j\to\infty$, since $HE<\infty$. Note that because $\alpha_k >0$ and $$\sum_{\ell=0}^{\ell_m}\sum_{j=m_{\ell}}^{p_{\ell}-1}\alpha_j\leq\sum_{k\in M^+_m}\alpha_k,$$ this latter series must diverge as well. As a consequence, $f^*$ is itself an (other) accumulation point of $(f^+_k)$: From  we have $$\begin{aligned} \infty &>R+4AH \geq 2\sum_{k\in M^+_m}(f_k-f^*)\alpha_k\\ &\geq \sum_{k\in M^+_m}(\underbrace{\min\{\,f_j\,\vert\,j\in M^+_m,\,j\leq m\,\}}_{\eqqcolon\hat{f}^*_m}-f^*)\,\alpha_k = (\hat{f}^*_m-f^*)\sum_{k\in M^+_m}\alpha_k , \end{aligned}$$ and thus $$0\leq \hat{f}^*_m-f^*\leq\frac{R + 4\, A\, H}{\sum_{k\in M^+_m}\alpha_k}\to 0\qquad {\rm as}~m\to\infty,$$ since $\sum_{k\in M^+_m}\alpha_k$ diverges. But then, knowing $(\hat{f}^*_k)$ converges to $f^*$, we can use $(m_{\ell})$ and another index subsequence $(n_{\ell})$ given by $$n_{\ell} {\coloneqq}\min\{\,k\in M^+_\infty \,\vert\,k>m_{\ell},\,f_k<f^*+\tfrac{1}{3}\eta\,\},$$ to proceed analogously to case i) to arrive at a contradiction and conclude that no $\eta>0$ exists such that $f^*+\eta$ is an accumulation point of $(f^+_k)$. On the other hand, since $(x^k)$ is bounded and $f$ is continuous over its domain $T = \text{int}(T)$ and hence bounded over bounded subsets of $T$ (recall that for all $k$, $x^k \in \text{Im}({\mathcal{P}}_{X}^{\varepsilon_k})\subseteq T$), $(f^+_k)$ is bounded. Thus, it must have at least one accumulation point. Since $f_k\geq f^*$ for all $k\in M^+_\infty$, the only possibility left is $f^*$ itself. Hence, $f^*$ is the unique accumulation point (i.e., the limit) of the sequence $(f^+_k)$. As this is also true for $(f^-_m)$, the whole sequence $(f_k)$ converges to $f^*$. Finally, convergence of the bounded sequence $(x^k)$ to some $\overline{x}\in{X}^*$ can now be obtained just like in case i), completing the proof. ISA with dynamic Polyak-type step sizes {#sect:ISA_conv_dynamic} --------------------------------------- Let us now turn to dynamic step sizes, which often work better in practice. In the rest of this section, $\alpha_k$ will always denote step sizes of the form . Since in subgradient methods the objective function values need not decrease monotonically, the key quantity in convergence proofs usually is the distance to the optimal set ${X}^*$. For the ISA with dynamic step sizes (Algorithm \[alg:DynamicISA\]), we have the following result concerning these distances: \[lem:1\] Let $x^*\in{X}^*$. 
For the sequence of ISA iterates $(x^k)$, computed with step sizes $\alpha_k=\lambda_k(f_k-\varphi)/{\lVert {h^k} \rVert}^2$, it holds that $$\begin{aligned} \nonumber\lVert x^{k+1}-x^*\rVert^2~\leq~&\lVert x^k-x^*\rVert^2+\varepsilon_k^2+2\left(\frac{\lambda_k(f_k-\varphi)}{{\lVert {h^k} \rVert}}+\lVert x^k-x^*\rVert\right)\varepsilon_k\\ \label{eq:lem1}&+\frac{\lambda_k(f_k-\varphi)}{{\lVert {h^k} \rVert}^2}\big(\lambda_k(f_k-\varphi)-2(f_k-f^*)\big). \end{aligned}$$ In particular, also $$\label{eq:distXstar} {d}_{X^*}(x^{k+1})^2\leq{d}_{X^*}(x^k)^2-2\,\alpha_k(f_k-f^*)+(\alpha_k{\lVert {h^k} \rVert} + \varepsilon_k)^2+2\,{d}_{X^*}(x^k)\,\varepsilon_k.$$ Plug  into  for $x=x^*$ and rearrange terms to obtain (\[eq:lem1\]). If the optimization problem (\[eq:NLO\]) has a unique optimum $x^*$, then obviously $\lVert x^k-x^*\rVert={d}_{X^*}(x^k)$ for all $k$, so is identical to . Otherwise, note that since $X^*$ is the intersection of the closed set ${X}$ with the level set $\{x {\; | \;}f(x) = f^*\}$ of the convex function $f$, $X^*$ is closed (cf., for example, [@HUL04 Prop. 1.2.2, 1.2.6]) and the projection onto $X^*$ is well-defined. Then, considering $x^*={\mathcal{P}}_{{X}^*}(x^k)$, becomes $$\big\lVert x^{k+1}-{\mathcal{P}}_{{X}^*}(x^{k})\big\rVert^2\leq{d}_{X^*}(x^k)^2-2\alpha_k(f_k-f^*)+(\alpha_k{\lVert {h^k} \rVert}+\varepsilon_k)^2+2\,{d}_{X^*}(x^k)\,\varepsilon_k.$$ Furthermore, because obviously $f({\mathcal{P}}_{X^*}(x))=f({\mathcal{P}}_{X^*}(y))=f^*$ for all $x,y\in{\bbbr}^n$, and by definition of the Euclidean projection, $${d}_{X^*}(x^{k+1})^2~=~\big\lVert x^{k+1}-{\mathcal{P}}_{{X}^*}(x^{k+1})\big\rVert^2~\leq~ \big\lVert x^{k+1}-{\mathcal{P}}_{{X}^*}(x^{k})\big\rVert^2.$$ Combining the last two inequalities yields . Typical convergence results are derived by showing that the sequence $({\lVert {x^k-x^*} \rVert})$ is monotonically decreasing (for arbitrary $x^*\in{X}^*$) under certain assumptions on the step sizes, subgradients, etc. This is also done in [@AHKS87], where (\[eq:lem1\]) with $\varepsilon_k=0$ for all $k$ is the central inequality, cf. [@AHKS87 Prop. 2]. In our case, i.e., working with inexact projections as specified by (\[eq:IPr\]), we can follow this principle to derive conditions on the projection accuracies $(\varepsilon_k)$ which still allow for a (monotonic) decrease of the distances from the optimal set: If the last summand in (\[eq:lem1\]) is negative, the resulting gap between the distances from ${X}^*$ of subsequent iterates can be exploited to relax the projection accuracy, i.e., to choose $\varepsilon_k>0$ without destroying monotonicity. Naturally, to achieve feasibility (at least in the limit), we will need to have $(\varepsilon_k)$ diminishing ($\varepsilon_k\to 0$ as $k\to\infty$). It will become clear that this, combined with summability ($\sum_{k=0}^\infty \varepsilon_k < \infty$) and with monotonicity conditions as described above, is already enough to extend the analysis to cover iterations with $f_k < f^*$, which may occur since we project inaccurately. For different choices of the estimate $\varphi$ of $f^*$, we will now derive the proofs of Theorems \[thm:ISA\_conv\_over\] and \[thm:ISA\_conv\_under\] via a series of intermediate results. Corresponding results for exact projections ($\varepsilon_k=0$) can be found in [@AHKS87]; our analysis for approximate projections in fact improves on some of these earlier results (e.g., [@AHKS87 Prop. 
10] states convergence of some *sub*sequence of the function values to the optimum for the case $\varphi<f^*$, whereas Theorem \[thm:ISA\_conv\_under\] in this paper gives convergence of the whole sequence $(f_k)$, for approximate and also for exact projections). ### Using overestimates of the optimal value. In this part we will focus on the case $\varphi\geq f^*$. As might be expected, this relation allows for eliminating the unknown $f^*$ from . \[lem:2\] Let $\varphi\geq f^*$ and $\lambda_k \geq 0$. If $f_k\geq\varphi$ for some $k\in{\bbbn}$, then $$\begin{aligned} \nonumber{d}_{{X}^*}(x^{k+1})^2~\leq~&{d}_{{X}^*}(x^k)^2+\varepsilon_k^2+2\left(\frac{\lambda_k(f_k-\varphi)}{{\lVert {h^k} \rVert}}+{d}_{{X}^*}(x^k)\right)\varepsilon_k\\ \label{eq:lem2}&+\frac{\lambda_k(\lambda_k-2)(f_k-\varphi)^2}{{\lVert {h^k} \rVert}^2}. \end{aligned}$$ This follows immediately from Lemma \[lem:1\], using $f_k\geq\varphi\geq f^*$ and $\lambda_k\geq 0$. Note that the ISA guarantees $f_k > \varphi$ by sufficiently accurate projection (otherwise the method stops, indicating $\varphi$ was too large, see Steps 13–16 of Algorithm \[alg:DynamicISA\]) and the last summand in  is always negative for $0 < \lambda_k < 2$. Hence, inexact projection ($\varepsilon_k> 0$) can always be employed without destroying the monotonic decrease of $({d}_{{X}^*}(x^k))$, as long as the $\varepsilon_k$ are chosen small enough. The following result provides a theoretical bound on how large the projection inaccuracies $\varepsilon_k$ may become. \[lem:3\] Let $0<\lambda_k<2$ for all $k$. For $\varphi\geq f^*$, the sequence $({d}_{X^*}(x^k))$ is monotonically decreasing and converges to some $\zeta\geq 0$, if $0 \leq \varepsilon_k \leq \overline{\varepsilon}_k$ for all $k$, where $\overline{\varepsilon}_k$ is defined in  of Theorem \[thm:ISA\_conv\_over\]. Considering (\[eq:lem2\]), it suffices to show that for $\varepsilon_k \leq \overline{\varepsilon}_k$, we have $$\label{eq:lem3proof} \varepsilon_k^2+2\left(\frac{\lambda_k(f_k-\varphi)}{{\lVert {h^k} \rVert}}+{d}_{X^*}(x^k)\right)\varepsilon_k+\frac{\lambda_k(\lambda_k-2)(f_k-\varphi)^2}{{\lVert {h^k} \rVert}^2}\leq 0.$$ The bound $\overline{\varepsilon}_k$ from  is precisely the (unique) positive root of the quadratic function in $\varepsilon_k$ given by the left hand side of . Thus, we have a monotonically decreasing (i.e., nonincreasing) sequence $({d}_{X^*}(x^k))$, and since its members are bounded below by zero, it converges to some nonnegative value, say $\zeta$. As a consequence, if $X^*$ is bounded, we obtain boundedness of the iterate sequence $(x^k)$: \[cor:1\] Let $X^*$ be bounded. If the sequence $({d}_{{X}^*}(x^k))$ is monotonically decreasing, then the sequence $(x^k)$ is bounded. By monotonicity of $({d}_{X^*}(x^k))$, making use of the triangle inequality, $$\begin{aligned} \lVert x^k\rVert~&=~ \big\lVert x^k-{\mathcal{P}}_{X^*}(x^k)+{\mathcal{P}}_{X^*}(x^k)\big\rVert\\ &\leq~{d}_{X^*}(x^k) + \big\lVert {\mathcal{P}}_{X^*}(x^k)\big\rVert~\leq~{d}_{X^*}(x^0)+\sup_{x\in{X}^*}\lVert x\rVert~<~\infty, \end{aligned}$$ since $X^*$ is bounded by assumption. We now have all the tools at hand for proving Theorem \[thm:ISA\_conv\_over\].\ **Proof of Theorem \[thm:ISA\_conv\_over\].** First, we prove part (i). Let the main assumptions of Theorem \[thm:ISA\_conv\_over\] hold and suppose – contrary to the desired result (i) – that $f_k>\varphi+\delta$ for all $k$. 
By Lemma \[lem:2\], $$\begin{aligned} \frac{\lambda_k(2-\lambda_k)(f_k-\varphi)^2}{{\lVert {h^k} \rVert}^2}~\leq~~&{d}_{X^*}(x^k)^2-{d}_{X^*}(x^{k+1})^2\\ &+\varepsilon_k^2+2\left(\frac{\lambda_k(f_k-\varphi)}{{\lVert {h^k} \rVert}}+{d}_{X^*}(x^k)\right)\varepsilon_k. \end{aligned}$$ Since $0 < H_\ell \leq {\lVert {h^k} \rVert} \leq H_u<\infty$, $0 < \lambda_k \leq \beta < 2$, and $f_k-\varphi>\delta$ for all $k$ by assumption, we have $$\frac{\lambda_k(2-\lambda_k)(f_k-\varphi)^2}{{\lVert {h^k} \rVert}^2}~\geq~\frac{\lambda_k(2-\beta)\delta^2}{H_u^2}.$$ By Lemma \[lem:3\], ${d}_{X^*}(x^k)\leq{d}_{X^*}(x^0)$. Also, by Corollary \[cor:1\] there exists $F < \infty$ such that $f_k\leq F$ for all $k$. Hence, $\lambda_k(f_k-\varphi)\leq\beta(F-\varphi)$, and since $1/{\lVert {h^k} \rVert}\leq 1/H_\ell$, we obtain $$\label{eq:thm1proof} \frac{(2-\beta)\delta^2}{H_u^2}\lambda_k\leq{d}_{X^*}(x^k)^2-{d}_{X^*}(x^{k+1})^2+\varepsilon_k^2+2\left(\frac{\beta(F-\varphi)}{H_\ell}+{d}_{X^*}(x^0)\right)\varepsilon_k.$$ Summation of the inequalities  for $k=0,1,\dots,m$ yields $$\begin{aligned} \frac{(2-\beta)\delta^2}{H_u^2}\sum_{k=0}^{m}\lambda_k \leq~ & {d}_{X^*}(x^0)^2-{d}_{X^*}(x^{m+1})^2\\ &+\sum_{k=0}^{m}\varepsilon_k^2+2\left(\frac{\beta(F-\varphi)}{H_\ell}+{d}_{X^*}(x^0)\right)\sum_{k=0}^{m}\varepsilon_k. \end{aligned}$$ Now, by assumption, the left hand side tends to infinity as $m \to \infty$, while the right hand side remains finite (note that nonnegativity and summability of $(\nu_k)$ imply the summability of $(\nu_k^2)$, properties that carry over to $(\varepsilon_k)$). Thus, we have reached a contradiction and therefore proven part (i) of Theorem \[thm:ISA\_conv\_over\], i.e., that $f_K\leq\varphi+\delta$ holds in some iteration $K$. We now turn to part (ii): Let the main assumptions of Theorem \[thm:ISA\_conv\_over\] hold, let $\lambda_k\to 0$ and suppose $f_k>\varphi$ for all $k$. Then, since we know from part (i) that the function values fall below every $\varphi+\delta$, we can construct a monotonically decreasing subsequence $(f_{K_j})$ such that $f_{K_j}\to \varphi$. To show that $\varphi$ is the unique accumulation point of $(f_k)$, assume to the contrary that there is another subsequence of $(f_k)$ which converges to $\varphi +\eta$, with some $\eta>0$. We can now employ the same technique as in the proof of Theorem \[thm:ISA\_conv\_apriori\] to reach a contradiction: The two cases $f_k<\varphi+\tfrac{1}{3}\eta$ and $f_k>\varphi+\tfrac{2}{3}\eta$ must both occur infinitely often, since $\varphi$ and $\varphi+\eta$ are accumulation points. Set $n_{(-1)}{\coloneqq}-1$ and define, for $\ell\geq 0$, $$\begin{aligned} m_\ell &{\coloneqq}\min\{\,k{\; | \;}k>n_{\ell-1},\,f_k>\varphi+\tfrac{2}{3}\eta\,\},\\ n_\ell &{\coloneqq}\min\{\,k{\; | \;}k>m_{\ell},\,f_k<\varphi+\tfrac{1}{3}\eta\,\}. 
\end{aligned}$$ Then, with $\infty>F\geq f_k$ for all $k$ (existing since $(x^k)$ is bounded and therefore so is $(f_k)$) and the subgradient norm bounds, we obtain $$\tfrac{1}{3}\eta < f_{m_{\ell}}-f_{n_{\ell}} \leq H_u{\lVert {x^{m_{\ell}}-x^{n_{\ell}}} \rVert} \leq \frac{H_u(F-\varphi)}{H_{\ell}}\sum_{j=m_{\ell}}^{n_{\ell}-1}\lambda_j +H_u\sum_{j=m_{\ell}}^{n_{\ell}-1}\varepsilon_j$$ and from this, denoting $\ell_m{\coloneqq}\max\{\,\ell{\; | \;}n_{\ell}-1\leq m\,\}$ for a given $m$, $$\tfrac{1}{3}\sum_{\ell=0}^{\ell_{m}}\eta \leq \frac{H_u(F-\varphi)}{H_{\ell}}\sum_{\ell=0}^{\ell_{m}}\sum_{j=m_{\ell}}^{n_{\ell}-1}\lambda_j +H_u\sum_{\ell=0}^{\ell_{m}}\sum_{j=m_{\ell}}^{n_{\ell}-1}\varepsilon_j.$$ Since for $m\to\infty$, the left hand side tends to infinity, the same must hold for the right hand side. But since $\sum_{\ell=0}^{\ell_{m}}\sum_{j=m_\ell}^{n_\ell-1}\varepsilon_j\leq \sum_{k=0}^{m}\varepsilon_k\leq \sum_{k=0}^{m}\nu_k<\infty$, this implies $$\label{eq:proof_2ii_divser} \sum_{\ell=0}^{\ell_{m}}\sum_{j=m_{\ell}}^{n_{\ell}-1}\lambda_j\to\infty \qquad\text{for }m\to\infty.$$ Also, using the same estimates as in part (i) above, (\[eq:lem2\]) yields $$\nonumber \underbrace{\tfrac{2-\beta}{H_u}}_{\eqqcolon C_1<\infty}(f_k-\varphi)^2\lambda_k \leq d_{X^*}(x^k)^2-d_{X^*}(x^{k+1})^2+\varepsilon_k^2+\underbrace{2\left(\tfrac{\beta(F-\varphi)}{H_\ell}+d_{X^*}(x^0)\right)}_{\eqqcolon C_2<\infty}\varepsilon_k$$ and thus by summation for $k=0,\dots,m$ for a given $m$, $$\label{eq:proof_2ii} C_1 \sum_{k=0}^{m}(f_k-\varphi)^2\lambda_k \leq d_{X^*}(x^0)^2 -d_{X^*}(x^{m+1})^2 +\sum_{k=0}^{m}\varepsilon_k^2+C_2\sum_{k=0}^{m}\varepsilon_k.$$ Observe that all summands of the left hand side term are positive, and thus $$C_1 \sum_{k=0}^{m}(f_k-\varphi)^2\lambda_k \geq C_1 \sum_{\ell=0}^{{\ell}_ {m}}\sum_{j=m_{\ell}}^{n_{\ell}-1}(\underbrace{f_j-\varphi}_{>\tfrac{1}{3}\eta})^2\lambda_j > \frac{C_1 \eta^2}{9} \sum_{\ell=0}^{{\ell}_ {m}}\sum_{j=m_{\ell}}^{n_{\ell}-1}\lambda_j.$$ Therefore, as $m\to\infty$, the left hand side of (\[eq:proof\_2ii\]) tends to infinity (by (\[eq:proof\_2ii\_divser\]) and the above inequality) while the right hand side expression remains finite (recall $0\leq\varepsilon_k\leq\nu_k$ with $(\nu_k)$ summable and thus also square-summable). Thus, we have reached a contradiction, and it follows that $\varphi$ is the only accumulation point (i.e., the limit) of the whole sequence $(f_k)$. This proves part (ii) and thus completes the proof of Theorem \[thm:ISA\_conv\_over\]. \[rem:conv\_x\_overest\] With more technical effort one can argue along the lines of the proof of Theorem \[thm:ISA\_conv\_apriori\] to obtain the following result on the convergence of the iterates $x^k$ in the case of Theorem \[rem:ISA\_conv\_over\]: If we additionally assume that $\sum \lambda_k^2<\infty$ and that $\lambda_k\geq \sum_{j=k}^\infty \varepsilon_k$ for all $k$, then $x^k \to \overline{x}$ for some $\overline{x} \in X$ with $f(\overline{x})=\varphi$ and $d_{X^*}(\overline{x}) = \zeta\geq 0$ ($\zeta$ being the same as in Lemma \[lem:3\]). ### Using lower bounds on the optimal value. In the following, we focus on the case $\varphi<f^*$, i.e., using a constant lower bound in the step size definition (\[eq:alphak\]). Such a lower bound is often more readily available than (useful) upper bounds; for instance, it can be computed via the dual problem, or sometimes derived directly from properties of the objective function such as, e.g., nonnegativity of the function values. 
Following arguments similar to those in the previous part, we can prove convergence of the ISA (under certain assumptions), provided that the projection accuracies $(\varepsilon_k)$ obey conditions analogous to those for the case $\varphi\geq f^*$. Let us start with analogons of Lemmas \[lem:2\] and \[lem:3\]. \[lem:4\] Let $\varphi<f^*$ and $0 < \lambda_k \leq \beta < 2$. If $f_k \geq \varphi$ for some $k \in {\bbbn}$, then $$\label{eq:lem4} {d}_{X^*}(x^{k+1})^2~\leq~{d}_{X^*}(x^k)^2+\varepsilon_k^2+2\left(\frac{\lambda_k(f_k-\varphi)}{{\lVert {h^k} \rVert}}+{d}_{X^*}(x^k)\right)\varepsilon_k+L_k,$$ where $L_k$ is defined in  of Theorem \[thm:ISA\_conv\_under\]. For $\varphi<f^*$, $0<\lambda_k\leq\beta<2$, and $f_k\geq\varphi$, it holds that $$\lambda_k(f_k-\varphi)-2(f_k-f^*)~\leq\beta(f_k-\varphi)-2(f_k-f^*)=\beta(f^*-\varphi)+(2-\beta)(f^*-f_k).$$ The claim now follows immediately from Lemma \[lem:1\]. \[lem:5\] Let $\varphi<f^*$, let $0<\lambda_k\leq\beta<2$ and $f_k\geq f^*+\tfrac{\beta}{2-\beta}(f^*-\varphi)$ for all $k$, and let $L_k$ be given by . Then $({d}_{X^*}(x^k))$ is monotonically decreasing and converges to some $\xi \geq 0$, if $0 \leq \varepsilon_k \leq \tilde{\varepsilon}_k$ for all $k$, where $\tilde{\varepsilon}_k$ is defined in . The condition $f_k\geq f^*+\tfrac{\beta}{2-\beta}(f^*-\varphi)$ implies $L_k \leq 0$ and hence ensures that inexact projection can be used while still allowing for a decrease in the distances of the subsequent iterates from ${X}^*$. The rest of the proof is completely analogous to that of Lemma \[lem:3\], considering  and  to derive the upper bound $\tilde{\varepsilon}_k$ given by  on the projection accuracy. We can now state the proof of our convergence results for the case $\varphi<f^*$.\ **Proof of Theorem \[thm:ISA\_conv\_under\].** Let the assumptions of Theorem \[thm:ISA\_conv\_under\] hold. We start with proving part (i): Let some $\delta>0$ be given and suppose – contrary to the desired result (i) – that $f_k > f^*+\tfrac{\beta}{2-\beta}(f^*-\varphi)+\delta$ for all $k$. By Lemma \[lem:4\], $${d}_{X^*}(x^{k+1})^2~\leq~{d}_{X^*}(x^k)^2+\varepsilon_k^2+2\left(\frac{\lambda_k(f_k-\varphi)}{{\lVert {h^k} \rVert}}+{d}_{X^*}(x^k)\right)\varepsilon_k+L_k.$$ Since $0 < H_\ell \leq {\lVert {h^k} \rVert} \leq H_u$, $0 < \lambda_k \leq \beta < 2$, and $\varphi< f_k$, and due to our assumption on $f_k$, i.e., $$f^*-f_k+\tfrac{\beta}{2-\beta}(f^*-\varphi)~<~-\delta\qquad\text{for all }k,$$ it follows that $$L_k~<~-\,\frac{\lambda_k(2-\beta)(f_k-\varphi)\delta}{H_u^2}~<~0.$$ By Lemma \[lem:5\], ${d}_{X^*}(x^k)\leq{d}_{X^*}(x^0)$, and Corollary \[cor:1\] again ensures existence of some $F<\infty$ such that $f_k\leq F$ for all $k$. Because also $\lambda_k(f_k-\varphi)\leq\beta(F-\varphi)$ and $1/{\lVert {h^k} \rVert} \leq 1/H_\ell$, we hence obtain $$\begin{aligned} \nonumber \frac{\lambda_k(2-\beta)(f_k-\varphi)\delta}{H_u^2}~<~-L_k ~\leq~ & {d}_{X^*}(x^k)^2-{d}_{X^*}(x^{k+1})^2\\ & + \varepsilon_k^2+2\left(\frac{\beta(F-\varphi)}{H_\ell}+{d}_{X^*}(x^0)\right)\varepsilon_k.\label{eq:thm3_sum_lambda_est} \end{aligned}$$ Summation of these inequalities for $k=0,1,\dots,m$ yields $$\begin{aligned} \nonumber \frac{(2-\beta)\delta}{H_u^2}\sum_{k=0}^{m}(f_k-\varphi)\lambda_k ~<~ & {d}_{X^*}(x^0)^2-{d}_{X^*}(x^{m+1})^2\\ & + \sum_{k=0}^{m}\varepsilon_k^2+2\left(\frac{\beta(F-\varphi)}{H_\ell}+{d}_{X^*}(x^0)\right)\sum_{k=0}^{m}\varepsilon_k. 
\label{eq:thm3_sum_lambda_more} \end{aligned}$$ Moreover, our assumption on $f_k$ yields $$f_k-\varphi~>~f^*+\tfrac{\beta}{2-\beta}f^*-\tfrac{\beta}{2-\beta}\varphi+\delta-\varphi~=~\tfrac{2}{2-\beta}(f^*-\varphi)+\delta.$$ It follows from  that $$\begin{aligned} \frac{\big(2(f^*-\varphi)+(2-\beta)\delta\big)\delta}{H_u^2}\sum_{k=0}^{m}\lambda_k <\; &{d}_{X^*}(x^0)^2-{d}_{X^*}(x^{m+1})^2\\ & + \sum_{k=0}^{m}\varepsilon_k^2+2\left(\frac{\beta(F-\varphi)}{H_\ell}+{d}_{X^*}(x^0)\right)\sum_{k=0}^{m}\varepsilon_k. \end{aligned}$$ Now, by assumption, the left hand side tends to infinity as $m\to\infty$, whereas by Lemma \[lem:5\] and the choice of $0 \leq \varepsilon_k \leq \min\{|\tilde{\varepsilon}_k|,\nu_k\}$ with a nonnegative summable (and hence also square-summable) sequence $(\nu_k)$, the right hand side remains finite. Thus, we have reached a contradiction, and part (i) is proven, i.e., there does exist some $K$ such that $f_K\leq f^*+\frac{\beta}{2-\beta}(f^*-\varphi)+\delta$. Let us now turn to part (ii): Again, let the main assumptions of Theorem \[thm:ISA\_conv\_under\] hold and let $\lambda_k\to 0$. Recall that for $\varphi<f^*$, we have $f_k>\varphi$ for all $k$ by construction of the ISA. We distinguish three cases: If $f_k<f^*$ holds for all $k\geq k_0$ for some $k_0$, then $f_k\to f^*$ is obtained immediately, just like in the proof of Theorem \[thm:ISA\_conv\_apriori\]. On the other hand, if $f_k\geq f^*$ for all $k$ larger than some $k_0$, then repeated application of part (i) yields a subsequence of $(f_k)$ which converges to $f^*$: For any $\delta > 0$ we can find an index $K$ such that $f^*\leq f_K \leq f^* + \tfrac{\beta}{2-\beta}(f^*-\varphi) + \delta$. Obviously, we get arbitrarily close to $f^*$ if we choose $\beta$ and $\delta$ small enough. However, we have the restriction $\lambda_k\leq \beta$. But since $\lambda_k\to 0$, we may “restart” our argumentation if $\lambda_k$ is small enough and replace $\beta$ with a smaller one. With the convergent subsequence thus constructed, we can then use the same technique as in the proof of Theorem \[thm:ISA\_conv\_over\] (ii) to show that $(f_k)$ has no other accumulation point but $f^*$, whence $f_k\to f^*$ follows. Finally, when both cases $f_k<f^*$ and $f_k\geq f^*$ occur infinitely often, we can proceed similar to the proof of Theorem \[thm:ISA\_conv\_apriori\]: The subsequence of function values below $f^*$ converges to $f^*$, since $\varepsilon_k\to 0$. For the function values greater or equal to $f^*$, we assume that there is an accumulation point $f^*+\eta$ larger than $f^*$, deduce that an appropriate sub-sum of the $\lambda_k$’s diverge and then sum up equation for the respective indices (belonging to $\{k{\; | \;}f_k\geq f^*\}$) to arrive at a contradiction. Note that the iterate sequence $(x^k)$ is bounded, due to Corollary \[cor:1\] (for iterations $k$ with $f_k\geq f^*$) and since the iterates with $\varphi<f_k<f^*$ stay within a bounded neighborhood of the bounded set $X^*$, since $\varepsilon_k$ tends to zero and is summable. Therefore, as $f$ is continuous over $T$ (which contains all possible $x^k$), $(f_k)$ is bounded as well and therefore must have at least one accumulation point. The only possibility left now is $f^*$, so we conclude $f_k\to f^*$. With $f_k\to f^*$ and $\varepsilon_k\to 0$, we obviously have $d_{X^*}(x^k)\to 0$ in the setting of Theorem \[thm:ISA\_conv\_under\]. 
Furthermore, Remark \[rem:conv\_x\_overest\] applies similarly: With more conditions on $\lambda_k$ and more technical effort one can obtain convergence of the sequence $(x^k)$ to some $\overline{x}\in X^*$. Discussion ========== In this section, we will discuss extensions of the ISA. We will also illustrate how to obtain bounds on the projection accuracies that are independent of the (generally unknown) distance from the optimal set, and thus computable. Extension to $\epsilon$-subgradients ------------------------------------ It is noteworthy that the above convergence analyses also work when replacing the subgradients by $\epsilon$-subgradients [@BM73], i.e., replacing $\partial f(x^k)$ by $$\label{eq:epssubdiff} \partial_{\gamma_k} f(x^k) {\coloneqq}\{\,h\in{\bbbr}^n {\; | \;}f(x)-f(x^k) \geq h^\top (x-x^k) - \gamma_k \quad \forall\, x\in{\bbbr}^n\,\}.$$ (To avoid confusion with the projection accuracy parameters $\varepsilon_k$, we use $\gamma_k$.) For instance, we immediately obtain the following result: \[cor:epssubgradapriori\] Let the ISA (Algorithm \[alg:APrioriISA\]) choose $h^k\in\partial_{\gamma_k}f(x^k)$ with $\gamma_k\geq 0$ for all $k$. Under the assumptions of Theorem \[thm:ISA\_conv\_apriori\], if $(\gamma_k)$ is chosen summable $(\sum_{k=0}^\infty \gamma_k<\infty)$ and such that 1. $\gamma_k\leq\mu\,\alpha_k$ for some $\mu>0$, or 2. $\gamma_k\leq\mu\,\varepsilon_k$ for some $\mu>0$, then the sequence of ISA iterates $(x^k)$ converges to an optimal point. The proof is analogous to that of Theorem \[thm:ISA\_conv\_apriori\]; we will therefore only sketch the necessary modifications: Choosing $h^k\in\partial_{\gamma_k}f(x^k)$ (instead of $h^k\in\partial f(x^k)$) adds the term $+2\alpha_k\gamma_k$ to the right hand side of . If $\gamma_k\leq \mu\,\alpha_k$ for some constant $\mu>0$, the square-summability of $(\alpha_k)$ suffices: By upper bounding $2\alpha_k\gamma_k$, the constant term $+2\mu A$ is added to the definition of $R$ in . Similarly, $\gamma_k\leq \mu\,\varepsilon_k$ does not impair convergence under the assumptions of Theorem \[thm:ISA\_conv\_apriori\], because then in  the additional summand is $$2\sum_{k=0}^m \alpha_k\gamma_k\,\leq\, 2\mu\sum_{k=0}^m \alpha_k\varepsilon_k\,\leq\, 2\mu\sum_{k=0}^m\Big(\alpha_k\sum_{\ell=k}^\infty\varepsilon_k\Big)\,\leq\, 2\mu\sum_{k=0}^m\alpha_k^2\,\leq\, 2\mu A.$$ The rest of the proof is almost identical, using $R$ modified as explained above and some other minor changes where $\gamma_k$-terms need to be considered, e.g., the term $+\gamma_{m_{\ell}}$ is introduced in , yielding an additional sum in , which remains finite when passing to the limit because $(\gamma_k)$ is summable. Similar extensions are possible when using dynamic step sizes of the form . The upper bounds  and  for the projection accuracies $(\varepsilon_k)$ will depend on $(\gamma_k)$ as well, which of course must be taken into account when extending the proofs accordingly. Then, summability of $(\gamma_k)$ (implying $\gamma_k\to 0$) is enough to guarantee convergence. In particular, one can again choose $\gamma_k\leq\mu\,\varepsilon_k$ for some $\mu>0$. We will not go into detail here, since the extensions are straightforward. Computable bounds on ${d}_{{X}^*}(x^k)$ --------------------------------------- The results in Theorems \[thm:ISA\_conv\_over\] and \[thm:ISA\_conv\_under\] hinge on bounds $\overline{\varepsilon}_k$ and $\tilde{\varepsilon}_k$ on the projection accuracy parameters $\varepsilon_k$, respectively. 
These bounds depend on unknown information and therefore seem of little practical use such as, for instance, an automated accuracy control in an implementation of the dynamic step size ISA. While the quantity $f^*$ can sometimes be replaced by estimates directly, it will generally be hard to obtain useful estimates for the distance of the current iterate to the optimal set. However, such estimates are available for certain classes of objective functions. We will roughly sketch several examples in the following. For instance, when $f$ is a *strongly convex function*, i.e., there exists some constant $C$ such that for all $x,y$ and $\mu \in [0,1]$ $$f(\mu x+(1-\mu)y)~\leq~\mu f(x)+(1-\mu)f(y)-\mu(1-\mu)C {\lVert {x-y} \rVert}^2,$$ one can use the following upper bound on the distance to the optimal set [@KAC91]: $${d}_{X^*}(x)~\leq~\min\,\Big\{\,\sqrt{\tfrac{f(x)-f^*}{C}},\,\tfrac{1}{2C}\,\min_{h\in\partial f(x)}\,{\lVert {h} \rVert}\,\Big\}.$$ For functions $f$ such that $f(x)\geq C\, {\lVert {x} \rVert} - D$, with constants $C,D>0$, one can make use of ${d}_{X^*}(x)\leq {\lVert {x} \rVert} + \tfrac{1}{C}(f^*+D)$, obtained by simply employing the triangle inequality. Another related example class is induced by coercive self-adjoint operators $F$, i.e., $f(x) {\coloneqq}\langle Fx,x\rangle \geq C {\lVert {x} \rVert}^2$ with some constant $C > 0$ and a scalar product $\langle\cdot,\cdot\rangle$. The (usually) unknown $f^*$ appearing above may again be treated using estimates. Yet another important class is comprised of functions which have a set of weak sharp minima [@F88] over $X$, i.e., there exists a constant $\mu>0$ such that $$\label{eq:weaksharp} f(x) - f^*~\geq~\mu\,{d}_{{X}^*}(x)\qquad\forall\,x\in {X}.$$ Using ${d}_{{X}^*}(x)\leq{d}_{{X}}(x)+{d}_{{X}^*}({\mathcal{P}}_{{X}}(x))$ for $x\in{\bbbr}^n$, we can then estimate the distance of $x$ to $X^*$ via the weak sharp minima property of $f$. An important subclass of such functions is composed of the polyhedral functions, i.e., $f$ has the form $f(x)=\max\{\,a_i^\top x+b_i {\; | \;}1\leq i\leq N\,\}$, where $a_i\neq 0$ for all $i$; the scalar $\mu$ is then given by $\mu=\min\{\,{\lVert {a_i} \rVert}\,\vert\,1\leq i\leq N\,\}$. Rephrasing (\[eq:weaksharp\]) as $${d}_{{X}^*}(x)~\leq~\frac{f(x)-f^*}{\mu}\qquad\forall x\in{X},$$ we see that for $\varphi\leq f^*$ (e.g., dual lower bounds $\varphi$), $${d}_{{X}^*}(x)~\leq~\frac{f(x)-\varphi}{\mu}\qquad\forall x\in{X}.$$ Thus, when the bounds on the distance to the optimal set derived from using the above inequalities become too conservative (i.e., too large, resulting in very small $\tilde{\varepsilon}_k$-bounds), one could try to improve the above bounds by improving the lower bound $\varphi$. Application: Compressed Sensing {#sec:CompressedSensing} =============================== Compressed Sensing (CS) is a recent and very active research field dealing, loosely speaking, with the recovery of signals from incomplete measurements. We refer the interested reader to [@CSweb] for more information, surveys, and key literature. A core problem of CS is finding the sparsest solution to an underdetermined linear system, i.e., $$\label{eq:P0} \min\,{\lVert {x} \rVert}_0\qquad\text{s.\,t.}\qquad Ax=b,\qquad\quad(A\in{\bbbr}^{m\times n},~\text{rank}(A)=m,~m<n),$$ where ${\lVert {x} \rVert}_0$ denotes the $\ell_0$ quasi-norm or support size of the vector $x$, i.e., the number of its nonzero entries. 
This problem is known to be $\mathcal{NP}$-hard, hence a common approach is considering the convex relaxation known as $\ell_1$-minimization or Basis Pursuit [@CDS98], $$\label{eq:P1} \min\,{\lVert {x} \rVert}_1\qquad\text{s.\,t.}\qquad Ax=b.$$ It was shown that under certain conditions, the solutions of  and  coincide, see, e.g., [@CRT06; @D06]. This motivated a large amount of research on the efficient solution of , especially in large-scale settings. In this section, we outline a specialization of the ISA to the $\ell_1$-minimization problem . #### Subgradients. The subdifferential of the $\ell_1$-norm at a point $x$ is given by $$\label{eq:L1subdiff} \partial{\lVert {x} \rVert}_1 = \Big\{~h\in [-\mathds{1},\mathds{1}]^n~\Big\vert~h_i=\frac{x_i}{\vert x_i\vert},\quad \forall\, i\in\{1,\dots,n\}\text{ with } x_i \neq 0~\Big\}.$$ We may therefore simply use the signs of the iterates as subgradients, i.e., $$\label{eq:L1subgrad} \partial{\lVert {x^k} \rVert}_1 \ni h^k~{\coloneqq}~\text{sign}(x^k)~=~ \begin{cases} \hfill 1, & (x^k)_i>0,\\ \hfill 0, &(x^k)_i=0,\\ ~-1, & (x^k)_i<0. \end{cases}$$ As long as $b \neq 0$, the upper and lower bounds on the norms of the subgradients satisfy $H_\ell \geq 1$ and $H_u \leq n$. #### Inexact Projection. For linear equality constraints as in , the Euclidean projection of a point $z\in{\bbbr}^n$ onto the affine feasible set ${X}{\coloneqq}\{\,x {\; | \;}Ax=b\,\}$ can be explicitly calculated as $$\label{eq:L1proj} {\mathcal{P}}_{{X}}(z)~=~\big(I-A^\top (AA^\top)^{-1}A\big)z+A^\top (AA^\top)^{-1}\,b,$$ where $I$ denotes the ($n\times n$) identity matrix. However, for numerical stability, we wish to avoid the explicit calculation of the matrix because it involves determining the inverse of the matrix product $AA^\top$. Instead of applying (\[eq:L1proj\]) in each iteration, we could equivalently use the following procedure: $$\begin{aligned} \label{eq:L1projIter1}&\qquad z^k{\coloneqq}x^k-\alpha_k h^k\qquad\text{(unprojected next iterate)},\\ \label{eq:L1projIter2}&\qquad\text{find }q^k\text{ solving }AA^\top q=Az^k-b,\\ \label{eq:L1projIter3}&\qquad x^{k+1}{\coloneqq}z^k-A^\top q^k.\end{aligned}$$ Note that the matrix $AA^\top$ is symmetric and positive definite, for $A$ with full (row-)rank $m$. Hence, the linear system in (\[eq:L1projIter2\]) can be solved by an iterative method, e.g., the method of Conjugate Gradients (CG) [@HS52]. In exact arithmetic, CG computes a solution to an $m$-dimensional linear system in $m$ iterations. However, rounding errors in actual computations may destroy this property. A preconditioner could be applied to improve performance, but note that (\[eq:L1projIter2\]) amounts to solving a normal equation, which are generally difficult to precondition [@S00]. Thus, we can derive an *inexact* projection operator as follows: Let $q^k$ be the *approximate* solution of  that gets updated with every CG iteration, and denote the corresponding residual by $$\label{eq:L1res} r_q^k~{\coloneqq}~AA^\top q^k-\big(A(x^k-\alpha_k h^k)-b\big).$$ We can stop the CG iteration (prematurely), as soon as the residual norm ${\lVert {r_q^k} \rVert}_2$ becomes “small enough”. Then, if ${\lVert {r_q^k} \rVert}_2>0$, $AA^\top q^k\neq Az^k -b$, and hence $x^{k+1}$ is not the exact projection of $x^k-\alpha_k h^k$ onto $X$. 
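To illustrate how these pieces fit together, the following is a minimal sketch of the resulting method for (\[eq:P1\]); it is an illustration rather than a reference implementation. It assumes a Polyak-type dynamic step size $\alpha_k=\lambda_k(f_k-\varphi)/{\lVert {h^k} \rVert}_2^2$ and a summable accuracy schedule $\varepsilon_k$; the function names, the schedule, and the parameter defaults are our own choices.

```python
import numpy as np

def cg_solve(M, rhs, tol, maxiter):
    # Plain conjugate gradients for the SPD system M q = rhs,
    # stopped as soon as the residual norm drops below tol.
    q = np.zeros_like(rhs)
    r = rhs.copy()          # residual for the initial guess q = 0
    d = r.copy()
    rs = r @ r
    for _ in range(maxiter):
        if np.sqrt(rs) <= tol:
            break
        Md = M @ d
        step = rs / (d @ Md)
        q += step * d
        r -= step * Md
        rs_new = r @ r
        d = r + (rs_new / rs) * d
        rs = rs_new
    return q

def isa_l1(A, b, phi=0.0, lam=1.0, iters=2000):
    # Sketch of the dynamic-step-size ISA for min ||x||_1  s.t.  Ax = b.
    # The step-size rule and the accuracy schedule eps_k are assumptions.
    m, n = A.shape
    M = A @ A.T                                     # SPD, since rank(A) = m
    sigma_min = np.sqrt(np.linalg.eigvalsh(M)[0])   # smallest singular value of A
    x = A.T @ b                                     # (generally infeasible) starting point
    for k in range(iters):
        h = np.sign(x)                              # subgradient of ||.||_1, cf. the sign formula above
        if not np.any(h):                           # x = 0 can only be optimal when b = 0
            break
        f = np.linalg.norm(x, 1)
        alpha = lam * (f - phi) / (h @ h)           # assumed Polyak-type dynamic step
        z = x - alpha * h                           # unprojected next iterate
        eps_k = 1.0 / (k + 1) ** 2                  # summable projection accuracies (assumption)
        # inexact projection: truncated CG on A A^T q = A z - b, stopped once
        # ||r_q|| <= sigma_min(A) * eps_k (the stopping criterion derived just below)
        q = cg_solve(M, A @ z - b, tol=sigma_min * eps_k, maxiter=m)
        x = z - A.T @ q
    return x
```

In the experiment reported further below, this accuracy schedule is replaced by simply capping CG at two iterations per projection.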
To quantify the resulting projection error, we observe for the $\ell_1$-minimization problem: $$\begin{aligned} \nonumber &\lVert x^{k+1}-{\mathcal{P}}_{X}^0(x^k-\alpha_k h^k)\rVert_2\\ \nonumber =~&\lVert x^k-\alpha_k h^k-A^\top q^k-(x^k-\alpha_k h^k-A^\top (AA^\top)^{-1}(A(x^k-\alpha_k h^k)-b))\rVert_2\\ \nonumber =~&\lVert -A^\top q^k+A^\top (AA^\top)^{-1}Ax^k-\alpha_k A^\top(AA^\top)^{-1}Ah^k-A^\top (AA^\top)^{-1}b\rVert_2\\ \nonumber =~&\lVert A^\top (AA^\top)^{-1}\left(A(x^k-\alpha_k h^k)-b-AA^\top q^k\right)\rVert_2\\ \label{eq:L1IPrCGres} \leq~&\lVert A^\top (AA^\top)^{-1}\rVert_2\,\lVert AA^\top q^k-(A(x^k-\alpha_k h^k)-b)\rVert_2~=~\frac{\lVert r_q^k\rVert_2}{\sigma_{\min}(A)},\end{aligned}$$ where $\sigma_{\min}(A)=\sigma_{\min}(A^\top)$ is the smallest singular value of $A$ (and of $A^\top$), i.e., the square root of the smallest eigenvalue of $AA^\top$. Moreover, $\lVert A^\top (AA^\top)^{-1}\rVert_2$ is by definition the square root of the largest eigenvalue of $(A^\top (AA^\top)^{-1})^\top (A^\top (AA^\top)^{-1})=(AA^\top)^{-\top}=(AA^\top)^{-1}$, and hence equals $1/\sigma_{\min}(A)$; this justifies the last equality in (\[eq:L1IPrCGres\]). Note that $AA^\top$ is positive definite, so $\sigma_{\min}(A)>0$ and (\[eq:L1IPrCGres\]) is well-defined. By relation (\[eq:L1IPrCGres\]), stopping the CG procedure in (\[eq:L1projIter2\]) as soon as $$\label{eq:CS_cg_stop} {\lVert {r_q^k} \rVert}_2~\leq~\sigma_{\min}(A)\,\varepsilon_k,$$ ensures that the steps (\[eq:L1projIter1\])–(\[eq:L1projIter3\]) form an inexact projection operator of the required type. Furthermore, to obtain weaker, but computable, upper bounds on $(\varepsilon_k)$, we can use the results about weak sharp minima discussed in the previous section: The $\ell_1$-norm can be rewritten as a polyhedral function. Omitting the details here, with $\varphi\leq f^*$ (which is easily available, e.g., $\varphi=0$), we can thus derive $${d}_{{X}^*}(x^k)~\leq~2\frac{{\lVert {Ax^k-b} \rVert}_2}{\sigma_{\min}(A)}+\frac{f_k-\varphi}{\sqrt{n}}.$$

#### Example.

We now present a brief example of a prototypical implementation of Algorithm \[alg:DynamicISA\] in <span style="font-variant:small-caps;">Matlab</span>. Our test instance for (\[eq:P1\]) consists of a $512\times 2048$ matrix $A$ being a concatenation of four $512\times 512$ dictionaries, namely, a sparse band matrix, a sparse block diagonal matrix with one full row added, the Hadamard and the identity matrix (all columns normalized to unit Euclidean norm). The right hand side $b$ is computed as $b=Ax^*$, where $x^*$ is a given point with $14$ nonzeros, known to be the unique optimum by checking the *exact recovery condition* [@T04] on its support. We used $A^\top b\notin {X}{\coloneqq}\{x {\; | \;}Ax=b\}$ as our starting point and the trivial lower bound $\varphi=0$ in the step size, and we computed at most two iterations of the CG method to approximate the projections (instead of using bounds that theoretically guarantee convergence). We stopped when the step size became smaller than the double-precision machine accuracy of <span style="font-variant:small-caps;">Matlab</span>.

![Distances to feasible set and optimal point during a run of the ISA on an instance of the Basis Pursuit Problem. The vertical axes are in log-scale.[]{data-label="fig:CSex"}](Example){width="\textwidth"}

Figure \[fig:CSex\] depicts the distances from the iterates $x^k$ to $x^*$ (lower left picture) and the feasibility violation ${\lVert {Ax^{k}-b} \rVert}_\infty$ (upper left), which serves as a measure for the distance from $x^k$ to $X$, per iteration. Notably, the algorithm strongly deviates from feasibility at the beginning, but eventually approaches ${X}$.
The subplots on the right side show the quantities ${\lVert {Ax^k-b} \rVert}_\infty$ and ${d}_{X^*}(x^k)$ for another run of the dynamic ISA method, this time using more accurate projections (as good as CG can do), with all other parameters being equal. The inexact version took about 2.7 seconds for 2141 iterations, the one with higher accuracy projections about 4.2 seconds for 1905 iterations. Note the different scales of the feasibility violation plots for the inexact and accurate run of the algorithm: While the distance to the optimal point decreases similarly in both variants, the feasibility in the inexact version is initially violated by values in the order of ten thousand, whereas the other version maintains a feasibility accuracy of around $10^{-6}$. Concluding Remarks ================== Several aspects remain subject to future research. For instance, it would be interesting to investigate whether our framework extends to more general (infinite-dimensional) Hilbert space settings, incremental subgradient schemes, bundle methods (see, e.g., [@HUL93; @K90]), or Nesterov’s algorithm [@N05]. It is also of interest to consider how the ISA framework could be combined with error-admitting settings such as those in [@Z10; @NB10], i.e., for random or deterministic (non-vanishing) noise and erroneous function or subgradient evaluations. Some of the recent results in [@NB10], which all require feasible iterates, seem conceptually somewhat close to our convergence analyses, so we presume a blend of the two approaches to be rather fruitful. From a practical viewpoint, it will be interesting to see how the ISA, or possibly a variable target value variant as described in Remark 2, compares with other solvers in terms of solution accuracy and runtime. This goes beyond the scope of this more theoretically oriented paper. However, for the $\ell_1$-minimization problem (\[eq:P1\]) we are currently preparing an extensive computational comparison of various state-of-the-art solvers; preliminary results indicate that the ISA may indeed be competitive. [^1]: This work has been funded by the Deutsche Forschungsgemeinschaft (DFG) within the project “Sparse Exact and Approximate Recovery” under grants LO 1436/3-1 and PF 709/1-1. Moreover, D. Lorenz acknowledges support from the DFG project “Sparsity and Compressed Sensing in Inverse Problems” under grant LO 1436/2-1.
--- abstract: 'We give an elementary proof of Burq’s resolvent bounds for long range semiclassical Schrödinger operators. Globally, the resolvent norm grows exponentially in the inverse semiclassical parameter, and near infinity it grows linearly. We also weaken the regularity assumptions on the potential.' address: 'Department of Mathematics, MIT, Cambridge, MA 02139, USA' author: - Kiril Datchev title: Quantitative limiting absorption principle in the semiclassical limit --- Let $\Delta\le 0$ be the Laplacian on $\mathbb R^n$, $n \ne 2$, and let $E>0$. Let $$\label{e:pdef} P = P_h := -h^2 \Delta + V - E, \qquad h >0,$$ where, using polar coordinates $(r,\omega) \in (0,\infty)\times \mathbb S^{n-1}$, we suppose that $V = V_h(r,\omega)$ and its distributional derivative $\partial_r V$ are in $L^\infty ((0,\infty)\times\mathbb S^{n-1})$. Suppose futher that $$\label{e:vbound} V \le (1+r)^{-\delta_0}, \qquad \partial_r V \le (1+r)^{-1-\delta_0},$$ for some $\delta_0>0$. Since $V \in L^\infty(\mathbb R^n)$, the resolvent $(P-i\varepsilon)^{-1}$ is defined $L^2(\mathbb R^n) \to H^2(\mathbb R^n)$ for $\varepsilon >0$ by the Kato–Rellich theorem. We prove the following weighted resolvent bounds: For any $s>1/2$ there are $C, R_0, h_0>0$ such that $$\label{e:t1} \left\| (1+r)^{-s} (P -i\varepsilon)^{-1} (1+r)^{-s} \right\|_{L^2(\mathbb R^n) \to L^2(\mathbb R^n)} \le e^{C/h},$$ $$\label{e:t2} \left\|(1+r)^{-s} \mathbf{1}_{\ge R_0} (P-i\varepsilon)^{-1} \mathbf{1}_{\ge R_0}(1+r)^{-s} \right\|_{L^2(\mathbb R^n) \to L^2(\mathbb R^n)} \le C / h,$$ for all $\varepsilon >0 $, $h \in (0,h_0]$, where $\mathbf{1}_{\ge R_0}$ is the characteristic function of $\{x \in \mathbb R^n \colon |x| \ge R_0\}$. This Theorem was first proved by Burq [@bu0; @bu], who required $V$ to be smooth, but allowed it to be a differential operator on an exterior domain $\mathbb R^n \setminus \overline {\mathcal O}$, $n \ge 1$. Different proofs were found by Sjöstrand [@sres] and Vodev [@vo0]. Cardoso and Vodev [@cv] gave a version for manifolds with asymptotically conic or hyperbolic ends, and, most recently, Rodnianski and Tao [@rt] considered Schrödinger operators on asymptotically conic manifolds, obtaining also bounds for low energies and other refinements. Here we consider only operators of the form , with $n \ne 2$, in order to stress the elementary nature of the proof and to present the ideas in the simplest setting; however, the assumption is mild, and our method should also give simplifications and low regularity results in more general cases. Our proof is closest in spirit to that of Cardoso and Vodev [@cv] (see also [@voasy; @vo]). The novelty is a *global* Carleman estimate of the form $$\left\| (1+r)^{-s} e^{\varphi/h}v\right\|^2_{L^2(\mathbb R^n)} \le \frac {C}{h^2}\left\|(1+r)^{s}e^{\varphi/h}(P-i\varepsilon)v\right\|^2_{L^2(\mathbb R^n)} + \frac {C\varepsilon}h \|e^{\varphi/h}v\|^2_{L^2(\mathbb R^n)},$$ with $C$ independent of the support of $v$, and with $\varphi = \varphi(r)$ nondecreasing and constant outside of a compact set: see Lemma \[l:carl\]. Carleman estimates are crucial in all the proofs mentioned above, and one nice feature of our approach is that in this setting the construction of $\varphi$ is relatively simple: see Lemma \[l:weights\]. The $h$ dependence in is optimal in general, but improvements hold under dynamical assumptions on the Hamilton flow $\Phi(t) = \exp t(2 \xi \partial_x - \partial_x V(x) \partial_\xi) $ on $T^*\mathbb R^n$. 
(Note, however, that $\Phi$ may be undefined under our regularity assumptions.) See [@wu] for a recent survey, and [@dy; @nz; @ch] for more recent results in this active area. For example, if $\Phi$ is *nontrapping* at energy $E$ (e.g. if $V \equiv 0$), then can be replaced by $$\label{e:nontrap} \left\| (1+r)^{-s} (P -i\varepsilon)^{-1} (1+r)^{-s} \right\|_{L^2(\mathbb R^n) \to L^2(\mathbb R^n)} \le C/h.$$ In this sense says that applying $\mathbf{1}_{\ge R_0}$ cutoffs removes the loss exhibited by compared to . It would be interesting to know if some improvement over persists if we remove one of the $\mathbf{1}_{\ge R_0}$ factors from , and if $\mathbf{1}_{\ge R_0}$ can be replaced by a finer cutoff; for some results in this direction, see [@dv; @rt; @hv]. For example, in [@dv], Vasy and I show that if $\Phi$ is ‘mildly’ trapping then holds with $\mathbf{1}_{\ge R_0}$ replaced by a microlocal cutoff vanishing only on an arbitrarily small neighborhood of the trapped set. In [@voasy; @vo], Vodev studied operators of the form , satisfying , but with $V$ replaced by $h^\nu V$ for some $\nu>0$; he showed that in that case the bound holds. He also allowed $V$ to contain a magnetic term and a less regular short range term. I am grateful to Maciej Zworski for encouraging me to write this note, and for many very helpful discussions and suggestions. Thanks also to Nicolas Burq for pointing out a problem with an earlier version of this argument. Thanks finally to Georgi Vodev for sharing his preprint [@vo], which gave me the initial idea for the proof. I am also grateful for the support of a National Science Foundation postdoctoral fellowship. Proof of Theorem ================ We begin with two lemmas; the first constructs a nondecreasing Carleman weight for $P$ which is constant outside of a compact set, and the second uses this weight to prove a global Carleman estimate. Without loss of generality, we assume $0<2s-1<\delta_0<1$. Put $$\delta := 2s - 1<\delta_0, \qquad w = w_\delta(r) := 1 - (1+r)^{-\delta},\qquad m := (1+r^2)^{(1+\delta)/4}.$$ \[l:weights\] If $\delta>0$ is small enough, there are $h_0$, $R_0>0$, and $\varphi = \varphi(r) \in C^\infty([0,\infty))$ with $\varphi' \ge 0$ and $ \operatorname{supp}\varphi' = [0,R_0]$, such that $$\label{e:lweights} \partial_r \left(w(r)(E-V_h(r,\omega) + \varphi'(r)^2 - h \varphi''(r))\right) \ge Ew'(r)/4,$$ for all $h \in (0,h_0]$, $r>0$, $\omega \in \mathbb S^{n-1}$. \[l:carl\] Let $\delta$, $h_0$, and $\varphi = \varphi(r)$ be as in Lemma \[l:weights\]. There is $C >0$ such that $$\label{e:lcarl} \left\| m^{-1} e^{\varphi/h}v\right\|^2_{L^2(\mathbb R^n)} \le \frac {C}{h^2}\left\|me^{\varphi/h}(P-i\varepsilon)v\right\|^2_{L^2(\mathbb R^n)} + \frac {C\varepsilon}h \|e^{\varphi/h}v\|^2_{L^2(\mathbb R^n)},$$ for all $v \in C_0^\infty(\mathbb R^n)$, $\varepsilon \ge0$, and $h \in (0,h_0]$. For $B, \, R, \, R_0$ (depending on $\delta$) to be determined later, put $$\psi = \psi_\delta(r) := \begin{cases} \delta_0^{-1} , &r \le R, \\ \frac B {w(r)} - \frac E 4 , & R < r < R_0, \\ 0, &r \ge R_0,\end{cases}$$ We will show that, for $\delta$ small enough, there are $B, \, R, \, R_0$ which make $\psi$ continuous and $$\label{e:psi0} - E/2 \le \psi - V - (\partial_r V - \psi')w/w', \qquad r>0, \ r \ne R, \ r \ne R_0.$$ Suppose for a moment that this is done. Fix $\rho \in C_0^\infty((0,\infty))$ with $\rho \ge 0$, $\int \rho = 1$, and for $\eta>0$, put $\rho_\eta(r) = \rho(r/\eta)/\eta$. 
If $\eta$ and $h_0$ are sufficiently small, then we may take $$\varphi(r) := \int_0^r \widetilde \psi(t)dt, \qquad \widetilde \psi := \rho_\eta * \sqrt{\psi}.$$ It remains to find $B$, $R$, and $R_0$ such that $\psi$ is continuous and satisfies . Note that, by we have $$V + (\partial_r V) w/w' \le G_\delta(r) := (1+r)^{-\delta_0} + \delta^{-1}(1 - (1+r)^{-\delta})(1+r)^{\delta-\delta_0},$$ and $$G'_\delta(r) = (\delta^{-1} - 1)\delta_0(1+r)^{-1-\delta_0} -\delta^{-1}(\delta_0-\delta)(1+r)^{-1-\delta_0+ \delta}.$$ So, for each $\delta \in (0,\delta_0)$, $G_\delta$ attains its maximum value at $r_{\max}$ which is given by $$\label{e:gmax} (1+r_{\max})^\delta :=(1-\delta)/(1-\delta/\delta_0) = 1 + \delta(\delta_0^{-1} - 1) + O(\delta^2).$$ Hence we have, for all $r>0$, $$\begin{split} G_\delta(r) &= (1+r)^{-\delta_0} \left(1 - \delta^{-1} + \delta^{-1}(1+r)^\delta \right) \\&\le \left(1 - \delta(\delta_0^{-1} - 1) + O(\delta^2)\right)^{\delta_0/\delta} \delta_0^{-1}(1+O(\delta)). \end{split}$$ Since $(1-x)^{1/x} \le 1/e$ for $x > 0$ and since $\delta_0<1$, this implies, for $\delta$ small enough, $$G_\delta(r) \le e^{-(1-\delta_0) + O(\delta)} \delta_0^{-1}(1+O(\delta)) \le \delta_0^{-1}.$$ Consequently, regardless of the value of $R$, we have, for $r < R$, $$\label{e:rsmall} \psi - V - (\partial_r V - \psi')w/w' \ge \delta_0^{-1} - G_\delta(r) \ge 0,$$ which implies for $r<R$. We will take $R>0$ large enough that $$\label{e:r1} r \ge R \Longrightarrow G_\delta(r) \le E/4.$$ First let us see that implies for $r>R$. Indeed, for $r > R_0$, implies $$\psi_0 - V - (\partial_r V - \psi'_0)w/w' = - V - (\partial_r V)w/w' \ge - G_\delta(r) \ge -E/4,$$ while, if $R < r < R_0$, we have $ \psi_0 + \frac w {w'} \psi_0' = -E /4$, and hence implies $$\psi_0 - V - (\partial_r V - \psi'_0)w/w'\ge - E/4 - G_\delta(r) \ge -E/2.$$ Next note that, for any $R>0$, $\psi$ is continuous if and only if we take $B$ and $R_0$ such that $$B = (\delta_0^{-1} + E/4)w(R), \qquad w(R_0) = 4B/E = (1 + 4 \delta_0^{-1} E^{-1})w(R).$$ Since $w$ takes values strictly between $0$ and $1$, this is possible if and only if $$\label{e:r2} w(R) < 1/(1+4\delta_0^{-1}E^{-1}).$$ Consequently, to complete the construction, it suffices to show that, if $\delta$ is small enough, then there is $R>0$ such that and hold. Define $R$ by $$(1+R)^{\delta-\delta_0} := \delta E/4,$$ so that $$G_\delta(R) \le \delta^{-1} (1+R)^{\delta-\delta_0} = E/4.$$ Note that, for $\delta>0$ sufficiently small we have, by , $$(1+R)^\delta = (\delta E/4)^{-\delta/(\delta_0-\delta)} = 1 + \delta_0^{-1} \delta |\ln \delta| + O(\delta) > (1+ r_{\max})^\delta.$$ So $G_\delta'(R)<0$ for $r \ge R$, and we have . Similarly, $$w(R) = 1 - (1 + R)^{-\delta} = O(\delta|\ln\delta|),$$ so this choice of $R$ also gives for $\delta>0$ sufficiently small, as desired. Let $$\begin{split} P_\varphi :=\, &e^{\varphi/h} r^{(n-1)/2} (P - i \varepsilon) r^{-(n-1)/2} e^{-\varphi/h} \\= &- h^2 \partial_r^2 + 2h \varphi' \partial_r + \Lambda + V_\varphi - E - i \varepsilon, \end{split}$$ where $$\begin{split} 0 \le \Lambda &:=\begin{cases}0, & n=1, \\ h^2 r^{-2} \left(-\Delta_{\mathbb S^{n-1}} + (n-1)(n-3)/4\right), & n\ge 3,\end{cases} \\ V_\varphi &:= V - \varphi'^2 + h\varphi''. \end{split}$$ Let $\int_{r,\omega}$ denote the integral over $(0,\infty) \times \mathbb S^{n-1}$ with respect to $drd\omega$, where $d\omega$ is the usual measure on the unit sphere $\mathbb S^{n-1}$. 
Then is equivalent to $$\label{e:carpphi} \int_{r,\omega} w' |u|^2 \le \frac {C}{h^2} \int_{r,\omega} \frac{|P_\varphi u|^2}{w'} + \frac {C\varepsilon}h \int_{r,\omega} |u|^2, \qquad u \in e^{\varphi/h} r^{(n-1)/2}C_0^\infty(\mathbb R^n).$$ We may assume $\varepsilon \le h$, since $w' \le1$ makes trivial for $\varepsilon > h $. We will prove $$\label{e:careps} \int_{r,\omega} \partial_r \left(w(E-V_\varphi)\right) |u|^2 \le \frac {2}{h^2} \int_{r,\omega} \frac{|P_\varphi u|^2}{w'} + \frac {C\varepsilon}h \int_{r,\omega} |u|^2,$$ which, together with , implies . In the spirit of [@cv; @rt; @voasy; @vo], put $$F(r) := \| h\partial_r u(r,\omega)\|^2_S - \langle (\Lambda + V_\varphi(r,\omega) - E)u(r,\omega),u(r,\omega)\rangle_S, \qquad r>0,$$ where $\| \cdot \|_S$ and $\langle \cdot, \cdot \rangle_S$ are the norm and inner product in $L^2(\mathbb S^{n-1})$. Note that $$\label{e:wfint} \int_0^\infty (w(r)F(r))' dr \le - \lim_{r \to 0} w(r) \liminf_{r \to 0} F(r) = 0.$$ We use the selfadjointness of $\Lambda + V_\varphi - E$ to compute the derivative of $F$ in terms of $P_\varphi$: $$\begin{split} F' &= 2 \operatorname{Re}\langle h^2u'',u'\rangle_S - 2 \operatorname{Re}\langle(\Lambda + V_\varphi - E)u,u' \rangle_S + 2r^{-1} \langle \Lambda u,u\rangle_S - \langle V_\varphi' u,u\rangle_S\\ &= - 2 \operatorname{Re}\langle P_\varphi u, u'\rangle_S + 4 h\varphi' \| u' \|_S^2 + 2\varepsilon \operatorname{Im}\langle u, u' \rangle_S + 2r^{-1} \langle \Lambda u,u\rangle_S - \langle V_\varphi' u,u\rangle_S, \\ \end{split}$$ where $u': =\partial_r u$ and $V_\varphi' : = \partial_r V_\varphi$. Consequently $$\begin{split} w F' + w' F = &- 2w \operatorname{Re}\langle P_\varphi u, u' \rangle_S + \left(4h^{-1} w \varphi' + w' \right)\|h u'\|_S^2 + 2w\varepsilon \operatorname{Im}\langle u, u'\rangle_S \\ & + \left(2wr^{-1} - w'\right) \langle \Lambda u,u\rangle_S + \langle \left(w(E-V_\varphi)\right)'u,u\rangle_S . \end{split}$$ Using $w \varphi' \ge 0$, $w'>0$, $\Lambda \ge 0$, $2wr^{-1} - w'>0$, and $ - 2 \operatorname{Re}\langle a, b \rangle + \|b\|^2 \ge -\|a\|^2 $, we obtain $$\begin{split} w F' + w' F \ge & - \frac{w^2}{h^2w'} \|P_\varphi u\|_S^2 + 2w\varepsilon \operatorname{Im}\langle u, u'\rangle_S + \langle\left(w(E-V_\varphi)\right)'u,u\rangle_S. \end{split}$$ Combining this with and using $w \le 1$ gives $$\label{e:epsrem1}\begin{split} \int_{r,\omega} \left(w(E-V_\varphi)\right)' |u|^2 \le \frac {1}{h^2} \int_{r,\omega} \frac{|P_\varphi u|^2}{w'} + & 2 \varepsilon \int_{r,\omega} |u u'| . \end{split}$$ On the other hand, for all $\gamma>0$ there is $C_\gamma$ such that $$\begin{split} \int_{r,\omega} |hu'|^2 &= \operatorname{Re}\int_{r,\omega} \bar u ( P_\varphi - 2h\varphi'\partial_r - \Lambda - V_\varphi + E + i \varepsilon)u \\ &\le \int_{r,\omega} |P_\varphi u| |u| + 2 \int_{r,\omega} \varphi' |h u'| |u| + \int_{r,\omega} |E - V_\varphi| |u|^2 \\ & \le \int_{r,\omega} |P_\varphi u|^2 + C_\gamma \int_{r,\omega} |u|^2 + \gamma \int_{r,\omega} \varphi' |h u'|^2. \end{split}$$ Choosing $\gamma = 1/(2\max \varphi')$ gives $$\label{e:epsrem2}\begin{split} \int_{r,\omega} |hu'|^2 \le 2\int_{r,\omega} |P_\varphi u|^2 + C \int_{r,\omega} |u|^2 . \end{split}$$ Applying $2 \int_{r,\omega} |u u'| \le h^{-1}\int_{r,\omega} |u|^2 + h^{-1} \int_{r,\omega} |hu'|^2 $ to , and using and $\varepsilon \le h$, gives . Put $C_0 = 2 \max \varphi$. 
Then, since $\varphi(r) = C_0$ for $r \ge R_0$, implies $$\begin{split} e^{-C_0/h}\left\| m^{-1} \mathbf{1}_{\le R_0} v\right\|^2_{L^2} + \left\| m^{-1} \mathbf{1}_{\ge R_0} v\right\|^2_{L^2} & \le e^{-C_0/h}\left\| m^{-1} e^{\varphi/h}v\right\|^2_{L^2} \\ &\le \frac {C}{h^2}\left\|m(P-i\varepsilon)v\right\|^2_{L^2} + \frac {C_1\varepsilon}h \|v\|^2_{L^2}, \end{split}$$ where we abbreviated $L^2(\mathbb R^n)$ as $L^2$. Then using $$\begin{split} 2 \varepsilon \|v\|^2_{L^2} = & -2 \operatorname{Im}\langle (P- i\varepsilon)v,v\rangle_{L^2} \le \gamma^{-1} \left\|m \mathbf{1}_{\ge R_0} (P - i \varepsilon) v\right\|^2_{L^2} + \\ & \gamma \|m^{-1} \mathbf{1}_{\ge R_0} v\|^2_{L^2} + \gamma_0^{-1} \left\| m\mathbf{1}_{\le R_0} (P - i \varepsilon) v\right\|^2_{L^2} + \gamma_0 \|m^{-1} \mathbf{1}_{\le R_0} v\|^2_{L^2}, \end{split}$$ with $\gamma = e^{-2C_0/h}$ and $\gamma_0 = h/C_1$ we conclude that, for $h$ sufficiently small, $$\label{e:opest}\begin{split} e^{-C/h}\left\| m^{-1} \mathbf{1}_{\le R_0} v\right\|^2_{L^2} + \left\| m^{-1} \mathbf{1}_{\ge R_0} v\right\|^2_{L^2} &\le \\ e^{C/h} \left\|m\mathbf{1}_{\le R_0} (P - i \varepsilon) v\right\|^2_{L^2} &+ \frac {C}{h^2}\left\|m \mathbf{1}_{\ge R_0}(P-i\varepsilon)v\right\|^2_{L^2}, \end{split}$$ for all $v \in C_0^\infty(\mathbb R^n)$. We will deduce from that, for any $f \in L^2$, we have $$\label{e:resest}\begin{split} e^{-C/h}\left\| \mathbf{1}_{\le R_0} (P-i\varepsilon)^{-1} m^{-1} f\right\|^2_{L^2} + \left\| m^{-1} \mathbf{1}_{\ge R_0} (P-i\varepsilon)^{-1} m^{-1} f\right\|^2_{L^2} &\le \\ e^{C/h} \left\|\mathbf{1}_{\le R_0} f \right\|^2_{L^2} &+ \frac {C}{h^2}\left\|\mathbf{1}_{\ge R_0}f \right\|^2_{L^2}, \end{split}$$ from which the Theorem follows. For this we need the fact that, for fixed $\varepsilon, h>0$, $$\label{e:bothbounded} \frac 1 {C_{\varepsilon, h}} \|m v\|_{H^2}\le \|m(P-i\varepsilon)v\|_{L^2} \le C_{\varepsilon, h} \|mv\|_{H^2}, \qquad mv \in H^2.$$ Momentarily assuming , fix $f \in L^2$, so $m(P-i\varepsilon)^{-1} m^{-1} f \in H^2$. Take $v_k \in C_0^\infty$ with $$\|mv_k - m(P-i\varepsilon)^{-1} m^{-1} f\|_{H^2} \to 0 \textrm{ as } k \to \infty.$$ Then in particular $\|m^{-1} v_k - m^{-1} (P-i\varepsilon)^{-1} m^{-1}f\|_{L^2} \to 0$, and, by , $$\|m(P - i \varepsilon) v_k - f\|_{L^2} \le C_{\varepsilon, h} \|mv_k - m(P-i\varepsilon)^{-1} m^{-1} f\|_{H^2} \to 0 \textrm{ as } k \to \infty.$$ Consequently follows by applying wtih $v_k$ in place of $v$, and letting $k \to \infty$. It remains to prove . Below, $a \lesssim b$ means $a \le C b$ with $C$ depending on $\varepsilon$ and $h$ (but not $v$). By the Kato–Rellich Theorem, $(P-i\varepsilon)^{-1}$ is bounded $L^2 \to H^2$, so $$\label{e:kr} \|mv\|_{H^2} \lesssim \|(P-i\varepsilon) mv\|_{L^2} \lesssim \|mv\|_{H^2},$$ for all $v$ with $m v \in H^2$. Meanwhile, $[P,m] = -2h^2 m' \partial_r -h^2m''-h^2(n-1)m'/r$ is bounded $H^2 \to L^2$, allowing us to deduce the second of from the second of : $$\| m(P-i\varepsilon)v\|_{L^2} \lesssim \|mv\|_{H^2} + \|[P,m]v\|_{L^2} \lesssim \|mv\|_{H^2}.$$ Similarly we deduce the first of from the first of : $$\|m v\|_{H^2} \lesssim \|m(P-i\varepsilon)v\|_{L^2} + \|[P,m]v\|_{L^2} \lesssim \|m(P-i\varepsilon)v\|_{L^2}.$$ \#1[[arXiv:\#1](http://arxiv.org/abs/#1)]{} [0]{} Nicolas Burq. Décroissance de l’énergie locale de l’équation des ondes pour le problème extérieur et absence de résonance au voisinage du réel. *Acta Math.* 180:1, 1–29, 1998. Nicolas Burq. Lower bounds for shape resonances widths of long range [S]{}chrödinger operators. 
, 124:4, 677–735, 2002. Fernando Cardoso and Georgi Vodev. Uniform estimates of the resolvent of the [L]{}aplace-[B]{}eltrami operator on infinite volume [R]{}iemannian manifolds [II]{}. 3:4, 673–691, 2002. Hans Christianson. High-frequency resolvent estimates on asymptotically Euclidean warped products. . Kiril Datchev and András Vasy. Propagation through trapped sets and semiclassical resolvent estimates. *Ann. Inst. Fourier* 62:6, 2347–2377, 2012. Semiclassical resolvent estimates at trapped sets. *Ann. Inst. Fourier* 62:6, 2379–2384, 2012. Semyon Dyatlov. Resonance projectors and asymptotics for r-normally hyperbolic trapped sets. . Peter Hintz and András Vasy. Non-trapping estimates near normally hyperbolic trapping. . Stéphane Nonnenmacher and Maciej Zworski. Decay of correlations for normally hyperbolic trapping. . Igor Rodnianski and Terence Tao. Effective limiting absorption principles, and applications. . Johannes Sjöstrand. Lectures on resonances. <http://sjostrand.perso.math.cnrs.fr/Coursgbg.pdf>. Georgi Vodev. Exponential bounds of the resolvent for a class of noncompactly supported perturbations of the Laplacian. *Math. Res. Lett*, 7:3, 287–298, 2000. Georgi Vodev. Semi-classical resolvent estimates for Schrödinger operators. *Asymptot. Anal.* 81:2, 157–170, 2013. Georgi Vodev. Semi-classical resolvent estimates and regions free of resonances. To appear in *Math. Nach.* Jared Wunsch. Resolvent estimates with mild trapping. *Journées équations aux dérivées partielles*. Exp. No. 13, 15 p., 2012.
--- abstract: | The celebrated result of Eskin, Margulis and Mozes ([@EMM]) and Dani and Margulis ([@DM]) on quantitative Oppenheim conjecture says that for irrational quadratic forms $q$ of rank at least 5, the number of integral vectors ${\mathbf v}$ such that $q({\mathbf v})$ is in a given bounded interval is asymptotically equal to the volume of the set of real vectors ${\mathbf v}$ such that $q({\mathbf v})$ is in the same interval. In dimension $3$ or $4$, there are exceptional quadratic forms which fail to satisfy the quantitative Oppenheim conjecture. Even in those cases, one can say that almost all quadratic forms hold that two asymptotic limits are the same ([@EMM Theorem 2.4]). In this paper, we extend this result to the $S$-arithmetic version. address: 'J. Han. Research institute of Mathematics, Seoul National University [*Email address: [email protected]*]{}' author: - Jiyoung Han title: 'Quantitative Oppenheim conjecture for $S$-arithmetic quadratic forms of rank $3$ and $4$' --- Introduction ============ History {#history .unnumbered} ------- The Oppenheim conjecture, proved by Margulis [@Mar], says that the image set $q(\mathbb Z^n)$ of integral vectors of an isotropic irrational quadratic form $q$ of rank at least 3 is dense in the real line (See [@Op] for the original statement of Oppenheim conjecutre). Let $(a,b)$ be a bounded interval and let $\Omega\subset {\mathbb{R}}^n$ be a convex set containing the origin. Dani and Margulis [@DM] and Eskin, Margulis and Mozes [@EMM] established a quantitative version of Oppenheim conjecture: for a quadratic form $q$, let ${\mathbf V}_{(a,b),\Omega}(T)$ be the volume of the region $T\Omega \cap q^{-1}(a,b)$ and ${\mathbf N}_{(a,b),\Omega}(T)$ be the number of integral vectors in $T\Omega \cap q^{-1}(a,b)$. They found that if an irrational isotropic quadratic form $q$ is of rank greater than or equal to 4 and is not a split form, then ${\mathbf V}_{(a,b),\Omega}(T)$ approximates ${\mathbf N}_{(a,b),\Omega}(T)$ as $T$ goes to infinity. Moreover, in this case, the image set $q({\mathbb{Z}}^n)$ is equidistributed in the real line. Their works heavily rely on the dynamical properties of orbits of a certain semisimple Lie group, generated by its unipotent one-parameter subgroups, on the associated homogeneous space. The $S$-arithmetic space is one of the optimal candidates to consider a generalization of their work, since it has a lattice ${\mathbb{Z}}_S^n$ which is similar to the integral lattice ${\mathbb{Z}}^n$ in ${\mathbb{R}}^n$. Borel and Prasad [@BP] extend Margulis’ theorem to the $S$-arithmetic version and [Han, Lim and Mallahi-Karai]{} [@HLM] proved the quantitative version of $S$-arithmetic Oppenheim conjecture. See also [@GT] and [@KT] for flows on $S$-arithmetic symmetric spaces. Ratner [@Ra] generalized her measure rigidity of unipotent subgroups in the real case to the cartesian product of real and $p$-adic spaces. Main statements {#main-statements .unnumbered} --------------- Consider a finite set $S_f=\{p_1, \ldots, p_s\}$ of odd primes and let $S=\{\infty\}\cup S_f$. For each $p \in S$, denote by ${{\mathbb{Q}}}_p$ the completion field of ${{\mathbb{Q}}}$ with respect to the $p$-adic norm. If $p=\infty$, the norm $\|\cdot\|_\infty$ is the usual Euclidean norm and ${{\mathbb{Q}}}_\infty={\mathbb{R}}$. Define ${{\mathbb{Q}}}_S=\prod_{p\in S} {{\mathbb{Q}}}_p$. 
The diagonal embedding of $$\begin{split} {\mathbb{Z}}_S &=\{mp_1^{n_1}\cdots p_s^{n_s} : m, n_1, \ldots, n_s \in {\mathbb{Z}}\} \end{split}$$ is a uniform lattice subgroup in an additive group ${{\mathbb{Q}}}_S$. We will define *an $S$-lattice* $\Delta$ in ${{\mathbb{Q}}}_S^n$ as a free ${\mathbb{Z}}_S$-module of rank $n$. An *$S$-quadratic form* ${\mathsf q^{}_S}$ is a collection of quadratic forms $q_p$ defined over ${{\mathbb{Q}}}_p$, $p\in S$. We call ${\mathsf q^{}_S}$ *nondegenerate or isotropic* if every $q_p$ in ${\mathsf q^{}_S}$ is nondegenerate or isotropic, respectively. We say that ${\mathsf q^{}_S}$ is *rational* if there is a single rational quadratic form $q$ such that ${\mathsf q^{}_S}=(\lambda_p q)_{p \in S}$, where $\lambda_p \in {{\mathbb{Q}}}_p-\{0\}$ and ${\mathsf q^{}_S}$ is *irrational* if it is not rational. **Notation.** For a nonzero vector ${\mathbf v}=(v_p)_{p\in S} \in {{\mathbb{Q}}}_S^n$, the $p$-adic norm $\|{\mathbf v}\|_p$ refers the $p$-adic norm $\|v_p\|_p$ of $p$-adic component $v_p \in {{\mathbb{Q}}}_p^n$. We will use the following notation $$\sigma(p)=\left\{\begin{array}{ll} -1 & \text{if} \;p<\infty,\\ 1 & \text{if} \;p=\infty.\\ \end{array}\right.$$ We use the notation ${\mathbf v}/\|{\mathbf v}\|_p^{\sigma}:=(v_p/|v_p|_p^\sigma)_{p\in S}$ for a vector consisting of unit vectors in each place $p\in S$. For $p \in S$, we fix a star-shaped convex set $\Omega_p$ on ${{\mathbb{Q}}}_p^n$ centered at the origin defined as $$\Omega_p=\left\{ {\mathbf v}\in {{\mathbb{Q}}}_p^n : \| {\mathbf v}\|_p < \rho_p({\mathbf v}/\| {\mathbf v}\|^\sigma_p ) \right\},$$ where $\rho_p$ is a positive function on the set of unit vectors in ${{\mathbb{Q}}}_p^n$ and let $\Omega=\prod_{p\in S} \Omega_p$. If $p<\infty$, we will assume that $\rho_p$ is $({\mathbb{Z}}_p-p{\mathbb{Z}}_p)$-invariant: for any $u \in {\mathbb{Z}}_p-p{\mathbb{Z}}_p$ and for any unit vector $v_p \in {\mathbb{Z}}_p^n-p{\mathbb{Z}}_p^n$, $$\rho_p(u v_p)=\rho(v_p).$$ Let $I_p\subset {{\mathbb{Q}}}_p$ be a bounded convex set of the form $(a,b)$ if $p=\infty$ and $a+p^b{\mathbb{Z}}_p$ if $p<\infty$. We call $I_p$ a *$p$-adic interval*. Define *an $S$-interval* ${\mathsf{I}}_S=\prod_{p \in S} I_p$. Denote by ${\mathsf{T}}=(T_p)_{p\in S}\in {\mathbb{R}}_{\ge 0}\times \prod_{p\in S_f} (p^{{\mathbb{Z}}}\cup\{0\})$ the collection of radius parameters $T_p$, $p \in S$. Consider ${\mathsf{T}}\Omega=\prod_{p\in S}T_p\Omega_p$ the dilation of $\Omega$ by ${\mathsf{T}}$.\ We are interested in the number ${\mathbf N}({\mathsf{T}})={\mathbf N}({\mathsf q^{}_S}, \Omega, {\mathsf{I}}_S)({\mathsf{T}})$ of vectors ${\mathbf v}$ in ${\mathbb{Z}}_S^n \cap {\mathsf{T}}\Omega$ such that ${\mathsf q^{}_S}({\mathbf v})\in {\mathsf{I}}_S$. Let $\mu$ be the product of Haar measures $\mu_p$ on ${{\mathbb{Q}}}_p$, $p \in S$. We assume that $\mu_\infty$ is the usual Lebesgue measure on ${\mathbb{R}}$ and $\mu_p({\mathbb{Z}}_p)=1$ when $p<\infty$. Let ${\mathbf V}({\mathsf{T}})={\mathbf V}({\mathsf q^{}_S}, \Omega, {\mathsf{I}}_S)({\mathsf{T}})$ be the volume of the set $\{{\mathbf v}\in {\mathsf{T}}\Omega : {\mathsf q^{}_S}({\mathbf v})\in {\mathsf{I}}_S\}$ with respect to $\mu^n$ on ${{\mathbb{Q}}}_S^n$. Recall that a quadratic form of rank $4$ is *split* if it is equivalent to $x_1x_4-x_2x_3$. 
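Before stating the results, here is a small worked instance of the above definitions (an illustration added for concreteness). Take $S=\{\infty,3\}$, so that ${\mathbb{Z}}_S={\mathbb{Z}}[1/3]$ embedded diagonally in ${{\mathbb{Q}}}_S={\mathbb{R}}\times{{\mathbb{Q}}}_3$, and choose the $S$-interval with $I_\infty=(0,1)$ and $I_3=0+3^2{\mathbb{Z}}_3$. Then $$\mu({\mathsf{I}}_S)=\mu_\infty\big((0,1)\big)\cdot\mu_3\big(3^2{\mathbb{Z}}_3\big)=1\cdot 3^{-2}=\tfrac19,$$ and for ${\mathsf{T}}=(T_\infty,T_3)=(e^{t},3^{m})$ we have $|{\mathsf{T}}|=e^{t}\,3^{m}$, which is the quantity entering the asymptotics below.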
It is shown in [@HLM] that if an isotropic irrational quadratic form ${\mathsf q^{}_S}$ is of rank $n\ge 4$ and does not contain a split form, then ${\mathbf N}({\mathsf{T}})$ is asymptotic to ${\mathbf V}({\mathsf{T}})$ as ${\mathsf{T}}\rightarrow \infty$. Here, we say that ${\mathsf{T}}\rightarrow \infty$ if $T_p \rightarrow \infty$ for all $p\in S$. Moreover, it is possible to estimate ${\mathbf V}({\mathsf{T}})$ in terms of ${\mathsf{I}}_S$ and ${\mathsf{T}}$. As a result, there is a constant $C({\mathsf q^{}_S}, \Omega)>0$ such that $${\mathbf N}({\mathsf{T}})\sim C({\mathsf q^{}_S}, \Omega)\, \mu({\mathsf{I}}_S)\, |{\mathsf{T}}|^{n-2}\quad\text{as}\;{\mathsf{T}}\rightarrow\infty,$$ where $|{{\mathsf{T}}}|=\prod_{p\in S} T_p$. In the remaining cases (rank $3$, or rank $4$ containing a split form), there exist quadratic forms for which ${\mathbf N}({\mathsf{T}})$ fails to approximate ${\mathbf V}({\mathsf{T}})$. For instance, given $\varepsilon>0$, there are an irrational quadratic form ${\mathsf q^{}_S}$ of rank $3$ and a sequence $({\mathsf{T}}_j)$ such that ${\mathbf N}({\mathsf{T}}_j)\ge |{\mathsf{T}}_j|(\log|{\mathsf{T}}_j|)^{1-\varepsilon}$ (see [@EMM Theorem 2.2] and [@HLM Lemma 9.2]). Even in these low-dimensional cases, one can expect that for generic isotropic quadratic forms, ${\mathbf N}({\mathsf{T}})$ approximates ${\mathbf V}({\mathsf{T}})$; this is our main theorem. Here, the term *generic* is with respect to the following measure: fix some quadratic form ${\mathsf q^0_S}$. One can identify ${{\mathrm{SO}}}({\mathsf q^0_S})\setminus {\mathrm{SL}}_n({{\mathbb{Q}}}_S)$ with the space of quadratic forms of the same discriminant as ${\mathsf q^0_S}$. Under this identification, one can equip the space of quadratic forms with a natural ${\mathrm{SL}}_n({{\mathbb{Q}}}_S)$-invariant measure.

\[main thm\] For almost all isotropic quadratic forms ${\mathsf q^{}_S}$ of rank 3 or 4, as ${\mathsf{T}}\rightarrow \infty$, $${\mathbf N}({\mathsf q^{}_S}, \Omega, {\mathsf{I}}_S)({\mathsf{T}}) \sim \lambda_{{\mathsf q^{}_S}, \Omega}\; \mu({\mathsf{I}}_S) |{\mathsf{T}}|^{n-2},$$ where $n=\operatorname{rank}({\mathsf q^{}_S})$ and $\lambda_{{\mathsf q^{}_S},\Omega}$ is a constant depending on the quadratic form ${\mathsf q^{}_S}$ and the convex set $\Omega$.

Concerning generic quadratic forms and Oppenheim-type problems, Bourgain [@Bour] treated real diagonal quadratic forms using methods from analytic number theory. Ghosh and Kelmer [@GK] proved another quantitative version of the Oppenheim problem for generic real ternary quadratic forms, and Ghosh, Gorodnik and Nevo [@GGN] extended this result to more general settings, such as generic characteristic polynomial maps. Recently, Athreya and Margulis [@AM] provided bounds for the error term ${\mathbf N}_{(a,b),\Omega}(T)-{\mathbf V}_{(a,b),\Omega}(T)$ for almost every real quadratic form of rank at least 3. For a pair of a quadratic form $q$ and a linear form $L$ on ${\mathbb{R}}^n$, Gorodnik [@Gor] showed the density of $\{(q({\mathbf v}),L({\mathbf v})) : {\mathbf v}\in \mathcal P({\mathbb{Z}}^n)\}\subseteq {\mathbb{R}}^2$ under certain assumptions, and Lazar [@Laz] generalized his result to the $S$-arithmetic setting. Sargent [@Sar1] showed the density of $M(\{{\mathbf v}\in {\mathbb{Z}}^n : q({\mathbf v})=a\})\subseteq {\mathbb{R}}$, where $q$ is a rational quadratic form and $M$ is a linear map; in [@Sar2], he proved a quantitative version of this result.
In Section 2, we briefly review the symmetric space of the real and $p$-adic Lie groups. In Section 3, we introduce an $S$-arithmetic alpha function, defined on ${\mathrm{SL}}_n({{\mathbb{Q}}}_S)/{\mathrm{SL}}_n({\mathbb{Z}}_S)$ and equidistribution properties. We prove the main theorem in Section 4. Acknowledgments {#acknowledgments .unnumbered} --------------- I would like to thank Seonhee Lim and Keivan Mallahi-Karai for suggesting this problem and providing valuable advices. This paper is supported by the Samsung Science and Technology Foundation under project No. SSTF-BA1601-03. Symmetric spaces ================ Let us denote by ${\mathsf{G}}$ an $S$-arithmetic group of the form ${\mathsf{G}}=\prod_{p\in S} G_p$, where $G_p$ is a semisimple Lie group defined over ${{\mathbb{Q}}}_p$, $p\in S$. An element of ${\mathsf{G}}$ is ${\mathsf{g}}=(g_p)_{p \in S}=(g_\infty, g_1, \ldots, g_s)$, where $g_i \in G_{p_i}$. In most of cases, ${\mathsf{G}}$ will be ${\mathrm{SL}}_n({{\mathbb{Q}}}_S):=\prod_{p \in S} {\mathrm{SL}}_n({{\mathbb{Q}}}_p)$, $n\ge 3$. Consider the lattice subgroup $\Gamma={\mathrm{SL}}_n({\mathbb{Z}}_S)$ of ${\mathsf{G}}$ and the symmetric space ${\mathsf{G}}/\Gamma$, which can be embedded in the space of unimodular $S$-lattices in ${{\mathbb{Q}}}_S^n$.\ The notation ${\mathsf q^0_S}$ refers to a *standard quadratic form*, which is the collection of quadratic forms such that $$\begin{split} &\left\{\begin{array}{l} q_\infty(x_1,x_2,x_3)=2x_1x_3+x_2^2,\\ q_p(x_1,x_2,x_3)=2x_1x_3+a_1x_2^2,\;p\in S_f,\\ \end{array}\right. \hspace{0.865in}\text{if}\; \operatorname{rank}({\mathsf q^0_S})=3,\\ &\left\{\begin{array}{l} q_\infty(x_1,x_2,x_3,x_4)=2x_1x_4+a_1x_2^2+a_2x_3^2,\\ q_p(x_1,x_2,x_3,x_4)=2x_1x_4+a_1x_2^2+a_2x_3^2,\;p\in S_f, \end{array}\right. \quad\text{if}\; \operatorname{rank}({\mathsf q^0_S})=4,\\ \end{split}$$ where $a_1,a_2 \in\{\pm1\}$ if $p=\infty$ and $a_1,a_2 \in\{\pm1,\pm u_0, \pm p, \pm pu_0\}$ if $p<\infty$. Here $u_0$ is some fixed square-free integer in ${{\mathbb{Q}}}_p$ (See [@Se Section 4.2]). In ${{\mathrm{SO}}}({\mathsf q^0_S})$, we fix a maximal compact subgroup ${\mathsf{K}}$ of ${{\mathrm{SO}}}({\mathsf q^0_S})$ $${\mathsf{K}}=K_\infty \times \prod_{p\in S_f} K_p=\left({{\mathrm{SO}}}(q^0_\infty)\cap {{\mathrm{SO}}}(n)\right) \times \prod_{p\in S_f} \left({{\mathrm{SO}}}(q^0_p)\cap {\mathrm{SL}}_n({\mathbb{Z}}_p)\right)$$ and a diagonal subgroup $$\label{diagonal group A} {\mathsf{A}}=\left\{{\mathsf{a}}_{{\mathsf{t}}}=\prod_{p\in S} a_{t_{p_i}} : {\mathsf{t}}=(t_\infty, t_1, \ldots, t_s) \in {\mathbb{R}}_{\ge0}\times p_1^{{\mathbb{Z}}}\times\cdots\times p_s^{{\mathbb{Z}}}\right\},$$ where $a_{t_\infty}={\mathrm{diag}}(e^{-t_\infty}, 1, \ldots, 1, e^{t_\infty})$ and $a_{t_p}={\mathrm{diag}}(p^{t_p},1, \ldots, p^{-t_p})$, $p\in S_f$. These groups ${\mathsf{K}}$ and ${\mathsf{A}}$ will be heavily used throughout the paper. We assume that ${\mathsf{T}}$ and ${\mathsf{t}}$ have the relation $$T_\infty=e^{t_\infty}\quad \text{and}\quad T_p=p^{t_p},\;\forall p\in S_f,$$ unless otherwise specified. Also we briefly denote by $T_i$ or $t_i$ instead of $T_{p_i}$ or $t_{p_i}$ respectively. In this section, we take $G_\infty={\mathrm{SL}}_n({\mathbb{R}})$ and $G_p={\mathrm{GL}}_n({{\mathbb{Q}}}_p)$, $p<\infty$, where $n=3$ or $4$. Corresponding maximal compact subgroups are $\hat K_\infty={{\mathrm{SO}}}(n)$ and $\hat K_p={\mathrm{GL}}_n({\mathbb{Z}}_p),$ respectively. 
The 3-dimensional hyperbolic space ${{\mathbb{H}}}^3$ ----------------------------------------------------- Let $q^0_\infty$ be a standard isotropic quadratic form of rank $n$, $n=3,4$. The special orthogonal subgroup $H_\infty={{\mathrm{SO}}}(q^0_\infty)$ is one of the well-known linear groups ${{\mathrm{SO}}}(2,1)$, ${{\mathrm{SO}}}(3,1)$ and ${{\mathrm{SO}}}(2,2)$. Let us examine the symmetric space of $H_\infty$ quotiented by $K_\infty$ and define a metric invariant by right multiplication. **Case i) $H_\infty={{\mathrm{SO}}}(3,1)$:** Set ${{\mathbb{H}}}^3$ to be the 3-dimensional hyperbolic space ${{\mathbb{H}}}^3=\{z+ti : z=x+yj \in {{\mathbb{C}}}\;\text{and}\; t \in {\mathbb{R}}_{>0}\;\}$. The group ${{\mathrm{SO}}}(3,1)$ is locally isomorphic to ${\mathrm{PSL}}_2({{\mathbb{C}}})$, which is the group $\mathrm{Isom}^+({{\mathbb{H}}}^3)$ of orientation-preserving isometries of ${{\mathbb{H}}}^3$. Since the stabilizer of the point $i$ in ${\mathrm{PSL}}_2({{\mathbb{C}}})$ is the maximal compact subgroup ${\mathrm{{PSU}}}(2)$, we may identify the symmetric space $K_\infty\setminus {{\mathrm{SO}}}(3,1)$, which is isomorphic to ${\mathrm{{PSU}}}(2)\setminus {\mathrm{PSL}}_2({{\mathbb{C}}})$, with the hyperbolic space ${{\mathbb{H}}}^3$. Let ${\mathrm{p}}:{{\mathrm{SO}}}(3,1)\rightarrow K_\infty \setminus {{\mathrm{SO}}}(3,1)\simeq {{\mathbb{H}}}^3$ be the projection given by ${\mathrm{p}}(g)=g.i.$ Define the metric $d_\infty$ of $K_\infty\setminus {{\mathrm{SO}}}(3,1)$ by $$d_\infty(g_1, g_2)=d_\infty(K_\infty g_1, K_\infty g_2):=d_{{{\mathbb{H}}}^3}({\mathrm{p}}(g_1), {\mathrm{p}}(g_2)).$$ **Case ii) $H_\infty={{\mathrm{SO}}}(2,1)$:** Consider the isomorphism $K_{\infty}\setminus {{\mathrm{SO}}}(2,1)\simeq{{\mathrm{PSO}}}(2)\setminus {\mathrm{PSL}}_2({\mathbb{R}})\simeq {{\mathbb{H}}}^2=\{ x+ti : x \in {\mathbb{R}}\;\text{and}\; t \in {\mathbb{R}}_{>0}\:\}$. We will use the notation ${\mathrm{p}}$ for the projection ${{\mathrm{SO}}}(2,1)\rightarrow {{\mathbb{H}}}^2$ as well. We define the metric $d_\infty$ of $K_\infty\setminus {{\mathrm{SO}}}(2,1)$ by $$d_\infty(g_1, g_2):=d_{{{\mathbb{H}}}^2}({\mathrm{p}}(g_1), {\mathrm{p}}(g_2)).$$ **Case iii)$H_\infty={{\mathrm{SO}}}(2,2)$:** In the case of $H_\infty={{\mathrm{SO}}}(2,2)$, consider the isomorphism between real vector spaces ${\mathbb{R}}^4$ and $\mathcal M_{2}({\mathbb{R}})$ given by $$(x,y,z,w) \mapsto \left(\begin{array}{cc} x & y \\ z & w \end{array}\right).$$ The split form $xw-yz$ of ${\mathbb{R}}^4$ corresponds to the determinant of $\left(\begin{array}{cc} x & y \\ z & w \end{array}\right)$ in $\mathcal M_2({\mathbb{R}})$. Moreover, there is the local isomorphism from ${\mathrm{SL}}_2({\mathbb{R}})\times {\mathrm{SL}}_2({\mathbb{R}})$ to ${{\mathrm{SO}}}(2,2)$, which is induced by the action $$\label{local isom} (g_1, g_2).\left(\begin{array}{cc} x & y \\ z & w \end{array}\right)=g^{}_1\left(\begin{array}{cc} x & y \\ z & w \end{array}\right)g^t_2.$$ Hence we deduce that $$K_{\infty}\setminus {{\mathrm{SO}}}(2,2) \simeq_{loc} ({{\mathrm{SO}}}(2)\times {{\mathrm{SO}}}(2))\setminus ({\mathrm{SL}}_2({\mathbb{R}})\times {\mathrm{SL}}_2({\mathbb{R}})) \simeq {{\mathbb{H}}}^2_1 \times {{\mathbb{H}}}^2_2,$$ where ${{\mathbb{H}}}^2_1\simeq{{\mathbb{H}}}^2\simeq {{\mathbb{H}}}^2_2$. Note also that the action of $a_t$ splits into the action of $(b_t, b_t)$, where $b_t={\mathrm{diag}}(e^{t/2}, e^{-t/2})$. 
Put $$d_\infty=d_{{{\mathrm{SO}}}(2,2)}:=\max\{d_{{{\mathbb{H}}}^2_1}, d_{{{\mathbb{H}}}^2_2}\}.$$ \[lemma 3.12 real\] Let $H_\infty$ be one of ${{\mathrm{SO}}}(2,1)$, ${{\mathrm{SO}}}(3,1)$ or ${{\mathrm{SO}}}(2,2)$. Let $K_\infty$ be a maximal compact subgroup of $H_\infty$. Then there exists a constant $C_\infty >0$ such that for any $t>0$ and for any $r \in (0, 2t)$, $$\label{eq lemma 3.12 real} \left|\left\{k \in K_\infty : d_\infty(a^{}_t k a^{-1}_t, 1) \le r \right\}\right|<C_\infty e^{-2t+r},$$ where $|\cdot|$ is the normalized Haar measure on $K_\infty$. Assume that $H_\infty={{\mathrm{SO}}}(3,1)\cong {\mathrm{SL}}_2({{\mathbb{C}}})$ and $K_\infty\cong {\mathrm{{SU}}}(2)$. For each $t >0$, $K_\infty$ acts transitively on the hyperbolic sphere $S_{2t}\subset {{\mathbb{H}}}^3$ of radius $2t$ centered at $i$. Since $d_\infty$ is $H_\infty$-invariant, $$\begin{split} \left|\left\{k \in K_\infty : d_{\infty}(a^{}_t k a^{-1}_t, 1) \le r \right\}\right| &=\left|\left\{k \in K_\infty : d_{\infty}(a^{}_t k , a^{}_t) \le r \right\}\right|\\ &=\left|\left\{y \in S_{2t} : d_{{{\mathbb{H}}}^3}(y, e^{2t}i) \le r \right\}\right|. \end{split}$$ The $K_\infty$-invariant measure of $S_{2t}$ is identified with the normalized Lebesgue measure of the unit sphere ${{\mathbb{S}}}^2$ which is isomorphic to $\partial {{\mathbb{H}}}^3$. Let $x=e^{2t}i$ and let $\theta$ be the angle between two geodesics from $i$ to $x$ and from $i$ to $y$ (Figure 1). By the hyperbolic law of cosines $$\cosh(d(x,y))=1+\sinh^2 (2t)\cdot (1-\cos\theta)=1+2\sinh^2(2t) \sin^2(\frac \theta 2),$$ and since $d(x,y)\le r$, we obtain that $\theta$ is bounded by $e^{-2t+r/2}$. Hence $$\left|\left\{k \in K_\infty : d_{\infty}(a_t k a^{-1}_t, 1)\le r\right\}\right|\le C_\infty e^{2(-2t+r/2)}$$ for some constant $C_\infty>0$. The case of $H_\infty={{\mathrm{SO}}}(2,1)\cong {\mathrm{SL}}_2({\mathbb{R}})$ follows immediately from the compatibility between two embeddings ${{\mathrm{SO}}}(2,1)\hookrightarrow {{\mathrm{SO}}}(3,1)$ and ${{\mathbb{H}}}^2 \hookrightarrow {{\mathbb{H}}}^3$ with the projection ${\mathrm{p}}$ (see also [@EMM Lemma 3.12]). If $H_\infty={{\mathrm{SO}}}(2,2)$, using the local isometry , $K_\infty\cong {{\mathrm{SO}}}(2)\times{{\mathrm{SO}}}(2)$ acts transitively on the product $S_{t}\times S_{t}$ of two hyperbolic spheres in ${{\mathbb{H}}}^2\times {{\mathbb{H}}}^2$. Thus we have $$\begin{split}&\left|\left\{k \in K_\infty : d_\infty(a^{}_t k a^{-1}_t, 1) \le r \right\}\right|\\ &\hspace{0.3in}=\left|\{k_1\in {{\mathrm{SO}}}(2) : d_{{{\mathbb{H}}}}(b_t k_1, b_t)\le r\}\times\{k_2\in {{\mathrm{SO}}}(2) : d_{{{\mathbb{H}}}}(b_t k_2, b_t)\le r\}\right|\\ &\hspace{0.3in}< C \left(e^{-t+r/2}\right)^2=Ce^{-2t+r}. 
\end{split}$$

*Figure 1. The hyperbolic sphere $S_{2t}\subset {{\mathbb{H}}}^3$ centered at $i$, the points $x=e^{2t}i$ and $y\in S_{2t}$, and the angle $\theta$ at $i$ between the geodesics from $i$ to $x$ and from $i$ to $y$ (with ${{\mathbb{H}}}^2$ embedded in ${{\mathbb{H}}}^3$).*

A tree in the building ${\mathcal{B}}_n$
----------------------------------------

For a prime $p$ and $n\ge 2$, let us briefly recall the Euclidean building ${\mathcal{B}}_n$, whose vertex set $\mathcal V({\mathcal{B}}_n)$ is isomorphic to ${\mathrm{GL}}_n({\mathbb{Z}}_p)\setminus {\mathrm{GL}}_n({{\mathbb{Q}}}_p)$. Recall the Cartan decomposition ([@PR Theorem 3.14]) $${\mathrm{GL}}_n({{\mathbb{Q}}}_p)={\mathrm{GL}}_n({\mathbb{Z}}_p)\cdot\hat A\cdot {\mathrm{GL}}_n({\mathbb{Z}}_p)={\mathrm{GL}}_n({\mathbb{Z}}_p)\cdot \hat A^+ \cdot{\mathrm{GL}}_n({\mathbb{Z}}_p),$$ where $$\begin{split} \hat A&=\{{\mathrm{diag}}(p^{m_1}, p^{m_2}, \ldots, p^{m_n}) : m_1, m_2, \ldots, m_n \in {\mathbb{Z}}\},\\ \hat A^+&=\{{\mathrm{diag}}(p^{m_1}, p^{m_2}, \ldots, p^{m_n}) : 0\le m_1\le m_2\le\cdots\le m_n\}\subseteq \hat A. \end{split}$$ Consider the space of free ${\mathbb{Z}}_p$-modules $L$ of rank $n$ in ${{\mathbb{Q}}}_p^n$ on which ${\mathrm{GL}}_n({{\mathbb{Q}}}_p)$ acts by $$g.({\mathbb{Z}}_p {\mathbf v}_1\oplus\cdots\oplus {\mathbb{Z}}_p{\mathbf v}_n)={\mathbb{Z}}_p({\mathbf v}_1g)\oplus \cdots \oplus {\mathbb{Z}}_p({\mathbf v}_ng),$$ where $g\in {\mathrm{GL}}_n({{\mathbb{Q}}}_p)$ and ${\mathbf v}_1,\ldots,{\mathbf v}_n \in {{\mathbb{Q}}}_p^n$ form a ${\mathbb{Z}}_p$-basis of $L$. Two rank-$n$ free ${\mathbb{Z}}_p$-modules $L_1$ and $L_2$ are said to be *equivalent* if $L_1=p^m L_2$ for some $m \in {\mathbb{Z}}$. The building ${\mathcal{B}}_n$ is an $(n-1)$-dimensional CW complex whose vertices are equivalence classes $[L]$ of free ${\mathbb{Z}}_p$-modules of rank $n$. Vertices $[L_0]$, $\ldots$, $[L_k]$ in ${\mathcal{B}}_n$ form a $k$-simplex if there are representatives $L_0$, $\ldots$, $L_k$ in ${{\mathbb{Q}}}_p^n$ such that $$p L_0 \subsetneq L_1 \subsetneq \cdots \subsetneq L_k \subsetneq L_0.$$ In particular, at each vertex $[L]$ in ${\mathcal{B}}_n$, the number of vertices adjacent to $[L]$ equals the number of proper nontrivial subspaces of the space $({\mathbb{Z}}_p/p{\mathbb{Z}}_p)^n$. Note that a simplex in ${\mathcal{B}}_n$ has at most $n$ vertices: there is a ${\mathbb{Z}}_p$-generating set $\{{\mathbf v}_1, {\mathbf v}_2, \ldots, {\mathbf v}_n\}$ of $L$ such that $$\begin{split} &[pL]=[\langle p{\mathbf v}_1, p{\mathbf v}_2, \ldots, p{\mathbf v}_n\rangle] \subsetneq [\langle{\mathbf v}_1, p{\mathbf v}_2, \ldots, p{\mathbf v}_n\rangle] \subsetneq\\ &\hspace{0.2in} [\langle{\mathbf v}_1, {\mathbf v}_2, p{\mathbf v}_3, \ldots, p{\mathbf v}_n\rangle]\subsetneq \cdots \subsetneq [\langle{\mathbf v}_1, \ldots, {\mathbf v}_{n-1}, p{\mathbf v}_n\rangle] \subsetneq [\langle{\mathbf v}_1, \ldots, {\mathbf v}_n\rangle]=[L]. \end{split}$$
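For instance, when $n=2$ each vertex has exactly $p+1$ neighbours, one for each line in $({\mathbb{Z}}_p/p{\mathbb{Z}}_p)^2$, so that ${\mathcal{B}}_2$ is the $(p+1)$-regular tree ${\mathcal{T}}_p$; when $n=3$ each vertex has $2(p^2+p+1)$ neighbours, one for each line and each plane in $({\mathbb{Z}}_p/p{\mathbb{Z}}_p)^3$.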
A vertex $[L]$ in ${\mathcal{B}}_n$ is also denoted by an $n\times n$ matrix whose rows are ${\mathbb{Z}}_p$-generators of $L$. By right multiplication of ${\mathrm{GL}}_n({\mathbb{Z}}_p)$, they are represented by upper triangular matrices whose diagonal entries are in $p^{{\mathbb{Z}}_{\ge0}}$. In ${\mathcal{B}}_n$, an apartment $\mathcal A$ is defined as follows. The vertex set $\mathcal V(\mathcal A)$ is $$\mathcal V(\mathcal A)=\{[ag]\in {\mathrm{GL}}_n({\mathbb{Z}}_p)\setminus {\mathrm{GL}}_n({{\mathbb{Q}}}_p) : a \in \hat A\}$$ for some $g\in{\mathrm{GL}}_n({{\mathbb{Q}}}_p)$. A $k$-simplex is contained in $\mathcal A$ if its vertices are all contained in $\mathcal A$. Denote by $\mathcal A_0$ the apartment whose vertex set is $$\{[{\mathrm{diag}}(p^{m_1}, \ldots, p^{m_n})] : m_i \in {\mathbb{Z}}_{\ge 0}\}.$$ Note that $\mathcal A$ is isometric to ${\mathbb{R}}^{n-1}$ and $\mathcal V(\mathcal A_0)$ is a lattice in ${\mathbb{R}}^{n-1}$ (see Figure 2), which is the reason that ${\mathcal{B}}_n$ is called a Euclidean building. There is a natural covering map ${\mathrm{p}}:{\mathcal{B}}_n\rightarrow \mathcal A_0$ given by $${\mathrm{p}}: \left[\left(\begin{array}{cccc} p^{m_1} & v_{12} & \cdots & v_{1n}\\ & p^{m_2} & \cdots & v_{2n}\\ & & \ddots & \vdots\\ & & & p^{m_n} \end{array}\right)\right]\mapsto [{\mathrm{diag}}(p^{m_1}, p^{m_2}, \ldots, p^{m_n})],$$ where the matrix in the above map is of reduced type, i.e., $m_i \ge 0$ and $\gcd\{p^{m_j}, v_{ij}\}=1$, $1\le i<j\le n$. We define a distance $d$ on $\mathcal V({\mathcal{B}}_n)$ by $d([L_1],[L_2])=\ell$, where $\ell$ is the minimal number of edges connecting $[L_1]$ and $[L_2]$. Let $a:{\mathbb{Z}}\rightarrow \mathcal V(\mathcal A_0)$ be any geodesic in the vertex set of $\mathcal A_0$. Then the inverse image $${\mathrm{p}}^{-1}(\{a(m) : m \in {\mathbb{Z}}\})$$ is a tree in ${\mathcal{B}}_n$. If the above set $\{a(m)\}$ is generated by one element, then ${\mathrm{p}}^{-1}(\{a(m)\})$ is a regular tree. More basic properties of buildings can be found in [@AB], [@Robertson] for Euclidean buildings, and see also [@Setree] for Bruhat-Tits trees.
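As a concrete illustration of the Cartan decomposition recalled above, the exponents $0\le m_1\le\cdots\le m_n$ attached to a matrix can be read off from the $p$-adic valuations of the gcds of its minors (its elementary divisors). The following small Python sketch is ours and is not part of the paper; it assumes an integer matrix representative with nonzero determinant, and the helper names are ad hoc.

```python
from itertools import combinations
from math import gcd
from functools import reduce

def vp(x, p):
    """p-adic valuation of a nonzero integer x."""
    v, x = 0, abs(x)
    while x % p == 0:
        x //= p
        v += 1
    return v

def det(M):
    """Determinant by Laplace expansion; fine for the tiny matrices used here."""
    if len(M) == 1:
        return M[0][0]
    return sum((-1) ** j * M[0][j] * det([row[:j] + row[j + 1:] for row in M[1:]])
               for j in range(len(M)))

def cartan_exponents(M, p):
    """Exponents 0 <= m_1 <= ... <= m_n with M in GL_n(Z_p) diag(p^{m_i}) GL_n(Z_p),
    for an integer matrix M with det(M) != 0 and a prime p."""
    n = len(M)
    d = [0]  # d[k] = v_p(gcd of all k-by-k minors), with d[0] = 0
    for k in range(1, n + 1):
        minors = [det([[M[i][j] for j in cols] for i in rows])
                  for rows in combinations(range(n), k)
                  for cols in combinations(range(n), k)]
        d.append(vp(reduce(gcd, (abs(m) for m in minors if m != 0)), p))
    return [d[k] - d[k - 1] for k in range(1, n + 1)]

# The vertex [diag(1, p, p^2)] of the apartment A_0, here with p = 3:
print(cartan_exponents([[1, 0, 0], [0, 3, 0], [0, 0, 9]], 3))   # [0, 1, 2]
# Another generating set of the same Z_3-module (rows differ by a unimodular change):
print(cartan_exponents([[1, 3, 0], [0, 3, 0], [0, 0, 9]], 3))   # [0, 1, 2]
```

Both sample calls return $[0,1,2]$, the exponents of the vertex $[{\mathrm{diag}}(1,p,p^2)]$ of $\mathcal A_0$ for $p=3$.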
*Figure 2. Part of the apartment $\mathcal A_0$ of ${\mathcal{B}}_3$, drawn as a triangular lattice in the plane, with the vertices $\mathrm{Id}_3$, ${\mathrm{diag}}(1,1,p)$, ${\mathrm{diag}}(1,p,p)$ and ${\mathrm{diag}}(1,p,p^2)\simeq {\mathrm{diag}}(p^{-1},1,p)$ labelled along a path.*

\[lemma 3.12 p-adic\] Let $H_p$ be a ${{\mathbb{Q}}}_p$-rank one connected and simply connected semisimple algebraic subgroup of ${\mathrm{GL}}_n({{\mathbb{Q}}}_p)$. Denote the Cartan decomposition of $H_p$ by $H_p=K_p\cdot \hat A\cdot K_p $, where $K_p=H_p\cap {\mathrm{GL}}_n({\mathbb{Z}}_p)$. Then the symmetric space $K_p \setminus H_p$ is a tree. Note that there is the natural embedding from $K_p\setminus H_p$ to ${\mathrm{GL}}_n({\mathbb{Z}}_p)\setminus {\mathrm{GL}}_n({{\mathbb{Q}}}_p)$ given by $K_p g \mapsto {\mathrm{GL}}_n({\mathbb{Z}}_p)g$, $g\in H_p$. For $g \in H_p$, we denote ${\mathrm{GL}}_n({\mathbb{Z}}_p)g$ by $K_p g$ and $\{{\mathrm{GL}}_n({\mathbb{Z}}_p)g : g \in H_p\}$ by $K_p\setminus H_p$. Since $H_p$ is of ${{\mathbb{Q}}}_p$-rank one, up to an appropriate conjugation, $K_p\hat A$ is generated by some element $a={\mathrm{diag}}(p^{m_1},\ldots,p^{m_n})$ with $m_1\le\cdots\le m_n$ and $m_1\neq 0$. Hence we may assume that $K_p \hat A=\langle a\rangle$. Choose a geodesic $\{a(m) : m\in {\mathbb{Z}}\}$ in $\mathcal V(\mathcal A_0)$ containing $K_p \hat A$. Then $K_p\setminus H_p$ is a subset of a tree ${\mathrm{p}}^{-1}(\{a(m) : m\in {\mathbb{Z}}\})$.
Let us show that for a $4$-dimensional nondegenerate isotropic quadratic form $q_p$ on ${{\mathbb{Q}}}^4_p$ which is not a split form, the special orthogonal group ${{\mathrm{SO}}}(q_p)$ is of ${{\mathbb{Q}}}_p$-rank one, similarly to the real case. \[p-adic (3,1)\] Let $q_p$ be an isotropic non-split quadratic form of rank $4$. If we denote ${{\mathrm{SO}}}(q_p)=K\cdot \hat A\cdot K$, where $K={{\mathrm{SO}}}(q_p)\cap {\mathrm{SL}}_4({\mathbb{Z}}_p)$, then $\hat A$ is isomorphic to the diagonal subgroup $\left\{{\mathrm{diag}}(p^{-t},1,1, p^{t}) : t \in {\mathbb{Z}}_{\ge 0}\right\}$. Without loss of generality, let $q_p(x,y,z,w)=2xw+\alpha_1y^2+\alpha_2z^2$, where $\alpha_1, \alpha_2\in \{1, u_0, p, u_0p\}$ for some fixed $1\le u_0 \le p-1$ which is not a square in ${{\mathbb{Q}}}_p$. Since $q_p$ is not a split form, $\alpha_2/\alpha_1\neq -1$. Hence, up to a change of variables, we may further assume that 1. $\alpha_2/\alpha_1 \in \{p, pu^{}_0, pu^{-1}_0\}$ if $-1$ is not a square in ${{\mathbb{Q}}}_p$; 2. $\alpha_2/\alpha_1 \in \{p, pu^{}_0, pu^{-1}_0, u^{}_0\}$ otherwise. Since any semisimple element of a connected algebraic group is contained in a maximal abelian subgroup and two maximal abelian subgroups are conjugate to each other ([@Sp Theorem 6.3.5]), we may assume that $$\left\{{\mathrm{diag}}(p^{-t}, 1, 1, p^t) : t \in {\mathbb{Z}}_{\ge0}\right\} \subseteq \hat A.$$ If the ${{\mathbb{Q}}}_p$-rank of $\hat A$ is larger than one, there is an element $a=(a_{ij})_{1\le i,j\le 4}\in \hat A-{\mathrm{SL}}_4({\mathbb{Z}}_p)$ such that $a\neq{\mathrm{diag}}(p^{-t},1,1,p^t)$ for any $t$. From the commutativity, ${\mathrm{diag}}(p^{-t},1,1,p^t)\hspace{0.025in}a=a\hspace{0.025in}{\mathrm{diag}}(p^{-t},1,1,p^t)$ for any $t\in {\mathbb{Z}}$, it follows that $$a=\hspace{-0.05in}\left(\begin{array}{cccc} a_{11} & 0 & 0 & 0 \\ 0 & a_{22} & a_{23} & 0 \\ 0 & a_{32} & a_{33} & 0 \\ 0 & 0 & 0 & a_{44} \\ \end{array}\right)\hspace{-0.05in}=\hspace{-0.05in}\left(\begin{array}{cccc} a_{11} & 0 & 0 & 0 \\ 0 & 1 & 0 & 0 \\ 0 & 0 & 1 & 0 \\ 0 & 0 & 0 & a_{44} \\ \end{array}\right)\hspace{-0.05in}\left(\begin{array}{cccc} 1 & 0 & 0 & 0 \\ 0 & a_{22} & a_{23} & 0 \\ 0 & a_{32} & a_{33} & 0 \\ 0 & 0 & 0 & 1 \\ \end{array}\right).$$ Since $a\in {{\mathrm{SO}}}(q_p)$, it satisfies that for any $(x,y,z,w)\in {{\mathbb{Q}}}_p^4$, $$\begin{split} &2xw+\alpha_1 y^2+\alpha_2 z^2\\ &\hspace{0.2in}= 2(a_{11}x)(a_{44}w)+\alpha_1(a_{22}y+a_{23}z)^2+\alpha_2(a_{32}y+a_{33}z)^2\\ &\hspace{0.2in}= 2\left(a_{11}a_{44}\right)xw+\left(\alpha_1 a^{2}_{22}+\alpha_2 a^2_{32}\right)y^2+2\left(\alpha_1a_{22}a_{23}+\alpha_2a_{32}a_{33}\right)yz\\ &\hspace{0.2in}\quad+\left(\alpha_1 a^{2}_{23}+\alpha_2a^2_{33}\right)z^2. \end{split}$$ Since $a_{11}$, $a_{44}$ have no restriction except $a_{11}a_{44}=1$, by multiplying $a$ by appropriate ${\mathrm{diag}}(p^{-t},1,1,p^t)$ and ${\mathrm{diag}}(u,1,1,u^{-1})$, where $t\in {\mathbb{Z}}$ and $u \in {\mathbb{Z}}_p-p{\mathbb{Z}}_p$, we may assume that $a_{11}=a_{44}=1$. Hence we need to find $(a_{22},a_{23},a_{32},a_{33})\notin {\mathbb{Z}}_p^4$ such that (a) $a^{2}_{22}+(\alpha_2/\alpha_1) a^2_{32}=1$, (b) $a_{22}a_{23}+(\alpha_2/\alpha_1)a_{32}a_{33}=0$, (c) $(\alpha_1/\alpha_2)a^{2}_{23}+a^2_{33}=1$. If $\alpha_2/\alpha_1=p$, $pu^{}_0$, $pu^{-1}_0$, let us denote $\alpha_2/\alpha_1=pu$, where $u\in \{1, u^{}_0, u^{-1}_0\}$. Suppose that $(a_{22}, a_{32})\notin {\mathbb{Z}}_p^2$ and let $a^2_{22}+pua^2_{32}=\sum_{i\ge m} c_ip^i$, where $m\in {\mathbb{Z}}$ is the smallest index such that $c_m\neq 0$.
Denote by $\nu:{{\mathbb{Q}}}_p\rightarrow {\mathbb{Z}}$ the $p$-adic valuation. Then 1. If $|a_{22}|_p\ge|a_{32}|_p$, then $m$ is even and $\nu(a_{22})<0$. Then $c_m=\left(a_{22} p^{m/2}\right)^2 \mod p^m$ and $m=2\nu(a_{22})<0$, which contradicts (a). 2. If $|a_{22}|_p <|a_{32}|_p$, then $m$ is odd and $\nu(a_{32})<0$. Then $c_m=u\left(a_{32} p^{(m-1)/2}\right)^2 \mod p^m$ and $m=2\nu(a_{32})+1<0$, which is also impossible by (a). Hence $(a_{22},a_{32})\in {\mathbb{Z}}_p^2$. Similarly, $(a_{23},a_{33})\in {\mathbb{Z}}_p^2$. Now, assume $\alpha_2/\alpha_1=u_0$. In this case, $-1$ is a square. If there exists $(a_{22},a_{23},a_{32},a_{33})$ satisfying (a) through (c), then $u_0=-(a_{22}/a_{32})^2$, which contradicts the fact that $u_0$ is not a square. If $q_p$ is an isotropic quadratic form of rank 3, ${{\mathrm{SO}}}(q_p)$ is isomorphic to ${\mathrm{SL}}_2({{\mathbb{Q}}}_p)$. Then the symmetric space $K_p\setminus {{\mathrm{SO}}}(q_p)$ is isomorphic to a subtree of the $(p+1)$-regular tree ${\mathcal{T}}_p$. By Lemma \[lemma 3.12 p-adic\] and Lemma \[p-adic (3,1)\], if $q_p$ is an isotropic non-split quadratic form of rank 4, then $K_p\setminus{{\mathrm{SO}}}(q_p)$ is a subtree of the building ${\mathcal{B}}_n$. Lastly, if $q_p$ is a split form of rank 4, then ${{\mathrm{SO}}}(q_p)$ is locally isomorphic to ${\mathrm{SL}}_2({{\mathbb{Q}}}_p)\times{\mathrm{SL}}_2({{\mathbb{Q}}}_p)$ as in . Then $K_p\setminus {{\mathrm{SO}}}(q_p)$ is properly embedded in the product ${\mathcal{T}}_p \times {\mathcal{T}}_p$ of two $(p+1)$-regular trees. Let $q^0_p$ be a standard $p$-adic isotropic quadratic form of rank $n=3$ or $4$. Let $K_p={{\mathrm{SO}}}(q^0_p) \cap {\mathrm{SL}}_n({\mathbb{Z}}_p)$. For any $r\in (0,2t)$, $$\label{p-adic counting} \left|\left\{k\in K_p : d_p(a^{}_tka^{-1}_t, 1) \le r \right\}\right| \le \frac p {p+1} p^{-(2t-r)}.$$ Suppose that $q_p$ is an isotropic quadratic form of rank 3 or of rank 4 and non-split. By Lemma \[lemma 3.12 p-adic\] and Lemma \[p-adic (3,1)\], $K_p\setminus{{\mathrm{SO}}}(q_p)$ is isomorphic to a tree. Since $d_p$ is right $K_p$-invariant, the left hand side of is $$\left|\left\{k\in K_p : d_p(a^{}_tk, a^{}_t) \le r \right\}\right| =\frac{\#\{K_pa_tk : d_{tree}(K_pa_tk, K_pa_t)\le r\}}{\#\{K_pa_tk : k\in K_p\}}.$$ For any $k\in K_p$, consider the geodesic segment $[K_p, K_pa_tk]$ in ${\mathcal{T}}_p$. If $d_p(a_tk, a_t)\le r$, since $K_p \setminus {{\mathrm{SO}}}(q_p)$ is a tree, $[K_p, K_pa_tk]$ and $[K_p, K_pa_t]$ have a common segment of length at least $2t-\lceil r/2 \rceil$. Therefore $$\begin{split} \left|\left\{k\in K_p : d_p(a^{}_tk, a^{}_t) \le r \right\}\right| &=\frac {\#\{K_pa_tk : K_pa_{t-\lceil r/4 \rceil}k=K_pa_{t-\lceil r/4 \rceil} \}} {\#\{K_pa_tk : k \in K_p\}}\\ & =\frac{1}{\#\{K_pa_{t-\lceil r/4 \rceil}k : k \in K_p\}}, \end{split}$$ where $\lceil r\rceil$ denotes the smallest integer greater than or equal to $r$. Since ${{\mathrm{SO}}}(2xz-y^2)({{\mathbb{Q}}}_p)$ is embedded in ${{\mathrm{SO}}}(q_p)$, for any $t\in {\mathbb{N}}$, there are at least $(p+1)p(p^2)^{(t-1)-\lceil r/4 \rceil}$ vertices in the sphere of radius $2t-\lceil r/2 \rceil$ in $K_p \setminus {{\mathrm{SO}}}(q_p)$.
In the split case, the action of $a_{2t}={\mathrm{diag}}(p^{2t},1,1,p^{-2t})$ on $K_p\setminus {{\mathrm{SO}}}(q_p)$ is converted to the action of $(b_t,b_t)$ on ${\mathrm{SL}}_2({\mathbb{Z}}_p)\setminus{\mathrm{SL}}_2({{\mathbb{Q}}}_p)\times{\mathrm{SL}}_2({\mathbb{Z}}_p)\setminus{\mathrm{SL}}_2({{\mathbb{Q}}}_p)\cong {\mathcal{T}}_p\times{\mathcal{T}}_p$, where $b_t={\mathrm{diag}}(p^t,p^{-t})$. Then, similarly to the case of ${{\mathrm{SO}}}(2,2)({\mathbb{R}})$, we obtain the inequality .

Equidistribution property for generic points
============================================

For $x \in {\mathsf{G}}/\Gamma$, let $\Delta_x$ be the $S$-lattice in ${{\mathbb{Q}}}_S^n$ associated to $x$, i.e., if $x={\mathsf{g}}\Gamma$, $\Delta_x={\mathsf{g}}{\mathbb{Z}}^n_S$. We call $L\subseteq {{\mathbb{Q}}}^n_S$ *a $\Delta_x$-rational subspace* if $L$ is generated by elements in $\Delta_x$. If $L$ is a $\Delta_x$-rational subspace, $\Delta_x \cap L$ is an $S$-lattice in $L$. Let $d(L)=d_{\Delta_x}(L)$ be the covolume of $\Delta_x \cap L$ in $L$. If $\{{\mathbf v}_1, \ldots, {\mathbf v}_j\}$ is a ${\mathbb{Z}}_S$-basis of a $j$-dimensional $\Delta_x$-rational subspace $L$, then $d(L)$ is given by $$d(L)=\prod_{p \in S} \|{\mathbf v}_1\wedge \cdots \wedge {\mathbf v}_j\|_p.$$ Here for each $p \in S$, the $p$-adic norm $\|\cdot\|_p$ of ${{\mathbb{Q}}}_p$ is canonically extended to $\bigwedge^j({{\mathbb{Q}}}_p^n)$, which we also denote by $\|\cdot\|_p$. Define a function $\alpha^{}_S : {\mathsf{G}}/\Gamma \rightarrow {\mathbb{R}}_{>0}$ by $$\alpha^{}_S(x)=\sup\left\{\frac 1 {d(L)} : L\;\text{is a}\; \Delta_x\text{-rational subspace}\;\right\}.$$ \[Schmidt lemma\] For a bounded function $f : {{\mathbb{Q}}}_S^n\rightarrow {\mathbb{R}}$ of compact support, the *Siegel transform* $\tilde f$ of $f$ is defined by $$\tilde f (x)=\sum_{{\mathbf v}\in \Delta_x} f({\mathbf v}),\quad x \in {\mathsf{G}}/\Gamma.$$ Then there is a constant $C_f>0$ such that $\tilde f(x)<C_f\,\alpha^{}_S(x)$ for any $x \in {\mathsf{G}}/\Gamma$. By Lemma \[Schmidt lemma\] and Proposition \[lemma 3.10\], one can easily deduce that a Siegel transform $\tilde f$ is in $\mathcal L^s$ for every $1\le s<n$. [@HLM Lemma 3.9]\[lemma 3.10\] The function $\alpha^{}_S$ on ${\mathrm{SL}}_n({{\mathbb{Q}}}_S)/{\mathrm{SL}}_n({\mathbb{Z}}_S)$ belongs to every $\mathcal L^s$, $1\le s < n$. In this section, we prove the equidistribution theorem for functions bounded by $\alpha^{}_S$. We first recall the Howe–Moore theorem for ${\mathrm{SL}}_n({{\mathbb{Q}}}_S)$. The statement for more general settings can be found in [@Benoist] or [@Oh]. \[matrix coeff\] Consider ${\mathsf{G}}={\mathrm{SL}}_n({{\mathbb{Q}}}_S)$ and $\Gamma={\mathrm{SL}}_n({\mathbb{Z}}_S)$, $n\ge 3$. Take a maximal compact subgroup $\hat {\mathsf{K}}$ of ${\mathsf{G}}$ as $\hat {\mathsf{K}}=\underset{p\in S}{\prod}{\hat K_p} ={{\mathrm{SO}}}(n) \times \underset {p \in S_f}{\prod} {\mathrm{SL}}_n({\mathbb{Z}}_p)$. Then there exist constants $\lambda_p>0$, $p \in S$, satisfying the following: if $f_1, f_2 \in \mathcal L^2_0({\mathsf{G}}/\Gamma)$ satisfy either (1) both $f_1$ and $f_2$ are smooth of compact support or (2) both $f_1$ and $f_2$ are left $\hat {\mathsf{K}}$-invariant, then there exists a constant $C_{f_1, f_2}>0$ so that $$\int_{{\mathsf{G}}/\Gamma} f_1({\mathsf{g}}x)f_2(x) dx \le C_{f_1,f_2} e^{-\lambda_\infty d(g_\infty, 1)} \prod_{p \in S_f} p^{-\lambda_p d(g_p, 1)},$$ where ${\mathsf{g}}=(g_p)_{p\in S} \in {\mathsf{G}}$.
Recall that ${\mathsf{a}}^{}_t$ is a diagonal element of ${\mathsf{A}}$ defined in . \[lemma 3.12\] Let ${\mathsf q^{}_S}$ be a nondegenerate isotropic quadratic form of rank $n=3$ or $4$. Let ${\mathsf{K}}$ be a maximal compact subgroup of ${{\mathrm{SO}}}({\mathsf q^{}_S})$ as in Section 3. Then there exist ${\mathsf{t}}_0\succ 0$ and a constant $C_S>0$ such that for ${\mathsf{t}}\succ {\mathsf{t}}_0$ and for any $r>0$, $$\label{eq lemma 3.12}\left|\left\{{\mathsf{k}}\in {\mathsf{K}}: \sum_{p \in S} d_p({\mathsf{a}}^{}_{\mathsf{t}}{\mathsf{k}}{\mathsf{a}}^{-1}_{\mathsf{t}},1) \le r \right\}\right|< C_S\; r^{|S_f|}\exp {\left( r -\sum_{p\in S} t_p\right)}.$$ Let $s=|S_f|$ and consider $(k_1, \ldots, k_s) \in {\mathbb{Z}}_{\ge0}$ such that $\sum_i k_i \le r$. From Lemma \[lemma 3.12 real\] and Lemma \[lemma 3.12 p-adic\], $$\begin{split} &\left|\left\{{\mathsf{k}}=(k_p) \in {\mathsf{K}}: \begin{array}{c} d_\infty (a^{}_{t_\infty} k_\infty a^{-1}_{t_\infty}, 1)\le r-\sum_{i=1}^s k_i\;\text{and}\\ d_{p_i}(a^{}_{t_{p_i}} k_{p_i} a^{-1}_{t_{p_i}},1)\le k_i, 1\le i\le s\end{array}\right\} \right|\\ &\hspace{1.7in}< C_\infty\;e^{(r-\sum_i k_i)-t_\infty} \times \prod_{i=1}^s \frac {p_i}{p_i+1}\; {p_i}^{{k_i}-2t_{p_i}}. \end{split}$$ The number of such nonnegative vectors $(k_1, \ldots, k_s)$ is bounded above by $r^s$. Since $e<p$ for any odd prime $p$, the inequality holds for $C_S:=C_\infty \prod_i p_i/(p_i+1)$. We say that a sequence $({\mathsf{t}}_j)$ is *divergent* if $({\mathsf{t}}_j)$ escapes any bounded subset of ${\mathbb{R}}_{>0}\times \prod_{p\in S_f}p^{{\mathbb{Z}}}$ as $j\rightarrow \infty$. Recall that in Theorem \[main thm\], we are interested in the asymptotic limit of the given quantity when every component $t_p$ of ${\mathsf{t}}$ goes to infinity. However, for the proof of the main theorem, we need to consider more general divergence in the next proposition and corollary. \[prop 3.13\] Let ${\mathsf q^{}_S}$ and ${\mathsf{K}}$ be as in Lemma \[lemma 3.12\] and let $\phi$ be a continuous function on ${\mathsf{G}}/\Gamma={\mathrm{SL}}_n({{\mathbb{Q}}}_S)/{\mathrm{SL}}_n({\mathbb{Z}}_S)$. Assume that there are constants $s\in (0,n/2)$ and $C_{\phi,s}>0$ for which $$|\phi(x) |< C_{\phi,s} \alpha(x)^s_S,\; \forall x \in {\mathsf{G}}/\Gamma.$$ Then for any nonnegative bounded function $\nu$ on ${\mathsf{K}}$ and a divergent sequence $({\mathsf{t}}_j)$, $$\label{eq prop 3.13} \lim_{j \rightarrow \infty} \int_{\mathsf{K}}\phi({\mathsf{a}}_{{\mathsf{t}}_j} {\mathsf{k}}x_0)\nu({\mathsf{k}}) dm({\mathsf{k}}) = \int_{{\mathsf{G}}/\Gamma} \phi\; d{\mathsf{g}}\int_{\mathsf{K}}\nu\; dm$$ for almost all $x_0 \in {\mathrm{SL}}_n({{\mathbb{Q}}}_S)/{\mathrm{SL}}_n({\mathbb{Z}}_S)$. Let $\varepsilon >0$ be arbitrary. For any function $f$ on ${\mathsf{G}}/\Gamma$, define $$A_{{\mathsf{t}}}f(x)=\int_{{\mathsf{K}}} f({\mathsf{a}}_{{\mathsf{t}}} {\mathsf{k}}x)\nu({\mathsf{k}}) dm({\mathsf{k}}).$$ For ${\mathsf{t}}\succ 0$, let us denote by ${\mathsf{s}\hspace{0.01in}}({\mathsf{t}})=\sum_{p\in S} t_p$. 
We claim that one can find ${\mathsf{t}}_0\succ 0$ and constants $C,\;\lambda>0$ such that for any ${\mathsf{t}}\succ {\mathsf{t}}_0$, $$\label{eq prop 3.13 (1)}\left|E_{{\mathsf{t}}}:= \left\{ x \in {\mathsf{G}}/\Gamma : \left|A_{{{\mathsf{t}}}'}\phi(x) - \int\phi \int \nu \right| > \varepsilon \;\text{for some}\;{\mathsf{t}}'\succ {\mathsf{t}}\;\right\}\right| < Ce^{-\lambda {\mathsf{s}\hspace{0.01in}}({\mathsf{t}})}.$$ Then implies that $$\left|\left\{x\in {\mathsf{G}}/\Gamma :\left| \begin{array}{l} \lim_{j \rightarrow \infty} \int_{\mathsf{K}}\phi({\mathsf{a}}_{{\mathsf{t}}_j} {\mathsf{k}}x_0)\nu({\mathsf{k}}) dm({\mathsf{k}})\\ \hspace{1in}- \int_{{\mathsf{G}}/\Gamma} \phi\; d{\mathsf{g}}\int_{\mathsf{K}}\nu\; dm \end{array}\right| > \varepsilon \right\}\right|\le\left|\bigcap^\infty_{j=1} E_{{\mathsf{t}}_j} \right|=0.$$ Let $A(r)=\left\{ x \in {\mathsf{G}}/\Gamma : \alpha^{}_S(x) > r \right\}$. Choose a smooth non-negative function $g_r : {\mathsf{G}}/\Gamma \rightarrow [0,1]$ such that 1. $g_r$ is ${\mathsf{K}}$-invariant, i.e., $g_r({\mathsf{k}}x)=g_r(x)$ for all ${\mathsf{k}}\in {\mathsf{K}}$; 2. $g_r(x)=1$ if $x$ is in $A(r+1)$ and $g_r(x)=0$ if $x$ is outside of $A(r)$. Take $\phi_1=\phi g_r$ and $\phi_2=\phi (1-g_r)$. Then $\phi=\phi_1 + \phi_2$ and $\phi_2$ is compactly supported. Choose $\theta >0$ such that $1 \le s+\theta<n/2$. Then for any $x \in {\mathsf{G}}/\Gamma$, $$\begin{split} \phi_1(x)&=\phi(x) g_r(x)\le C_\phi \alpha^s_S(x) g_r(x)=C_\phi \alpha^{-\theta}_S(x)\alpha^{s+\theta}_S(x)g_r(x) \\ &\le C_\phi r^{-\theta} \alpha^{s+\theta}_S(x). \end{split}$$ Put $\psi=C_\phi r^{-\theta} \alpha^{s+\theta}_S(x)$. By the above inequalities and Proposition \[lemma 3.10\], $$|\phi_1|\le \psi \;\text{and}\; \psi \in \mathcal L^1({\mathsf{G}}/\Gamma)\cap\mathcal L^2({\mathsf{G}}/\Gamma).$$ Let $f\in \mathcal L^2_0({\mathsf{G}}/\Gamma)$ be either a smooth function of compact support or a left ${\mathsf{K}}$-invariant function. Later, we will put $f=\phi_2-\int_{\mathsf{K}}\phi_2$ or $\psi-\int_{\mathsf{K}}\psi$. By Theorem \[matrix coeff\], there are $\lambda_f$, $C_f>0$ such that for any ${\mathsf{g}}\in {\mathsf{G}}$, $$\label{eq prop 3.13 (5)} \left|\int_{{\mathsf{G}}/\Gamma} f({\mathsf{g}}x)f(x)dx\right|\le C_f \exp\left(-\lambda_f \sum_{p \in S} d_p(g_p, 1)\right),$$ where ${\mathsf{g}}=(g_p)_{p\in S}$. For instance, we can take $\lambda_f=\min\{\lambda_p, p\in S\}$, where $\lambda_p$’s are in Proposition \[matrix coeff\]. 
Note that $$\begin{split} \| A_{{\mathsf{t}}} f\|^2_2 &=\int_{{\mathsf{G}}/\Gamma} \left(\int_{{\mathsf{K}}} f({\mathsf{a}}_{{\mathsf{t}}}{\mathsf{k}}x)\nu({\mathsf{k}}) dm({\mathsf{k}})\right)^2 dx \\ &= \int_{{\mathsf{G}}/\Gamma} \int_{{\mathsf{K}}}\int_{{\mathsf{K}}} f({\mathsf{a}}_{{\mathsf{t}}}{\mathsf{k}}_1 x) f({\mathsf{a}}_{{\mathsf{t}}}{\mathsf{k}}_2 x)\nu({\mathsf{k}}_1)\nu({\mathsf{k}}_2)dm({\mathsf{k}}_2)dm({\mathsf{k}}_1) dx \\ &=\int_{{\mathsf{G}}/\Gamma} \int_{{\mathsf{K}}}\int_{{\mathsf{K}}} f({\mathsf{a}}^{}_{{\mathsf{t}}}{\mathsf{k}}^{}_1 {\mathsf{k}}^{-1}_2 {\mathsf{a}}^{-1}_{{\mathsf{t}}} x) f(x)\nu({\mathsf{k}}_1)\nu({\mathsf{k}}_2)dm({\mathsf{k}}_2)dm({\mathsf{k}}_1) dx\\ &=\int_{{\mathsf{K}}}\int_{{\mathsf{G}}/\Gamma} f({\mathsf{a}}^{}_{{\mathsf{t}}}{\mathsf{k}}{\mathsf{a}}^{-1}_{{\mathsf{t}}} x) f(x) dx \int_{{\mathsf{K}}}\nu({\mathsf{k}}{\mathsf{k}}_2)\nu({\mathsf{k}}_2)dm({\mathsf{k}}_2)dm({\mathsf{k}})\\ &=\int_{{\mathsf{K}}}\left(\int_{{\mathsf{G}}/\Gamma} f({\mathsf{a}}^{}_{{\mathsf{t}}} {\mathsf{k}}{\mathsf{a}}^{-1}_{{\mathsf{t}}}x)f(x) dx\right)(\nu*\hat\nu)({\mathsf{k}}) dm(k), \end{split}$$ where $\hat\nu({\mathsf{k}}):=\nu({\mathsf{k}}^{-1})$, $\forall{\mathsf{k}}\in {\mathsf{K}}$. By Lemma \[lemma 3.12\], $$\left|\mathcal U:=\left\{{\mathsf{k}}\in {\mathsf{K}}: \sum_{p \in S} d_p({\mathsf{a}}^{}_{{\mathsf{t}}} {\mathsf{k}}{\mathsf{a}}^{-1}_{{\mathsf{t}}}, 1 ) \le {\mathsf{s}\hspace{0.01in}}({\mathsf{t}}) \right\}\right| \le C_S \left({{\mathsf{s}\hspace{0.01in}}({\mathsf{t}})} \right)^{|S_f|} \exp\left(-{{\mathsf{s}\hspace{0.01in}}({\mathsf{t}})}\right).$$ Hence by Cauchy-Schwartz inequality, we have that $$\label{eq prop 3.12 (6)} \begin{split} &\left|\int_{\mathcal U}\int_{{\mathsf{G}}/\Gamma} f({\mathsf{a}}^{}_{{\mathsf{t}}} {\mathsf{k}}{\mathsf{a}}^{-1}_{{\mathsf{t}}} x) f(x) dx (\nu *\hat \nu)({\mathsf{k}}) dm({\mathsf{k}})\right|\\ &\hspace{1.5in}\le \max(\nu *\hat \nu) \|f\|^2_2\; C_S \left( {{\mathsf{s}\hspace{0.01in}}({\mathsf{t}})}\right)^{|S_f|} \exp\left(- {{\mathsf{s}\hspace{0.01in}}({\mathsf{t}})} \right). \end{split}$$ It is deduced from that $$\label{eq prop 3.12 (7)} \left|\int_{{\mathsf{K}}-\mathcal U} \int_{{\mathsf{G}}/\Gamma} f({\mathsf{a}}^{}_{{\mathsf{t}}}{\mathsf{k}}{\mathsf{a}}^{-1}_{{\mathsf{t}}}x)f(x) dx (\nu*\hat\nu)({\mathsf{k}}) dm({\mathsf{k}})\right| \le \max(\nu*\hat\nu)C_f \exp\left(-\lambda_f {\mathsf{s}\hspace{0.01in}}({\mathsf{t}})\right).$$ Hence by combining and , there are ${\mathsf{t}}^1_0\succ0$ and a constant $\lambda^1=\lambda^1(f, {\mathsf{t}}^1_0)>0$ such that for ${\mathsf{t}}\succ{\mathsf{t}}^1_0$, $$\|A_{{\mathsf{t}}} f\|^2_2 \le \exp(-\lambda^1 {\mathsf{s}\hspace{0.01in}}({\mathsf{t}})).$$ Now let us show following inequalities: $$\label{eq prop 3.13 (2)} \left|\left\{x \in {\mathsf{G}}/\Gamma : \left|A_{{\mathsf{t}}'}\phi_1 - \int_{{\mathsf{G}}/\Gamma} \phi_1 \int_{{\mathsf{K}}} \nu \right|> \frac {\varepsilon} 2 \;\text{for some}\; {\mathsf{t}}'\succ {\mathsf{t}}\right\}\right| < C_1 \exp(-\lambda_{\psi} {\mathsf{s}\hspace{0.01in}}({\mathsf{t}})),$$ $$\label{eq prop 3.13 (3)} \left|\left\{x \in {\mathsf{G}}/\Gamma : \left|A_{{\mathsf{t}}'}\phi_2 - \int_{{\mathsf{G}}/\Gamma} \phi_2 \int_{{\mathsf{K}}} \nu \right|> \frac {\varepsilon} 2 \;\text{for some}\; {\mathsf{t}}'\succ {\mathsf{t}}\right\}\right| < C_2 \exp(-\lambda_{\phi_2} {\mathsf{s}\hspace{0.01in}}({\mathsf{t}}))$$ for some positive constants $C_1$ and $C_2$. It is obvious that and imply . 
First we note that $\alpha^{}_S({\mathsf{g}}x)/\alpha^{}_S(x) \le \max\{\prod_{p\in S}\|\wedge^i({\mathsf{g}})\|_p : i=1, \ldots, n\}$. Let ${\mathsf{t}}_{\tau}=(\tau, 0, \ldots, 0)$. Then there is $M>0$ such that for all $x\in {\mathsf{G}}/\Gamma$, for all $\tau \in [0,1]$, $\alpha^{s+\theta}_S({\mathsf{a}}_{{\mathsf{t}}_\tau} x)\le M\alpha_S^{s+\theta}(x)$. Hence we have $$\psi ({\mathsf{a}}_{{\mathsf{t}}_\tau} x) \le M \psi(x), \;\forall\tau\in[0,1].$$ Choose $r>0$ sufficiently large so that $\|\phi_1\|_1 \le \|\psi\|_1 \le \varepsilon/(8\max(\nu*\hat\nu)M)$. Then $$\begin{split} &\left\{x \in {\mathsf{G}}/\Gamma : \left|A_{{\mathsf{t}}+{\mathsf{t}}_\tau}\phi_1 - \int \phi_1 \int \nu \right|> \frac \varepsilon 2\;\text{for some}\; \tau \in [0,1] \right\}\\ &\hspace{0.4in}\subseteq\left\{x \in {\mathsf{G}}/\Gamma : \left|A_{{\mathsf{t}}+{\mathsf{t}}_\tau}\phi_1 \right|> \frac \varepsilon 4\;\text{for some}\; \tau \in [0,1] \right\}\\ &\hspace{0.4in}\subseteq\left\{x \in {\mathsf{G}}/\Gamma : \left|A_{{\mathsf{t}}}\phi_1 \right|> \frac \varepsilon {4M} \; \right\} \subseteq\left\{x \in {\mathsf{G}}/\Gamma : \left|A_{{\mathsf{t}}}\psi \right|> \frac \varepsilon {4M} \; \right\}\\ &\hspace{0.4in}\subseteq\left\{x \in {\mathsf{G}}/\Gamma : \left|A_{{\mathsf{t}}}\phi_1 - \int \psi \int \nu \right|> \frac \varepsilon {8M} \; \right\}. \end{split}$$ By taking $f=\psi - \int_{{\mathsf{G}}/\Gamma} \psi$ in and using the Chebyshev’s inequality, there are ${\mathsf{t}}^{\psi}_0\succ0$ and $\lambda_\psi>0$ such that for any ${\mathsf{t}}\succ {\mathsf{t}}^{\psi}_0$, $$\begin{split} &\left|E({\mathsf{t}}, {\mathsf{t}}+{\mathsf{t}}_1)=\left\{x \in {\mathsf{G}}/\Gamma : \left|A_{{\mathsf{t}}+{\mathsf{t}}_\tau}\phi_1 - \int \phi_1 \int \nu \right|> \frac \varepsilon 2\;\text{for some}\; \tau \in [0,1] \right\}\right|\\ &\hspace{0.8in} \le \left(\frac {8M} {\varepsilon} \right)^2 \exp (-\lambda_{\psi} {\mathsf{s}\hspace{0.01in}}({\mathsf{t}})). \end{split}$$ Hence using the geometric series argument, there is a constant $C_1>0$ such that the inequality  holds. $$\begin{split} &\left|\left\{x \in {\mathsf{G}}/\Gamma : \left|A_{{\mathsf{t}}'}\phi_1 - \int \phi_1 \int \nu \right|> \frac \varepsilon 2\;\text{for some}\; {\mathsf{t}}'\succ {\mathsf{t}}\right\}\right|\\ &\hspace{1in}\le \sum_{\scriptsize \begin{array}{cc}t'_\infty=t_\infty+n,\\ \forall n\in {\mathbb{N}}\cup\{0\}\end{array}}\sum_{\scriptsize\begin{array}{cc}\forall t'_p\ge t_p,\\ p\in S_f\end{array}} \left|E({\mathsf{t}}', {\mathsf{t}}'+{\mathsf{t}}_1) \right| \le\; C_1 \exp(-\lambda_{\psi} {\mathsf{s}\hspace{0.01in}}({\mathsf{t}})). \end{split}$$ For , note that since $\phi_2$ is compactly supported, $\phi_2$ is uniformly continuous. Hence there is $\delta>0$ such that $$\left|A_{{\mathsf{t}}+{\mathsf{t}}_\tau} \phi_2(x) - A_{{\mathsf{t}}} \phi_2(x)\right|<\frac {\varepsilon} 4$$ for all ${\mathsf{t}}\succ 0$, $x\in {\mathsf{G}}/\Gamma$ and $\tau\in[0,\delta]$. 
Again by and Chebyshev’s inequality, there are ${\mathsf{t}}^{\phi_2}_0\succ 0$ and $\lambda_{\phi_2}>0$ such that for all ${\mathsf{t}}\succ {\mathsf{t}}^{\phi_2}_0$, $$\begin{split} &\left|\left\{x \in {\mathsf{G}}/\Gamma : \left|A_{{\mathsf{t}}+{\mathsf{t}}_\tau}\phi_2 - \int_{{\mathsf{G}}/\Gamma} \phi_2 \int_{{\mathsf{K}}} \nu \right|> \frac {\varepsilon} 2 \;\text{for some}\; \tau \in [0, \delta] \right\}\right|\\ &\hspace{0.4in}\le \left|\left\{x \in {\mathsf{G}}/\Gamma : \left|A_{{\mathsf{t}}}\phi_2 - \int_{{\mathsf{G}}/\Gamma} \phi_2 \int_{{\mathsf{K}}} \nu \right|> \frac {\varepsilon} 4 \; \right\}\right| \le \left(\frac 4 \varepsilon \right)^2 \exp(-\lambda_{\phi_2} {\mathsf{s}\hspace{0.01in}}({\mathsf{t}})). \end{split}$$ Then follows from the similar geometric argument used above. From now on, a function $f_p$ on ${{\mathbb{Q}}}_p^n$, $p \in S$, is always assumed to be compactly supported. If $p < \infty$, we additionally assume that $f_p$ is $({\mathbb{Z}}_p-p{\mathbb{Z}}_p)$-invariant: $$\label{UFCf} f_p(ux_1, x_2, \ldots, x_{n-1}, u^{-1}x_n)=f_p(x_1, x_2, \ldots, x_n),\;\forall u \in {\mathbb{Z}}_p -p{\mathbb{Z}}_p.$$ Let $\nu$ be the product of non-negative continuous functions $\nu_p$ on the unit sphere in ${{\mathbb{Q}}}_p^n$, $p \in S$. For $p\in S_f$, we also assume that $$\label{UFCnu} \nu_p(u{\mathbf v})=\nu_p({\mathbf v}), \;\forall u \in {\mathbb{Z}}_p - p{\mathbb{Z}}_p.$$ Define a function $J_f$ for $f=\prod_{p\in S}f_p$ by $J_f=\prod_{p\in S} J_{f_p}$, where $$\begin{split} J_{f_\infty}(r, \zeta_\infty)&= \frac 1 {r^{n-2}} \int_{{\mathbb{R}}^{n-2}} f_\infty(r, x_2, \ldots, x_{n-1}, x_n) dx_2 \cdots dx_{n-1},\\ J_{f_p}(p^{-r},\zeta_p)&=\frac 1 {p^{r(n-2)}}\int_{{{\mathbb{Q}}}_p^{n-2}} f_p(p^{-r}, x_2, \ldots, x_{n-1}, x_n) \ dx_2\cdots dx_{n-1},\;p\in S_f, \end{split}$$ where $x_n$ is determined by the equation $\zeta_p=q^0(p^{-r}, x_2, \ldots,x_{n-1}, x_n)$ (If $p=\infty$, replace $p^{-r}$ by $r$). By Lemma 3.6 in [@EMM] and Lemma 4.1 in [@HLM], for sufficiently small $\varepsilon>0$, there are $c(K_p)>0$ and $t_p^0> 0$ for each $p \in S$, such that if $t_p>t^0_p$, $$\label{real Jf} \begin{split} &\left|c(K_\infty) e^{t_\infty(n-2)}\int_{K_\infty} \hspace{-0.1in}f_\infty(a_{t_\infty}k_\infty{\mathbf v})\nu(k^{-1}_\infty{\mathbf{ e}}_1)dm(k_\infty)\right.\\ &\hspace{1.7in} \left.-J_{f_\infty}(\|{\mathbf v}\|_\infty e^{-t_\infty}, q^0_\infty({\mathbf v}))\nu(\frac{{\mathbf v}}{\|{\mathbf v}\|^\sigma_\infty})\right|<\varepsilon, \end{split}$$ $$\label{p-adic Jf}\begin{split} &\left|c(K_p)p^{t_p(n-2)} \int_{K_p} f_p(a_{t_p}k_p{\mathbf v})\nu_p(k^{-1}_p {\mathbf{ e}}_1)dm(k_p)\right.\\ &\hspace{1.4in}-\left.J_{f_p}(p^{t_p}\|{\mathbf v}\|_p^{\sigma}, q^0_p({\mathbf v}))\nu(\frac{{\mathbf v}}{\|{\mathbf v}\|_p^{\sigma}})\right|<\varepsilon,\; p \in S_f. \end{split}$$ Recall that $\sigma=1$ for the infinite place and $\sigma=-1$ for the finite place. 
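For instance, when $n=3$, so that $q^0_\infty(x_1,x_2,x_3)=2x_1x_3+x_2^2$, the first definition unwinds to $$J_{f_\infty}(r,\zeta_\infty)=\frac 1r\int_{{\mathbb{R}}} f_\infty\Big(r,\,x_2,\,\frac{\zeta_\infty-x_2^2}{2r}\Big)\,dx_2,$$ since the relation $\zeta_\infty=q^0_\infty(r,x_2,x_3)=2rx_3+x_2^2$ determines $x_3$; the formulas for $p\in S_f$ unwind in the same manner.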
Define $$\|g{\mathbf v}\|^\sigma / {\mathsf{T}}^\sigma=(\|g_p{\mathbf v}\|^\sigma_p/T^\sigma_p)_{p\in S}.$$ By Lemma 3.6 in [@EMM] and Lemma 4.1 in [@HLM], there is a constant $c({\mathsf{K}})>0$ such that for sufficiently small $\varepsilon>0$ and sufficiently large ${\mathsf{t}}\succ 0$, $$\label{eq prop 3.6} \begin{split} &\left| J_f \left( \frac{\| {\mathsf{g}}{\mathbf v}\|^\sigma}{{{\mathsf{T}}}^\sigma}, {\mathsf q^{}_S}( {\mathsf{g}}{\mathbf v}) \right) \nu \left( \frac{{\mathsf{g}}{\mathbf v}}{\| {\mathsf{g}}{\mathbf v}\|^\sigma} \right)\right.\\ &\hspace{1.5in}\left.-c({\mathsf{K}})|{\mathsf{T}}|^{n-2}\int_{\mathsf{K}}\tilde{f}(a_{\mathsf{t}}{\mathsf{k}}{\mathsf{g}})\nu({\mathsf{k}}^{-1}{\mathbf{ e}}_1)dm({\mathsf{k}})\right|\leq \varepsilon. \end{split}$$ \[upper bound\] Let $({\mathsf{T}}_j=(T_{j,p})_{p\in S})_{j\in{\mathbb{N}}}$ be a divergent sequence. Define $$S'=\{p \in S : (T_{j,p})_{j\in {\mathbb{N}}}\;\text{is bounded}\;\}\subsetneq S.$$ Let $\Omega, {\mathsf{I}}_S=(I_p)_{p\in S}$ be as in Theorem \[main thm\]. Then for almost all nondegenerate isotropic quadratic form ${\mathsf q^{}_S}=(q_p)_{p\in S}$, there is a constant $C=C(S')>0$ such that $$\left|\left\{{\mathbf v}\in {\mathbb{Z}}_S^n \cap {\mathsf{T}}_j\Omega : {\mathsf q^{}_S}({\mathbf v})\in {\mathsf{I}}_S\right\}\right|< C \prod_{p\in S-S'}(T_{j,p})^{n-2},$$ where $n$ is the rank of ${\mathsf q^{}_S}$. Since we want to show the upper bound, for simplicity, we may assume that $\Omega$ is the product of unit balls in ${{\mathbb{Q}}}_p^n$, $p \in S$. Let $\varepsilon>0$ be given. For each $p \in S$, let $\mathcal C_p$ be a compact set of the space of nondegenerate isotropic quadratic forms over ${{\mathbb{Q}}}_p$ of a given signature. Let $\mathcal C=\prod_{p \in S} \mathcal C_p$. Let ${\mathsf{g}}_{{\mathsf q^{}_S}}$ be an element of ${\mathrm{SL}}_n({{\mathbb{Q}}}_S)$ such that ${\mathsf q^{}_S}({\mathbf v})={\mathsf q^0_S}({\mathsf{g}}_{{\mathsf q^{}_S}}{\mathbf v})$ for all ${\mathbf v}\in {{\mathbb{Q}}}_S^n$. Then for each $p \in S$, there is $\beta_p=\beta_p(\mathcal C_p)>0$ such that if ${\mathsf q^{}_S}\in \mathcal C$, $\beta_p^{-1}\le \|{\mathsf{g}}_{{\mathsf q^{}_S}} {\mathbf v}\|_p/\|{\mathbf v}\|^\sigma_p \le \beta_p$ for all ${\mathbf v}\in {{\mathbb{Q}}}_S^n$. Choose bounded continuous functions $f_p$, $p \in S$, of compact support such that $$J_{f_p}\ge 1+\varepsilon\quad\text{on}\quad [\frac 1 {2\beta_p}, \beta_p]\times I_p.$$ If ${\mathbf v}$ satisfies that $T_p/2 \le \|{\mathbf v}\|_p \le T_p$ and $q_p({\mathbf v})\in I_p$, then $$J_{f_p}(\|{\mathsf{g}}{\mathbf v}\|_p/T^{\sigma}_p, q^0_p({\mathsf{g}}{\mathbf v}))\ge 1+\varepsilon$$ for ${\mathsf{g}}={\mathsf{g}}_{{\mathsf q^{}_S}}$ with ${\mathsf q^{}_S}\in \mathcal C$. By and , there is ${\mathsf{T}}^0=(T^0_p)_{p\in S}\succ 0$ such that for each $p \in S$ and for all $T_p > T^0_p$, $$\left|c(K_p)T_p^{n-2}\int_{K_p} f_p(a_{t_p}k_p {\mathbf v}) dm(k_p) - J_{f_p}\left(\frac{\|{\mathbf v}\|_p^{\sigma}}{T^\sigma_p}, \zeta\right)\right|<\varepsilon.$$ For $p \in S'$, we may further assume that $T_{j,p}\le T^0_p$ for all $j$. Finally for each $p \in S$, choose a nonnegative bounded function $g_p$ on ${{\mathbb{Q}}}_p^n$ of compact support such that if $\|{\mathbf v}\|_p\le T^0_p$ and $t_p\le \log_p(T_p^0)$, $$\int_{K_p} g_p(a_{t_p} k_p {\mathbf v}) dm(k_p) \ge 1.$$ Take $h_{S'}=\prod_{p \in S'} g_p\times\prod_{p \in S-S'} f_p$ and let $\widetilde{h_{S'}}$ be the Siegel transform of $h_{S'}$ defined in Lemma \[Schmidt lemma\]. 
Then for ${\mathsf{T}}=(T_p)$ such that $T_p>T_p^0$, $\forall p \in S-S'$, we obtain that $$\begin{split} &\left|\left\{{\mathbf v}\in {\mathbb{Z}}_S^n :\begin{array}{c} \|{\mathbf v}\|_p\le T_p^0, \;\forall p \in S'\\ T_p/2<\|{\mathbf v}\|_p\le T_p,\;\forall p \in S-S' \end{array}, \;{\mathsf q^{}_S}({\mathbf v})\in {\mathsf{I}}_S \right\}\right|\\ &\le \sum_{{\mathbf v}\in {\mathbb{Z}}_S^n}\left( \prod_{p \in S'}\int_{K_p} g_p(a_{t_p}k_p{\mathsf{g}}_{{\mathsf q^{}_S}} {\mathbf v})dm(k_p)\times\hspace{-0.15in}\prod_{p\in S-S'}\hspace{-0.08in}T_p^{n-2}\int_{K_p} f_p(a_{t_p}k_p{\mathsf{g}}_{{\mathsf q^{}_S}}{\mathbf v})dm(k_p)\right)\\ &=\left(\prod_{p\in S-S'} T_p^{n-2}\right)\int_{{\mathsf{K}}} \widetilde {h_{S'}} ({\mathsf{a}}_{{\mathsf{t}}}{\mathsf{k}}{\mathsf{g}}_{{\mathsf q^{}_S}} {\mathbf v})dm({\mathsf{k}}). \end{split}$$ By Proposition \[prop 3.13\], for almost all quadratic form ${\mathsf q^{}_S}$, as ${\mathsf{t}}$ diverges, $$\int_{{\mathsf{K}}} \widetilde{h_{S'}}({\mathsf{a}}_{{\mathsf{t}}}{\mathsf{k}}{\mathsf{g}}_{{\mathsf q^{}_S}}{\mathrm{SL}}_n({\mathbb{Z}}_S))dm({\mathsf{k}})\rightarrow \int_{{\mathsf{G}}/\Gamma} \widetilde{h_{S'}} d{\mathsf{g}}<\infty.$$ Hence there is a constant $C'>0$ such that for all ${\mathsf{T}}_j$, $$\left|\left\{{\mathbf v}\in {\mathbb{Z}}_S^n :\hspace{-0.08in}\begin{array}{c} \|{\mathbf v}\|_p\le T_p^0, \;\forall p \in S'\\ T_{j,p}/2<\|{\mathbf v}\|_p\le T_{j,p},\;\forall p \in S-S' \end{array}\hspace{-0.05in}, \;{\mathsf q^{}_S}({\mathbf v})\in {\mathsf{I}}_S \right\}\right|< C'\hspace{-0.1in}\prod_{p\in S-S'} (T_{j,p})^{n-2}.$$ Therefore we have $$\left|{\mathbf v}\in {\mathbb{Z}}_S^n \cap {\mathsf{T}}_j\Omega : {\mathsf q^{}_S}({\mathbf v})\in {\mathsf{I}}_S\}\right| < \left( C'\hspace{-0.05in}\prod_{p \in S-S'}\frac {p}{p-1}\right) \left(\prod_{p\in S-S'} T_{j,p}\right)^{n-2}.$$ The proof of the main theorem ============================= The proof of Theorem \[main thm\] is similar to that of Theorem 1.5 in [@HLM] except we use Proposition \[prop 3.13\] instead of Theorem 7.1 in [@HLM]. Let $\mathcal C$ be any compact subset of the space of isotropic quadratic forms with equal signature and let $\varepsilon>0$ be given. Let $\phi$ and $\nu$ be compactly supported functions defined as before. Theorem 7.1 in [@HLM] says that there is ${\mathsf{t}}_0={\mathsf{t}}_0(\mathcal C)\succ0$ such that except on a finite union of orbits of ${{\mathrm{SO}}}({\mathsf q^0_S})$, $x \in \mathcal C$ satisfies the following: for all ${\mathsf{t}}\succ {\mathsf{t}}_0$, $$\left|\int_{{\mathsf{K}}} \phi({\mathsf{a}}_{{\mathsf{t}}}{\mathsf{k}}x)\nu({\mathsf{k}})dm({\mathsf{k}})- \int_{{\mathsf{G}}/\Gamma} \phi d{\mathsf{g}}\int_{{\mathsf{K}}} \nu dm \right|<\varepsilon.$$ \[prop 3.7\] Let ${\mathsf q^{}_S}$ be an isotropic quadratic form of rank $n=3$ or $4$. Let $f=\prod f_p$ and $\nu=\prod \nu_p$ be as before. 
Assume further that $f$ satisfies the following condition: there is a nonnegative continuous function $f^{+}$ of compact support on ${{\mathbb{Q}}}_S^n$ such that $\operatorname{supp}(f)\subset \operatorname{supp}(f^+)^\circ$, where $A^\circ$ is an interior of $A$ and $$\label{eq prop 3.7 (3)} \sup_{{\mathsf{t}}\succ 0} \int_{{\mathsf{K}}} \widetilde{f^+} ({\mathsf{a}}_{{\mathsf{t}}} {\mathsf{k}}{\mathsf{g}}\Gamma) dm({\mathsf{k}}) = M < \infty.$$ Then there exists ${\mathsf{t}}_0\succ 0$ such that if ${\mathsf{t}}\succ {\mathsf{t}}_0$, $$\label{eq prop 3.7} \begin{split} &\left| |{{\mathsf{T}}}|^{-(n-2)}\sum_{{\mathbf v}\in{\mathbb{Z}}_S^n} J_f \left( \frac{\| {\mathsf{g}}{\mathbf v}\|^\sigma}{{{\mathsf{T}}}^\sigma}, {\mathsf q^{}_S}( {\mathsf{g}}{\mathbf v}) \right) \nu \left( \frac{{\mathsf{g}}{\mathbf v}}{\| {\mathsf{g}}{\mathbf v}\|^\sigma} \right)\right.\\ &\hspace{2in}\left.-c({\mathsf{K}})\int_{\mathsf{K}}\tilde{f}(a_{\mathsf{t}}{\mathsf{k}}{\mathsf{g}})\nu({\mathsf{k}}^{-1}{\mathbf{ e}}_1)dm({\mathsf{k}})\right|\leq \epsilon. \end{split}$$ We first claim that there is a constant $c>0$ such that $$\label{eq prop 3.7 (2)} \left|\Pi:=\left\{{\mathbf v}\in {\mathbb{Z}}^n_S : J_f \left( \frac{\| {\mathsf{g}}{\mathbf v}\|^\sigma}{{{\mathsf{T}}}^\sigma}, {\mathsf q^{}_S}( {\mathsf{g}}{\mathbf v}) \right) \nu \left( \frac{{\mathsf{g}}{\mathbf v}}{\| {\mathsf{g}}{\mathbf v}\|^\sigma} \right)\neq 0\right\}\right|< c |{\mathsf{T}}|^{n-2}.$$ Since the interior of $\operatorname{supp}(f^+)$ contains $\operatorname{supp}(f)$, there is $\rho>0$ such that $$J_{f^+} > \rho \quad \text{on}\; \operatorname{supp}(J_f).$$ Note that since $J_{f^+}$ is compactly supported, the set $\Pi$ is finite. Take $\varepsilon>0$ such that $\varepsilon < \rho/2$. Then by , for sufficiently large ${\mathsf{t}}$, we have that $$\rho|\Pi|\le\sum_{{\mathbf v}\in \Pi} J_{f^+}\left( \frac{\| {\mathsf{g}}{\mathbf v}\|^\sigma}{{{\mathsf{T}}}^\sigma}, {\mathsf q^{}_S}( {\mathsf{g}}{\mathbf v}) \right) \le c({\mathsf{K}}) |{\mathsf{T}}|^{n-2}\int_{{\mathsf{K}}} \widetilde{f^+}({\mathsf{a}}_{\mathsf{t}}{\mathsf{k}}{\mathsf{g}}\Gamma)dm({\mathsf{k}})+\varepsilon|\Pi|.$$ By , $\rho/2\; |\Pi| \le c({\mathsf{K}})M|{\mathsf{T}}|^{n-2}$. This implies . Now follows from applying by putting $\varepsilon/(c({\mathsf{K}})\rho M)$ instead of $\varepsilon$ and taking summation over all ${\mathbf v}\in {\mathbb{Z}}_S^n$. Recall that a convex set $\Omega$ is defined using a non-negative continuous function $\rho=\prod_{p\in S} \rho_p$, where $\rho_p$ is a positive function on the unit sphere of ${{\mathbb{Q}}}_p^n$, $p \in S$. Define the shell $\hat\Omega$ of $\Omega$ by $$\hat\Omega= \left\{{\mathbf v}\in {{\mathbb{Q}}}_S^n : \rho_p({\mathbf v}/\|{\mathbf v}\|_p^\sigma)/2 < \|{\mathbf v}\|_p \le \rho_p({\mathbf v}/\|{\mathbf v}\|_p^\sigma), \;\forall p \in S \right\}.$$ Note that when $p<\infty$, the inequality in the above definition is in fact the equality: $\|{\mathbf v}\|_p=\rho_p({\mathbf v}/\|{\mathbf v}\|^\sigma_p)$. The following proposition was originally stated for $\Omega$ but the proof can be easily modified for $\hat \Omega$. 
[@HLM Proposition 1.2]\[volume-asym\] There is a constant $\lambda=\lambda({\mathsf q^{}_S},\Omega)>0$ such that as ${\mathsf{T}}\rightarrow \infty$, $${\mathrm{vol}}\left\{{\mathbf v}\in {{\mathbb{Q}}}_S^n\cap {\mathsf{T}}\hat\Omega : {\mathsf q^{}_S}({\mathbf v})\in {\mathsf{I}}_S \right\}\sim \lambda({\mathsf q^{}_S},\hat\Omega) \cdot | {\mathsf{I}}_S | \cdot |{\mathsf{T}}|^{n-2}.$$ [@HLM Lemma 5.2] Let $f$ and $\nu$ be as in Proposition \[prop 3.7\]. Take $$h_p( {\mathbf v}_p, \zeta_p)=J_{f_p}(\|{\mathbf v}_p\|_p^{\sigma}, \zeta_p)\nu_p({\mathbf v}_p/\|{\mathbf v}_p\|_p^{\sigma}),\quad p \in S$$ and set $h({\mathbf v}, \zeta)= \prod_{p \in S} h_p( {\mathbf v}_p, \zeta_p)$. Then we have $$\label{eq lemma 3.9} \begin{split} &\lim_{{\mathsf{T}}\rightarrow \infty} |{\mathsf{T}}|^{-(n-2)} \int_{{{\mathbb{Q}}}_S^n} h\left(\frac {\mathbf v}{{\mathsf{T}}^\sigma}, {\mathsf q^{}_S}({\mathbf v})\right) d{\mathbf v}\\ &\hspace{2in}= c({\mathsf{K}}) \int_{{\mathsf{G}}/\Gamma}\tilde{f}({\mathsf{g}})\, d{\mathsf{g}}\prod_{p \in S} \int_{K_p} \nu_p(k_p^{-1}{\mathbf{ e}}_1)dm(k_p). \end{split}$$ Since $f$ in Proposition \[prop 3.7\] is compactly supported, there is $f^+$ such that $\operatorname{supp}(f_p)\subset \operatorname{supp}(f^+_p)^\circ$. By Proposition \[prop 3.13\] with $\phi=f^+$ and $\nu\equiv1$, $$\lim_{{\mathsf{t}}\rightarrow \infty}\int_{{\mathsf{K}}} \widetilde{f^+}({\mathsf{a}}_{\mathsf{t}}{\mathsf{k}}{\mathsf{g}}\Gamma)dm({\mathsf{k}})=\int_{{\mathsf{G}}/\Gamma}\widetilde{f^+} d{\mathsf{g}}$$ for almost all $x={\mathsf{g}}\Gamma \in {\mathsf{G}}/\Gamma$. By Lemma \[Schmidt lemma\] and Proposition \[lemma 3.10\], the integral of $\widetilde{f^+}$ over ${\mathsf{G}}/\Gamma$ is finite. Hence one can apply Proposition \[prop 3.7\] for almost all $x \in {\mathsf{G}}/\Gamma$. We also remark that the set of functions of the form $J_f \nu$, where $f=\prod f_p$, $\nu=\prod \nu_p$ with and respectively, is a generating set of $$\mathcal L=\{F({\mathbf v}, \zeta) : \left({{\mathbb{Q}}}_S^n\right) \times {{\mathbb{Q}}}_S\rightarrow {\mathbb{R}}\; | \;F(u{\mathbf v}, \zeta)=F({\mathbf v}, \zeta),\; \forall u \in {\mathbb{Z}}_p - p{\mathbb{Z}}_p,\; \forall p \in S_f\}.$$ Hence Proposition \[prop 3.7\] holds for functions in $\mathcal L$ as well (see details in [@HLM]). Define $$L(h):=\lim_{{\mathsf{T}}\rightarrow \infty} |{\mathsf{T}}|^{-(n-2)} \int_{{{\mathbb{Q}}}_S^n} h\left(\frac {\mathbf v}{{\mathsf{T}}^\sigma}, {\mathsf q^{}_S}({\mathbf v})\right) d{\mathbf v},\; h \in \mathcal L.$$ The characteristic function $\mathbbm 1_{{\mathsf{T}}\hat\Omega\times{\mathsf{I}}_S}({\mathsf{g}}{\mathbf v},\zeta)$ is contained in $\mathcal L$. Let $\varepsilon>0$ be given. Take continuous functions $h^a, h^b \in \mathcal L$, depending on ${\mathsf{T}}$, $\varepsilon$ and ${\mathsf q^{}_S}$ such that $$\label{eq last (3)} h^b({\mathsf{g}}{\mathbf v}, \zeta) \le \mathbbm 1_{{\mathsf{T}}\hat\Omega\times{\mathsf{I}}_S}({\mathsf{g}}{\mathbf v},\zeta) \le h^a({\mathsf{g}}{\mathbf v}, \zeta)\quad\text{and}\quad \left|L( h^a) -L( h^b) \right| < \varepsilon.$$ From , and , for $h=h^a$, $h^b$ and any $\varepsilon>0$, there is ${\mathsf{T}}_0 \succ 0$ such that if ${\mathsf{T}}\succ {\mathsf{T}}_0$, $$\label{eq last (1)} \left| |{\mathsf{T}}|^{-(n-2)} \sum_{{\mathbf v}\in {\mathbb{Z}}^n_S} h\left( \frac {{\mathsf{g}}{\mathbf v}}{{\mathsf{T}}^\sigma}, {\mathsf q^{}_S}({\mathsf{g}}{\mathbf v})\right) - L(h) \right| < \varepsilon$$ for almost every ${\mathsf{g}}\Gamma\in {\mathsf{G}}/\Gamma$. 
By the definition of $L(h)$ and rescaling ${\mathsf{T}}_0 \succ 0$ if necessary, we also obtain that $$\label{eq last (2)} \left| |{\mathsf{T}}|^{-(n-2)} \int_{{{\mathbb{Q}}}_S^n}h\left( \frac {{\mathsf{g}}{\mathbf v}}{{\mathsf{T}}^\sigma}, {\mathsf q^{}_S}({\mathsf{g}}{\mathbf v})\right) -L(h)\right|<\varepsilon.$$ Combining and with , if we regard $h$ as the characteristic function of ${\mathsf{T}}\hat\Omega\times {\mathsf{I}}_S$, $$\left|\;\left|\left\{{\mathbf v}\in {\mathbb{Z}}_S^n \cap {\mathsf{T}}\hat\Omega : {\mathsf q^{}_S}({\mathbf v}) \in {\mathsf{I}}_S \right\}\right| - {\mathrm{vol}}\left(\left\{{\mathbf v}\in {{\mathbb{Q}}}_S^n \cap {\mathsf{T}}\hat\Omega : {\mathsf q^{}_S}({\mathbf v}) \in {\mathsf{I}}_S \right\}\right)\right|<4\varepsilon.$$ Since $$\begin{split}&\left|\left\{{\mathbf v}\in {\mathbb{Z}}_S^n \cap {\mathsf{T}}\Omega : {\mathsf q^{}_S}({\mathbf v}) \in {\mathsf{I}}_S \right\}\right|\\ &\hspace{0.4in}= \hspace{-0.2in}\sum_{\scriptsize \begin{array}{c} n_j\in \{0\}\cup{\mathbb{N}}\\j\in\{0,\ldots, s\}\end{array}}\hspace{-0.1in} \left|\left\{{\mathbf v}\in {\mathbb{Z}}_S^n \cap (2^{-n_0}T_\infty, p^{-n_1}_1 T_1, \ldots, p^{-n_s}_s T_s)\hat \Omega : {\mathsf q^{}_S}({\mathbf v}) \in {\mathsf{I}}_S \right\}\right|, \end{split}$$ the lemma follows from Proposition \[volume-asym\] and the classical argument of geometric series, if we obtain the following: as ${\mathsf{T}}\rightarrow \infty$, the summation over all ${\mathsf{S}}=(S_\infty, S_1, \ldots, S_s)=(2^{-n_0}T_\infty, p^{-n_1}_1 T_1, \ldots, p^{-n_s}_s T_s)$ with ${\mathsf{S}}\nsucc {\mathsf{T}}_0=(T^0_\infty, T^0_1, \ldots, T^0_s)$ is $$\sum_{{\mathsf{S}}}\left|\left\{{\mathbf v}\in {\mathbb{Z}}_S^n \cap {\mathsf{S}}\hat\Omega : {\mathsf q^{}_S}({\mathbf v}) \in {\mathsf{I}}_S \right\}\right|=o(|{\mathsf{T}}|^{n-2})\quad\text{as}\quad {\mathsf{T}}\rightarrow \infty.$$ For this, by rescaling ${\mathsf{T}}^0$ if necessary, let us assume that Corollary \[upper bound\] holds for any $S'\subsetneq S$. Denote the set $\{{\mathsf{S}}=(2^{-n_0}T_\infty, p^{-n_1}_1 T_1, \ldots, p^{-n_s}_s T_s) \prec {\mathsf{T}}: {\mathsf{S}}\nsucc {\mathsf{T}}_0\}$ by $\cup\{\Psi_{S'}: S'\subseteq S\}$, where $$\Psi_{S'}:=\left\{{\mathsf{S}}=(S_\infty, S_1, \ldots, S_s) : S_p \le T^0_p,\;\forall p\in S'\;\text{and}\; S_p > T^0_p,\;\forall p\in S-S'\right\}.$$ Then for $S'\subsetneq S$, by Corollary \[upper bound\], there is a constant $C(S')>0$ such that $$\label{eq last (5)}\begin{split} \sum_{{\mathsf{S}}\in \Psi_{S'}}\left|\left\{{\mathbf v}\in {\mathbb{Z}}_S^n \cap {\mathsf{S}}\hat\Omega : {\mathsf q^{}_S}({\mathbf v}) \in {\mathsf{I}}_S \right\}\right|&< \left|\{{\mathbf v}\in {\mathbb{Z}}_S^n\cap {\mathsf{T}}_{S'}\Omega : {\mathsf q^{}_S}({\mathbf v})\in {\mathsf{I}}_S\}\right|\\ &<C(S')\prod_{p\in S-S'} (T_p)^{n-2}, \end{split}$$ where ${\mathsf{T}}_{S'}=(T'_p)_{p\in S}$. Here $T'_p=T^0_p$ for $p\in S'$ and $T'_p=T_p$ for $p\in S-S'$. Hence the left hand side of is $o(\prod_{p \in S-S'} T_p^{n-2})$. If $S'=S$, since $$\sum_{{\mathsf{S}}\in \Psi_{S}}\left|\left\{{\mathbf v}\in {\mathbb{Z}}_S^n \cap {\mathsf{S}}\hat\Omega : {\mathsf q^{}_S}({\mathbf v}) \in {\mathsf{I}}_S \right\}\right|\le \left|\left\{{\mathbf v}\in {\mathbb{Z}}_S^n \cap {\mathsf{T}}_0\Omega : {\mathsf q^{}_S}({\mathbf v}) \in {\mathsf{I}}_S \right\}\right|<\infty,$$ it is obviously $o(|{\mathsf{T}}|^{n-2})$. [1]{} P. Abramenko, K. S. Brown, *Buildings*, Theory and applications. Graduate Texts in Mathematics, 248. Springer, New York, 2008. J. Athreya, G. 
--- abstract: 'For graphs $H$ and $F$, the generalized Turán number $ex(n,H,F)$ is the largest number of copies of $H$ in an $F$-free graph on $n$ vertices. We consider this problem when both $H$ and $F$ have at most four vertices. We give sharp results in almost all cases, and connect the remaining cases to well-known unsolved problems. Our main new contribution is applying the progressive induction method of Simonovits for generalized Turán problems.' author: - 'Dániel Gerbner[^1]' title: Generalized Turán problems for small graphs --- Introduction ============ One of the most studied areas of extremal Combinatorics is Turán theory, which seeks to determine ${\ensuremath{\mathrm{ex}}}(n,F)$, the largest number of edges in an $F$-free graph on $n$ vertices. A natural generalization is ${\ensuremath{\mathrm{ex}}}(n,H,F)$, the largest number of copies of $H$ in $F$-free graphs on $n$ vertices. After several sporadic results (see e.g. [@BGY2008; @gls; @G2012; @gypl; @HHKNR2013; @zykov]), the systematic study of this problem was initiated by Alon and Shikhelman [@ALS2016]. Since then, this problem (most commonly referred to as the *generalized Turán problem*) has attracted several researchers, see e.g. [@cc; @cvk; @chase; @GGMV2017+; @GMV2017; @GP2017; @gp2; @gs2017; @gstz; @mq; @wang]. Many bounds and exact results have been proved for several pairs of graphs. In this paper, we examine the case when both $H$ and $F$ have at most four vertices. We collect the known results and prove new results where needed. We feel it is important to put these results in the proper context, thus we state both the existing and the new results in the most general form. We even refer to strong general results when the specific small case we need is trivial. We collect the results concerning small graphs in Table \[table\] below. We start with some notation and definitions. We denote by ${{\mathcal N}}(H,G)$ the number of copies of $H$ in $G$. The generalized Turán function is ${\ensuremath{\mathrm{ex}}}(n,H,F):=\max \{{{\mathcal N}}(H,G): G \text{ is an $F$-free graph on $n$ vertices}\}$. For graphs we use the following notation. $K_n$ is the complete graph on $n$ vertices, $K_{a,b}$ is the complete bipartite graph with parts of size $a$ and $b$, $K_{a,b,c}$ is the complete 3-partite graph with parts of size $a$, $b$ and $c$. The Turán graph $T_r(n)$ is a complete $r$-partite graph on $n$ vertices with each part having size $\lfloor n/r\rfloor$ or $\lceil n/r\rceil$. $C_n$ denotes the cycle on $n$ vertices, $P_n$ denotes the path on $n$ vertices (with $n-1$ edges) and $S_n$ denotes the star on $n$ vertices. $M_\ell$ denotes the matching with $\ell$ edges (thus $2\ell$ vertices). We also introduce some less usual notation. $T_\ell$ is a graph on $\ell+3$ vertices with $\ell+3$ edges, which consists of a triangle and $\ell$ other vertices, connected to the same vertex of the triangle ($T_1$ is sometimes also called the paw graph). $D(k,n)$ is the graph consisting of $\lfloor n/k\rfloor$ copies of $K_k$ and a clique on the remaining vertices. $\overline{K_{s,t}}$ is $K_{s,t}$ with every pair of vertices inside the part of size $s$ connected by an edge. We denote by $G_{n,k,\ell}$ the graph whose vertex set is partitioned into 3 classes, $A$, $B$ and $C$ with $|A| = n-k+\ell$, $|B| = \ell$, $|C| = k - 2\ell$ such that vertices of $B$ have degree $n-1$, $A$ is an independent set, $C$ is a clique, and there is no edge between $A$ and $C$.
$F(n)$ denotes the friendship graph on $n$ vertices, which has a vertex of degree $n-1$ and a largest matching $M_{\lfloor (n-1)/2\rfloor}$ on the other vertices. Following [@gp2], if $F$ is a $k$-chromatic graph and $H$ does not contain $F$, then we say that $H$ is *$F$-Turán-good*, if ${\ensuremath{\mathrm{ex}}}(n,H,F)={{\mathcal N}}(H,T_{k-1}(n))$ for every $n$ large enough. We shorten $K_k$-Turán-good to *$k$-Turán-good*. Observe that if $F$ contains isolated vertices, then for $n\ge |V(H)|$, the same $n$-vertex graphs contain $F$ and the graph $F'$ we obtain by deleting the isolated vertices from $F$. Therefore, ${\ensuremath{\mathrm{ex}}}(n,H,F)={\ensuremath{\mathrm{ex}}}(n,H,F')$. If $H$ contains $k$ isolated vertices, let $H'$ be the graph we obtain by deleting the isolated vertices from $H$. Then each copy of $H'$ in an $n$-vertex graph $G$ extends to a copy of $H$ exactly $\binom{n-|V(H')|}{k}$ ways, thus it is enough to determine ${\ensuremath{\mathrm{ex}}}(n,H',F)$. Therefore, we can restrict ourselves to the case neither $F$ nor $H$ contains isolated vertices. With this restriction there are ten graphs on at most four vertices. We collect a summary of the results in a $10 \times 10$ table. Here we explain what is in the table. If the column is $F$ and the row is $H$, the entry summarizes what we know about ${\ensuremath{\mathrm{ex}}}(n,H,F)$. If the entry is 0, that means $H$ contains $F$, thus ${\ensuremath{\mathrm{ex}}}(n,H,F)=0$. Otherwise, the entry does not contain the value of ${\ensuremath{\mathrm{ex}}}(n,F)$, it contains a letter and the number of a theorem (or proposition, corollary or observation). The letter E means we know ${\ensuremath{\mathrm{ex}}}(n,H,F)$ *exactly*, provided $n$ is large enough. The letter A means we know the asymptotics, while the letter $B$ means we only have some bounds and we do not even know the order of magnitude. The numbers after the letter refer to a statement that contains the actual result regarding ${\ensuremath{\mathrm{ex}}}(n,H,F)$. Usually it is a more general result. \[table\] $K_2$ $P_3$ $K_3$ $M_2$ $S_4$ $P_4$ $C_4$ $T_1$ $B_2$ $K_4$ ------- ------- ---------------- -------------- ----------------- --------------- ------------------ ----------------- -------------- ---------------- ------------------ $K_2$ $0$ E, \[erga\] E, \[Tur\] E, \[erga2\] E, \[star\] E, \[erga\] A, \[furedi\] E, \[sim\] E, \[sim\] E, \[Tur\] $P_3$ $0$ 0 E, \[gpl2\] E, \[matching\] E, \[regu\] E, \[paths\] E, \[friends\] E, \[tri1\] E, \[p3cc\] E, \[gpl2\] $K_3$ $0$ 0 0 E, \[cons\] E, \[gls\] E, \[chch\] A, \[als\] E, \[trian\] B, \[als2\] E, \[zykov\] $M_2$ $0$ E, \[cseresz\] E, \[gmv\] 0 E, \[regfor\] E, \[dani\] A, \[matchbip\] E, \[matc\] E, \[matc\] E, \[matc\] $S_4$ $0$ 0 E, \[induc\] E, \[matching\] 0 E, \[starpaths\] E, \[korstar\] E, \[tegy\] E, \[s4b2\] E, \[stacli\] $P_4$ $0$ 0 E, \[gpl2\] 0 E, \[regu\] 0 A, \[coun\] E, \[tri1\] E, \[turgoo2\] E, \[p4k4\] $C_4$ 0 0 E, \[gpl2\] 0 E, \[sta\] 0 0 E, \[tri1\] E, \[turgoo2\] E, \[gpl3\] $T_1$ $0$ 0 0 0 0 0 E, \[tetel\] 0 B, \[rsze\] E, \[turgoodgp\] $B_2$ 0 0 0 0 0 0 0 0 0 E, \[gpl2\] $K_4$ 0 0 0 0 0 0 0 0 0 0 : Generalized Turán numbers of small graphs The exact results here are proved only for $n$ large enough. We are not interested in small values of $n$, and we do not mention in the table the cases where in fact we have exact results for all $n$. 
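Many of the small cases collected in Table \[table\] can be spot-checked by brute force for concrete $n$. The following short Python sketch is ours and purely illustrative (the function name and the edge-list encodings are not part of the paper): it counts copies of $H$ in $G$ by enumerating edge-preserving injections $V(H)\to V(G)$ and dividing by the number of automorphisms of $H$, which is exactly the quantity ${{\mathcal N}}(H,G)$ defined above.

```python
from itertools import permutations

def count_copies(h_edges, h_vertices, g_edges, g_vertices):
    """N(H,G): number of (not necessarily induced) copies of H in G, computed as
    (# edge-preserving injections V(H) -> V(G)) divided by |Aut(H)|."""
    g_set = {frozenset(e) for e in g_edges}
    h_set = {frozenset(e) for e in h_edges}

    def edge_preserving_injections(target_edges, target_vertices):
        # injections phi with phi(u)phi(v) an edge of the target for every edge uv of H
        return sum(
            1
            for phi in permutations(range(target_vertices), h_vertices)
            if all(frozenset((phi[u], phi[v])) in target_edges for u, v in h_edges)
        )

    aut_h = edge_preserving_injections(h_set, h_vertices)   # |Aut(H)|
    return edge_preserving_injections(g_set, g_vertices) // aut_h

# Example: H = P_3 on {0,1,2}; G = friendship graph F(7) with center 0
# and triangles {0,1,2}, {0,3,4}, {0,5,6}.
p3_edges = [(0, 1), (1, 2)]
f7_edges = [(0, i) for i in range(1, 7)] + [(1, 2), (3, 4), (5, 6)]
print(count_copies(p3_edges, 3, f7_edges, 7))  # 21 = binom(7,2)
```

For instance, the printed value $21=\binom{7}{2}$ agrees with Proposition \[friends\] below; such checks are of course no substitute for the proofs, but they are convenient for the small graphs considered here.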
In some cases, the result we state follows from a more general theorem, stated only for $n$ large enough, and it would not be hard to obtain the exact value of ${\ensuremath{\mathrm{ex}}}(n,H,F)$ for every $n$ in case of the particular small graphs we study here. Our main new contribution is applying the progressive induction method of Simonovits [@sim] for generalized Turán problems and using it to resolve a problem of Gerbner and Palmer [@gp2]. \[p3cc\] If $F$ is a 3-chromatic graph with a color-critical edge, then $P_3$ is $F$-Turán-good. The rest of this paper is organized as follows. In Section 2 we state the existing results we use. We state them in the most general form, but it is always immediate how they imply the bounds for our specific cases. In Section 3 we state and prove most of our new results. In Section 4 we introduce progressive induction, prove Theorem \[p3cc\] and another result. We finish the paper with some concluding remarks in Section 5. Earlier results =============== In this section we state earlier results that imply some of the bounds. As the first row of the table corresponds to counting edges, we start with some results concerning ordinary Turán problems. We shall begin with Turán’s theorem. \[Tur\] We have ${\ensuremath{\mathrm{ex}}}(n,K_k)=|E(T_{k-1}(n))|$, i.e. $K_2$ is $k$-Turán-good. We say that an edge $e$ of a graph $G$ is *color-critical* if deleting $e$ from $G$ decreases the chromatic number of the graph. Simonovits [@sim] showed that the Turán graph has the largest number of edges if we forbid any $k$-chromatic graph with a color-critical edge, provided $n$ is large enough. \[sim\] If $F$ has chromatic number $k$ and a critical edge, and $n$ is large enough, then ${\ensuremath{\mathrm{ex}}}(n,F)=|E(T_{k-1}(n))|$, i.e. $K_2$ is $F$-Turán-good. Moreover, $T_{k-1}(n)$ is the unique extremal graph. It is trivial to determine the Turán number of stars. We state it here so that we can refer to it. \[star\] ${\ensuremath{\mathrm{ex}}}(n,S_k)=\lfloor (k-2)n/2\rfloor$. Erdős and Gallai [@erga] studied ${\ensuremath{\mathrm{ex}}}(n,P_k)$, but obtained the exact value only for $n$ divisible by $k-1$. Faudree and Schelp [@fasc] improved their result and showed the following. \[erga\] For every $n$ and $k$ we have ${\ensuremath{\mathrm{ex}}}(n,P_k)=|E(D(k-1,n))|$. \[erga2\] If $n>2\ell$, then ${\ensuremath{\mathrm{ex}}}(n,M_\ell)=|E(\overline{K_{\ell-1,n-\ell+1}})|$.
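Written out with the notation introduced above (this explicit form is our addition and is immediate from the definition of $\overline{K_{s,t}}$), the extremal value in Theorem \[erga2\] is $$|E(\overline{K_{\ell-1,n-\ell+1}})|=\binom{\ell-1}{2}+(\ell-1)(n-\ell+1),$$ since the part of size $\ell-1$ spans a clique and is completely joined to the remaining $n-\ell+1$ vertices.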
Füredi [@fur] determined the asymptotics of ${\ensuremath{\mathrm{ex}}}(n,K_{2,t})$. For infinitely many values of $n$, the exact value of ${\ensuremath{\mathrm{ex}}}(n,C_4)$ was also found by Füredi [@fure]. \[furedi\] ${\ensuremath{\mathrm{ex}}}(n,K_{2,t})=(1+o(1))\frac{1}{2}\sqrt{t-1}n^{3/2}$. Let us continue with results concerning generalized Turán problems. The first such result is due to Zykov [@zykov]. \[zykov\] If $r<k$, then $K_r$ is $k$-Turán-good. It was generalized by Ma and Qiu [@mq] to graphs with a color-critical edge. \[MQ\] Let $F$ be a graph with a color-critical edge and chromatic number more than $r$. Then $K_r$ is $F$-Turán-good. Győri, Pach and Simonovits [@gypl] started the study of $k$-Turán-good graphs. \[gpl\] Let $k\ge 3$, and let $H$ be a $(k-1)$-partite graph with $m>k-1$ vertices, containing $\lfloor m/(k-1)\rfloor$ vertex disjoint copies of $K_{k-1}$. Suppose further that for any two vertices $u$ and $v$ in the same component of $H$, there is a sequence $A_1,\dots,A_s$ of $(k-1)$-cliques in $H$ such that $u\in A_1$, $v\in A_s$, and for any $i<s$, $A_i$ and $A_{i+1}$ share $k-2$ vertices. Then $H$ is $k$-Turán-good. \[gpl2\] Paths and even cycles are $3$-Turán-good and $T_{k-1}(m)$ is $k$-Turán-good. \[gpl2.5\] If $H$ is a complete multipartite graph, then ${\ensuremath{\mathrm{ex}}}(n,H,K_k)={{\mathcal N}}(H,G)$ for some complete $(k-1)$-partite graph $G$. \[gpl3\] $C_4$ and $K_{2,3}$ are $k$-Turán-good. A result similar to Theorem \[gpl\] was obtained by Gerbner and Palmer [@gp2]. \[turgoodgp\] Let $H$ be a $k$-Turán-good graph. Let $H'$ be any graph constructed from $H$ in the following way. Choose a complete subgraph of $H$ with vertex set $X$, add a vertex-disjoint copy of $K_{k-1}$ to $H$ and join the vertices in $X$ to the vertices of $K_{k-1}$ by edges arbitrarily. Then $H'$ is $k$-Turán-good. As a single vertex is a complete graph, this implies that $T_1$ is also 4-Turán-good. \[turgoo2\] $P_4$ and $C_4$ are $B_2$-Turán-good. Cambie, de Verclos and Kang [@cvk] studied the case of forbidden stars. \[regu\] Let $T$ be a tree on $k$ vertices and $n$ be large enough. If $nr$ is even, let $G$ be an arbitrary $(r-1)$-regular graph with diameter more than $k$. If $nr$ is odd, let $G$ be an arbitrary graph with diameter more than $k$ that has $n-1$ vertices of degree $r-1$ and one vertex of degree $r-2$. (Note that $G$ exists because $n$ is large enough.) Then ${\ensuremath{\mathrm{ex}}}(n,T,S_r)={{\mathcal N}}(T,G)$. Győri, Salia, Tompkins and Zamora [@gstz] studied the case of forbidden paths. \[paths\] We have ${\ensuremath{\mathrm{ex}}}(n,P_3,P_k)={{\mathcal N}}(P_3,G_{n,k-1, \lfloor (k-2)/2\rfloor})$. \[starpaths\] If $k\ge 4$, $r\ge 3$ and $n$ is large enough, then ${\ensuremath{\mathrm{ex}}}(n,S_r,P_k)={{\mathcal N}}(S_r,G_{n,k-1, \lfloor (k-2)/2\rfloor})$. We remark that in case $k=4$, $G_{n,k-1, \lfloor (k-2)/2\rfloor}=S_n$, thus ${\ensuremath{\mathrm{ex}}}(n,S_r,P_4)=\binom{n-1}{r-1}$. Instead of using the above theorem, one could easily deduce this from the fact that every component of a $P_4$-free graph is either a triangle or a star. \[chch\] For every $n$, $k$ and $r$ we have ${\ensuremath{\mathrm{ex}}}(n,K_k,P_r)={{\mathcal N}}(K_k,D(r-1,n))$. Wang [@wang] showed the following. \[cons\] We have $${\ensuremath{\mathrm{ex}}}(n,K_k,\ell K_2)=\max\left\{\binom{2\ell-1}{k},\binom{\ell-1}{k}+(n-\ell+1)\binom{\ell-1}{k-1}\right\}.$$ The following was shown by Chase [@chase], proving a conjecture of Gan, Loh and Sudakov [@gls]. \[gls\] If $k>2$, then ${\ensuremath{\mathrm{ex}}}(n,K_k,S_r)={{\mathcal N}}(K_k,D(r-1,n))$. Alon and Shikhelman [@ALS2016] obtained several results. Here we can use the following ones. \[als2\] $n^{2-o(1)}\le{\ensuremath{\mathrm{ex}}}(n,K_3,B_k)=o(n^2)$. \[als\] ${\ensuremath{\mathrm{ex}}}(n,K_3,K_{2,t})=(1+o(1))\frac{1}{6}(t-1)^{3/2}n^{3/2}$. Gerbner and Palmer [@GP2017] determined the asymptotic number of paths and cycles of any length in $K_{2,t}$-free graphs. \[coun\] ${\ensuremath{\mathrm{ex}}}(n, P_k, K_{2,t}) = (\frac{1}{2}+ o(1))(t - 1)^{(k-1)/2}n^{(k+1)/2}$. Gerbner, Methuku and Vizer [@GMV2017] studied generalized Turán problems when the forbidden graph is disconnected. They also obtained the following result for the case when the other graph is disconnected. \[gmv\] $M_\ell$ is 3-Turán-good. The *inducibility* of a graph $H$ is the largest number of induced copies of $H$ that an $n$-vertex graph can contain.
Brown and Sidorenko [@brosid] showed that for $H=S_4$, the most copies of $H$ are contained in either $K_{k,n-k}$ or $K_{k+1,n-k-1}$, where $k=\lfloor \frac{n}{2}-\sqrt{(3n-4)/2}\rfloor$. As in a triangle-free graph (or a $T_1$-free graph) every copy of a star is induced, this implies the same upper bound for ${\ensuremath{\mathrm{ex}}}(n,S_4,K_3)$. As the constructions are triangle-free, this implies the following (for more on the connection of inducibility and generalized Turán problems, see [@ggmv]). \[induc\] ${\ensuremath{\mathrm{ex}}}(n,S_4,K_3)=\max\{{{\mathcal N}}(S_4,K_{k,n-k}),{{\mathcal N}}(S_4,K_{k+1,n-k-1})\}$, where $k=\lfloor \frac{n}{2}-\sqrt{(3n-4)/2}\rfloor$. New results =========== In this section we present our new results. We often state them in a more general form than needed. \[friends\] $${\ensuremath{\mathrm{ex}}}(n,P_3, C_4)={{\mathcal N}}(P_3,F(n))= \left\{ \begin{array}{l l} \binom{n}{2} & \textrm{if\/ $n$ is odd},\\ \binom{n}{2}-1 & \textrm{if\/ $n$ is even}.\\ \end{array} \right.$$ The lower bounds are given by the friendship graph $F(n)$. Recall that it has a vertex $v$ of degree $n-1$, and a matching of $\lfloor (n-1)/2\rfloor$ edges on the remaining vertices. Then for any two vertices different from $v$, they are endpoints of a $P_3$ with $v$ in the middle. For $v$ and another vertex $u$, if $u$ is connected to $u'$ in the matching, then $uu'v$ is a $P_3$ with $u$ and $v$ as endpoints. Thus every pair of vertices, except $\{v,w\}$ forms the endpoints of a $P_3$, where $w$ is the vertex not in the matching in case $n$ is even. For the upper bound, let $G$ be a $C_4$-free graph. We count the copies of $P_3$ by their endpoints; obviously any two vertices have at most one common neighbor by the $C_4$-free property, thus ${{\mathcal N}}(P_3,G)\le\binom{n}{2}$. Let $n$ be even, $G$ be a $C_4$-free graph on $n$ vertices and assume indirectly that ${{\mathcal N}}(P_3,G)=\binom{n}{2}$, i.e. every pair of vertices has a common neighbor. Let $v$ be an arbitrary vertex and $U$ be its neighborhood. Observe that any vertex of $U$ has a common neighbor with $v$ only if there is a perfect matching in $U$, thus $|U|$ is even. Also, there cannot be any other edges inside $U$ because of the $C_4$-free property. Let $U'$ be the set of $n-|U|-1$ vertices not connected to and different from $v$, thus $|U'|$ is odd. Each vertex of $U'$ is connected to exactly one vertex in $U$; at least one because that is the common neighbor with $v$, and at most one because of the $C_4$-free property. Thus there is an odd number of edges between $U$ and $U'$. As each vertex of $U$ has two neighbors outside $U'$, it means the sum of the degrees of vertices in $U$ is odd. Thus there is a vertex of odd degree in $U$. But we have obtained that an arbitrary vertex of $G$ has to be of even degree, a contradiction. \[korstar\] If $r\ge 4$, then ${\ensuremath{\mathrm{ex}}}(n,S_r,C_4)=\binom{n-1}{r-1}$. The lower bound is given by the star $S_n$. For the upper bound, let $G$ be a $C_4$-free graph with maximum degree $\Delta$ and consider two of its vertices $u$ and $v$. Let us consider the copies of $S_r$ where $u$ and $v$ are leaves. They have at most one common neighbor, that has to be a center of the $S_r$, and then we have at most $\binom{\Delta-2}{r-3}$ ways to choose the other leaves. This way we count every copy of $S_r$ $\binom{r-1}{2}$ times, thus ${{\mathcal N}}(S_r,G)\le \frac{1}{\binom{r-1}{2}}\binom{n}{2}\binom{\Delta-2}{r-3}$. If $\Delta \le n-3$, this finishes the proof. 
If $\Delta=n-1$ and $w$ has degree $n-1$, then no other vertex can have degree more than 2, thus $w$ is the only center of copies of $S_r$ and ${{\mathcal N}}(S_r,G)=\binom{n-1}{r-1}$. If $\Delta=n-2$ and $w$ has degree $n-2$, let $x$ be the only vertex not adjacent to $w$. Then the degree of $x$ is at most one, as it has at most one common neighbor $y$ with $w$. Observe that the degree of $y$ is at most 3 and the degree of any other vertex is at most 2, thus ${{\mathcal N}}(S_r,G)\le \binom{n-2}{r-1}+1$ (where the $+1$ term appears only if $r=4$), finishing the proof. \[trian\] ${\ensuremath{\mathrm{ex}}}(n,K_3,T_1)={\ensuremath{\mathrm{ex}}}(n,K_3,P_4)={{\mathcal N}}(K_3,D(3,n))=\lfloor n/3\rfloor$. Obviously, in a $T_1$-free or $P_4$-free graph, the vertices of a triangle are not connected to any other vertex, thus the triangles are vertex disjoint. The following observations are simple consequences of the facts that an $M_2$-free graph is a star or a triangle and a $P_3$-free graph is a matching. \[matching\] If $k\ge 3$, then ${\ensuremath{\mathrm{ex}}}(n,S_k,M_2)=\binom{n-1}{k-1}$. For $k=2$, we have $${\ensuremath{\mathrm{ex}}}(n,S_2,M_2))= \left\{ \begin{array}{l l} n & \textrm{if\/ 3 divides $n$},\\ n-1 & \textrm{otherwise}.\\ \end{array} \right.$$ \[cseresz\] ${\ensuremath{\mathrm{ex}}}(n,M_k,P_3)=\binom{\lfloor n/2\rfloor}{k}$. \[dani\] ${\ensuremath{\mathrm{ex}}}(n,M_2,P_4)={{\mathcal N}}(M_2,D(3,n))$ if $n\neq 4$ and ${\ensuremath{\mathrm{ex}}}(4,M_2,P_4)=1$. We prove the statement by induction on $n$, it is trivial if $n\le 4$. Consider $n\ge 5$. Observe that every connected component of a $P_4$-free graph is either a triangle or a star. Let $G$ be a $P_4$-free graph with the maximum number of copies of $M_2$. Let $G'$ be the graph obtained by removing a star component $S_r$ from $G$ (we are done if there is no such component). Then ${{\mathcal N}}(M_2,G)={{\mathcal N}}(M_2,G')+(r-1)|E(G')|$. Assume first $G'=D(3,n-r)$. If $r\ge 3$, then we can remove three vertices from $S_r$ and place a triangle on those vertices. It is easy to see that the number of copies of $M_2$ increases this way, a contradiction. If $r=1$ or $r=2$, we are done if $n-r$ is divisible by 3 (as in that case the union of $D(3,n-r)$ and $S_r$ is $D(3,n)$). Otherwise, we have an $S_1$ or $S_2$ component in $G'$. We unite the two star components. If they were two isolated vertices, then we add an edge connecting them, if they were an isolated vertex and an edge, we place a triangle there. In these cases the number of copies of $M_2$ clearly increases. If they were two edges, we delete them and place a triangle on three of these vertices. In this case we removed a copy of $M_2$, but increased the number of edges. As there is at least one triangle component in $G$, this increases the number of copies of $M_2$ by at least three, thus the total number of copies of $M_2$ increases, a contradiction. Assume now $G'\neq D(3,n-r)$. Note that we can assume $n-r=4$ and $G'=M_2$. Indeed, otherwise both the number of copies of $M_2$ and the number of edges are maximized by $D(3,n-r)$ (using Theorem \[erga\] and induction). If $r=n-4\ge 3$, just as in the other case above, we can remove three vertices from $S_r$ and place a triangle on those vertices to increase the number of copies of $M_2$, a contradiction. If $r=1$, $G$ consists of two edges and an isolated vertex, but an edge and a triangle contains more copies of $M_2$, a contradiction. 
If $r=2$, then $G=M_3$, and $2K_3$ contains more copies of $M_2$, a contradiction finishing the proof. Using the well-known fact that ${\ensuremath{\mathrm{ex}}}(n,F)=O(n)$ only if $F$ is a forest, we can prove an asymptotic result for ${\ensuremath{\mathrm{ex}}}(n,M_k,F)$ in case $F$ contains a cycle and we know ${\ensuremath{\mathrm{ex}}}(n,F)$ asymptotically. \[matchbip\] If $F$ is not a forest, then ${\ensuremath{\mathrm{ex}}}(n,M_k,F)=(1+o(1)){\ensuremath{\mathrm{ex}}}(n,F)^k/k!$. Consider an $F$-free graph. We can pick each of the $k$ edges ${\ensuremath{\mathrm{ex}}}(n,F)$ ways, and we count each copy of $M_k$ exactly $k!$ times. Let us consider now an $F$-free graph $G$ with ${\ensuremath{\mathrm{ex}}}(n,F)$ edges. We claim that it contains $(1+o(1)){\ensuremath{\mathrm{ex}}}(n,F)^k/k!$ copies of $M_k$. We prove it by induction on $k$. The base case $k=1$ is immediate. Assume that the statement holds for $k-1$ and prove it for $k$. Consider an arbitrary copy of $M_{k-1}$. Then it can be extended to an $M_k$ by any edge not incident to its $2k-2$ vertices. Thus we can choose any of at least ${\ensuremath{\mathrm{ex}}}(n,F)-(2k-2)n=(1+o(1)){\ensuremath{\mathrm{ex}}}(n,F)$ edges. This way we obtain ${\ensuremath{\mathrm{ex}}}(n,F)^k/(k-1)!$, but count each copy of $M_k$ exactly $k$ times. \[matc\] $M_\ell$ is $F$-Turán-good for every $F$ with a color-critical edge. We use induction on $\ell$, the base case $\ell=1$ is Theorem \[sim\]. Recall that by Observation \[matchbip\] we have ${\ensuremath{\mathrm{ex}}}(n,M_\ell,F)=\Theta(n^{2l})$. Let $n$ be large enough, $G$ be an $F$-free graph on $n$ vertices with the largest number of copies of $M_\ell$, and let $\chi(F)=k+1$. [**Case 1.**]{} $G$ has chromatic number more than $k$. We will show that $|E(T_k(n)|{{\mathcal N}}(M_{\ell-1},T_k(n-2))-{{\mathcal N}}(M_\ell,G)=\Omega(n^{2\ell-1})$ and $|E(T_k(n)|{{\mathcal N}}(M_{\ell-1},T_k(n-2))-{{\mathcal N}}(M_\ell,T_k(n))=O(n^{2\ell-2})$, which implies that $T_k(n)$ contains more copies of $M_\ell$ than $G$, a contradiction. A theorem of Erdős and Simonovits [@valenc] states that if $F$ is $(k+1)$-chromatic and has a color-critical edge, then there is a vertex $v$ of degree at most $(1-\frac{1}{k-4/3})n$ in every $n$-vertex $F$-free graph with chromatic number more than $k$. We claim that $|E(T_k(n)|-|E(G)|=\Omega(n)$. Indeed, by deleting $v$ we obtain a graph with at most $|E(T_k(n-1))|$ edges, and we can delete a vertex from $T_k(n)$ to obtain $T_k(n-1)$. As we delete $\Omega(n)$ more edges in the second case, we are done with the claim. We count the copies of $M_\ell$ by picking an edge and then picking $M_{\ell-1}$ independently from it. In $G$, this can be done at most $(|E(T_k(n)|-\Omega(n)){{\mathcal N}}(M_{\ell-1},T_k(n-2))$ ways. Compared to $|E(T_k(n)|{{\mathcal N}}(M_{\ell-1},T_k(n-2))$, this is smaller by $\Theta(n^{2\ell-1})$. We claim that $|E(T_k(n)|{{\mathcal N}}(M_{\ell-1},T_k(n-2))-|{{\mathcal N}}(M_\ell,T_k(n))|=O(n^{2\ell-2})$, which finishes the proof. In fact we show the stronger statement $|E(T_k(n)||E(T_k(n-2)|\dots|E(T_k(n-2\ell+2)|-|{{\mathcal N}}(M_\ell,T_k(n))|=O(n^{2\ell-2})$. Indeed, we can pick the first edge $|E(T_k(n)|$ ways. Then we pick the remaining edges one by one. To pick the $i$th edge, we have to pick an edge from the graph $G_i$ we obtain by deleting the endpoints of the edges picked earlier. 
$G_i$ is a complete $k$-partite graph on $n-2i+2$ vertices with parts of size at most $\lceil n/k\rceil$ and at least $\lfloor n/k\rfloor-i+1$, as we removed at most $i-1$ vertices from each part. Therefore, we could obtain $T_k(n-2i+2)$ from $G_i$ by moving a constant $c_i$ number of vertices from some parts to other parts. It is easy to see that each such move decreases the number of edges by a constant, therefore we have $|E(G_i)|=|E(T_k(n-2i+2)|-c'_i$ for some constant $c_i'$. Hence $|{{\mathcal N}}(M_\ell,T_k(n))|=|E(T_k(n)|(|E(T_k(n-2)|-c'_1)\dots(|E(T_k(n-2\ell+2)|-c_{\ell-1}')$. Each term we subtract from $|E(T_k(n)||E(T_k(n-2)|\dots|E(T_k(n-2\ell+2)|$ has a constant $c_i'$ and at most $\ell-1$ terms that are quadratic, thus the difference is $O(n^{2\ell-2})$. [**Case 2**]{}. $G$ has chromatic number at most $k$. Then we can assume that $G$ is a complete $k$-partite graph, as adding edges do not decrease the number of copies of $M_\ell$ and this way we cannot violate the $F$-free property. We show that making the graph more balanced does not decrease (in fact it increases) the number of copies of $M_\ell$. More precisely, assume that part $A$ has size $a-1$ and part $B$ has size at least $a+1$, and let $G'$ be $G$ restricted to the other parts. Let us move a vertex $v$ from $B$ to $A$. This means we delete the edges from $v$ to the $a-1$ vertices $u_1,\dots,u_{a-1}$ of $A$, and add edges from $v$ to the other (at least) $a$ vertices $w_1,\dots,w_a$ of $B$. We claim that the resulting graph has more copies of $M_\ell$. We show this by induction on $\ell$, the base case $\ell=1$ is well-known and trivial. When deleting the edge $vu_i$, we deleted the copies of $M_\ell$ that contained this edge and an $M_{\ell-1}$ on the other vertices. The graph $G_i$ on those other vertices consists of $G'$ and a part of size $a-2$ and a part of size $b\ge a$. Altogether we removed $\sum_{i=1}^{a-1} {{\mathcal N}}(M_{\ell-1},G_i)$ copies of $M_\ell$. When adding the edge $vw_i$, we added copies of $M_\ell$ that contained this edge and an $M_{\ell-1}$ on the other vertices. The graph $G'_i$ on those other vertices consists of $G'$, a part of size $a-1$, and a part of size $b-1\ge a-1$. Altogether we added at least $\sum_{i=1}^{a} {{\mathcal N}}(M_{\ell-1},G'_i)$ copies of $M_\ell$. By induction, $G'_i$ has more copies of $M_{\ell-1}$ than $G_i$, finishing the proof (as it shows that we added more copies of $M_\ell$, than what was deleted, even without using the edge $vw_a$). \[sta\] If $H$ is $(k-2)$-regular, then ${\ensuremath{\mathrm{ex}}}(n,H,S_k)=\lfloor n/|V(H)|\rfloor$. Let $G$ be an $S_k$-free graph. Obviously for any copy of $H$ in $G$, there are no further edges incident to its vertices, thus copies of $H$ are vertex-disjoint. On the other hand, one can take $\lfloor n/|V(H)|\rfloor$ vertex disjoint copies of $H$, and the resulting graph is $S_k$-free. \[rsze\] $n^{\ell+2-o(1)}\le {\ensuremath{\mathrm{ex}}}(n,T_\ell,B_k)=o(n^{\ell+2})$. The upper bound easily follows from Proposition \[als2\]: there are $o(n^2)$ triangles in a $G$-free graph, and $O(n^\ell)$ ways to choose the $\ell$ additional leaves. For the lower bound, we use the same construction that gives the lower bound in Proposition \[als2\]. It is a construction by Ruzsa and Szemerédi [@rsz], a graph $G$ with $n^{2-o(1)}$ edges where every edge is contained in exactly one triangle. Observe that a vertex with degree $d$ is contained in exactly $d/2$ triangles. We have that $G$ contains $n^{2-o(1)}$ triangles. 
Observe that the number of copies of $T_\ell$ in $G$ is $\sum_{v\in V(G)} \frac{d(v)}{2}\binom{d(v)-2}{\ell}$. Indeed, we pick a vertex $v$, pick a neighbor of $v$ $d_i$ ways, that determines a triangle. We count every triangle containing $v$ twice. Then we pick $l$ other neighbors of $v$ to be added as leaves. By the power mean inequality, we have $$n^{2-o(1)}\le \sum_{v\in V(G)} d(v) \le n\left(\frac{\sum_{v\in V(G)} d(v)^{\ell+1}}{n}\right)^{1/{\ell+1}},$$ which implies $\sum_{v\in V(G)} d(v)^{\ell+1}\ge n^{\ell+2-o(1)}$ and finishes the proof. \[tetel\] If $n$ is large enough, then $${\ensuremath{\mathrm{ex}}}(n,T_1, C_4)={{\mathcal N}}(T_1,F(n))= \left\{ \begin{array}{l l} \binom{n}{2}-\frac{3(n-1)}{2} & \textrm{if\/ $n$ is odd},\\ \binom{n}{2}-2n-3 & \textrm{if\/ $n$ is even}.\\ \end{array} \right.$$ Assume indirectly that there exists an $n$-vertex $C_4$-free graph $G$ with more than ${{\mathcal N}}(T_1,F(n))$ copies of $T_1$. We will count the copies of $T_1$ the following way. Consider an unordered pair $\{u,v\}$ of vertices. We count the copies of $T_1$ where one of $u$ and $v$ corresponds to the vertex of degree 1 in $T_1$, and the other corresponds to a vertex of degree two in $T_1$. In $G$, $u$ and $v$ have at most one common neighbor $w$, that has to correspond to the vertex of degree three in $T_1$. Then the last vertex of the $T_1$ is a common neighbor of either $u$ and $w$ or $v$ and $w$. Thus there are at most two copies of $T_1$ obtained this way, and we count every $T_1$ twice this way. We say that these copies of $T_1$ *belong* to the pair $\{u,v\}$, thus every copy of $T_1$ belongs to at most two pairs of vertices. Note that this argument immediately gives the upper bound ${\ensuremath{\mathrm{ex}}}(n,T_1, C_4)\le\binom{n}{2}$. There is a vertex of $G$ with degree at least $n-8$. Let us consider an auxiliary graph $H$ on the same vertex set $V(G)$, where $u$ and $v$ are connected in $H$ if no $T_1$ belongs to them in $G$. Obviously, $H$ has less than $2n-3$ edges by our indirect assumption, thus there is a vertex $x$ with degree at most 3 in $H$. Let $\{x_1,x_2,x_3\}$ contain all the neighbors of $x$ in $H$. Observe that $G$ is a subgraph of $H$. Indeed, if $uv\in E(G)$, and $w$ is their common neighbor, they form a triangle, and $uw$ and $vw$ both have a common neighbor in the triangle. Thus neither the pair $(u,w)$, nor the pair $(v,w)$ has another common neighbor, that could correspond to the fourth vertex of $T_1$. Thus no copy of $T_1$ belongs to $\{u,v\}$, hence $uv\in E(H)$. This implies that $x$ has degree at most 3 in $G$. Assume first that $x$ is connected to $x_1,x_2,x_3$ in $G$. Then for every other vertex $y$, there is a $P_3$ in $G$ from $x$ to $y$, because they are not connected to $x$ in $H$. Therefore, $y$ is connected to $x_1$, $x_2$ or $x_3$ in $G$, but only one of them, as they have another common neighbor $x$. Let $X_i$ be the set of neighbors of $x_i$ in $G$, that are different from $x,x_1,x_2,x_3$. A vertex in $X_i$ can be connected in $G$ to at most one vertex of $X_1,X_2,X_3$, thus has degree at most 4 in $G$. A vertex in $X_1$ is connected in $G$ by a $P_3$ to every vertex in $X_1$, but in $X_2$ to at most three vertices. Indeed, its only neighbors in $X_1,X_2,X_3$ are each connected to at most one vertex in $X_2$. Therefore in the auxiliary graph $H$ at least $|X_1|(|X_2|-3+|X_3|-3)$ edges go from $X_1$ to $X_2\cup X_3$. 
By the same reasoning for $X_2$ and $X_3$, we obtain that $$\begin{aligned} |E(H)|\ge \frac{|X_1|(|X_2|+|X_3|-6)+|X_2|(|X_1|+|X_3|-6)+|X_3|(|X_2|+|X_1|-6)}{2}=\\ |X_1||X_2|+|X_1||X_3|+|X_2||X_3|-3(|X_1|+|X_2|+|X_3|)=|X_1||X_2|+|X_1||X_3|+|X_2||X_3|-3n+12.\end{aligned}$$ In particular, this is greater than $2n-3$ (which is a contradiction) unless the sum of the two smallest set, say $|X_2|+|X_3|$ is at most $5$ (if $n$ is large enough), which implies that $x_1$ has degree at least $n-8$. If the degree of $x$ is 2 in $G$, let without loss of generality $x_1$ and $x_2$ be its neighbors, and similarly to the previous case let $X_i$ be the set of neighbors of $x_i$ in $G$ that are different from $x,x_1,x_2$. Then all but at most one of the other vertices ($x_3$) is in $X_1\cup X_2$, as they are connected to $x$ by a $P_3$ in $G$. A vertex in $X_i$ is connected by a $P_3$ in $G$ to every vertex in $X_1$, but at most three vertices in $X_2$ (through its neighbors in $X_1$ and $X_2$, and $z$). Therefore, we have $$2n-3\ge |E(H)|\ge \frac{|X_1|(|X_2-3)+|X_2|(|X_1|-3)}{2}=|X_1||X_2|-3(n-3)/2,$$ which implies that either $|X_1|$ or $|X_2|$ is at most 3, hence either $x_1$ or $x_2$ has degree at least $n-6$. Finally, if $x$ has degree 1 in $G$, its neighbor is connected in $G$ to all but two of the other vertices, thus has degree at least $n-3$. Let $u$ have degree at least $n-8$ in $G$. Let $U$ be the set of at most 7 vertices not connected to $u$ and different from $u$. We claim that vertices in $U$ are in at most $7+15\binom{7}{3}=532$ copies of $T_1$. Indeed, each of those vertices is connected to $V(G)\setminus U$ by at most one edge, thus the triangle in $T_1$ is totally inside or totally outside $U$. Let us consider first the triangles totally outside $U$. Every neighbor $v$ of $u$ is in at most one such triangle (that consists of $v$, $u$ and their at most one common neighbor). At most 7 edges go from $U$ to the neighborhood of $u$, and there is only one way any one of those edges can extend a triangle outside $U$ to a copy of $T_1$. Thus there are at most 7 copies of $T_1$ where the triangle is totally outside $U$. There are at most $\binom{7}{3}$ triangles inside $U$ (obviously there are even fewer, because of the $C_4$-free property). They each have three endpoints, and those points have degree at most 7, thus there are at most 5 ways to extend the triangle to a copy of $T_1$ from that endpoint. Let us now delete the vertices of $U$ from $G$ to obtain $G'$. On the $n'=n-|U|$ vertices of $G'$, we have a vertex $u$ of degree $n'-1$ in $G'$. Obviously, there can only be a matching on the other vertices of $G'$, thus $G'$ is a subgraph of $F_{n'}$ and ${{\mathcal N}}(T_1,G')\le {{\mathcal N}}(T_1,F_{n'})$. Therefore, ${{\mathcal N}}(T_1,G)\le {{\mathcal N}}(T_1,F_{n'})+532<{{\mathcal N}}(T_1,F(n))$, a contradiction. For the last inequality, observe that if we add $|U|$ vertices as neighbors of $u$, then each newly added vertex is in $\Omega(n)$ copies of $T_1$. \[stacli\] $S_4$ is 4-Turán-good. Let $G$ be the $n$-vertex $K_4$-free graph with the most number of copies of $S_4$. By Proposition \[gpl2.5\], we can assume $G=K_{a,b,c}$, we just have to optimize $a,b,c$. The number of $S_4$’s is $a\binom{b+c}{3}+b\binom{a+c}{3}+c\binom{a+b}{3}$. Let us consider a fixed $a$, and choose $b$. The first term is a constant, the other terms are $b\binom{n-b}{3}+(n-a-b)\binom{a+b}{3}$. This is maximized at $b=(n-a)/2$, thus we have that $b$ and $c$ differ by at most one. 
Similarly $a$ differs from them by at most one, finishing the proof. \[regfor\] If $n\ge 3$, then ${\ensuremath{\mathrm{ex}}}(n,M_2,S_4)=n(n-3)/2$. The lower bound is given by any 2-regular graph, as we can pick an edge, and it has $n-3$ edges independent from it. We count every copy of $M_2$ twice this way. For the upper bound, observe that an $S_4$-free graph $G$ has at most $n$ edges, and if it has $n$ edges, then it is 2-regular. If $G$ has at most $n-1$ edges, then we can pick an edge at most $n-1$ ways, and another edge at most $n-2$ ways. This gives the upper bound $(n-1)(n-2)/2$, which is one larger than what we claimed. Thus we obtain the desired bound unless above we have equality everywhere, in particular $G$ has $n-1$ edges, and each is independent from all the $n-2$ other edges. But then $G=M_{n-1}$, thus has more than $n$ vertices, a contradiction. \[tegy\] Let $F$ be obtained from $K_r$ by adding a new vertex and connecting it to one of the vertices of the $K_r$. Let $H\neq K_r$ be a connected graph and $n$ be large enough. Then ${\ensuremath{\mathrm{ex}}}(n,H,F)={\ensuremath{\mathrm{ex}}}(n,H,K_r)$. On the other hand, we have ${\ensuremath{\mathrm{ex}}}(n,K_r,F)={{\mathcal N}}(K_r,D(r,n))=\lfloor n/r\rfloor$. Note first that ${\ensuremath{\mathrm{ex}}}(n,H,F)\ge {\ensuremath{\mathrm{ex}}}(n,H,K_r)$, as $K_r$ is a subgraph of $F$. Let $G$ be an $F$-free graph on $n$ vertices. If there is a $K_r$ in $G$, no other vertex is connected to its vertices. This shows the statement about ${\ensuremath{\mathrm{ex}}}(n,K_r,F)$. Assume first that $H$ has more than $r$ vertices. If there is a $K_r$ in $G$, then its edges cannot be in any copy of $H$. Thus, we can delete all the edges of every $K_r$ from $G$ to obtain a $K_r$-free graph $G'$ with ${{\mathcal N}}(H,G')={{\mathcal N}}(H,G)$. As ${{\mathcal N}}(H,G')\le {\ensuremath{\mathrm{ex}}}(n,H,K_r)$, this finishes the proof. Assume now $H\neq K_r$ has $p\le r$ vertices, then it has chromatic number at most $r-1$. Therefore, ${{\mathcal N}}(H,T_{r-1}(n))=\Omega(n^p)$, hence ${\ensuremath{\mathrm{ex}}}(n,H,F)=\Omega(n^p)$. If $p=1$, then the statement is trivial, hence we assume $p>1$ from now on. Let $G$ be an $F$-free graph on $n$ vertices and assume again that there is a $K_r$ in $G$. Again, no other vertex is connected to its vertices. Let $n$ be large enough in this case. Let $G'$ be the graph we obtain by deleting a copy of $K_r$. We can assume ${{\mathcal N}}(H,G')={\ensuremath{\mathrm{ex}}}(n-r,H,F)$, otherwise we could replace $G'$ with an extremal graph to obtain more than ${{\mathcal N}}(H,G)$ copies of $H$ on $n$ vertices . We have ${{\mathcal N}}(H,G)={{\mathcal N}}(H,G')+c$ for a constant $c={{\mathcal N}}(H,K_r)$. As ${\ensuremath{\mathrm{ex}}}(n,H,F)$ is super-linear and $n-r$ is large enough, there is a vertex $v$ of $G'$ appearing in more than $c$ copies of $H$. Then $v$ is not in any copy of $K_r$ (as in that case its component would be a $K_r$ with only $c$ copies of $H$). Let us add $r$ twins of $v$ to $G'$, i.e. $r$ new vertices connected to exactly the same vertices as $v$. We claim that the resulting graph $G_0$ is $F$-free. Indeed, assume there is an $F$ in $G_0$, and consider the $K_r$ in it, which we denote by $K$. If $K$ does not contain any new vertices, then the additional leaf is a new vertex, but it could be replaced by $v$ to find a copy of $F$ in $G$, a contradiction (recall that $v$ cannot be in $K$). 
If $K$ contains a new vertex $v'$, then it contains only one new vertex and does not contain $v$, as the new vertices with $v$ form an independent set. But then we could replace $v'$ with $v$ in $K$, to obtain a $K_r$ containing $v$ in $G'$, a contradiction. Observe that every new vertex $u$ is in more than $c$ copies of $H$ that contains only vertices from $V(G')\setminus\{v\}$ besides $u$. Therefore, ${{\mathcal N}}(H,G_0)\ge cr+{{\mathcal N}}(H,G')>{{\mathcal N}}(H,G)$, a contradiction. Using that $P_3$, $P_4$ and $C_4$ are 3-Turán-good by Corollary \[gpl2\], we have the following. \[tri1\] $P_3$, $P_4$ and $C_4$ are $T_1$-Turán-good. \[p4k4\] $P_4$ is $4$-Turán-good. Let $G$ be a $K_4$-free graph on $n$ vertices. We count the copies of $P_4$ by picking the first and last edge, which are two independent edges. There are at most ${\ensuremath{\mathrm{ex}}}(n,M_2,K_4)$ ways to do this, which is ${{\mathcal N}}(M_2,T_3(n))$ by Theorem \[matc\]. After picking these two edges, there are five possibilities for the subgraph of $G$ induced on the four vertices of the two edges picked. Either there is a $B_2$ on the four vertices, or a $C_4$, or a $T_1$, or a $P_4$, or an $M_2$. A $B_2$ contains 6 copies of $P_4$ and this way it is counted twice. A $C_4$ contains 4 copies, and is counted twice. A $T_1$ contains 2 copies and is counted once. A $P_4$ contains one copy and is counted once, while an $M_2$ contains no copy and is counted once. Let ${{\mathcal N}}^*(H,F)$ denote the number of induced copies of $H$ in $F$, and let $a={{\mathcal N}}^*(B_2,G)$, $b={{\mathcal N}}^*(C_4,G)$, $c={{\mathcal N}}^*(T_1,G)$, $d={{\mathcal N}}^*(P_4,G)$ and $e={{\mathcal N}}^*(M_2,G)$. Then by the above argument we have ${{\mathcal N}}(M_2,G)=2a+2b+c+d+e$, and ${{\mathcal N}}(P_4,G)=6a+4b+2c+d$, which implies ${{\mathcal N}}(P_4,G)\le 2{{\mathcal N}}(M_2,G)+2a$. Similar equations hold for $T_3(n)$, but no $T_1$, $P_4$ or $M_2$ are induced there, so we have ${{\mathcal N}}(M_2,T_3(n))=2{{\mathcal N}}^*(B_2,T_3(n))+2{{\mathcal N}}^*(C_4,T_3(n))$ and ${{\mathcal N}}(P_4,T_3(n))=6{{\mathcal N}}^*(B_2,T_3(n))+4{{\mathcal N}}^*(C_4,T_3(n))$ Observe that every $B_2$ is induced in a $K_4$-free graph, thus $a={{\mathcal N}}^*(B_2,G)={{\mathcal N}}(B_2,G)\le {\ensuremath{\mathrm{ex}}}(n,B_2,K_4)={{\mathcal N}}(B_2,T_3(n))$, where the last equality follows from Corollary \[gpl2\]. We have ${{\mathcal N}}(P_4,G)\le 2{{\mathcal N}}(M_2,G)+2a=2{{\mathcal N}}(M_2,G)+2{{\mathcal N}}(B_2,G)\le 2{{\mathcal N}}(M_2,T_3(n))+2{{\mathcal N}}(B_2,T_3(n))=6{{\mathcal N}}^*(B_2,T_3(n))+4{{\mathcal N}}^*(C_4,T_3(n))={{\mathcal N}}(P_4,T_3(n))$. Progressive induction ===================== The progressive induction was introduced by Simonovits [@sim]. It is a method to prove statements that hold only for $n$ large enough. In case of ordinary induction, one usually proves the base case easily, as it is on a very small graph, and the induction step is more complicated. However, in case the statement only holds for large $n$, even if the induction step can be proved, the base case might be more complicated. This is where progressive induction can be used. Let us describe it informally first. Assume we want to prove that an integer valued quantity $\alpha(G)$ on $n$-vertex graphs takes its maximum on a graph $G_n$ (or on a family of graphs). Ordinary induction assumes that this statement holds for some $n'$, and for larger $n$ it proves that $\alpha$ increases by at most $\alpha(G_n)-\alpha(G_{n'})$. 
Progressive induction does not have the assumption. In this case one has to prove that $\alpha$ increases by strictly less than $\alpha(G_n)-\alpha(G_{n'})$ (unless the $n$-vertex graph is $G_n$). This means that for small values of $n$, $\alpha(G)$ may be larger on an $n$-vertex graph than $\alpha(G_n)$, but this surplus starts decreasing after a while, and eventually vanishes. Now we state the key lemma more formally. The actual method works for more than just graphs, but for simplicity, we state the lemma only for graphs. \[progi\] Let ${{\mathcal A}}\supset {{\mathcal B}}$ be families of graphs. Let $f$ be a function on graphs in ${{\mathcal A}}$ such that $f(G)$ is a non-negative integer, and if $G$ is in ${{\mathcal B}}$, then $f(G)=0$. Assume there is an $n_0$ such that if $n>n_0$ and $G\in{{\mathcal A}}$ has $n$ vertices, then either $G\in{{\mathcal B}}$, or there exist an $n'$ and a $G'\in{{\mathcal A}}$ such that $n/2<n'<n$, $G'$ has $n'$ vertices and $f(G)<f(G')$. Then there exists $n_1$ such that every graph in ${{\mathcal A}}$ on more than $n_1$ vertices is in ${{\mathcal B}}$. We remark that typically here we want to maximize $\alpha$ on $F$-free graphs, and we conjecture that the extremal graphs belong to a family ${{\mathcal B}}_0$. Then ${{\mathcal A}}$ is the family of $F$-free graphs that maximize $\alpha$, ${{\mathcal B}}={{\mathcal A}}\cap {{\mathcal B}}_0$, and $f(G)=\alpha(G)-\alpha(H)$, where $H$ maximizes $\alpha$ in ${{\mathcal B}}_0$. We also use a simple result of Alon and Shikhelman [@ALS2016] and the removal lemma. \[as\] We have ${\ensuremath{\mathrm{ex}}}(n,H,F)=\Omega(n^{|V(H)|})$ if and only if $F$ is not a subgraph of a blow-up of $H$. If a graph $G$ contains $o(n^{|V(H)|})$ copies of $H$, then there are $o(n^2)$ edges of $G$, such that deleting them makes the resulting graph $H$-free. We also use a simple extension of Proposition \[gpl2.5\]. Recall that it states that for a $K_k$-free graph $G$ on $n$ vertices and a complete multipartite graph $H$, there is a complete $(k-1)$-partite $G'$ on $n$ vertices with ${{\mathcal N}}(H,G)\le {{\mathcal N}}(H,G')$. \[propi\] Let $G$ be a $K_k$-free graph on $n$ vertices, with an independent set $A$ of size $a$, and $H$ be a complete multipartite graph. Then there is a complete $(k-1)$-partite $G'$ on $n$ vertices with ${{\mathcal N}}(H,G)\le {{\mathcal N}}(H,G')$ such that one of the parts of $G'$ has size at least $a$. The proof goes similarly the proof of Proposition \[gpl2.5\] in [@gypl]. We apply the symmetrization process due to Zykov [@zykov]. Given two non-adjacent vertices $u$ and $v$ in $G$, we say that we symmetrize $u$ to $v$ if we delete all the edges incident to $u$, and then connect $u$ to the neighbors of $v$. It is well-known that the resulting graph is also $K_k$-free [@zykov], and either symmetrizing $u$ to $v$, or symmetrizing $v$ to $u$ does not decrease the number of copies of $H$ [@gypl], thus we can go through the pairs of non-adjacent vertices and symmetrize one to the other. It is also clear that if symmetrizing does not change anything, then non-adjacent vertices have the same neighborhood, thus $G$ is complete multipartite. To prove Proposition \[gpl2.5\], one only has to show that we arrive to such a situation after some symmetrizing, i.e. show that the process terminates after finitely many steps. This is done in [@gypl] by showing that either the number of copies of $H$, or the number of pairs with the exact same neighborhood increases. 
We will show that by choosing carefully the pairs to symmetrize, we can make sure $A$ is always independent, which will finish the proof. Let us apply the symmetrization first on pairs with both vertices in $A$. This way after finitely many steps we arrive to a graph $G_1$ where all the vertices in $A$ have the same neighborhood $B$. Then we apply symmetrization anywhere, with the additional condition, that we always symmetrize inside $A$, whenever two vertices of $A$ have different neighborhood. Indeed, it is possible that we symmetrize $u\in A$ to $v\in V\setminus A$, and this way after this step $u$ has a neighborhood that is different from the neighborhood of the other vertices in $A$. However, in this case $v$ is not connected to $u$, thus it is not connected to any vertex of $A$. This way we never add any edge inside $A$. \[flenbtwo\] Let $\gamma<1$, $F$ be a 3-chromatic graph with a critical edge, $G$ be an $F$-free graph on $n$ vertices, with an independent set $A$ of size $a<\gamma n$, and $H$ be a complete bipartite graph. Then there is a complete bipartite graph $G'$ on $n$ vertices with ${{\mathcal N}}(H,G)\le (1-o(1)){{\mathcal N}}(H,G')$ such that one of the parts of $G'$ has size at least $a$. $G$ contains $o(n^{3})$ triangles by Proposition \[as\], thus we can delete $o(n^2)$ edges to delete all the triangles in $G$ by the removal lemma. This way we removed $o(n^{|V(H)|})$ copies of $H$. Let $G_0$ be the resulting graph. Now we can apply Proposition \[propi\] to find a complete bipartite graph $G_1$ with at least ${{\mathcal N}}(H,G_0)$ copies of $H$, and a part of size at least $a$. Let $G'$ be either $G_1$, or $K_{a,n-a}$, the one with more copies of $H$. Then $ {{\mathcal N}}(H,G')=\Omega(n^{|V(H)|}$. Therefore, we have ${{\mathcal N}}(H,G)\le {{\mathcal N}}(H,G_0)+o(n^{|V(H)|})\le (1-o(1)){{\mathcal N}}(H,G')$. \[ccb2\] Let $H$ be a bipartite graph and $a_n<n/2$ be integers such that for every $n$ we have $a_n-a_{n-1}\le 1$. Let $G_n=K_{a_n,n-a_n}$ and assume that for every $t$ there is $n_t$ such that for $n>n_t$, ${\ensuremath{\mathrm{ex}}}(n,H,B_t)={{\mathcal N}}(H,G_n)$. Then for any 3-chromatic graph $F$ with a color-critical edge, if $n$ is large enough, we have ${\ensuremath{\mathrm{ex}}}(n,H,F)={{\mathcal N}}(H,G_n)$. Observe first that it is enough to prove the statement for $F=K_{s,t}^*$, which denotes $K_{s,t}$ with an edge added inside the part of size $s$. We will use induction on $s$. Note that $K_{2,t}^*=B_t$, thus the base case $s=2$ is the assumption in the statement. Assume now that $s>2$ and we know that the statement holds for $K_{s-1,t'}^*$ for any $t'$. Let us fix an integer $q$ that is large enough (depending on $s$, $t$ and $H$), and let $G$ be a $K_{s,t}^*$-free graph on $n$ vertices, where $n$ is large enough (depending on $s$, $t$, $q$ and $H$). If $G$ does not contain $K_{s-1,qt}^*$, then it contains at most ${{\mathcal N}}(H,G_n)$ copies of $H$ by the induction hypothesis and we are done. Let us assume there is a copy of $K$ of $K_{s-1,qt}^*$ in $G$. Observe that every other vertex $u$ is connected to at most $t-1$ of the vertices in the part of size $qt$ of $K$, otherwise $u$ with its $t$ neighbors in that part and the $s-1$ vertices on the other part would form a $K_{s,t}^*$. That means that there are at most $(n-s+1-qt)(s-1+t-1)$ edges from the other vertices to $K$. This implies that there is a vertex $v$ in $K$ that has degree at most $(s+t-2)n/qt$ in $G$. 
Thus, for any $\varepsilon>0$, we can choose a $q$ large enough so that $v$ is in at most $\varepsilon n^{|V(H)|-1}$ copies of $H$. Then we apply progressive induction. Let ${{\mathcal A}}$ denote the family of extremal graphs for ${\ensuremath{\mathrm{ex}}}(n,H,K_{s,t}^*)$, i.e. for every $n$, those $n$-vertex graphs which are $K_{s,t}^*$-free, and contain the most copies of $H$ among such graphs on $n$ vertices. Let ${{\mathcal B}}$ denote those elements of ${{\mathcal A}}$ that are also $K_3$-free and let $f(G):={{\mathcal N}}(H,G)-{{\mathcal N}}(H,G_n)$. Let $n'=n-1$ and $G'$ obtained by deleting $v$ from $G$. Let $G''$ be an $F$-free graph on $n-1$ vertices with ${\ensuremath{\mathrm{ex}}}(n-1,H,F)$ copies of $H$, thus $G''\in {{\mathcal B}}$. Then $f(G)-f(G'')\le f(G)-f(G')\le{{\mathcal N}}(H,G_{n-1})-{{\mathcal N}}(H,G_n)+ \varepsilon n^{|V(H)|-1}$. To apply Lemma \[progi\] and finish the proof, we need to show that this number is negative, i.e. every vertex in $G_n$ is in more than $\varepsilon n^{|V(H)|-1}$ copies of $H$ for some $\varepsilon>0$, finishing the proof (observe that we can obtain $G_{n-1}$ from $G_n$ by deleting a vertex). Indeed, every vertex in the same part of $G_n$ is in the same number of copies of $H$. If they are in $o(n^{|V(H)|}-1)$ copies, then there are $o(n^{|V(H)|})<{\ensuremath{\mathrm{ex}}}(n,H,T_2(n))$ copies of $H$ in $G_n$, a contradiction to our assumption that $G_n$ is the extremal graph for ${\ensuremath{\mathrm{ex}}}(n,H,F)$. Now we are ready to prove Theorem \[p3cc\], that we restate here for convenience. If $F$ is a 3-chromatic graph with a color-critical edge, then $P_3$ is $F$-Turán-good. By Lemma \[ccb2\], it is enough to prove the statement for $F=B_t$. Let $G$ be a $B_t$-free graph on $n$ vertices. First we show that the degrees in $G$ cannot be much larger than $n/2$. Let $c=0.51$ and assume there is a vertex with degree at least $cn$. Observe that every neighbor of $v$ is connected to at most $t-1$ neighbors of $v$. Let $G_0$ be the graph we obtain by deleting all the edges between neighbors of $v$. Then $G_0$ has an independent set of size $cn$. We can apply Corollary \[flenbtwo\] to show that $G_0$ has at most $(1+o(1)){{\mathcal N}}(P_3,K_{cn,(1-c)n})$ copies of $P_3$ (here we also use the fact that making the complete bipartite graph more unbalanced would decrease the number of copies of $P_3$, which follows from a simple calculation). Observe that $G$ has at most ${{\mathcal N}}(P_3,G)+O(n^2)$ copies of $P_3$, as the deleted edges all are in $O(n)$ copies of $P_3$. Therefore, ${{\mathcal N}}(P_3,G)\le (1+o(1)){{\mathcal N}}(P_3,K_{cn,(1-c)n})<{{\mathcal N}}(T_2(n))$. Assume now that $G$ contains a triangle with vertices $u$, $v$ and $w$. Observe that at most $t-2$ other vertices are connected to both $u$ and $v$, and similarly to both $u$ and $w$ or to both $v$ and $w$. Therefore, we have $d(u)+d(v)+d(w)\le n+3t-3$. Let $U$ be the set of the at most $3t-3$ vertices connected to more than one of $u$, $v$ and $w$ (thus $u,v,w\in U$). Let $G_1$ be the graph we obtain by deleting $u,v,w$. Let us examine the copies of $P_3$ in $G$. The number of copies containing none of $u,v,w$ is at most ${\ensuremath{\mathrm{ex}}}(n-3,P_3,B_t)$. There are 3 copies of $P_3$ inside the triangle. The other copies of $P_3$ have vertices in both $G_1$ and in the triangle. The number of those copies having their center in $V(G_1)\setminus U$ is at most twice the number of edges in $G_1$, as their endpoint has at most one neighbor among $u,v,w$. 
The number of copies having their center in $U$ and another vertex in the triangle is at most three times the number of edges incident to $U$, thus at most $(9t-9)n$. Finally, the number of copies having their center in the triangle and the other vertices in $G_1$ is $\binom{d(u)-2}{2}+\binom{d(v)-2}{2}+\binom{d(w)-2}{2}$. Now we will use progressive induction. ${{\mathcal A}}$ contains the extremal graphs for ${\ensuremath{\mathrm{ex}}}(n,P_3,B_t)$, i.e. for every $n$ the $B_t$-free graphs on $n$ vertices with the most number of copies of $P_3$. ${{\mathcal B}}$ consists of those elements of ${{\mathcal A}}$ that are $K_3$-free (note that this implies that they are also extremal graphs for ${\ensuremath{\mathrm{ex}}}(n,P_3,K_3)$). Let $f(G)={{\mathcal N}}(P_3,G)-{\ensuremath{\mathrm{ex}}}(n,P_3,K_3)$. As ${{\mathcal N}}(P_3,G)={\ensuremath{\mathrm{ex}}}(n,P_3,B_t)$, we have that $f(G)$ is a non-negative integer, and obviously $f(G)=0$ if $G\in {{\mathcal B}}$. Let $n'=n-3$ and $G'$ be a $B_t$-free graph on $n-3$ vertices with ${\ensuremath{\mathrm{ex}}}(n,P_3,B_2)\ge{{\mathcal N}}(P_3,G_1)$ copies of $P_3$. Then $f(G)-f(G')$ is at most the number of copies of $P_3$ containing $u$, $v$ or $w$, plus ${\ensuremath{\mathrm{ex}}}(n-3,P_3,K_3)-{\ensuremath{\mathrm{ex}}}(n,P_3,K_3)$. By the above, the number of copies of $P_3$ containing $u$, $v$ or $w$ is at most $$\label{eq1} 3+2|E(G')|+(9t-9)n+\binom{d(u)-2}{2}+\binom{d(v)-2}{2}+\binom{d(w)-2}{2}.$$ On the other hand, $$\label{eq2} {\ensuremath{\mathrm{ex}}}(n,P_3,K_3)-{\ensuremath{\mathrm{ex}}}(n-3,P_3,K_3)\ge 3\left(\binom{\lfloor n/2\rfloor}{2}+\lfloor (n-2)^2/4\rfloor-\lceil n/2\rceil\right).$$ Indeed, in the Turán graph that is extremal for ${\ensuremath{\mathrm{ex}}}(n,P_3,K_3)$, every vertex is in at least $\binom{\lfloor n/2\rfloor}{2}+\lfloor (n-2)^2/4\rfloor$ copies of $P_3$ and for three vertices, we count at most $3\lceil n/2\rceil$ copies of $P_3$ twice. We need to show that (\[eq1\]) is smaller than (\[eq2\]). Observe that by Theorem \[sim\] we have $|E(G')|\le \lfloor (n-3)^2/4\rfloor$, as $G'$ is $B_t$-free and $n$ is large enough. We will show that $\binom{d(u)-2}{2}+\binom{d(v)-2}{2}+\binom{d(w)-2}{2}<3\binom{\lfloor n/2\rfloor}{2}-3\lceil n/2\rceil-3-(9t-9)n$. Recall that each degree is at most $cn$, and $d(u)+d(v)+d(w)\le n+3t-3$. Thus $\binom{d(u)-2}{2}+\binom{d(v)-2}{2}+\binom{d(w)-2}{2}$ is maximized when the three degrees are distributed as unbalanced as possible, implying this sum is at most $2\binom{cn}{2}$, which is smaller than $<3\binom{\lfloor n/2\rfloor}{2}-3\lceil n/2\rceil-3-(9t-9)n$ if $n$ is large enough. This completes the proof. It is likely that the above proof can be slightly modified to show ${\ensuremath{\mathrm{ex}}}(n,H,B_t)={\ensuremath{\mathrm{ex}}}(n,H,K_3)$ for many other bipartite graphs $H$ in place of $P_3$. I believe it should hold for every complete bipartite graph $H=K_{a,b}$. However, in this case ${\ensuremath{\mathrm{ex}}}(n,H,K_3)={{\mathcal N}}(H,K_{m,n-m})$, where $n$ and $m$ might be far apart. When one counts the copies of $K_{a,b}$ having a vertex in the triangle $uvw$, one needs to count the copies of $K_{a-1,b}$ in $G_1$. But, if we use the bound ${{\mathcal N}}(K_{a-1,b},G_1)\le {\ensuremath{\mathrm{ex}}}(n-3,K_{a-1,b},B_t)$, as in the above proof, we need to deal with the problem, that the $B_t$-free graph with the most number of copies of $H$ might be a complete bipartite graph where the ratio of the parts is far from $m/(n-m)$. 
This makes the calculations much more complicated. Here we do not attempt to prove a general statement, but we need to deal with ${\ensuremath{\mathrm{ex}}}(n,S_4,B_2)$. The following result, combined with Corollary \[induc\] gives an exact result. \[s4b2\] If $F$ is 3-chromatic with a color-critical edge, then ${\ensuremath{\mathrm{ex}}}(n,S_4,F)={\ensuremath{\mathrm{ex}}}(n,S_4,K_3)$. We only give a sketch, and point out the differences to the proof of Theorem \[p3cc\]. First observe that it is enough to deal with the case $F=B_t$. Indeed, if ${\ensuremath{\mathrm{ex}}}(n,S_4,B_t)={\ensuremath{\mathrm{ex}}}(n,S_4,K_3)$, then there is a complete bipartite extremal graph by Proposition \[gpl2.5\], and then Lemma \[ccb2\] finishes the proof. By Corollary \[induc\], the complete bipartite graph with ${\ensuremath{\mathrm{ex}}}(n,S_4,K_3)$ copies of $S_4$ has two parts of size $(\frac{1}{2}+o(1))n$. Therefore, as in the proof of Theorem \[p3cc\], we can obtain that every degree is at most $cn$, for $c=0.51$. Again, we pick a triangle with vertices $u,v,w$ and obtain $G_1$ by deleting them. There is a set $U$ of at most $3t-3$ vertices connected to more than one of $u$, $v$ and $w$. The number of copies of $S_4$ is at most ${\ensuremath{\mathrm{ex}}}(n-3,S_4,B_t)$ in $G_1$ and at most ${{\mathcal N}}(P_3,G_1)+\binom{d(u)-2}{3}+\binom{d(v)-2}{3}+\binom{d(w)-2}{3}+O(n^2)$ additionally, where the $O(n^2)$ term contains those copies that have at least two vertices in $U\cup\{u,v,w\}$. Observe that ${{\mathcal N}}(P_3,G_1)\le {\ensuremath{\mathrm{ex}}}(n-3,P_3,B_t)=n^3/8+o(n^3)$ and $\binom{d(u)-2}{3}+\binom{d(v)-2}{3}+\binom{d(w)-2}{3}$ is again maximized if they are as unbalanced as possible, thus is at most $2\binom{cn}{3}$. We have ${\ensuremath{\mathrm{ex}}}(n,S_4,K_3)={{\mathcal N}}(S_4,K_{k,n-k})$ for some $k$ by Corollary \[induc\] and ${\ensuremath{\mathrm{ex}}}(n-3,S_4,K_3)={{\mathcal N}}(S_4,K_{\ell,n-3-\ell})$. It is easy to see that $\ell$ is either $k-1$ or $k-2$, thus there are three vertices $x,y,z$ of $K_{k,n-k}$ such that deleting them we obtain $K_{\ell,n-3-\ell}$. Hence ${\ensuremath{\mathrm{ex}}}(n,S_4,K_3)-{\ensuremath{\mathrm{ex}}}(n-3,S_4,K_3)$ is the number of copies of $S_4$ containing $x$, $y$ or $z$. For each of them, there are $3\binom{n/2}{3}+o(n^3)$ copies of $S_4$ where it is the center, and $n^3/16+o(n^3)$ where it is a leaf. There are $o(n^3)$ copies of $S_4$ that are counted multiple times, thus we have ${\ensuremath{\mathrm{ex}}}(n,S_4,K_3)-{\ensuremath{\mathrm{ex}}}(n-3,S_4,K_3)\ge 3\binom{n/2}{3}+n^3/16+o(n^3)$. We use progressive induction as in the proof of Theorem \[p3cc\]. It is again obvious that $f(G)<f(G')$, which finishes the proof. Concluding remarks ================== $\bullet$ We have studied generalized Turán problems for graphs having at most four vertices. In two cases, we were unable to determine even the order of magnitude of ${\ensuremath{\mathrm{ex}}}(n,H,F)$. However, in those cases it would be a major breakthrough in Combinatorics to find the order of magnitude, due to the connection to the Ruzsa-Szemerédi theorem. In some other cases, we could obtain the asymptotics, but not an exact result. One of them is the ordinary Turán problem for $C_4$, which has received a considerable attention, and the exact value of ${\ensuremath{\mathrm{ex}}}(n,C_4)$ has been found for infinitely many $n$, as we have mentioned. In case we forbid $C_4$ and count other graphs, we have obtained some exact results, where the friendship graph was the extremal one. 
This is not the case when counting $K_3$, $M_2$ or $P_4$. Still, one could hope that there is another $C_4$-free graph that has few edges (thus is not considered when dealing with ordinary Turán problems), but many copies of one of the above mentioned graphs. We show that this is not the case. We claim that if $G$ is $C_4$-free, then ${{\mathcal N}}(P_4,G)\le n|E(G)|/2$ and ${{\mathcal N}}(K_3,G)\le |E(G)|/3$. Indeed, let us choose an edge $uv$ and a vertex $w$. There is at most one common neighbor of $u$ and $w$ and another one of $v$ and $w$, and we count every copy of $P_4$ twice this way. Similarly, for an edge $uv$, $u$ and $v$ have at most one common neighbor. Proposition \[coun\] shows that ${{\mathcal N}}(M_2,G)\le |E(G)|^2$. On the other hand, we have shown ${\ensuremath{\mathrm{ex}}}(n,P_4,C_4)=(1+o(1))n{\ensuremath{\mathrm{ex}}}(n,C_4)/2$, ${\ensuremath{\mathrm{ex}}}(n,K_3,C_4)=(1+o(1)){\ensuremath{\mathrm{ex}}}(n,C_4)/3$ and ${\ensuremath{\mathrm{ex}}}(n,M_2,C_4)=(1+o(1)){\ensuremath{\mathrm{ex}}}(n,C_4)^2$. Thus in all these cases, for the extremal graph $G$ we have $|E(G)|=(1+o(1)){\ensuremath{\mathrm{ex}}}(n,C_4)$. It means determining ${\ensuremath{\mathrm{ex}}}(n,K_3,C_4)$, ${\ensuremath{\mathrm{ex}}}(n,P_4,C_4)$ or ${\ensuremath{\mathrm{ex}}}(n,M_2,C_4)$ exactly is likely as hard as determining ${\ensuremath{\mathrm{ex}}}(n,C_4)$. It is possible that one can obtain exact results for infinitely many $n$, using the same ideas as in the ordinary Turán case. $\bullet$ In each other case we have determined ${\ensuremath{\mathrm{ex}}}(n,H,F)$ for $n$ large enough. We did not deal with the case $n$ is small, but probably it is not very hard. Another way to extend these results is to determine all the extremal graphs. $\bullet$ Another possible direction of future research is to consider graphs on at most five vertices. There are 22 graphs without isolated vertices on five vertices, thus the $10\times 10$ table would be replaced by a $32\times 32$ table, with more than 10 times more entries. Also, all the graphs studied in this paper but $T_1$ belong to at least one well-studied class of graphs, with several results concerning them. There are more exceptions in case of graphs on five vertices, and presumably there are less known results concerning those graphs. $\bullet$ It is worth checking what graphs were extremal (or close to extremal) for a given forbidden graph, as they might be also extremal in case we count other graphs. For $K_2$, $P_3$ and $M_2$ there are not many graphs avoiding them. For $K_3$, the extremal graph was always a complete bipartite graph, and it was balanced with one exception. For $S_4$, the extremal graph was sometimes an arbitrary 2-regular graph, but in case of $K_3$ and $C_4$, the extremal graph consisted of vertex-disjoint copies of those graphs (thus it had 2-regular components and potentially some isolated vertices). For $P_4$, the extremal graph was either $S_n$ or $D(3,n)$. For $C_4$, the lower bound was given by either the well-known construction for the ordinary Turán problem concerning $C_4$, or the friendship graph $F(n)$ (in case of counting $S_4$, the lower bound was given by the star $S_n$, which is a subgraph of $F(n)$ and has the same number of copies of $S_4$). In case of $T_1$, the extremal graph was either $D(3,n)$ or a complete bipartite graph, which was balanced with one exception. 
In case of $B_2$, the lower bound was given by either the construction of Ruzsa and Szemerédi, where every edge is in exactly one triangle, or by a complete bipartite graph, which was again balanced with one exception. For $K_4$, in each case the extremal graph was the Turán graph $T_3(n)$. [99]{} N. Alon, C. Shikhelman. Many $T$ copies in $H$-free graphs. *Journal of Combinatorial Theory, Series B*, **121**, 146–172, 2016. B. Bollobás, E. Győri. Pentagons vs. triangles. *Discrete Mathematics*, **308**(19), 4332–4336, 2008. J. I. Brown, A. Sidorenko. The inducibility of complete bipartite graphs. *Journal of Graph Theory*, **18**(6), 629–645, 1994. D. Chakraborti, D.Q. Chen. Exact results on generalized Erdős-Gallai problems. arXiv:2006.04681, 2020. S. Cambie, R. de Verclos, R. Kang. Regular Turán numbers and some Gan–Loh–Sudakov-type problems. arXiv:1911.08452, 2019. Z. Chase. A Proof of the Gan-Loh-Sudakov Conjecture. arXiv:1911.08452, 2019. P. Erdős, T. Gallai. On maximal paths and circuits of graphs. *Acta Mathematica Academiae Scientiarum Hungaricae*, **10**, 337–356, 1959. P. Erdős, M. Simonovits. On a valence problem in extremal graph theory. *Discrete Mathematics*, **5**, 323–334, 1973. R. J. Faudree, R. H. Schelp. Path Ramsey numbers in multicolorings. *Journal of Combinatorial Theory, Series B*, **19**(2), 150–160, 1975. Z. Füredi. On the number of edges of quadrilateral-free graphs. *Journal of Combinatorial Theory, Series B* **68**, 1–6, 1996. Z. Füredi. New asymptotics for bipartite Turán numbers. *Journal of Combinatorial Theory, Series A*, **75**(1), 141–144, 1996. W. Gan, P. Loh, B. Sudakov. Maximizing the number of independent sets of a fixed size. *Combinatorics, Probability and Computing*, **24**, 521–527, 2015. D. Gerbner, E. Győri, A. Methuku, M. Vizer. Generalized Turán numbers for even cycles. *Journal of Combinatorial Theory, Series B*, **145**, 169–213, 2020. D. Gerbner, E. Győri, A. Methuku, M. Vizer. Induced generalized Turán numbers. *manuscript* D. Gerbner, A. Methuku, M. Vizer. Generalized Turán problems for disjoint copies of graphs. *Discrete Mathematics*, **342**(11), 3130–3141 2019. D. Gerbner, C. Palmer. Counting copies of a fixed subgraph of $F$-free graphs. *European Journal of Mathematics*, **82**, 103001, 2019. D. Gerbner, C. Palmer. Some exact results for generalized Turán problems. arXiv:2006.03756, 2020. L. Gishboliner, A. Shapira. A Generalized Turán Problem and its Applications. *Proceedings of STOC 2018 Theory Fest: 50th Annual ACM Symposium on the Theory of Computing June 25-29, 2018 in Los Angeles, CA,* 760–772, 2018. A. Grzesik. On the maximum number of five-cycles in a triangle-free graph. *Journal of Combinatorial Theory, Series B*, **102**(5), 1061–1066, 2012. E. Győri, J. Pach, M. Simonovits. On the maximal number of certain subgraphs in $K_r$-free graphs, *Graphs and Combinatorics*, **7**(1), 31–37, 1991. E. Győri, N. Salia, C. Tompkins, O. Zamora. The maximum number of $P_l$ copies in $P_k$-free graphs. *Acta Mathematica Universitatis Comenianae*, **88**3, 773–778, 2019. H. Hatami, J. Hladký, D. Kr' al, S. Norine, A. Razborov. On the number of pentagons in triangle-free graphs. *Journal of Combinatorial Theory, Series A*, **120**(3), 722–732, 2013. Jie Ma, Yu Qiu, Some sharp results on the generalized Turán numbers. *European Journal of Combinatorics*, **84**, 103026, 2018. I. Z. Ruzsa, E. Szemerédi. Triple systems with no six points carrying three triangles, *Combinatorics (Keszthely, 1976), Coll. Math. Soc. J. 
Bolyai* **18**, Volume II, 939–945, 1976. M. Simonovits. A method for solving extremal problems in graph theory, stability problems. *Theory of Graphs, Proc. Colloq., Tihany, 1966, Academic Press, New York*, 279–319, 1968. P. Turán. On an extremal problem in graph theory (in Hungarian). *Matematikai és Fizikai Lapok*, **48**, 436–452, 1941. J. Wang. The maximum number of cliques in graphs without large matchings. arXiv:1812.01832, 2018. A. A. Zykov. On some properties of linear complexes. *Matematicheskii sbornik*, **66**(2), 163–188, 1949. [^1]: Alfréd Rényi Institute of Mathematics, E-mail: `[email protected].` Research supported by the National Research, Development and Innovation Office – NKFIH under the grants FK 132060, KKP-133819, KH130371 and SNN 129364.
{ "pile_set_name": "ArXiv" }
--- abstract: 'We present a CNN-based technique to estimate high-dynamic range outdoor illumination from a single low dynamic range image. To train the CNN, we leverage a large dataset of outdoor panoramas. We fit a low-dimensional physically-based outdoor illumination model to the skies in these panoramas giving us a compact set of parameters (including sun position, atmospheric conditions, and camera parameters). We extract limited field-of-view images from the panoramas, and train a CNN with this large set of input image–output lighting parameter pairs. Given a test image, this network can be used to infer illumination parameters that can, in turn, be used to reconstruct an outdoor illumination environment map. We demonstrate that our approach allows the recovery of plausible illumination conditions and enables photorealistic virtual object insertion from a single image. An extensive evaluation on both the panorama dataset and captured HDR environment maps shows that our technique significantly outperforms previous solutions to this problem.' author: - | Yannick Hold-Geoffroy^1\*^, Kalyan Sunkavalli^$\dagger$^, Sunil Hadap^$\dagger$^, Emiliano Gambaretto^$\dagger$^, Jean-François Lalonde^\*^\ Université Laval^\*^, Adobe Research^$\dagger$^\ [[email protected], {sunkaval,hadap,emiliano}@adobe.com, [email protected]]{}\ <http://www.jflalonde.ca/projects/deepOutdoorLight> bibliography: - 'main.bib' title: Deep Outdoor Illumination Estimation --- Acknowledgments =============== The authors would like to thank Marc-André Gardner for his help with the architecture and optimization. Parts of this work were done while Yannick Hold-Geoffroy was an intern at Adobe Research. This work was partially supported by the REPARTI Strategic Network, the FRQNT New Researcher Grant 2016NC189939 and the NSERC Discovery Grant RGPIN-2014-05314. We gratefully acknowledge the support of Nvidia with the donation of the GPUs used for this research.
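To make the pipeline described in the abstract concrete—fit a parametric sky model to panoramas, then train a CNN to map a limited-field-of-view crop to the fitted lighting parameters—a minimal sketch is given below. This is an illustrative outline only: the layer sizes, the number of sun-position bins, and the choice of regressed quantities (e.g. atmospheric turbidity and camera exposure) are assumptions made here for the sketch, not the authors' published architecture or training setup.

```python
import torch
import torch.nn as nn

class OutdoorLightNet(nn.Module):
    """Illustrative CNN: limited-FOV image -> outdoor lighting parameters.

    The sun position is predicted as a distribution over a discretised sky
    hemisphere; the remaining parameters are regressed jointly.  All sizes
    below are placeholders."""

    def __init__(self, n_sun_bins=160, n_params=3):
        super().__init__()
        self.features = nn.Sequential(
            nn.Conv2d(3, 32, 5, stride=2, padding=2), nn.ELU(),
            nn.Conv2d(32, 64, 3, stride=2, padding=1), nn.ELU(),
            nn.Conv2d(64, 128, 3, stride=2, padding=1), nn.ELU(),
            nn.AdaptiveAvgPool2d(4),
            nn.Flatten(),
            nn.Linear(128 * 16, 256), nn.ELU(),
        )
        self.sun_head = nn.Linear(256, n_sun_bins)   # logits over sky-position bins
        self.param_head = nn.Linear(256, n_params)   # e.g. turbidity, exposure, ...

    def forward(self, x):
        h = self.features(x)
        return self.sun_head(h), self.param_head(h)

if __name__ == "__main__":
    net = OutdoorLightNet()
    img = torch.rand(2, 3, 128, 128)                 # a batch of LDR crops
    sun_logits, params = net(img)
    loss = nn.CrossEntropyLoss()(sun_logits, torch.tensor([3, 42])) \
         + nn.MSELoss()(params, torch.zeros(2, 3))   # dummy targets
    loss.backward()
    print(sun_logits.shape, params.shape)
```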
{ "pile_set_name": "ArXiv" }
--- abstract: 'We discuss the supersymmetric ${\cal O}(\alpha_s)$ QCD corrections to $e^+e^- \to \tilde q_i^{} \bar{\tilde q}_j^{}$ $(i,j = 1,2)$ and to $\tilde q_i^{}\to q''\tilde\chi^\pm_j\!$, $q\tilde\chi^0_k$ $(i,j=1,2;\,k=1\ldots4)$ within the Minimal Supersymmetric Standard Model. In particular we consider the squarks of the third generation $\tilde t_i$ and $\tilde b_i$ including the left–right mixing. In the on–shell scheme also the mixing angle has to be renormalized. We use dimensional reduction (which preserves supersymmetry) and compare it with the conventional dimensional regularization. A detailed numerical analysis is also presented.' address: | $^{1)}$ Institut für Theoretische Physik der Universität Wien, Vienna, Austria\ $^{2)}$ Institut für Hochenergiephysik der ÖAW, Vienna, Austria author: - 'A. Bartl$^{1)}$, H. Eberl$^{2)}$, S. Kraml$^{2)}$, W. Majerotto$^{2)\dagger}$, W. Porod$^{1)}$' title: | SUSY–QCD corrections to squark production and decays\ in $e^+e^-$ annihilation --- UWThPh-1997-46\ HEPHY-PUB 679/97\ hep-ph/9711464 Introduction ============ In supersymmetry (SUSY) one has two types of scalar quarks (squarks), $\tilde q_L^{}$ and $\tilde q_R^{}$, corresponding to the left and right helicity states of a quark. $\tilde q_L^{}$ and $\tilde q_R^{}$, however, mix due to the Yukawa coupling to the Higgs bosons, which is proportional to the mass of the quark. One therefore expects large mixing in the case of the stop quarks so that one mass eigenstate ($m_{\tilde t_1}$) might be rather light and even reachable at present colliders. The sbottoms $\tilde b_L,\,\tilde b_R^{}$ may also strongly mix for large $\tan\beta$. The mass matrix in the basis ($\tilde q_L^{},\,\tilde q_R^{}$) is given by: $${\cal M}^2 = \left( \begin{array}{lr} m^2_{\tilde Q} + m^2_q + D_1^{} & m_q \left( A_q - \mu \left\{ {\cot\beta \atop \tan\beta} \right\} \right) \\ m_q \left( A_q - \mu \left\{ {\cot\beta \atop \tan\beta} \right\} \right) & m^2_{\tilde U\!,\tilde D} + m^2_q + D_2^{} \end{array} \right) \, . \label{eq:massmat}$$ Here $m_{\tilde Q}$, $m_{\tilde U}$, $m_{\tilde D}$, and $A_q$ are SUSY soft–breaking parameters, $\mu$ is the Higgsino mass parameter, and $\tan\beta = \frac{v_2}{v_1}$. $D_1^{}$ and $D_2^{}$ are the $D$ terms: $D_1^{} = m^2_Z \cos 2\beta\, (I^{3{\rm L}}_q - e_q \sin^2\theta_W)$, $D_2^{} = m^2_Z \cos 2\beta\, e_q \sin^2\theta_W$, with $I^{3{\rm L}}_q$ the third component of the weak isospin of $q$. In the off-diagonal elements of Eq.(\[eq:massmat\]) $\cot\beta$ enters in the case of the stops and $\tan\beta$ in that of the sbottoms. Diagonalizing the matrix one gets the mass eigenstates $\tilde q_1^{} = \tilde q_L^{} \cos\theta_{\tilde q} +\tilde q_R^{} \sin\theta_{\tilde q}$, $\tilde q_2^{} = -\tilde q_L^{} \sin\theta_{\tilde q} +\tilde q_R^{} \cos\theta_{\tilde q}$ with the masses $m_{\tilde q_1}$, $m_{\tilde q_2}$ (with $m_{\tilde q_1} < m_{\tilde q_2}$) and the mixing angle $\theta_{\tilde q}$. Conventional QCD corrections to squark pair production $e^+e^- \to \tilde q_i^{} \bar{\tilde q}_j^{}$ $(i,j = 1,2)$ can be very large [@drees]. The SUSY–QCD corrections including squark and gluino exchange will be discussed here following closely ref. [@eberl]. These corrections were also treated in [@arhrib]. The new feature in the calculation of the SUSY–QCD corrections is that in the on–shell scheme a suitable renormalization condition has to be found for the mixing angle $\theta_{\tilde q}$ because the tree–level amplitude explicitly depends on it.
We will explain this in detail below. The SUSY–QCD corrections to the squark decays into chargino or neutralino, $\tilde q_i^{} \to q'\tilde\chi^\pm_j$, $q\tilde\chi^0_k$ $(i,j=1,2;\:k=1\ldots4)$, have been calculated in [@djouadi; @kraml] and will also be discussed in the following. Here the dependence on the nature of the charginos/neutralinos (gaugino–like or higgsino–like) is particularly interesting. We work in the on–shell scheme and use dimensional reduction ($\overline{\rm DR}$) to regularize the integrals, which is necessary to preserve supersymmetry (at least up to two loops). We will comment on the differences between this and the dimensional regularization scheme used in the Standard Model. The Production Process $e^+e^- \to \tilde q_i^{} \bar{\tilde q}_j^{}$ ===================================================================== The cross section at tree level is given by: $$\sigma^0 \left( e^+e^- \to \tilde q_i^{} \bar{\tilde{q}}_j \right) = \frac{\pi\alpha^2}{s} \lambda^{3/2}_{ij} \left[ e^2_q\, \delta_{ij} - T_{\gamma Z}^{}\, e_q a_{ij} \delta_{ij} + T_{\!Z\!Z}^{}\, a^2_{ij} \right]$$ with $$\begin{aligned} T_{\gamma Z}^{} &=& \frac{v_e}{8\, c^2_W s^2_W} \, \frac{s(s-m^2_Z)}{\left[(s-m^2_Z)^2 + \Gamma_{\!Z}^2 m_Z^2\right]} \,,\\ T_{\!Z\!Z}^{} &=& \frac{(a^2_e + v^2_e)}{256\,s^4_W c^4_W} \, \frac{s^2}{(s-m^2_Z)^2 + \Gamma_{\!Z}^2 m_Z^2} \, .\end{aligned}$$ Here $\lambda_{ij} = (1 - \mu^2_i - \mu^2_j )^2 - 4 \mu^2_i \mu^2_j$ with $\mu^2_{i,j} = m^2_{\tilde q_{i,j}}/ s$. $e_q$ is the charge of the squarks (in units of $e$), $v_e = - 1 + 4 s^2_W$, $a_e = -1$, $s_W = \sin\theta_W$, $c_W = \cos\theta_W$. $a_{ij}$ are the relevant parts of the couplings $Z\tilde q_j^{} \tilde q_i^{\,*}$: $$\begin{aligned} a_{11} &=& 4\,(I^{3{\rm L}}_q \cos^2\theta_{\tilde q} - s^2_W e_q) \,, \quad a_{22}\;=\; 4\,(I^{3{\rm L}}_q \sin^2\theta_{\tilde q} - s^2_W e_q) \,, \nonumber \\ a_{12} &=& a_{21} \;=\; -2I^{3{\rm L}}_q \sin 2\theta_{\tilde q} \,.\end{aligned}$$ The SUSY–QCD corrections in ${\cal O}(\alpha_s)$ consist of the conventional QCD corrections [@drees] due to gluon exchange and real gluon radiaton, as well as of the corrections due to the exchange of a gluino and squarks, see Fig.1. Our input parameters are the physical masses $m_{\tilde q_1}$, $m_{\tilde q_2}$, $m_{\tilde q}$, $m_{\tilde g}$, and the mixing angle $\theta_{\tilde q}$. We use the on–shell sheme where the masses are fixed by the respective poles of the propagators. In renormalizing the lagrangian we follow the usual procedure: $${\cal L}_0 = {\cal L} + \delta{\cal L} \label{eq:renlag}$$ with $${\cal L} = -ee_q\,\delta_{ij}\, A^{\mu}\, \tilde q_i^{\,*} (\stackrel{\leftrightarrow}{i\partial_\mu})\, \tilde q_j^{} -\frac{e}{4 s_W c_W}\,a_{ij}\, Z^{\mu}\, \tilde q_i^{\,*} (\stackrel{\leftrightarrow}{i\partial_\mu})\, \tilde q_j^{}\,.$$ ${\cal L}_0$, the bare lagrangian, has the same form with the bare quantities: $$\begin{aligned} e_q^0\,\delta_{ij} &=& e_q\,\delta_{ij} + (\delta e_q)_{ij} \,, \\ a_{ij}^0 &=& a_{ij} + \delta a_{ij} \,,\\ \tilde q_i^{\,* 0} &=& (1 + {{\textstyle \frac{1}{2} }} \delta Z_{ii})\, \tilde q_i^{\,*} + \delta Z_{ii'}\, \tilde q_{i'}^{\,*} , \quad\; i\neq i' \,, \\ \tilde q_j^{\,0} &=& (1 + {{\textstyle \frac{1}{2} }} \delta Z_{jj})\, \tilde q_j^{} + \delta Z_{jj'}\, \tilde q_{j'}^{} , \quad j\neq j' . \label{eq:qjbare}\end{aligned}$$ Notice that because of $\theta_{\tilde q}^0 = \theta_{\tilde q} + \delta\theta_{\tilde q}$, $\delta a_{ij}$ is a function of $\delta\theta_{\tilde q}$. 
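The tree-level quantities introduced so far can be evaluated numerically with the short sketch below. It is illustrative only: the electroweak inputs, SUSY soft-breaking parameters and the GeV$^{-2}$-to-picobarn conversion factor are rounded, assumed values and are not taken from the paper. The script first diagonalises the stop mass matrix of Eq.(\[eq:massmat\]) to obtain $m_{\tilde t_1}$, $m_{\tilde t_2}$ and $\theta_{\tilde t}$, and then evaluates $\sigma^0(e^+e^-\to\tilde q_i^{}\bar{\tilde q}_j^{})$ from the formulas above.

```python
import numpy as np

# Rounded, illustrative electroweak inputs (not the paper's values)
ALPHA, SW2, MZ, GZ = 1.0/128.0, 0.23, 91.19, 2.49
CW2 = 1.0 - SW2
GEV2_TO_PB = 3.894e8          # conversion of GeV^-2 to picobarn

def stop_sector(mQ, mU, mq, Aq, mu, tanb, I3=0.5, eq=2.0/3.0):
    """Masses and mixing angle from the stop mass matrix, Eq. (massmat)."""
    c2b = np.cos(2.0*np.arctan(tanb))
    D1 = MZ**2*c2b*(I3 - eq*SW2)
    D2 = MZ**2*c2b*eq*SW2
    off = mq*(Aq - mu/tanb)                 # cot(beta) for up-type squarks
    M2 = np.array([[mQ**2 + mq**2 + D1, off],
                   [off, mU**2 + mq**2 + D2]])
    (m1sq, m2sq), V = np.linalg.eigh(M2)    # eigenvalues in ascending order
    theta = np.arctan2(V[1, 0], V[0, 0])    # q1 = cos(th) qL + sin(th) qR (up to sign)
    return np.sqrt(m1sq), np.sqrt(m2sq), theta

def sigma_tree(sqrt_s, mi, mj, i, j, theta, I3=0.5, eq=2.0/3.0):
    """Tree-level cross section sigma^0(e+e- -> sq_i sqbar_j) in pb."""
    s = sqrt_s**2
    mu2i, mu2j = mi**2/s, mj**2/s
    lam = (1.0 - mu2i - mu2j)**2 - 4.0*mu2i*mu2j
    if lam <= 0.0:
        return 0.0                          # below threshold
    ve, ae = -1.0 + 4.0*SW2, -1.0
    prop = (s - MZ**2)**2 + GZ**2*MZ**2
    TgZ = ve/(8.0*CW2*SW2) * s*(s - MZ**2)/prop
    TZZ = (ae**2 + ve**2)/(256.0*SW2**2*CW2**2) * s**2/prop
    a = {(1, 1): 4.0*(I3*np.cos(theta)**2 - SW2*eq),
         (2, 2): 4.0*(I3*np.sin(theta)**2 - SW2*eq),
         (1, 2): -2.0*I3*np.sin(2.0*theta)}
    a[(2, 1)] = a[(1, 2)]
    dij = 1.0 if i == j else 0.0
    brak = eq**2*dij - TgZ*eq*a[(i, j)]*dij + TZZ*a[(i, j)]**2
    return np.pi*ALPHA**2/s * lam**1.5 * brak * GEV2_TO_PB

m1, m2, th = stop_sector(mQ=300.0, mU=280.0, mq=175.0, Aq=500.0, mu=-200.0, tanb=2.0)
print(f"m_st1={m1:.1f} GeV  m_st2={m2:.1f} GeV  cos(theta)={np.cos(th):.2f}")
print(f"sigma(500 GeV, st1 st1bar) = {sigma_tree(500.0, m1, m1, 1, 1, th):.4f} pb")
```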
The total correction in ${\cal O}(\alpha_s)$ can be written as: $$\begin{aligned} \Delta a_{ij} &=& \delta a_{ij}^{(v )} + \delta a_{ij}^{(w)} + \delta a_{ij}^{(\tilde\theta)} \,, \label{eq:deltaa} \\ (\Delta e_q )_{ij} &=& (\delta e_q )_{ij}^{(v )} + (\delta e_q )_{ij}^{(w )} \,,\end{aligned}$$ where $(v)$ denotes the vertex corrections and $(w)$ the wave–function corrections. The contributions come from gluon, gluino, and squark exchange. $\delta a_{ij}^{(\tilde\theta)}$ is due to the shift from the bare to the on–shell couplings. As already mentioned, we use dimensional reduction [@siegel] instead of dimensional regularization. Up to first order this is achieved technically by taking $D = (4 - r\,\epsilon$) with $r \rightarrow 0$ (see section 4). In the case of $e^+e^- \to \tilde q_i^{} \bar{\tilde q}_j^{}$ there is, however, no difference between the two schemes as will be explained later. Let us first discuss the vertex corrections $\delta e^{(v)}_{ij}$ and $\delta a^{(v)}_{ij}$ coming from the exchange of SUSY particles. The gluino contribution due to the graph in Fig.1a is given by: $$\begin{aligned} \delta a_{ij}^{(v,\tilde{q})} &=& \frac{2}{3}\frac{\alpha_s}{\pi}\, \Big\{ 2m_{\tilde g} m_q v_q S^{\tilde q}_{ij}\, (2C^+_{ij} + C^0_{ij}) \nonumber\\ &+& v_q \delta_{ij} \big[ (2m_{\tilde g}^2 + 2m_q^2+m_{\tilde q_i}^2 + m_{\tilde q_j}^2)\,C^+_{ij} + 2 m_{\tilde g}^2\, C^0_{ij} + B^0 (s,m^2_q,m^2_q) \big] \nonumber \\ &+& a_q A^{\tilde q}_{ij} \big[ (2m_{\tilde g}^2 - 2m_q^2 + m_{\tilde q_i}^2 + m_{\tilde q_j}^2)\, C^+_{ij} + (m_{\tilde q_i}^2 - m_{\tilde q_j}^2)\, C^-_{ij} \nonumber \\ & & \hspace{15mm} + 2 m_{\tilde g}^2\,C^0_{ij} + B^0(s,m^2_q,m^2_q) \big] \Big\} \end{aligned}$$ and $$\begin{aligned} \delta {(e_q)}_{ij}^{(v, \tilde q)} &=& \frac{2}{3}\frac{\alpha_s}{\pi}\,e_q\, \Big\{ 2m_{\tilde g} m_q S^{\tilde q}_{ij}\, (2C^+_{ij}+ C^0_{ij})\,\\ &+& \delta_{ij}\big[ (2m_{\tilde g}^2 + 2m_q^2 + m_{\tilde q_i}^2 + m_{\tilde q_j}^2) C^+_{ij} + 2m_{\tilde g} C^0_{ij} + B^0(s,m^2_q,m^2_q) \big]\! \Big\} \nonumber\end{aligned}$$ with $v_q = 2I^{3{\rm L}}_q - 4 s_W^2 e_q$, $a_q = 2I^{3{\rm L}}_q$, $S^{\tilde q}_{11}=-\sin 2\theta_{\tilde q}=-S^{\tilde q}_{22} = A^{\tilde q}_{12} = A^{\tilde q}_{21}$, and $S^{\tilde q}_{12}=S^{\tilde q}_{21}=-\cos 2\theta_{\tilde q} = - A^{\tilde q}_{11} = A^{\tilde q}_{22}$. The functions $C^\pm_{ij}$ are defined by $$C^+ = \frac{C^1+C^2}{2}\,, \qquad C^- = \frac{C^1-C^2}{2} \,.$$ $B^0$ and $C^{0,1,2}$ are the usual two– and three–point functions as given, for instance, in [@denner]. The arguments of all C–functions are $(m_{\tilde q_i}^2, s, m_{\tilde q_j}^2, m_{\tilde g}^2, m_q^2, m_q^2)$. The squark exchange graph Fig.1c is proportional to the four–momentum of $Z^0$, and therefore does not contribute to the physical matrix element. The wave–function corrections (Figs.1b,d) can be written as, using Eqs. (\[eq:renlag\]) to (\[eq:qjbare\]) $(i\neq i'\!,\, j\neq j')$: $$\begin{aligned} \delta a^{(w)}_{ij} &=& {{\textstyle \frac{1}{2} }} (\delta Z_{ii} + \delta Z_{jj}) a_{ij} +\delta Z_{i'\!i}\, a_{i'\!j} + \delta Z_{j'\!j}\, a_{ij'} \,. 
\label{eq:dawave}\end{aligned}$$ An analogous formula holds for $\delta {(e_q)}_{ij}^{(w)}$ with $a_{ij} \to e_q \,\delta_{ij}$.\ One obtains from Fig.1b: $$\begin{aligned} \delta a_{ij}^{(w,\tilde g)} &=& -\mbox{Re} \Big\{ {{\textstyle \frac{1}{2} }}\big[ \Sigma_{ii}'^{(\tilde g)}(m_{\tilde q_i}^2) +\Sigma_{jj}'^{(\tilde g)}(m_{\tilde q_j}^2) \big]\, a_{ij} \nonumber\\ & & \hspace{11mm} +\frac{\Sigma_{i'\!i}^{(\tilde g)}(m_{\tilde q_i}^2)} {m_{\tilde q_i}^2-m_{\tilde q_{i'}}^2}\, a_{i'j} +\frac{\Sigma_{j'\!j}^{(\tilde g)}(m_{\tilde q_j}^2)} {m_{\tilde q_j}^2-m_{\tilde q_{j'}}^2}\, a_{ij'} \Big\} \label{eq:dawsg}\end{aligned}$$ and $$\begin{aligned} \delta (e_q)^{(w,\tilde g)}_{ii} &=& -e_q\, \mbox{Re} \left\{ \Sigma_{ii}'^{(\tilde g)}(m_{\tilde q_i}^2) \right\} \, , \\ \delta (e_q)^{(w,\tilde g)}_{12} &=& \frac{e_q}{m_{\tilde q_1}^2-m_{\tilde q_{2}}^2}\,\mbox{Re}\left\{ \Sigma_{12}^{(\tilde g)}(m_{\tilde q_2}^2) - \Sigma_{21}^{(\tilde g)}(m_{\tilde q_1}^2) \right\} \,,\end{aligned}$$ where $\Sigma^{(\tilde g)}_{ij} (m^2 )$ are self–energies and $\Sigma'^{(\tilde g)}_{ii} (m^2 ) = \partial\Sigma^{(\tilde g )}_{ii}(p^2 ) / \partial p^2 |_{p^{2} = m^{2}}$. Notice that $\delta (e_q )^{(w,\tilde q)}_{ij} = 0$ because the contributions with the squark loop attached at either external squark line in Fig.1d cancel each other. The wave–function correction $\delta a_{ij}^{(w,\tilde q)}$ due to Fig.1d plays an important r$\hat{\rm{o}}$le in the renormalization of the squark mixing angle $\theta_{\tilde q}$. Renormalization of the Mixing Angle $\theta_{\tilde q}$ ------------------------------------------------------- The total correction $\Delta a_{ij}$, Eq.(\[eq:deltaa\]), using Eq.(\[eq:dawave\]) can be written as $(i\neq i'\!,\, j\neq j')$ $$\Delta a_{ij} = \delta a^{(v)}_{ij} + {{\textstyle \frac{1}{2} }} (\delta Z_{ii} + \delta Z_{jj})\, a_{ij} + \delta Z_{i'\!i}\, a_{i'\!j} + \delta Z_{j'\!j}\, a_{ij'} + \delta a^{(\tilde{\theta})}_{ij} \, . \label{eq:daij}$$ Notice that the first part of the right–hand side, $\delta a^{(v)}_{ij} + \frac{1}{2} (\delta Z_{ii} + \delta Z_{jj})\, a_{ij}$, is already free of ultra–violet divergencies. Hence, the second part of Eq.(\[eq:daij\]) has to be finite, too. We therefore may require for $i = 1$ and $j = 2$ $$\delta a^{(\tilde{\theta})}_{12} = (a_{22} - a_{11}) \delta \theta_{\tilde q} = -\left( \delta Z_{21} a_{22} + \delta Z_{12} a_{11}\right) . \label{eq:datheta}$$ One can easily see that $\Delta a_{ij}$ is then also finite for all $i,\,j$. The condition, Eq.(\[eq:datheta\]), means that the [*non–diagonal*]{} self–energy graphs Fig.1b and 1d cancel the counterterm $\delta a^{(\tilde{\theta})}_{12}$ in Eq.(\[eq:daij\]). Notice also that the total squark contribution $\Delta a^{(\tilde q)}_{ij}$ is zero. Other authors used the same basic idea but took, for instance, the condition analogous to Eq.(\[eq:datheta\]) for $\delta a^{(\tilde{\theta})}_{11}$ or $\delta a^{(\tilde{\theta})}_{22}$, see [@djouadi], or a similar condition valid at a point $Q^2$, see ref. [@beenakker]. The differences between these schemes are, however, numerically very small. Total QCD Correction in ${\cal O} (\alpha_s )$ ---------------------------------------------- The total QCD correction $\Delta\sigma$ to the cross section is $$\Delta\sigma = \Delta\sigma^{(g)} + \Delta\sigma^{(\tilde g )} \, ,$$ as $\Delta\sigma^{(\tilde q)} = 0$ in our renormalization scheme of the squark mixing angle. 
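The choice $\delta a^{(\tilde{\theta})}_{12}=(a_{22}-a_{11})\delta\theta_{\tilde q}$ used in the renormalization condition above is simply the first-order shift of $a_{12}(\theta_{\tilde q})$ under $\theta_{\tilde q}\to\theta_{\tilde q}+\delta\theta_{\tilde q}$. A quick symbolic check of the underlying identity $\partial a_{12}/\partial\theta_{\tilde q}=a_{22}-a_{11}$ (and of the corresponding pattern for the diagonal couplings) can be done with sympy; this is an illustrative verification only:

```python
import sympy as sp

theta, I3, sw2, eq = sp.symbols('theta I3 sw2 e_q', real=True)

a11 = 4*(I3*sp.cos(theta)**2 - sw2*eq)
a22 = 4*(I3*sp.sin(theta)**2 - sw2*eq)
a12 = -2*I3*sp.sin(2*theta)

# d a_12 / d theta equals a_22 - a_11, which is what makes
# delta a^(theta)_12 = (a_22 - a_11) * delta theta in Eq. (datheta)
print(sp.simplify(sp.diff(a12, theta) - (a22 - a11)))          # -> 0
# The diagonal couplings shift with the same pattern:
print(sp.simplify(sp.diff(a11, theta) - 2*a12),                # -> 0
      sp.simplify(sp.diff(a22, theta) + 2*a12))                # -> 0
```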
The gluon contribution factorizes: $$\sigma^{(g)} = \sigma^0 \left[ \frac{4}{3}\frac{\alpha_S}{\pi} \Delta_{ij}\right]\,,$$ where $\Delta_{ij}$ is given in ref. [@eberl]. The total gluino contribution is given by: $$\begin{aligned} \Delta\sigma^{(\tilde g)} &=& \frac{\pi\alpha^2}{s}\,\lambda^{3/2}_{ij}\, \Big\{ 2e_q (\Delta e_q )^{(\tilde g)}_{ij} + 2 T_{\!Z\!Z}^{}\, a_{ij} \Delta a_{ij}^{(\tilde g)} \nonumber \\ & & \hspace{20mm} - T_{\gamma Z}^{} \big[ e_q \delta_{ij} \Delta a^{(\tilde g)}_{ij} + (\Delta e_q)^{(\tilde g)}_{ij} a_{ij} \big] \Big\} \end{aligned}$$ with $$\begin{aligned} \Delta a_{ij}^{(\tilde g)} &=& \delta a_{ij}^{(v,\tilde g)} - \mbox{Re}\,\Big\{ {{\textstyle \frac{1}{2} }}\big[ \Sigma_{ii}'(m_{\tilde q_i}^2) +\Sigma_{jj}'(m_{\tilde q_j}^2) \big] a_{ij} + \frac{4}{3}\frac{\alpha_s}{\pi} \frac{m_{\tilde g} m_q}{m_{\tilde q_1}-m_{\tilde q_2}}\,\delta_{ij} \nonumber \\ & & \cdot\,\Big[ B^0(m_{\tilde q_i}^2,m_{\tilde g}^2,m_q^2) \, \big[ (-1)^{i+1}\, 2 a_{ii'} \cos 2\theta_{\tilde q} - a_{i'i'} \sin 2\theta_{\tilde q} \big] \nonumber \\ & & \quad\; +\,B^0(m_{\tilde q_i}^2,m_{\tilde g}^2,m_q^2)\, a_{ii} \sin 2\theta_{\tilde q} \Big] \Big\} \end{aligned}$$ $(i\neq i')$ and $\Delta (e_q )_{ij}^{(\tilde g)} = (\delta e_q )^{(v)}_{ij} + (\delta e_q)_{ij}^{(w)}$. Discussion ---------- First, we have calculated the SUSY–QCD corrections to the cross section of $e^+e^- \to \tilde t_1 \bar{\tilde t}_1$ in the LEP energy range $\sqrt{s}\leq 200$ GeV. We have found that, whereas the conventional QCD correction may be rather large, the gluino correction is only about 1% of the tree–level cross section, quite independent of $m_{\tilde{t}_1}$. The correction due to gluino exchange is, however, not negligible ($2-8\%$) in the energy range of a linear $e^+e^-$ collider ($\sqrt{s} = 500 - 2000$ GeV). The $\sqrt{s}$–dependence of the SUSY–QCD corrections to the cross section $\sigma (e^+e^- \to \tilde t_1 \bar{\tilde t}_1)$ is shown in Fig.2 for $m_{\tilde t_1}=100$ GeV, $m_{\tilde t_2}= 400$ GeV, $m_{\tilde g}=300$ GeV, and $\cos\theta_{\tilde{t}} = 1/\sqrt{2}$. The peak at $\sqrt{s}=350$ GeV is due to the $t\bar t$ threshold. In Fig.3 we show the $\cos\theta_{\tilde{t}}$ dependence of the corrections for this process at $\sqrt{s} = 500$ GeV for the same masses of the stops and the gluino as in Fig.2. Whereas the gluon correction has the same behaviour in $\theta_{\tilde{t}}$ as the tree–level cross section, the gluino correction is different. In Figs. 4 and 5 we exhibit the corrections to $\sigma (e^+e^- \to \tilde t_1 \bar{\tilde t}_2$) and $\sigma (e^+e^- \to \tilde t_2 \bar{\tilde t}_2$), respectively, at $\sqrt{s}=2$ TeV for $m_{\tilde t_1}=400$ GeV, $m_{\tilde t_2}=800$ GeV, $m_{\tilde g}=600$ GeV. The gluino contributions can go up to about $-10$%. Fig.6 shows the dependence on the gluino mass. It is interesting to notice that the gluino correction decreases very slowly with the gluino mass. 
(57,50) (0,-7) \ [ SUSY–QCD corrections\ $\delta\sigma^g/\sigma^{tree}$ and $\delta\sigma^{\tilde g}/\sigma^{tree}$ for $e^+ e^- \to \tilde t_1 \bar{\tilde t}_1$ as a function of $\sqrt{s}$ for $\cos\theta_{\tilde t} = 1/\sqrt{2}$, $m_{\tilde t_1} = 100$ GeV, $m_{\tilde t_2} = 400$ GeV, and $m_{\tilde g} = 300$ GeV.]{} (57,50) (0,-6) \ [ SUSY–QCD corrections\ $\delta\sigma^g/\sigma^{tree}$ and $\delta\sigma^{\tilde g}/\sigma^{tree}$ for $e^+ e^- \to \tilde t_1 \bar{\tilde t}_1$ as a function of $\cos\theta_{\tilde t}$ for $\sqrt{s} = 500$ GeV, $m_{\tilde t_1} = 100$ GeV, $m_{\tilde t_2} = 400$ GeV, and $m_{\tilde g} = 300$ GeV.]{} \ (57,50) (0,-7) \ [ SUSY–QCD corrections $\delta\sigma^g$ and $\delta\sigma^{\tilde g}$ as a function of $\cos\theta_{\tilde t}$ for $e^+ e^- \to \tilde t_1 \bar{\tilde t}_2$, $\sqrt{s}=2$ TeV, $m_{\tilde t_1}=400$ GeV, $m_{\tilde t_2}=800$ GeV, and $m_{\tilde g}=600$ GeV.]{} (57,50) (0,-6) \ [ SUSY–QCD corrections\ $\delta\sigma^g/\sigma^{tree}$ and $\delta\sigma^{\tilde g}/\sigma^{tree}$ for $e^+ e^- \to \tilde t_2 \bar{\tilde t}_2$ as a function of $\cos\theta_{\tilde t}$ for $\sqrt{s}=2$ TeV, $m_{\tilde t_1}=400$ GeV, $m_{\tilde t_2}=800$ GeV, and $m_{\tilde g} = 600$ GeV.]{} (65,53) (0,0) [ Dependence of the SUSY–QCD corrections $\delta\sigma^g/\sigma^{tree}$ and $\delta\sigma^{g+\tilde g}/\sigma^{tree}$ on the gluino mass for $e^+ e^- \to \tilde t_1 \bar{\tilde t}_1$, for $\sqrt{s} = 500$ GeV, $m_{\tilde t_1} = 100$ GeV, $m_{\tilde t_2} = 400$ GeV, $\cos\theta_{\tilde t} = 1/\sqrt{2}$.]{}\ Squark Decays into Charginos and Neutralinos ============================================ In the following we discuss the SUSY–QCD corrections for the decays: $$\begin{aligned} \tilde t_i \;\to\; b\,\tilde\chi^+_j , & & \tilde b_i \;\to\; t\,\tilde\chi^-_j , \\ \tilde t_i \;\to\; t\,\tilde\chi^0_k \,, & & \tilde b_i \;\to\; b\,\tilde\chi^0_k \,, \end{aligned}$$ with $i,j = 1,2$ and $k = 1\ldots 4$. The supersymmetric QCD corrections were calculated for $m_q = 0$ and $\tilde\chi^0_1$ being a photino in ref. [@hikasa], and taking into account squark mixing, quark masses (i.e. Yukawa couplings), and general gaugino–higgsino mixing of charginos and neutralinos in refs. [@djouadi] and [@kraml]. The decay width at tree–level for $\tilde t_i \to b \tilde\chi^+_j$ is given by: $$\begin{aligned} \Gamma^0 (\tilde{t}_i \to b \tilde\chi^+_j ) &=& \frac{g^2 \kappa (m^2_{\tilde t_i},m^2_b,m^2_{\tilde\chi^+_j})} {16\pi m^3_{\tilde t_i}} \nonumber \\ & & \cdot \left( \big[ (\ell^{\,\tilde t}_{ij})^2 + (k^{\tilde t}_{ij})^2 \big]\, X - 4\,\ell^{\,\tilde t}_{ij} k^{\tilde t}_{ij} m_b m_{\tilde\chi^+_j} \right) \end{aligned}$$ with $X = m^2_{\tilde t_i} - m^2_b - m^2_{\tilde\chi^+_j}$ and $\kappa(x,y,z) = [(x-y-z)^2 - 4 y z]^{1/2}$. The $\tilde{t}_i^*$-$b$-$\tilde\chi^+_j$ couplings $\ell^{\,\tilde t}_{ij}$ and $k^{\tilde t}_{ij}$ read, for instance, for $\tilde{t}_1 \to b \tilde\chi^+_j$: $$\begin{aligned} \ell^{\,\tilde t}_{1j} &=& - V_{j1} \cos\theta_{\tilde t} + \frac{m_t}{\sqrt{2}\,m_W\sin\beta}\, V_{j2}\sin\theta_{\tilde t}\,,\\ k^{\tilde t}_{1j} &=& \frac{m_b}{\sqrt{2}\,m_W\cos\beta}\, U_{j2}\cos\theta_{\tilde{t}} \,,\end{aligned}$$ where $U$ and $V$ are the matrices diagonalizing the charged gaugino–higgsino mass matrix [@haber]. 
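The tree-level width $\Gamma^0(\tilde t_1\to b\tilde\chi^+_j)$ just given is straightforward to evaluate once the chargino mixing matrices $U$ and $V$ are specified. The sketch below is illustrative only: the quark and $W$ masses, the coupling $g^2=e^2/\sin^2\theta_W$, and in particular the sample $U$, $V$ matrices are assumed placeholder values, not inputs used in the paper.

```python
import numpy as np

MW, MT, MB = 80.4, 175.0, 4.9        # illustrative masses in GeV
ALPHA, SW2 = 1.0/128.0, 0.23
G2 = 4.0*np.pi*ALPHA/SW2             # g^2 = e^2 / sin^2(theta_W)

def kappa(x, y, z):
    return np.sqrt(max((x - y - z)**2 - 4.0*y*z, 0.0))

def width_st1_to_b_chargino(mst1, mcha, costh, tanb, U, V):
    """Tree-level width of stop_1 -> b + chargino_1 from the formulas above.

    U, V are 2x2 chargino mixing matrices (only the first row is used);
    sin(theta_st) is taken positive for this illustration."""
    sinth = np.sqrt(1.0 - costh**2)
    beta = np.arctan(tanb)
    l1j = -V[0, 0]*costh + MT/(np.sqrt(2.0)*MW*np.sin(beta))*V[0, 1]*sinth
    k1j = MB/(np.sqrt(2.0)*MW*np.cos(beta))*U[0, 1]*costh
    X = mst1**2 - MB**2 - mcha**2
    kap = kappa(mst1**2, MB**2, mcha**2)
    return G2*kap/(16.0*np.pi*mst1**3) * ((l1j**2 + k1j**2)*X - 4.0*l1j*k1j*MB*mcha)

# Placeholder mixing matrices, set equal only for illustration (in general U != V)
U = V = np.array([[0.9, 0.44], [-0.44, 0.9]])
print(width_st1_to_b_chargino(mst1=250.0, mcha=100.0, costh=0.6, tanb=2.0, U=U, V=V), "GeV")
```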
The ${\cal O}(\alpha_s)$ SUSY–QCD corrected decay width can be written as: $$\Gamma = \Gamma^0 + \delta\Gamma^{(v)} + \delta\Gamma^{(w)} + \delta\Gamma^{(c)} + \delta\Gamma^{({\rm real\,gluon})} , \label{eq:gammacorr}$$ where the superscript $v$ again denotes the vertex correction (Figs.7a,b) and $w$ the wave–function correction (Figs.7c-g). $\delta\Gamma^{(c)}$ corresponds to the shift from the bare to the on–shell couplings, taking into account the renormalization of the quark mass and the squark mixing angle. $\delta\Gamma^{({\rm real\,gluon})}$ is the correction due to real gluon bremsstrahlung and cancels the infrared divergencies.\ (100,75) (5,0) (11,72)[(0,0)\[bl\][**a)**]{}]{} (65,72)[(0,0)\[bl\][**b)**]{}]{} (6,34)[(0,0)\[bl\][**c)**]{}]{} (6,14)[(0,0)\[bl\][**d)**]{}]{} (43,34)[(0,0)\[bl\][**e)**]{}]{} (43,14)[(0,0)\[bl\][**f)**]{}]{} (80,26)[(0,0)\[bl\][**g)**]{}]{} \ [ Vertex and wave-function corrections to squark decays into charginos and neutralinos. ]{}\ The procedure of the calculation is completely analogous to that discussed just before in section 2. The complete formulae for the different correction parts in Eq.(\[eq:gammacorr\]) are given in ref. [@kraml]. We want to note that, contrary to the production process $e^+e^- \to \tilde q_i^{}\bar{\tilde q}_j$, the corrections to the decay widths of $\tilde q_i \to q' \tilde\chi^\pm_j$ and $\tilde q_i^{} \to q \tilde\chi^0_k$ are different in the dimensional regularization and in the dimensional reduction scheme. (At first order the difference is finite.) This is because of the quark wave–function correction due to gluon exchange, Fig.7c. The quark self–energy corresponding to Fig.7c is given by: $$\Pi^{(g)} (k^2 ) = \frac{\alpha_s}{3\pi} \left[\, 2/\hspace{-1.8mm}k B^1 + 2 (/\hspace{-1.8mm}k - 2m_q) B^0 -r(/\hspace{-1.8mm}k - 2m_q) \right] \label{eq:pik}$$ with $B^n = B^n (k^2,\lambda^2,m^2_q)$ and the gluon mass $\lambda\to 0$. This leads to the quark wave–function renormalization constants due to gluon exchange $$\delta Z^{L(g)} = \delta Z^{R(g)} = -\frac{2}{3} \frac{\alpha_s}{\pi} \left[ B^0 + B^1 - 2m^2_q ({\dot B}^0 - {\dot B}^1) -\frac{r}{2}\,\right] \label{eq:zet}$$ with $B^n = B^n (m^2_q,\lambda^2,m^2_q)$, ${\dot B}^n = {\dot B}^n (m^2_q,\lambda^2,m^2_q)$. $\delta Z^L$ and $\delta Z^R$ are defined by the usual relation between the unrenormalized quark field $q^0$ and the renormalized one, $q^0 = (1 + \frac{1}{2}\delta Z^L P_{\!L}^{} + \frac{1}{2}\delta Z^R P_{\!R}^{})\,q$. Note the dependence on $r$ in Eqs. (\[eq:pik\]) and (\[eq:zet\]), where $r=0$ in the dimensional reduction and $r=1$ in the dimensional regularization scheme. Note, however, that there is no such difference for the squark self–energy graph due to gluon exchange. Numerical Results ----------------- Let us first discuss the decay $\tilde t_1 \to b\tilde\chi^+_1$, where we take $m_{\tilde\chi_1^+}=100$ GeV, $\tan\beta=2$, $m_{\tilde t_2}=600$ GeV, $m_{\tilde b_1}=450$ GeV, $m_{\tilde b_2}=470$ GeV, and $\cos\theta_{\tilde b}=-0.9$. We study three cases: $M \ll |\mu|$ ($M= 95$ GeV, $\mu=-800$ GeV), $M \sim |\mu|$ ($M=100$ GeV, $\mu=-100$ GeV), and $M \gg |\mu|$ ($M=300$ GeV, $\mu= -89$ GeV). We use the GUT relations: $M' \simeq 0.5$ M, $m_{\tilde g} \simeq 3.5$ M. In Fig.8 the dependence of the SUSY–QCD corrections on the stop mass is exhibited for $\cos\theta_{\tilde{t}} = 0.6$. Notice the pronounced dependence on the nature of the chargino. 
The corrections are largest ($\sim -25\%$), if the chargino is higgsino–like ($|\mu| \ll M$) due to the large top Yukawa coupling. If $\tilde\chi^+_1$ is gaugino–like ($M \ll |\mu|$) the corrections are between $+20\%$ and $-10\%$. In Fig.9 we show the SUSY–QCD corrected widths together with the tree–level widths as a function of $\cos\theta_{\tilde{t}}$ for $m_{\tilde t_1}=200$ GeV and the other parameters as in Fig.8. Again, the corrections are biggest in the case of a higgsino–like chargino. The behaviour of the $\cos\theta_{\tilde t}$ dependence reflects the fact that if $\tilde t_1 \sim \tilde t_R$ $(\cos\theta_{\tilde t}\sim 0)$ it strongly couples to the higgsino component of $\tilde\chi^+_1$, and if $\tilde t_1 \sim \tilde t_L$ ($\cos\theta_{\tilde t}\sim\pm1$) it strongly couples to the gaugino component. In Fig.10 we show $\delta\Gamma /\Gamma^0$ \[%\] as a function of $m_{\tilde t_1}$ for $\tilde t_1 \to t\tilde\chi^0_1$, taking $m_{\tilde\chi^0_1}=80$ GeV, $\tan\beta = 2$, $m_{\tilde t_2}=600$ GeV, and $\cos\theta_{\tilde t}=0.6$. Again we observe that if $\tilde\chi^0_1$ is higgsino–like ($|\mu| \ll M$) the corrections are about $-20\%$. We have also studied the dependence on the gluino mass. In Fig.11 we show a plot where $\delta\Gamma /\Gamma^0$ is exhibited for $\tilde t_1 \to b\tilde\chi^+_1$ and $\tilde t_1 \to t\chi^0_1$ as a function of $m_{\tilde g}$ for $m_{\tilde t_1}=300$ GeV, $\cos\theta_{\tilde t}=0.6$, $\tan\beta = 2$, and $\mu=-100$ and $-800$ GeV. $M$ is fixed by $M\simeq 0.3\,m_{\tilde g}$. Notice that the SUSY–QCD corrections are still important for $m_{\tilde g}\sim 1$ TeV and no decoupling of the gluino mass can be seen. This is also the case if we relax the condition $M \simeq 0.3\,m_{\tilde g}$ and keep the chargino (neutralino) mass fixed. 
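The gaugino-like versus higgsino-like character invoked above can be made explicit by diagonalising the tree-level $2\times2$ chargino mass matrix. The sketch below assumes the standard MSSM form $X=\bigl(\begin{smallmatrix}M & \sqrt{2}m_W\sin\beta\\ \sqrt{2}m_W\cos\beta & \mu\end{smallmatrix}\bigr)$ in Haber–Kane-type conventions (not spelled out in the text above) and uses a singular value decomposition as the bi-unitary diagonalisation; the "wino fraction" printed is only an illustrative measure of the composition of $\tilde\chi^+_1$ for the three $(M,\mu)$ scenarios considered in the figures.

```python
import numpy as np

MW = 80.4  # GeV, illustrative

def chargino_sector(M, mu, tanb):
    """Chargino masses and an illustrative wino fraction of the lighter state."""
    b = np.arctan(tanb)
    X = np.array([[M, np.sqrt(2.0)*MW*np.sin(b)],
                  [np.sqrt(2.0)*MW*np.cos(b), mu]])
    U, masses, Vh = np.linalg.svd(X)       # singular values = chargino masses
    order = np.argsort(masses)             # lightest first
    m1 = masses[order[0]]
    wino_frac = Vh[order[0], 0]**2         # conventions vary; illustrative only
    return m1, wino_frac

for M, mu in [(95.0, -800.0), (100.0, -100.0), (300.0, -89.0)]:
    m1, w = chargino_sector(M, mu, tanb=2.0)
    kind = "gaugino-like" if w > 0.5 else "higgsino-like"
    print(f"M={M:5.0f}  mu={mu:6.0f}  m_chi1={m1:6.1f} GeV  wino fraction={w:.2f}  ({kind})")
```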
[Figure 8 (axes: $\delta\Gamma/\Gamma^0$ \[%\] vs. $m_{\tilde t_1}$ \[GeV\]): SUSY–QCD corrections to the width of $\tilde t_1^{} \to b \tilde\chi_1^+$ as a function of $m_{\tilde t_1}$, for $m_{\tilde \chi_1^+}=100$ GeV, $\cos\theta_{\tilde t}=0.6$, $\tan\beta=2$, and various $(M,\mu)$ \[GeV\] values: $(95,-800)$, $(100,-100)$, $(300,-89)$.]

[Figure 9 (axes: $\Gamma$ \[GeV\] vs. $\cos\theta_{\tilde t_1}$): Tree–level (dashed lines) and SUSY–QCD corrected (solid lines) decay widths of $\tilde t_1^{} \to b \tilde\chi_1^+$ as a function of $\cos\theta_{\tilde t}$, for $m_{\tilde t_1}=200$ GeV, $m_{\tilde \chi_1^+}=100$ GeV, $\tan\beta=2$, and various $(M,\mu)$ \[GeV\] values: $(95,-800)$, $(100,-100)$, $(300,-89)$.]

[Figure 10 (axes: $\delta\Gamma/\Gamma^0$ \[%\] vs. $m_{\tilde t_1}$ \[GeV\]): SUSY–QCD corrections to the width of $\tilde t_1^{} \to t \tilde\chi_1^0$ as a function of $m_{\tilde t_1}$, for $m_{\tilde \chi_1^0}=80$ GeV, $\cos\theta_{\tilde t}=0.6$, $\tan\beta=2$, and various $(M,\mu)$ \[GeV\] values: $(158,-800)$, $(154,-150)$, $(300,-85)$.]

[Figure 11 (axes: $\delta\Gamma/\Gamma^0$ \[%\] vs. $m_{\tilde g}$ \[GeV\]): SUSY–QCD corrections to the widths of $\tilde t_1^{} \to b \tilde\chi_1^+$ (solid lines) and $\tilde t_1^{} \to t \tilde\chi_1^0$ (dash-dotted lines) as a function of $m_{\tilde g}$, for $m_{\tilde t_1}=300$ GeV, $\cos\theta_{\tilde t}=0.6$, $\tan\beta=2$, $M\sim 0.3\,m_{\tilde g}$, and $\mu=-800$ GeV and $\mu=-100$ GeV.]

Dimensional Reduction Technique =============================== The regularization by dimensional reduction was proposed by [@siegel]. It means that only the space–time dimensions (the coordinates $x^\mu$ and momenta $p^\mu$) are continued to $D = 4-\epsilon$ dimensions, whereas the vector fields and spinors remain four–dimensional. Following [@capper] it is convenient to write the four–dimensional vector field $V_\mu$ as $V_\mu = (V_i,\,V_\sigma)$, where $V_i$ is a $D$–dimensional vector, and $V_\sigma$ is $\epsilon$–dimensional behaving as $\epsilon$ scalars. Moreover, one has $\gamma^\mu = (\gamma^i,\gamma^\sigma)$. Note that $x^\mu = (x^i, 0)$, $\partial^\mu = (\partial^i, 0)$, and $p^\mu = (p^i, 0)$. As a consequence the lagrangian ${\cal L}$ can be decomposed as ${\cal L} = {\cal L}^{(D)} + {\cal L}^{(\epsilon)}$, where ${\cal L}^{(D)}$ is the lagrangian of the conventional dimensional regularization. Therefore, to each interaction term of a vector field there is a corresponding “$\epsilon$ scalar” interaction term, except for the vector–scalar–scalar interaction: with $\phi$ being a scalar field, this coupling involves only the derivative $\partial^\mu = (\partial^i, 0)$ and hence only the $D$–dimensional components of the vector field, so that no $\epsilon$ term arises. Therefore, in this case there is no difference between dimensional regularization and dimensional reduction. There is, however, a difference in the case of the interaction of a fermion with a vector field.
For instance, the fermion self–energy, Fig.7c, receives a contribution due to $\epsilon$ scalars in the loop of $\frac{\alpha_s}{3\pi}(/\hspace{-1.8mm}k-2m_q)$. This is just the expression which cancels the $r$–dependent term in Eq.(\[eq:pik\]) for $r=1$ in order to get the result of dimensional reduction ($r=0$). Thus at the one–loop level the “$\epsilon$–scalar” technique is equivalent to performing the algebra in the numerator of the integrand in four dimensions and making the integration in $D$ dimensions, or equivalently taking $D = 4 - r\epsilon$ with $r \to 0$, as we did in our calculations. Acknowledgements {#acknowledgements .unnumbered} ================ We are very grateful to Prof. J. Solà for the invitation to this interesting workshop. We also appreciated very much the smooth organization. In particular, we enjoyed the intimate and inspiring character of this workshop. This work was supported by the “Fonds zur Förderung der wissenschaftlichen Forschung” of Austria, project no. P10843–PHY. References {#references .unnumbered} ========== [99]{} M. Drees, K. Hikasa, [*Phys. Lett.*]{} B [**252**]{} (1990) 127; W. Beenakker, R. Höpker, P. M. Zerwas, [*Phys. Lett.*]{} B [**349**]{} (1995) 463. H. Eberl, A. Bartl, W. Majerotto, [*Nucl. Phys.*]{} B [**472**]{} (1996) 481. A. Arhrib, M. Capdequi-Peyranere, A. Djouadi, [*Phys. Rev.*]{} D [**52**]{} (1995) 1404. A. Djouadi, W. Hollik, C. Jünger, [*Phys. Rev.*]{} D [**54**]{} (1996) 5629; [*Phys. Rev.*]{} D [**55**]{} (1997) 6975. S. Kraml, H. Eberl, A. Bartl, W. Majerotto, W. Porod, [*Phys. Lett.*]{} B [**386**]{} (1996) 175. W. Siegel, [*Phys. Lett.*]{} B [**84**]{} (1979) 193. D.M. Capper, D.R.T. Jones, P. van Nieuwenhuizen, [*Nucl. Phys.*]{} B [**167**]{} (1980) 479; I. Jack, D.R.T. Jones, hep-ph/9707278. A. Denner, [*Fortschr. Phys.*]{} [**41**]{} (1993) 307. W. Beenakker, R. Höpker, T. Plehn, P.M. Zerwas, [*DESY*]{} [**96–178**]{}. K. Hikasa, Y. Nakamura, [*Z. Phys.*]{} C [**70**]{} (1996) 139. H.E. Haber, G.L. Kane, [*Phys. Rep.*]{} [**117**]{} (1985) 75; A. Bartl, H. Fraas, W. Majerotto, B. Mößlacher, [*Z. Phys.*]{} C [**55**]{} (1992) 257.
{ "pile_set_name": "ArXiv" }
--- abstract: 'Extended spectroscopic datasets of several late-B stars of luminosity class Ia revealed the presence of similar peculiarities in their H$\alpha$ profiles, which might be interpreted as indications of deviation from the spherically symmetric, smooth wind approximation. Surface structures due to non-radial pulsations or weak, large-scale, dipole magnetic fields might be responsible for creating wind structure in the envelopes of these stars.' author: - Nevena Markova - Haralambi Markov title: 'Wind structure in late-B supergiants' --- Introduction {#introduction .unnumbered} ============ The key limiting assumptions incorporated within current hot star model atmospheres include a globally stationary and spherically symmetric stellar wind with a smooth density stratification. Although these models are generally quite successful in describing the overall wind properties, there are numerous observational and theoretical studies which indicate that hot star winds are certainly not smooth and stationary. Most of the time-dependent constraints refer, however, to O-stars and early B supergiants (SGs), while mid- and late-B candidates are currently under-represented in the sample of stars investigated to date. Indeed, theoretical predictions supported by observational results (Markova and Puls [@MP]) indicate that while winds in late-B SGs are significantly weaker than those in O SGs, there is no currently established reason to believe that weaker winds might be less structured than stronger ones. Results and discussion ====================== A long-term monitoring campaign of several late-B SGs, namely HD 199478 (Markova and Valchev [@MV], Markova et al. [@markova08]), HD 91619, HD 43085 and HD 96919 (Kaufer et al. [@kaufer96a; @kaufer96b; @Kaufer97], Israelian et al. [@Israel97]) revealed the presence of photometric and wind variability of quite similar signatures in their spectra. In particular, the wind variability, as traced by H$\alpha$, is characterised by extremely strong, double-peaked emission with V/R variations and occasional episodes of strong absorption with blue- and red-shifted features indicating simultaneous mass infall and outflow. (A typical example of such behaviour is given in Figure \[hd199478\]). Such line signatures cannot be reproduced in terms of the conventional (i.e. non-rotating, spherically symmetric, smooth) wind models, which instead predict profiles in absorption partly filled in by emission for SGs at this temperature regime (Markova et al. [@markova08]). Subsequently, axially symmetric, disc-like envelopes (Kaufer et al. [@kaufer96a], Markova and Valchev [@MV]) and episodic, azimuthally extended, density enhancements in the form of co-rotating spirals rooted in the photosphere (Kaufer et al. [@kaufer96b]) or closed magnetic loops similar to those in our Sun (Israelian et al. [@Israel97]) have been assumed to account for the peculiar behaviour of H$\alpha$ in these stars. In general, there are at least three possible ways to break the spherically symmetric wind geometry and create large-scale wind structure around hot stars: by fast rotation, by surface structures and by large-scale, dipole magnetic fields. #### Wind structure due to fast rotation Model calculations from the early 1990s (e.g.
Bjorkman and Cassinelli [@BC93]) showed that $if$ the rotational rate of a hot star is above a given threshold determined by the ratio of its terminal wind velocity to the escape velocity, stellar rotation might converge the radiatively driven wind flow towards the equator, creating a dense equatorial disc. However, observations indicate that even in fast-rotating Be stars this requirement is not fulfilled. In addition, our stars are not fast rotators: their rotational speeds are a factor of 3 to 5 lower than the corresponding critical values. Thus, the fast rotation hypothesis can be rejected as a possible cause for wind structures in late-B SGs. #### Surface structures Non-radial pulsations (NRPs) and magnetic fields might equally be responsible for driving the stellar surface into regions of different properties (Fullerton et al. [@Full96]). Results of 2D hydrodynamical simulations (Cranmer and Owocki [@CO96]) showed that “bright/dark" spots on the stellar surface can effectively enhance/reduce the radiative driving, leading to the formation of high/low-density, low/high-speed streams. Consequently, a specific wind structure, called Corotating Interaction Region (CIR) structure, forms where fast material collides with slow material giving rise to travelling features in various line diagnostics (e.g. Discrete Absorption Components in UV resonance lines of O stars, see e.g. Kaper et al. [@kaper96]). The CIR scenario for the case of a “bright" surface spot in a rotating O star is schematically illustrated in Figure \[CIR\]. Concerning the four late-B SGs considered here, non-radial pulsations due to $g$-mode oscillations have been suggested to explain absorption $lpv$ in their spectra (Kaufer et al. [@Kaufer97], Markova and Valchev [@MV], Markova et al. [@markova08]). This possibility is partially supported by results from recent quantitative spectral analyses, which indicate that on the HR diagram, and for parameters derived with FASTWIND (Puls et al. [@P05]), these stars fall exactly in the region occupied by known variable B SGs, for which $g$-mode instability was suggested (Markova et al. [@markova08]). Also, the photometric behaviour of some of our targets (e.g. HD 199478, Percy et al. [@PAM]) seems to be consistent with a possible origin in terms of $g$-mode oscillations. Thus, it seems very likely that these stars are non-radial pulsators and therefore may create, at least theoretically, wind structures via the CIR scenario described above. This possibility, however, has to be proven observationally. In this respect, we note that no clear evidence of any causality between photospheric and wind (as traced by H$\alpha$) variability has been derived so far for any of our targets (Kaufer et al. [@Kaufer97], Markova et al. [@markova08]). Also, the variability patterns observed in their H$\alpha$ profiles do not give any evidence of migrating red-to-blue features, such as those expected to originate from a CIR structure. #### Dipole magnetic fields The possibility that magnetic fields can be responsible for the appearance of large-scale wind structures in hot stars has been supported by recent magneto-hydrodynamical (MHD) simulations. Early results derived via such simulations (Babel and Montmerle [@Babel97], Donati et al. [@Donati01]) indicated that a co-rotating, equatorial disc can be created around [*non-rotating*]{}, hot, main sequence stars due to a relatively weak bipolar magnetic field (of about several kGauss).
In this model, called the Magnetically Confined Wind Shock (MCWS) model, supersonic wind-streams from the two hemispheres are magnetically confined and directed towards the magnetic equatorial plane, where they collide and produce a strong shock giving rise to X-ray emission. The MCWS model has been questioned by more recent simulations (ud-Doula and Owocki [@DO02]) which showed that without any rotational support the material trapped within the magnetic loops would simply fall back along the field line to the loop foot-point, i.e. an infall of material in the form of dense knots, rather than an equatorial disc, would be generated. Additional MHD simulations for $rotating$ hot stars with a magnetic dipole aligned to the stellar rotation axis furthermore indicated that depending on the magnetic spin-up an equatorial compression dominated by radial infall and/or outflows, with no apparent tendency to form a steady, Keplerian disc, might be created (Owocki and ud-Doula [@OD03], ud-Doula et al. [@DOT08]). Due to their radiative envelopes, normal (i.e. without any chemical peculiarities) B stars are expected to be non-magnetic objects. Nonetheless, during the last decade a growing body of direct observational evidence has been derived which indicates that relatively strong, stable, large-scale dipole magnetic fields are indeed present in some B stars (e.g. SPB, Be, $\beta$ Cep) (Henrichs et al. [@Henrichs00], Neiner et al. [@Neiner01], Bychkov et al. [@Bychkov], Hubrig et al. [@Hurbig05; @Hurbig07]). From the above it appears that in at least some hot stars magnetic fields can be an alternative source of wind perturbations and asymmetries. And although the four late-B SGs discussed here have not been recognised so far as magnetically active stars (except for HD 34085, where a magnetic field of about 130$\pm$20 G was detected by Severny [@severny]), the potential role of magnetic fields in these stars remains intriguing, especially because it might provide a clue to understanding the puzzling problem of the simultaneous presence of red- and blue-shifted absorptions/emissions in their H$\alpha$ profiles. To test this possibility new MHD simulations for the case of mid/late B SGs have recently been initiated. The preliminary results (private communication, Asif ud-Doula) indicate that a pure dipole magnetic field of only a few tens of Gauss is required to obtain a $cool$ equatorial compression (with mass infall and outflow) around a rotating star with stellar and wind properties as derived with FASTWIND for HD 199478 (Markova and Puls [@MP]). Interestingly, a few hundred [*ksec*]{} after the onset of the magnetic field, the obtained density stratification in this late-B SG model turned out to be qualitatively similar to that obtained for models with stellar and wind parameters typical for O stars and early B SGs (see Figure \[magn\_field\]). An obvious advantage of the model described above is that it allows one to interpret, at least qualitatively, some of the peculiar characteristics of H$\alpha$ in our targets. In particular, the presence of red/blue-shifted absorptions might be explained if one assumes that, for some reason, the plasma in the infalling or outflowing zones of the compression or in both of them (during the High Velocity Absorption episodes) can become optically thick in the $Lyman$ continuum and L$_{\alpha}$. Then H$\alpha$ will start to behave as a resonance line, i.e. to absorb and emit line photons (for more details see Markova et al. [@markova08]).
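The "few tens of Gauss" scale quoted above can be put in rough context with the wind magnetic confinement parameter $\eta_*=B_{\rm eq}^2R_*^2/(\dot M v_\infty)$ of ud-Doula and Owocki [@DO02], for which confinement sets in at $\eta_*\gtrsim1$. The sketch below is illustrative only: the mass-loss rate, terminal velocity and radius are representative late-B Ia supergiant values assumed here, not the FASTWIND-derived parameters of HD 199478 cited above. For such weak winds, fields of order ten Gauss already give $\eta_*\gtrsim1$, consistent with the statement that only a few tens of Gauss are needed to produce a confined equatorial compression.

```python
import numpy as np

# CGS constants and solar units
MSUN_G, RSUN_CM, YEAR_S = 1.989e33, 6.957e10, 3.156e7

def b_for_confinement(mdot_msun_yr, vinf_kms, rstar_rsun, eta_star=1.0):
    """Equatorial surface field [G] giving a wind magnetic confinement
    parameter eta* = B_eq^2 R_*^2 / (Mdot v_inf)."""
    mdot = mdot_msun_yr * MSUN_G / YEAR_S      # g/s
    vinf = vinf_kms * 1.0e5                    # cm/s
    rstar = rstar_rsun * RSUN_CM               # cm
    return np.sqrt(eta_star * mdot * vinf) / rstar

# Representative (assumed) late-B Ia supergiant wind parameters
print(f"B(eta*=1) ~ {b_for_confinement(2.0e-7, 230.0, 40.0):.1f} G")
```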
The kinematic properties of the resulting absorption features are difficult to predict from simple qualitative considerations, but it is clear in advance that these properties cannot be dominated by stellar rotation (Townsend and Owocki [@TO05]). Concerning the interpretation of the peculiar H$\alpha$ emission, the situation is more complicated since such emission can originate from different parts of the envelope, under quite different physical conditions. More detailed quantitative analysis is required to check all possibilities and investigate them further. [*Acknowledgements:*]{} This work was in part supported by the National Scientific Foundation of the Bulgarian Ministry of Education and Science (F-1407/2004). Babel, J., Montmerle, T. 1997, [*A&A 323, 121*]{} Bychkov, V. D., Bychkova, L. V., Madej, J. 2003, [*A&A 407, 631*]{} Bjorkman, J. E., Cassinelli, J. P. 1993, [*ApJ 409, 429*]{} Cranmer, S. and Owocki, S. 1996, [*ApJ 462, 469*]{} Donati, J.-F., Wade, G. A., Babel, J. et al. 2001, [*MNRAS 326, 1265*]{} Fullerton, A., Gies, D. R., Bolton, C. T. 1996, [*ApJS 103, 475*]{} Henrichs, H. F., de Jong, J. A., Donati, J.-F. et al. 2000, [*ASP Conf. Ser. 214, 324*]{} Hubrig, S., Szeifert, T., North, P. 2005, [*ASPC 337, 236*]{} Hubrig, S., Briquet, M., Schöller, M. et al. 2007, [*ASPC 361, 434*]{} Israelian, G., Chentsov, E. & Musaev, E. 1997, [*MNRAS 290, 521*]{} Kaper, L., Henrichs, H. F., Nichols, J. S. et al. 1996, [*A&AS 116, 257*]{} Kaufer, A., Stahl, O., Wolf, B. et al. 1996a, [*A&A 305, 887*]{} Kaufer, A., Stahl, O., Wolf, B. et al. 1996b, [*A&A 314, 599*]{} Kaufer, A., Stahl, O., Wolf, B. et al. 1997, [*A&A 320, 237*]{} Markova, N. & Valchev, T. 2000, [*A&A 363, 995*]{} Markova, N., Prinja, R., Markov, H. et al. 2008, [*A&A 487, 211*]{} Markova, N., Puls, J. 2008, [*A&A 478, 823*]{} Neiner, C., Henrichs, H. F., Hubert, A.-M. 2001, [*ASP Conf. Ser. 248, 419*]{} Owocki, S., ud-Doula, A. 2003, [*ASP Conf. Ser. 305, 350*]{} Percy, J., Palaniappan, R., Seneviratne, R. et al. 2008, [*PASP 120, 311*]{} Puls, J., Urbaneja, M. A., Venero, R. et al. 2005, [*A&A 435, 669*]{} Severny, A. 1970, [*ApJ 159, L73*]{} Townsend, R. H. D., Owocki, S. 2005, [*MNRAS 357, 251*]{} ud-Doula, A., Owocki, S. 2002, [*ApJ 576, 413*]{} ud-Doula, A., Owocki, S., Townsend, R. H. D. 2008, [*MNRAS 385, 97*]{}
{ "pile_set_name": "ArXiv" }
--- abstract: 'In this paper we formulate the problem of packing unequal rectangles/squares into a fixed size circular container as a mixed-integer nonlinear program. Here we pack rectangles so as to maximise some objective (e.g. maximise the number of rectangles packed or maximise the total area of the rectangles packed). We show how we can eliminate a nonlinear maximisation term that arises in one of the constraints in our formulation. We indicate the amendments that can be made to the formulation for the special case where we are maximising the number of squares packed. A formulation space search heuristic is presented and computational results given for publicly available test problems involving up to 30 rectangles/squares. Our heuristic deals with the case where the rectangles are of fixed orientation (so cannot be rotated) and with the case where the rectangles can be rotated through ninety degrees.' author: - 'C.O. López[^1]' - 'J.E. Beasley[^2]' date: 'October 2017, Revised February 2018' title: Packing unequal rectangles and squares in a fixed size circular container using formulation space search --- *Keywords:* Formulation space search; Mixed-integer nonlinear program; Rectangle packing; Square packing Introduction ============ In this paper we consider the problem of packing non-identical rectangles (i.e. rectangles of different sizes) into a fixed size circular container. Since the circular container may not be large enough to accommodate all of the rectangles available to be packed there exists an element of choice in the problem. In other words we have to decide which of the rectangles will be packed, and moreover for those that are packed their positions within the container. The packing should respect the obvious constraints, namely that the packed rectangles do not overlap with each other and that each packed rectangle is entirely within the container. This packing should be such as to maximise an appropriate objective (e.g. maximise the number of rectangles packed or maximise the total area of the rectangles packed). To illustrate the problem suppose we have ten rectangles with sizes as shown in Table \[table1\] to be packed into a fixed size circular container. The rectangles shown in Table \[table1\] have been ordered into ascending area order.

  Rectangle   Length   Width
  ----------- -------- -------
  1           1.10     1.61
  2           2.20     1.08
  3           1.68     1.46
  4           1.82     2.61
  5           2.70     2.57
  6           3.21     2.21
  7           2.99     3.51
  8           3.68     3.42
  9           4.62     3.36
  10          3.79     4.79

  : Rectangle packing example, circular container radius 4.18[]{data-label="table1"}

Regarding the rectangles as being of fixed orientation, i.e. they cannot be rotated, then:

- If we wish to maximise the number of rectangles packed Figure \[fig1\] shows the solution as derived by the approach presented in this paper. In that figure we can see that seven of the ten rectangles have been packed, three rectangles are left unpacked.
- If we wish to maximise the total area of the rectangles packed Figure \[fig2\] shows the solution as derived by the approach presented in this paper. In that figure we can see that five of the ten rectangles have been packed.
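Solutions of this kind are easy to verify independently: an axis-aligned rectangle of length $l$ and width $w$ centred at $(x,y)$ lies inside a circle of radius $R$ centred at the origin if and only if its farthest corner does, and two packed rectangles overlap if and only if they overlap in both coordinates. The sketch below (illustrative only; the centre coordinates used in the demonstration are made-up placeholders, not the solutions reported in the paper) checks these two conditions for the Table \[table1\] data.

```python
from math import hypot

# (length, width) of the ten rectangles of Table 1; container radius 4.18
RECTS = {1: (1.10, 1.61), 2: (2.20, 1.08), 3: (1.68, 1.46), 4: (1.82, 2.61),
         5: (2.70, 2.57), 6: (3.21, 2.21), 7: (2.99, 3.51), 8: (3.68, 3.42),
         9: (4.62, 3.36), 10: (3.79, 4.79)}
RADIUS = 4.18

def inside_circle(x, y, l, w, r=RADIUS, tol=1e-9):
    """The farthest corner (hence every corner) lies within the circle."""
    return hypot(abs(x) + l/2.0, abs(y) + w/2.0) <= r + tol

def overlap(c1, d1, c2, d2, tol=1e-9):
    """True if two axis-aligned rectangles (centre c, dims d) overlap."""
    (x1, y1), (l1, w1) = c1, d1
    (x2, y2), (l2, w2) = c2, d2
    return (abs(x1 - x2) < (l1 + l2)/2.0 - tol and
            abs(y1 - y2) < (w1 + w2)/2.0 - tol)

def feasible(packing):
    """packing: dict rectangle-id -> (x, y) centre of each packed rectangle."""
    ids = sorted(packing)
    ok = all(inside_circle(*packing[i], *RECTS[i]) for i in ids)
    ok &= not any(overlap(packing[i], RECTS[i], packing[j], RECTS[j])
                  for a, i in enumerate(ids) for j in ids[a+1:])
    return ok

# Made-up toy packing of two small rectangles, just to exercise the checker
print(feasible({1: (0.0, 0.0), 2: (2.0, 1.5)}))
```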
[Figure 1 (fixed orientation, maximising the number of rectangles packed): rectangles 1–7 packed inside the circular container; rectangles 8–10 left unpacked.]

[Figure 2 (fixed orientation, maximising the total area packed): rectangles 2, 3, 4, 8 and 9 packed inside the circular container.]

If the rectangles can be rotated through ninety degrees then:

- If we wish to maximise the number of rectangles packed Figure \[fig1a\] shows the solution as derived by the approach presented in this paper. In that figure we can see that seven of the ten rectangles have been packed, three rectangles are left unpacked.
- If we wish to maximise the total area of the rectangles packed Figure \[fig2a\] shows the solution as derived by the approach presented in this paper. In that figure we can see that seven of the ten rectangles have been packed.

In Figure \[fig1a\] and Figure \[fig2a\] the letter r after the rectangle number indicates that the rectangle has been rotated through ninety degrees. Comparing Figure \[fig1\] and Figure \[fig1a\] we can see that they both involve the packing of seven rectangles.
Whilst allowing rotation through ninety degrees allows the possibility of a better solution as compared with the no rotation case, this is by no means assured. Comparing Figure \[fig2\] and Figure \[fig2a\] we can see that in this particular case an improvement in the total area of the rectangles packed has been made by making use of rotation.

[Figure \[fig1a\]: maximising the number of rectangles packed, rotation allowed (rectangles 1, 3, 5, 6, 7, 2r, 4r packed). Figure \[fig2a\]: maximising the total area of the rectangles packed, rotation allowed (rectangles 2, 6, 8, 1r, 3r, 4r, 5r packed).]

The structure of this paper is as follows. In Section \[Sec:Lit\] we review the literature relating to the packing of rectangles.
We discuss application areas where rectangle packing problems arise. We also review the literature relating to the particular metaheuristic, formulation space search, used in this paper. In Section \[Sec:Formulation\] we formulate the problem of packing unequal rectangles/squares into a fixed size circular container as a mixed-integer nonlinear program. We show how we can eliminate a nonlinear maximisation term that arises in one of the constraints in our formulation. We also show how we can deal with the case where rectangles can be rotated through ninety degrees. We indicate the amendments that can be made to the formulation for the special case where we are maximising the number of squares packed. Section \[Sec:FSS\] gives details of the formulation space search heuristic that we use to solve the problem. Computational results are presented in Section \[Sec:Results\] for problems involving up to 30 rectangles/squares. In that section we give results both for maximising the number of rectangles/squares packed and for maximising the total area of the rectangles/squares packed. Finally in Section \[Sec:Conclusions\] we present our conclusions. Literature survey {#Sec:Lit} ================= In this section we first discuss the literature relating to the problem of packing rectangles and its applications. We then discuss the literature relating to the particular metaheuristic, formulation space search, we use to solve the rectangle packing problem considered in this paper. Rectangle packing ----------------- The majority of the work in the literature related to rectangle packing deals with packing rectangles/squares within a larger container that is either a square, or a rectangle, or a rectangular strip with one dimension fixed and the other dimension variable (e.g. fixed width, but variable length). A common feature of such work is that it is assumed that all of the smaller rectangles have to be packed into the larger container, which leads to an optimisation problem relating to minimising the dimension of the container. For example for a square container a natural optimisation problem is to minimise the side of the square container (which also minimises its perimeter and area). For a rectangular container one can examine minimising either its perimeter or its area. For a rectangular strip one can minimise the variable dimension. With respect to the packing of rectangles within a circular container then the natural optimisation problem is to minimise the radius of the container. In our literature survey below we focus principally on papers that take a packing approach. The reader may be aware that a closely related problem to packing is cutting e.g. cutting rectangles from a larger stock rectangle. There has been a substantial amount of work presented in the literature dealing with cutting. However much of that work involves additional restrictions with regard to the cuts that are made. One such restriction might be that the cuts are guillotine cuts, a guillotine cut on a rectangle being a cut from one edge of the rectangle to the opposite edge which is parallel to the two remaining edges. Another such restriction might be to limit the cutting to a number of stages, where at each stage guillotine cuts are made, but in a direction opposite to that adopted in the previous stage. So for example in the first stage guillotine cuts are made parallel to the $y$-axis, then in the second stage guillotine cuts are made parallel to the $x$-axis, etc. 
Since the primary focus of the work presented in this paper is packing rectangles within a *circular container* we, for space reasons, exclude detailed consideration of work focused on cutting rectangles from rectangular containers from the literature survey presented below. Unless otherwise stated all of the work considered below deals with orthogonal packing, so rectangles/squares are packed without rotation. Li and Cheng [@Li89] show that the problem of determining whether a set of squares can be packed into a larger rectangle is strongly NP-complete. In addition they show that the problem of determining whether a set of rectangles can be packed into a square is NP-complete. Leung et al. [@Leung1990] show that the problem of determining whether a set of squares can be packed into a square is strongly NP-complete. Picouleau [@Picouleau1996] considered the worst-case analysis of three fast heuristics for packing squares into a square container so as to minimise the size of the square. Murata et al. [@Murata1996] present a simulated annealing algorithm for the problem for packing rectangles into a rectangular container so as to minimise the size (area) of the container. Liu and Teng [@Liu1999] present a genetic algorithm for the problem of packing a set of rectangles into a strip of fixed width using minimum height. Wu et al. [@Wu2002] present a heuristic attempting to pack every member of a set of rectangles inside a fixed size rectangular container. Caprara et al. [@Caprara2006] discuss absolute worst-case performance ratios for lower bounds on packing rectangles/squares into a square container so as to minimise the size of the square container. They consider the case where the rectangles have fixed orientation and the case where they can be rotated through ninety degrees. Huang et al. [@Huang2007] present a heuristic approach to packing rectangles within a fixed size rectangular container so as to maximise the total area of the rectangles packed where the rectangles can be rotated through ninety degrees. Birgin et al. [@Birgin2010a] consider packing the maximal number of identically sized rectangles inside a rectangular container. Their approach is based upon recursive partitioning and allows the rectangles to be rotated through ninety degrees. Korf et al. [@Korf2010] consider the problem of packing a set of rectangles (with and without ninety degree rotation allowed) in a rectangular container of minimal area. They adopt a constraint satisfaction approach to the problem. Maag et al. [@Maag2010] consider the problem of packing a set of rectangles in a rectangular container of minimal area. Their approach is based on relaxing the constraint on rectangle overlap. Huang and Korf [@Huang2012] consider the same problem as [@Korf2010] but adopt an approach based on first deciding $x$-coordinate values for each rectangle. Bortfeldt [@Bortfeldt2013] presents a number of heuristic approaches (based on solution methods for two-dimension knapsack and two-dimension strip packing) for packing rectangles into a rectangular container so as to minimise the size (area) of the container. Martello and Monaci [@Martello2015] consider the problem of packing rectangles/squares into a square container so as to minimise the size of the container. They present a linear integer programming formulation and an exact approach based on a two-dimensional packing algorithm as well as a metaheuristic. 
They deal with the case where the rectangles have fixed orientation and also the case where they can be rotated through ninety degrees. Delorme et al. [@Delorme2017] present a Benders’ decomposition approach to the problem of packing a set of rectangles (with ninety degree rotation allowed) into a strip of fixed width using minimum height. Their approach (as they discuss) can be easily applied to the problem of packing rectangles/squares into a square container of minimal size.

It is important to note here that a number of the approaches given in the literature for the problem of packing rectangles within a rectangular container utilise the fact that rectangle position coordinates can be taken from a finite discrete set (e.g. by packing rectangles so that they are positioned at their lowest bottom-left position). For example see [@Delorme2017; @Martello2015]. However in this paper we consider a circular container, and *the lack of rectangular sides to the container renders such discretisation approaches invalid for the problem we consider*.

As far as we are aware the problem considered in this paper of packing unequal rectangles/squares into a circular container has only been considered by a few papers in the literature previously. Li et al. [@Li2014] consider the problem of packing orthogonal unequal rectangles in a circular container with an additional constraint related to mass balance. Their objective function is to minimise the radius of the container. A heuristic algorithm is presented. Hinostroza et al. [@Hinostroza2013] consider the problem of cutting rectangular boards from a log, regarded as a circular container. They present a nonlinear formulation of the problem (based on [@Birgin2006]), and two heuristics, one based on ordering the rectangles and the other on simulated annealing. Note here that, in our judgement, their formulation is flawed.

Work has been presented in the literature relating to packing rectangles/squares into arbitrary convex regions, and such work can be applied to a circular container. We discuss this work below. Birgin et al. [@Birgin2006a] introduce the concept of sentinel sets, which are finite subsets of the items to be packed such that, when two items are superposed, at least one sentinel of one item is in the interior of the other item. Using these sentinel sets they consider packing identical rectangles within both convex regions and a rectangular container, with and without rectangle rotation (both ninety degree rotation and arbitrary rotation). Birgin et al. [@Birgin2006] consider packing rectangles (with and without ninety degree rotation). Their objective is to feasibly pack all rectangles. Iteratively increasing the number of rectangles enables one to maximise the number of (identical) rectangles placed. Their approach is based on nonlinear optimisation. Birgin and Lobato [@Birgin2010] consider packing identical rectangles within an arbitrary convex region where a common rotation of $\theta$ degrees (not restricted to $\theta=90$) of all the rectangles is allowed. In addition a rectangle can be rotated through ninety degrees before a rotation of $\theta$ is applied. Their solution method is a combination of branch and bound and active-set strategies for bound-constrained minimization of smooth functions.
Cassioli and Locatelli [@Cassioli2011] present a heuristic approach based on iterated local search for the problem of packing the maximum number of rectangles of the same size within a convex region (where rectangle rotation through ninety degrees is allowed). Andrade and Birgin [@Andrade2013] present symmetry breaking constraints for two problems relating to packing identical rectangles (with or without ninety degree rotation) in a polyhedron. They consider packing as many identical rectangles as possible within a given polyhedron as well as finding the smallest polyhedron of a specified type that accommodates a fixed number of identical rectangles. More generally Birgin [@Birgin2016] considers the application of nonlinear programming in packing problems. They note that nonlinear programming formulations and methods have been successfully applied to a wide range of packing problems. In particular we in this paper, as in the formulation presented below, use a nonlinear model. Applications ------------ The problem of packing rectangular objects into a larger container (equivalently cutting rectangular objects from a larger container) appears in a number of practical situations. As noted in Dowsland and Dowsland [@Dowsland1992] the earliest applications were in glass and metal industries where smaller rectangular objects had to be cut from larger (typically rectangular) stock pieces. A further application they discuss occurs in pallet loading where rectangular boxes have to be packed onto a wooden pallet for transport. Sweeney and Paternoster [@Sweeney1992] present an application-orientated research bibliography that lists some of the early work related to packing. Lodi et al. [@lodi2002] present a literature survey relating to two-dimensional packing and solution approaches. They mention a number of practical applications relating to rectangle cutting/packing. These include the arrangement of articles and advertisements on newspaper pages and in the wood and glass industry cutting rectangular items from larger sheets of material. They also mention the placement of goods on shelves in warehouses. Wascher et al. [@wascher2007] also mention some practical applications (such as pallet loading) in their work presenting a typology of cutting and packing problems. In relation to the specific problem considered in this paper of packing unequal rectangles/squares into a fixed size circular container we are aware of a number of practical applications. For example in the forestry/lumber industry consider the cutting of rectangular wooden boards from timber logs made from trees that have been felled. Here, by approximating the shape of the timber log by a circle of known radius, we have the problem considered in this paper, namely which of the rectangles (of known sizes) that we desire to cut should be cut from the circular log [@Hinostroza2013]. A further practical example relates to the problem considered in [@Li2014] which was concerned with packing orthogonal unequal rectangles in a circular container with an additional constraint related to mass balance. Here the container was a satellite and the rectangular objects related to items comprising the satellite payload. The mass balance constraint considered in [@Li2014] was a single nonlinear constraint that involved the (mass weighted) centres of each rectangle. 
Since, as will become apparent below, our formulation space search approach for packing rectangles into a fixed size circular container is based on a mixed-integer nonlinear program it is trivial to introduce into our approach a single additional nonlinear constraint (such as a mass balance constraint). Formulation space search ------------------------ When solving nonlinear non-convex problems with the aid of a solver, Mladenović et al. [@Mladenovic2005] observed that different formulations of the same problem may have different characteristics. Hence a natural way to proceed is by swapping between formulations. Under this framework Mladenović et al. [@Mladenovic2005] use formulation space search (henceforth FSS) for the circle packing problem considering two formulations of the problem: one in a Cartesian coordinate system, the other in a Polar coordinate system. Their algorithm solves the problem with one formulation at a time and when the solution is the same for all formulations the algorithm terminates. They consider packing identical circles into the unit circle and the unit square. In Mladenović et al. [@Mladenovic2007] they improve on [@Mladenovic2005] by considering a mixed formulation of the problem. They set a subset of the circles in the Cartesian system whilst the rest of the circles were in the Polar system. López and Beasley [@BL2011] use FSS for the problem of packing equally sized circles inside a variety of containers. They present computational results which show that their approach improves upon previous results based on FSS presented in the literature. For some of the containers considered they improve on the best result previously known. López and Beasley [@BL2013] use FSS to solve the packing problem with non-identical circles in different shaped containers. They present computational results which were compared with benchmark problems and also proposed some new instances. López and Beasley [@BL2016] use FSS to solve the problem of packing non-identical circles in a fixed size container. Essentially FSS exploits the fact that: because of the nature of the solution process in nonlinear optimisation we often fail to obtain a globally optimum solution from a single formulation; and so perturbing/changing the formulation and then resolving the nonlinear program may lead to an improved solution. Given the above it is a simple matter to construct iterative schemes that move between formulations in a systematic manner. FSS has been applied to a few problems additional to circle packing (e.g. timetabling [@Kochetov2008]). In [@BL2014] FSS was used to solve some benchmark mixed-integer nonlinear programming problems. In a more general sense an adaptation to FSS was presented in [@Brimberg2014] for solving continuous location problems. More discussion as to FSS can be found in Hansen et al. [@Hansen2010]. A related approach is variable space search, which has been applied to graph colouring (Hertz et al. [@Hertz2008; @Hertz2009]). Other related approaches are variable formulation search which has been applied to the cutwidth minimisation problem [@Pardo2013; @Duarte2016] and variable objective search which has been applied to the maximum independent set problem [@butenko13]. As noted in Pardo et al. [@Pardo2013] variable space search, variable formulation search and variable objective search contain similar ideas as originally expounded using FSS. 
At a slightly more general level FSS can be regarded as a variant of variable neighbourhood search, for example see [@Amirgaliyeva2017; @Hansen2017]. Formulation {#Sec:Formulation} =========== In this section we first present our basic formulation for the problem of packing unequal rectangles in a fixed size circular container as a mixed-integer nonlinear program (MINLP). We then show how we can eliminate a nonlinear maximisation term that arises in one of the constraints in our formulation. We indicate how we can deal with the case where rectangles can be rotated through ninety degrees. For the special case where we are maximising the number of squares packed we present the amendments that can be made to the formulation. Basic formulation ----------------- The problem we consider is to find the maximal weighted packing of $n$ unequal rectangles in a fixed size circular container. Here we have the option, for each unequal rectangle, of choosing to pack it or not. We can formulate this problem as follows. Let the fixed size circular container be of radius $R$ and, without loss of generality, let it be centred at the origin of the Euclidean plane. We have $n$ rectangles from which to construct a packing, where rectangle $i$ has a horizontal side of length $L_i$ and a vertical side of width $W_i$, and value (if packed) $V_i$. In our basic formulation we do not allow any rotation when packing rectangles so that rectangles are packed with their horizontal (length) edges parallel to the $x$-axis, their vertical (width) edges parallel to the $y$-axis. Clearly if we are dealing with packing squares then $L_i=W_i$. Here we label the rectangles so that they are ordered in increasing size (area) order (i.e. $L_iW_i \leq L_{i+1}W_{i+1}~i=1,\ldots,n-1$). Using a value $V_i$ here for each rectangle $i$ enables us to consider a number of different problems within the same formulation. For example if we take $V_i = 1~i=1,\ldots,n$ then we have the problem of maximising the number of rectangles packed. If we take $V_i = L_iW_i~i=1,\ldots,n$ then we have the problem of maximising the total area of the rectangles packed. Alternatively the $V_i~i=1,\ldots,n$ can be assigned arbitrary values. Then the variables are: $\alpha_i = 1$ if rectangle $i$ is packed, 0 otherwise; $i=1, \ldots, n$ $(x_i,y_i)$ the position of the centre of rectangle $i$; $i=1, \ldots, n$ With regard to the positioning (so $(x_i,y_i)$) of any unpacked rectangle $i$ (for which $\alpha_i=0$) our formulation forces all unpacked rectangles to be positioned at the origin. Let $Q$ be the set of all rectangle pairs $[(i,j) \ | \ i=1,...,n; \ j=1,...,n; \ j > i ]$. 
The formulation is: $$\begin{aligned}
\max & \hspace{.3cm} \sum_{i=1}^{n} \alpha_iV_i & & & \label{e1} \\
\notag \text{subject to}& & & & \\
& -\alpha_i(\sqrt{(R^2-W_i^2/4)}-L_i/2) \leq x_i \leq \alpha_i(\sqrt{(R^2-W_i^2/4)}-L_i/2) & & i=1,\ldots,n \label{e4} \\
& -\alpha_i(\sqrt{(R^2-L_i^2/4)}-W_i/2) \leq y_i \leq \alpha_i(\sqrt{(R^2-L_i^2/4)}-W_i/2) & & i=1,\ldots,n \label{e5} \\
& (x_i+L_i/2)^2 + (y_i+W_i/2)^2 \leq \alpha_iR^2 +(1-\alpha_i)(L_i^2/4+W_i^2/4) & & i=1,\ldots,n \label{e2a} \\
& (x_i+L_i/2)^2 + (y_i-W_i/2)^2 \leq \alpha_iR^2 +(1-\alpha_i)(L_i^2/4+W_i^2/4) & & i=1,\ldots,n \label{e2b} \\
& (x_i-L_i/2)^2 + (y_i+W_i/2)^2 \leq \alpha_iR^2 +(1-\alpha_i)(L_i^2/4+W_i^2/4) & & i=1,\ldots,n \label{e2c} \\
& (x_i-L_i/2)^2 + (y_i-W_i/2)^2 \leq \alpha_iR^2 +(1-\alpha_i)(L_i^2/4+W_i^2/4) & & i=1,\ldots,n \label{e2d} \\
& \alpha_i\alpha_j[\max\{|x_i-x_j| - (L_i+L_j)/2, |y_i-y_j| - (W_i+W_j)/2\}] \geq 0 & & \forall (i,j) \in Q \label{e3} \\
& \alpha_i \in \{0,1\} & & i=1,\ldots,n \label{e6} \end{aligned}$$

The objective function, Equation (\[e1\]), maximises the value of the rectangles packed. Equation (\[e4\]) ensures that if a rectangle is packed (i.e. $\alpha_i=1$) its $x$-coordinate lies in $[-(\sqrt{(R^2-W_i^2/4)}-L_i/2),+(\sqrt{(R^2-W_i^2/4)}-L_i/2)]$. These limits can be easily deduced from geometric considerations, e.g. consider the centre $x$-coordinate value associated with a rectangle placed with its centre on the $x$-axis and with two of its corners just touching the circular container. The key feature of Equation (\[e4\]) is that if the rectangle is not packed (i.e. $\alpha_i=0$) then the $x$-coordinate is forced to be zero. Equation (\[e5\]) is the equivalent constraint to Equation (\[e4\]) for the $y$-coordinate.

Equations (\[e2a\])-(\[e2d\]) ensure that if a rectangle is packed (so for rectangle $i$ with $\alpha_i=1$) its centre is appropriately positioned such that the entire rectangle lies inside the circular container. To achieve this we need to ensure that all four corners of the rectangle lie inside the circular container. These four corners are $(x_i \pm L_i/2, y_i \pm W_i/2)$ and Equations (\[e2a\])-(\[e2d\]) ensure that the (squared) distance from the origin to each of these corners is no more than the (squared) radius of the container. Note that if the rectangle is packed (so $\alpha_i=1$) the right-hand side of Equations (\[e2a\])-(\[e2d\]) is $R^2$. If the rectangle is not packed (so $\alpha_i=0$) then from Equations (\[e4\]),(\[e5\]) the rectangle is positioned at the origin (so has $x_i=y_i=0$). In that case the left-hand side of Equations (\[e2a\])-(\[e2d\]) becomes $L_i^2/4 + W_i^2/4$, as does the right-hand side, and so the constraints are automatically satisfied.

Equation (\[e3\]) guarantees that any two rectangles $i$ and $j$ which are both packed (so $\alpha_i=\alpha_j=1$) do not overlap each other. This constraint is derived from that given previously in [@Chr74]. It states that two rectangles of size $[L_i,W_i]$ and $[L_j,W_j]$ do not overlap provided that the difference between their centre $x$-coordinates is at least $(L_i+L_j)/2$ or that the difference between their centre $y$-coordinates is at least $(W_i+W_j)/2$ (or both). If one or other of the rectangles is not packed the left-hand side of Equation (\[e3\]) becomes zero due to the product term ($\alpha_i\alpha_j$) which means that the constraint is automatically satisfied. Equation (\[e6\]) is the integrality constraint. As discussed above our formulation positions any unpacked rectangle at the origin.
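Purely to make the geometric content of Equations (\[e2a\])-(\[e2d\]) and (\[e3\]) concrete, the short Python sketch below checks whether a given candidate solution $(\alpha_i,x_i,y_i)$ satisfies the containment and non-overlap conditions. It is an illustrative sketch only; the function and variable names are ours and it forms no part of the solution approach described later in the paper.

```python
def packing_is_feasible(rects, alpha, pos, R, eps=1e-9):
    """Check a candidate solution against the containment conditions
    (Equations (e2a)-(e2d)) and the non-overlap condition (Equation (e3)).

    rects : list of (L_i, W_i) pairs
    alpha : list of 0/1 packing indicators
    pos   : list of (x_i, y_i) centre coordinates
    R     : radius of the circular container
    """
    n = len(rects)
    # Containment: every corner (x_i +/- L_i/2, y_i +/- W_i/2) of a packed
    # rectangle must lie within distance R of the origin.
    for i in range(n):
        if alpha[i] == 0:
            continue
        (L, W), (x, y) = rects[i], pos[i]
        for sx in (-0.5, 0.5):
            for sy in (-0.5, 0.5):
                cx, cy = x + sx * L, y + sy * W
                if cx * cx + cy * cy > R * R + eps:
                    return False
    # Non-overlap: packed rectangles i and j must be separated by at least
    # (L_i+L_j)/2 horizontally or (W_i+W_j)/2 vertically.
    for i in range(n):
        for j in range(i + 1, n):
            if alpha[i] == 0 or alpha[j] == 0:
                continue
            (Li, Wi), (Lj, Wj) = rects[i], rects[j]
            dx = abs(pos[i][0] - pos[j][0]) - (Li + Lj) / 2.0
            dy = abs(pos[i][1] - pos[j][1]) - (Wi + Wj) / 2.0
            if max(dx, dy) < -eps:
                return False
    return True
```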
For unpacked rectangle $i$ the inclusion of an appropriate $\alpha_i$ term on the left-hand side of Equation (\[e3\]) ensures that this unpacked rectangle, although positioned at the origin, does not actively participate in the overlap constraint which must apply between all packed rectangles. Our formulation (Equations (\[e1\])-(\[e6\])) is a mixed-integer nonlinear program (MINLP). Computationally MINLPs are recognised to be very demanding, involving as they do both an element of combinatorial choice and solution of an underlying continuous nonlinear program. For the problem considered in this paper the combinatorial choice relates to the choice of the set of rectangles to be packed, and the underlying continuous nonlinear program relates to deciding where to feasibly position within the circular container the rectangles that are packed. Elimination of the maximisation term ------------------------------------ The overlap constraint (Equation (\[e3\])) contains the expression $\max\{|x_i-x_j| - (L_i+L_j)/2, |y_i-y_j| - (W_i+W_j)/2\}$. For the particular problem considered in this paper this maximisation term can be eliminated, albeit by enlarging the size of the MINLP to be solved. Introduce additional continuous variables $\beta_{ij},~\forall (i,j) \in Q,$ defined by: $$\begin{aligned} & 0 \leq \beta_{ij} \leq 1 & & \forall (i,j) \in Q \label{eb1} \end{aligned}$$ Then we can replace Equation (\[e3\]) by: $$\begin{aligned} & \alpha_i\alpha_j \big[ \beta_{ij}[|x_i-x_j| - (L_i+L_j)/2] + (1-\beta_{ij})[ |y_i-y_j| - (W_i+W_j)/2] \big] \geq 0 & & \forall (i,j) \in Q \label{eb2} \end{aligned}$$ The logic here is that the $\alpha_i\alpha_j$ term ensures that the Equation (\[eb2\]) is always satisfied when either $\alpha_i=0$ or $\alpha_j=0$ (as indeed it does in Equation (\[e3\])). It only remains to check therefore the validity of replacing Equation (\[e3\]) with Equation (\[eb2\]) in the case $\alpha_i=\alpha_j=1$. When $\alpha_i=\alpha_j=1$ Equation (\[eb2\]) becomes $\beta_{ij}[|x_i-x_j| - (L_i+L_j)/2] + (1-\beta_{ij})[ |y_i-y_j| - (W_i+W_j)/2] \geq 0 $. Now the weighted sum on the left-hand side of this constraint *can only be non-negative provided that at least one of the two terms in it is itself non-negative*. In other words Equation (\[eb2\]) will ensure that one (or both) of $[|x_i-x_j| - (L_i+L_j)/2]$ and $[ |y_i-y_j| - (W_i+W_j)/2]$ will be non-negative. Since one or both of these terms are non-negative it is therefore true that the maximisation term in Equation (\[e3\]), $\max\{|x_i-x_j| - (L_i+L_j)/2, |y_i-y_j| - (W_i+W_j)/2\}$, must also be non-negative. This in turn implies that Equation (\[e3\]) is satisfied. Therefore it is valid to replace Equation (\[e3\]) by Equation (\[eb2\]). Note here that it is also valid to replace Equation (\[e3\]) by Equation (\[eb2\]) if we define $\beta_{ij}$ as binary (zero-one) variables. However we might well expect there to be computational benefit in defining these variables as continuous, rather than binary, variables. Rotation -------- As is common in the literature (e.g. [@Birgin2010a; @Caprara2006; @Hinostroza2013; @Huang2012; @Korf2010; @lodi2002; @Maag2010; @Martello2015]) in the basic formulation presented above we did not allow any rotation when packing, so that the items to be packed (rectangles/squares) were packed with their horizontal (length) edges parallel to the $x$-axis, their vertical (width) edges parallel to the $y$-axis. 
If rotation of any item is allowed (which might be dependent on the practical problem being modelled) then the situation becomes more complex, although obviously rotation might enable a better solution to be found. In the literature rotation through ninety degrees is the most common situation modelled (e.g. [@Birgin2010a; @Caprara2006; @Delorme2017; @Huang2007; @Huang2012; @Korf2010; @Li2014; @Martello2015; @Murata1996; @Wu2002]). Clearly rotation through ninety degrees is irrelevant when we are packing squares (as they are the same under ninety degree rotation) and only relevant when we are dealing with unequally sized rectangles. Our formulation can be extended to deal with rotation through ninety degrees as discussed below. Rotation through an arbitrary angle cannot be dealt with by our approach. If the rectangles can be rotated through ninety degrees then this is easily incorporated into our formulation. Suppose that rectangle $i$ can be rotated through ninety degrees. Then create a new rectangle ($j$ say) that represents rectangle $i$ if it is rotated, so that we have $L_j=W_i$, $W_j=L_i$, $V_j=V_i$. Add to the formulation: $$\begin{aligned} & \alpha_i + \alpha_j \leq 1 \label{rot1}\end{aligned}$$ Equation (\[rot1\]) ensures that we cannot use both the original rectangle $i$ and its rotated equivalent $j$. Dealing with rectangle rotation therefore requires creating a new rectangle for each original rectangle that can be rotated and adding a single constraint to the formulation. In terms of the effect on the formulation then if all $n$ rectangles can be rotated this only directly adds $n$ constraints (Equation (\[rot1\])) to the formulation. However the creation of an additional $n$ rotated rectangles doubles the number of rectangles to be considered for packing. This means that the number of linear constraints associated with Equations (\[e4\]),(\[e5\]) doubles, as does the number of nonlinear constraints associated with Equations (\[e2a\])-(\[e2d\]). The more significant effect is that the number of nonlinear constraints associated with Equation (\[e3\]) increases from $n(n-1)/2$ to $2n(2n-1)/2$ (so approximately increases by a factor of 4). This increase in the number of nonlinear constraints associated with Equation (\[e3\]) also carries through to increase the number of $\beta_{ij}$ variables (Equation (\[eb1\])) that need to be considered by an (approximate) factor of 4. For this reason we would expect that, computationally, dealing with a problem with $n$ rectangles with fixed orientation becomes much more challenging if all $n$ rectangles can be rotated. Maximising the number of squares packed {#Sec:opt} --------------------------------------- From our previous work [@BL2016] we know that when we are considering a packing problem where all the items to be packed can be ordered such that item $i$ fits inside item $j$ for all $j>i$ then, in the case where we are maximising the number of items packed, the optimal solution consists of the first $K$ items, for some $K$. Clearly items can be ordered to fit inside each other if we are considering packing squares, i.e. order the squares in increasing size (length) order, but such an ordering is unlikely to be possible if we are packing rectangles. Hence we shall just consider square packing here. 
In the case of square packing therefore, when we are maximising the number of squares packed, we can impose the additional constraints: $$\begin{aligned} & \alpha_{i-1} \geq \alpha_i & i=2,\ldots,n \label{e9} \\ & \alpha_k = 0 & \text{if}~\sum_{i=1}^{k} L_i^2 > \pi R^2 & & k=1,\ldots,n \label{e10}\end{aligned}$$ Equation (\[e9\]) ensures that if $\alpha_i$ is one (so square $i$ is packed) then $\alpha_{i-1}$ must also be one (so square $i-1$ is packed). If square $i$ is not packed ($\alpha_i=0$) then the right-hand side of this constraint is zero, so the constraint is always satisfied whatever the value for $\alpha_{i-1}$. Collectively the $(n-1)$ inequalities represented in Equation (\[e9\]) ensure that the optimal solution consists of the first $K$ squares, for some $K$. In Equation (\[e10\]) we have that if we were to pack square $k$ then we would have to pack all squares up to and including square $k$. If this packing exceeds the area of the container then clearly square $k$ cannot be packed. Aside from these additional constraints we can amend the overlap constraint, Equation (\[eb2\]). Note that Equation (\[eb2\]) includes a $\alpha_i\alpha_j$ term and applies for $(i,j) \in Q$, where $Q$ is defined to have $j>i$. Now if $\alpha_j=1$ we automatically know that $\alpha_i=1$ (since $j>i$) and hence that $\alpha_i\alpha_j=1$. If $\alpha_j=0$ then it is irrelevant what value $\alpha_i$ takes since we must have $\alpha_i\alpha_j=0$. In other words the $\alpha_i\alpha_j$ term in Equation (\[eb2\]) can be replaced by $\alpha_j$ so that the overlap constraint, Equation (\[eb2\]), becomes: $$\begin{aligned} & \alpha_j \big[ \beta_{ij}[|x_i-x_j| - (L_i+L_j)/2] + (1-\beta_{ij})[ |y_i-y_j| - (W_i+W_j)/2] \big] \geq 0 & & \forall (i,j) \in Q \label{eb2a} \end{aligned}$$ Note here that we have used $W_i$ and $W_j$ in Equation (\[eb2a\]) for clarity of comparison with Equation (\[eb2\]). Obviously since we are just considering square packing here we have $W_i=L_i~i=1,\ldots,n$. FSS algorithm {#Sec:FSS} ============= In this section we present our FSS algorithm for the problem. For simplicity we present our approach using the basic formulation of the problem, Equations (\[e1\])-(\[e6\]), before the amendments as discussed above (i.e. elimination of the maximisation term and adaptions for packing squares). We discuss at the end of this section how we incorporate a number of other constraints, presented in this section, into our approach. Algorithm --------- Consider the formulation, Equations (\[e1\])-(\[e6\]), given above. Letting $\delta$ be a small positive constant replace the integrality requirement, Equation (\[e6\]), by: $$\begin{aligned} & \sum_{i=1}^{n} \alpha_i(1-\alpha_i) \leq \delta \label{e7} \\ & 0 \leq \alpha_i \leq 1 & i=1,\ldots,n \label{e8} \end{aligned}$$ If $\delta$ was zero these equations would force $[\alpha_i,~i=1,\ldots,n]$ to assume zero-one values. However given the capabilities of nonlinear optimisation software simply replacing an explicit integrality condition by Equations (\[e7\]),(\[e8\]) would not be computationally successful, since we would be hoping to generate a (globally optimal) solution to a continuous nonlinear optimisation problem with a very tight inequality constraint. Note that if $\delta$ is zero then Equation (\[e7\]) is effectively an equality constraint as the left-hand side is non-negative. Accordingly we adopt a heuristic approach and have $\delta > 0$. 
Hence our original MINLP, Equations (\[e1\])-(\[e6\]), has now become a continuous nonlinear optimisation problem, since we have relaxed the integrality requirement using Equations (\[e7\]),(\[e8\]). This nonlinear optimisation problem is optimise Equation (\[e1\]) subject to Equations (\[e4\])-(\[e3\]),(\[e7\]),(\[e8\]). We refer to this problem as the *continuous FSS relaxation* of the problem. If we solve this nonlinear problem the $[\alpha_i,~i=1,\ldots,n]$ can deviate (albeit only slightly, if $\delta$ is small) from their ideal zero-one values, but we can round them to their nearest integer value to recover an integer set of values. Given an integer set of values for $[\alpha_i,~i=1,\ldots,n]$ then the original formulation (Equations (\[e1\])-(\[e6\])) becomes a nonlinear feasibility problem. This nonlinear feasibility problem is to find positions $(x_i,y_i)$ for each rectangle $i$ that we have chosen to pack (so with $\alpha_i=1$ in the rounded solution). Note here that this is a feasibility problem as the objective function, Equation (\[e1\]), is purely a function of the zero-one variables (and these have been fixed by rounding).

With just a single value for $\delta$ we have just a single nonlinear problem: optimise Equation (\[e1\]) subject to Equations (\[e4\])-(\[e3\]),(\[e7\]),(\[e8\]). However changing $\delta$ in a systematic fashion creates a series of different problems that can be given to an appropriate nonlinear solver in an attempt to generate new and improved solutions to our original MINLP. The idea here is that altering $\delta$ perturbs the nonlinear formulation and hence, given the nature of any nonlinear solution software, might lead to a different solution.

The pseudocode for our FSS algorithm for the rectangle packing problem considered in this paper is presented in Algorithm \[fssalg\]. In this pseudocode let $P$ denote the original MINLP (here optimise Equation (\[e1\]) subject to Equations (\[e4\])-(\[e6\])) and $P^*$ denote the continuous FSS relaxation (here optimise Equation (\[e1\]) subject to Equations (\[e4\])-(\[e3\]),(\[e7\]),(\[e8\])). We first initialise values, here $Z_{best}$ is the best feasible solution found and $t$ is an iteration counter. We then solve the original MINLP $P$. In this pseudocode all attempts to solve a nonlinear problem (e.g. $P$ or $P^*$) must be subject to a time limit, since otherwise the computation time consumed could become extremely high. For this reason we always terminate the solution process after a predefined time limit, returning the best feasible solution found (if one has been found). In the computational results reported later below this time limit was set to $10n$ seconds.

SCIP is capable of solving our MINLP formulation $P$ to proven global optimality because SCIP restricts the type of nonlinear expression allowed [@bussieck14; @vigerske17; @vigerske16]. However with regard to our computational results for all the problem instances considered below this never occurred within the time limit imposed. Note here that even if solving $P$ or $P^*$ returns a feasible solution we have no guarantee that this is an optimal solution, since a better solution might have been found had we increased the time limit. The iterative process in the pseudocode is to update the iteration counter and solve the continuous FSS relaxation $P^*$. If a feasible solution for $P^*$ has been found then we round that solution and solve the resulting feasibility problem (again subject to the predefined time limit).
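As an informal illustration of one replication of this process, a minimal Python sketch is given below. The three solver calls are stubs standing in for the time-limited nonlinear solves of $P$, $P^*$ and the feasibility problem described above (each assumed to return a tuple of objective value, $\alpha$ values and positions, or `None` if no feasible solution was found); the parameter values are those used in this paper and stated below ($\delta$ initialised to 0.05, reduced by a factor $\gamma=0.5$, stopping when $\delta \leq 10^{-5}$ or after three consecutive iterations without improvement).

```python
def fss_replication(solve_P, solve_Pstar, solve_feasibility,
                    delta0=0.05, gamma=0.5, delta_min=1e-5, max_no_improve=3):
    """One replication of the FSS heuristic (an illustrative sketch of
    Algorithm fssalg; the real implementation is not this code)."""
    z_best, best = float("-inf"), None
    result = solve_P()                      # attempt the original MINLP P
    if result is not None:
        z_best, best = result[0], result
    delta, no_improve = delta0, 0
    while True:
        relaxed = solve_Pstar(delta)        # continuous FSS relaxation P*
        if relaxed is not None:
            alpha = [round(a) for a in relaxed[1]]   # round alpha to 0/1
            feas = solve_feasibility(alpha)          # position the chosen rectangles
            if feas is not None and feas[0] > z_best:
                z_best, best = feas[0], feas
                no_improve = 0
            else:
                no_improve += 1
        else:
            no_improve += 1
        if delta <= delta_min or no_improve >= max_no_improve:
            return z_best, best
        delta *= gamma                      # perturb the formulation
```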
The best feasible solution found (if any) is updated and provided we have not reached the termination condition we reduce $\delta$ by a factor $\gamma$ and repeat. We terminate when $\delta$ is small ($\leq 10^{-5}$) or we have performed a number of consecutive iterations (three iterations) without improving the value of the best solution found. We reduce $\delta$ by a factor $\gamma=0.5$ at each iteration and replicate (repeat) our heuristic a number of times (five replications were performed in the computational results reported below). The values for these factors were set based on our previous computational experience with FSS.

*Initialisation:*
$\delta \leftarrow 0.05$; $Z_{best} \leftarrow -\infty$; $t \leftarrow 0$
Solve $P$ and update $Z_{best}$ if a feasible solution for $P$ has been found

*Iterative process:*
Update the iteration counter $t \leftarrow t+1$
Solve $P^*$
Round the $[\alpha_i,~i=1,\ldots,n]$ values in the $P^*$ solution and solve the resulting feasibility problem
Update $Z_{best}$ using the solution to the feasibility problem if a feasible solution for that problem has been found
If $\delta \leq 10^{-5}$ or $Z_{best}$ has not improved in the last three iterations stop
Update $\delta \leftarrow \gamma\delta$

\[fssalg\]

Constraints
-----------

There are a number of general constraints that apply whatever the objective adopted. Recall that $Z_{best}$ is the value of the best feasible solution encountered during our FSS heuristic. Let the set of rectangles that are packed in this best feasible solution be denoted by $F$. Then the general constraints that apply are: $$\begin{aligned}
& \alpha_i + \alpha_j \leq 1 & \forall (i,j) \in Q \hspace{.2cm} \textrm{min}(L_i+L_j,W_i+W_j)>2R \label{e12} \\
& \sum_{i=1}^{n} \alpha_i L_iW_i \leq \pi R^2 \label{e13} \\
& \sum_{i=1}^{n} \alpha_iV_i \geq Z_{best} \label{e14} \\
& \sum_{i \not\in F} \alpha_i + \sum_{i \in F} (1-\alpha_i) \geq 1 \label{e15}\end{aligned}$$

Equation (\[e12\]) says that if the minimum of the sum of the sides of any two rectangles is greater than the container diameter then we cannot pack both rectangles. Equation (\[e13\]) ensures that the total area of the rectangles packed cannot exceed the area of the container. Equation (\[e14\]) ensures that the value of any solution found is at least that of the best feasible solution known. Equation (\[e15\]) is a feasible solution exclusion constraint and ensures that whatever solution is found must differ from the best known solution (of value $Z_{best}$ with packed rectangles $F$) by at least one rectangle. The effect of Equations (\[e14\]) and (\[e15\]) is to seek an improved feasible solution. Note here that although these constraints may be redundant in the original MINLP they may not be redundant in any relaxation of the problem, in particular here the continuous FSS relaxation when we drop the requirement that the $[\alpha_i,~i=1,\ldots,n]$ are zero-one.

Summary
-------

We have presented a considerable number of constraints above and so here (for clarity) we specify the constraints that are involved with $P$ (the original MINLP) and $P^*$ (the continuous FSS relaxation) that are used in the statement of our FSS heuristic given above (see Algorithm \[fssalg\]).
- $P$ is optimise Equation (\[e1\]) subject to Equations (\[e4\])-(\[e2d\]),(\[e6\])-(\[eb2\]),(\[e12\])-(\[e15\]) - $P^*$ is optimise Equation (\[e1\]) subject to Equations (\[e4\])-(\[e2d\]),(\[eb1\]),(\[eb2\]),(\[e7\])-(\[e15\]) When considering just the packing of squares, so as to maximise the number of squares packed, we add Equations (\[e9\]),(\[e10\]) to $P$ and $P^*$ and replace Equation (\[eb2\]) by Equation (\[eb2a\]). When rectangles can be rotated through ninety degrees we amend the problem in the manner discussed above when we considered Equation (\[rot1\]). Results {#Sec:Results} ======= The computational results presented below (Windows 2.50GHz pc, Intel i5-2400S processor, 6Gb memory) are for our formulation space search heuristic as coded in FORTRAN. We used SCIP (Solving Constraint Integer Programs, version 4.0.1) [@Achterberg2009; @maher17; @scip] as the mixed-integer nonlinear solver. For a technical explanation as to how SCIP solves MINLPs see Vigerske and Gleixner [@vigerske16]. To input our formulation into SCIP we made use of the modelling language ZIMPL (Zuse Institute Mathematical Programming Language), and to solve continuous nonlinear problems we used Ipopt (Interior Point OPTimizer, version 3.12.8), both of which are included within SCIP. We generated a number of test problems involving $n=10,20,30$ rectangles/squares, with rectangle/square dimensions being randomly generated (to two decimal places) from $[1,5]$. For each test problem we considered three different container radii, where the container radii $R$ were set so that the area of the container ($ \pi R^2$) was approximately $\frac{1}{3}$, $\frac{1}{2}$ and $\frac{2}{3}$ of the total area ($\sum_{i=1}^{n} L_iW_i $) of the $n$ rectangles/squares. All of the randomly generated test problems considered in this paper are publicly available from OR-Library [@Beasley90], see http://people.brunel.ac.uk/${\sim}$mastjjb/jeb/orlib/rspackinfo.html. Rectangle packing, no rotation ------------------------------ Table \[table2\] shows the results obtained for the rectangle packing test problems considered (where the rectangles have fixed orientation, so no rotation is allowed). In that table we show the value of $n$ and the value of the container area fraction. For the two objectives considered (maximise the number of rectangles packed, maximise the total area of the rectangles packed) we give the value of the best solution achieved. We also show the replication at which we first encountered the best solution shown, as well as the total time (in seconds) over all five replications. In Table \[table2\] for a fixed $n$ (and so a fixed set of rectangles to be packed) we can see that, as we would expect, as the container area fraction increases (so the container is of larger radius and we can hence pack more of the rectangles) the solution value also increases. As an illustration of the results obtained Figure \[fig3\] and Figure \[fig4\] show the solutions in Table \[table2\] for the two problems in that table with $n=30$ and the largest container area fraction. In Figure \[fig3\] we can see that the solution consists of rectangles 1-18, together with rectangle 23. Recalling that rectangles are ordered in increasing size (area) order this packing is as we would expect, in that many of the smaller rectangles are used in a solution that aims to maximise the number of rectangles packed. In Figure \[fig4\] we can see that the solution consists of a mix of rectangles. 
These are the first six smallest rectangles, 1-6, together with rectangles 8, 11-13, 17, 19, 22-24 and 28. This figure contains 16 rectangles in total, compared with the 19 rectangles used in Figure \[fig3\].

  ------------------ --------------- ------------------ ------------- ---------- -- ---------------- ------------- ----------
  Number of          Container       Maximise number:    Replication   Total         Maximise area:   Replication   Total
  rectangles ($n$)   area fraction   best solution                     time (s)      best solution                  time (s)
  ------------------ --------------- ------------------ ------------- ---------- -- ---------------- ------------- ----------
  10                 $\frac{1}{3}$   5                   2             3058          18.4441          1             3292
                     $\frac{1}{2}$   6                   1             2862          28.9390          1             2992
                     $\frac{2}{3}$   7                   1             2966          37.6878          2             4754
  20                 $\frac{1}{3}$   7                   1             6278          43.3885          1             7227
                     $\frac{1}{2}$   10                  5             4530          63.1643          1             9791
                     $\frac{2}{3}$   11                  1             7311          84.4446          2             10601
  30                 $\frac{1}{3}$   13                  5             11514         60.3570          4             14011
                     $\frac{1}{2}$   16                  5             10029         85.2113          5             19786
                     $\frac{2}{3}$   19                  5             6966          103.4802         5             19470
  ------------------ --------------- ------------------ ------------- ---------- -- ---------------- ------------- ----------

  : Computational results: rectangle packing, no rotation[]{data-label="table2"}

Square packing
--------------

Table \[table3\] shows the results obtained for the square packing test problems considered. This table has the same format as Table \[table2\]. As an illustration of the results obtained Figure \[fig5\] and Figure \[fig6\] show the solutions in Table \[table3\] for the two problems in that table with $n=30$ and the largest container area fraction. For Figure \[fig5\], since we are maximising the number of squares packed, the solution must consist of the first $K$ squares, for some $K$ (the squares being ordered in increasing size order). In Figure \[fig5\] we can see that all squares up to and including square $K=23$ are packed. Visually whether square 24, which must be at least as large as square 23 (and possibly larger), can also be packed into the circular container through judicious rearrangement of all of the currently positioned squares is unclear. In Figure \[fig6\] we can see that the packing consists of a mix of squares. Squares 1-19, which are the 19 smallest squares, are all packed along with two of the larger squares, squares 22 and 28. This figure contains 21 squares in total, compared with the 23 squares used in Figure \[fig5\].

  ------------------ --------------- ------------------ ------------- ---------- -- ---------------- ------------- ----------
  Number of          Container       Maximise number:    Replication   Total         Maximise area:   Replication   Total
  squares ($n$)      area fraction   best solution                     time (s)      best solution                  time (s)
  ------------------ --------------- ------------------ ------------- ---------- -- ---------------- ------------- ----------
  10                 $\frac{1}{3}$   4                   1             1123          22.9485          1             2762
                     $\frac{1}{2}$   5                   1             2761          36.7126          1             3402
                     $\frac{2}{3}$   6                   1             2275          51.7583          3             4593
  20                 $\frac{1}{3}$   11                  5             5450          54.1054          5             9412
                     $\frac{1}{2}$   12                  1             6465          85.2107          4             11304
                     $\frac{2}{3}$   14                  1             6995          109.8363         5             7636
  30                 $\frac{1}{3}$   16                  2             13552         54.4941          5             16629
                     $\frac{1}{2}$   20                  2             13457         77.5814          4             14808
                     $\frac{2}{3}$   23                  5             10427         103.0963         5             15145
  ------------------ --------------- ------------------ ------------- ---------- -- ---------------- ------------- ----------

  : Computational results: square packing[]{data-label="table3"}

Rectangle packing, rotation allowed
-----------------------------------

Table \[table2a\] shows the results obtained for the rectangle packing test problems considered in Table \[table2\], but where rotation through ninety degrees is allowed. That table has the same format as Table \[table2\]. As an illustration of the results obtained Figure \[fig3a\] and Figure \[fig4a\] show the solutions in Table \[table2a\] for the two problems in that table with 30 rectangles and the largest container area fraction.
In those figures the letter r after the rectangle number indicates that the rectangle has been rotated through ninety degrees. Comparing Table \[table2a\] with Table \[table2\] we can see that the solution value where rotation is allowed is greater than (or equal to) the solution value with no rotation for all but two of the 18 test problems considered. In the discussion above as to how to extend our formulation to deal with rotation through ninety degrees we noted the increase in the consequent size of the formulation, both with respect to the number of linear and nonlinear constraints and with respect to the number of variables. Comparing the computation times in Table \[table2a\] with those in Table \[table2\] does indeed indicate that dealing with a problem where rectangles can be rotated is much more challenging computationally than dealing with a problem where the rectangles have fixed orientation.

  ------------------ --------------- ------------------ ------------- ---------- -- ---------------- ------------- ----------
  Number of          Container       Maximise number:    Replication   Total         Maximise area:   Replication   Total
  rectangles ($n$)   area fraction   best solution                     time (s)      best solution                  time (s)
  ------------------ --------------- ------------------ ------------- ---------- -- ---------------- ------------- ----------
  10                 $\frac{1}{3}$   5                   1             9836          19.6702          1             8771
                     $\frac{1}{2}$   6                   1             10332         29.5041          1             16093
                     $\frac{2}{3}$   7                   1             12409         37.9687          2             15526
  20                 $\frac{1}{3}$   8                   3             22759         43.6850          2             50558
                     $\frac{1}{2}$   10                  1             30682         63.5279          1             50013
                     $\frac{2}{3}$   12                  4             30823         84.7008          3             63350
  30                 $\frac{1}{3}$   14                  1             49724         57.9328          5             69565
                     $\frac{1}{2}$   17                  1             45857         84.3715          1             82101
                     $\frac{2}{3}$   20                  1             57427         110.3253         3             39564
  ------------------ --------------- ------------------ ------------- ---------- -- ---------------- ------------- ----------

  : Computational results: rectangle packing, rotation allowed[]{data-label="table2a"}

Comment
-------

As with many heuristic algorithms presented in the literature it is difficult to draw firm conclusions as to the quality of the results obtained without knowing either the optimal solutions of the test problems solved, or the results obtained by other heuristic algorithms by other authors on the same set of test problems. For the problem considered in this paper we are not aware of any appropriate publicly available test problems which could be used to provide direct insight into the quality of our heuristic. We would stress here however that all of the test problems used in this paper are publicly available for use by future workers to see if they can develop approaches that perform better than the formulation space heuristic presented in this paper.

Despite this lack of appropriate test problems it is possible to gain some insight into the quality of our heuristic by taking test problems associated with a slightly different (but similar) problem. This problem is the problem of packing $n$ unit squares within a circle of small (ideally minimal) radius. Here, unlike the problem we consider, all squares must be packed (whereas our heuristic is particularised for the case where one or more squares need not be packed). We used our heuristic to maximise the number of unit squares packed into a circular container of known radius utilising the test problems given by Friedman [@Friedman2018]. For these problems [@Friedman2018] gives the best solution known for the minimum radius circle within which it is possible to pack all $n$ unit squares. Some of these best known solutions involve arbitrary rotation (which our heuristic cannot deal with) and so we only considered problems which did not involve rotation.
Note also here that, as far as we are aware, the results given in [@Friedman2018] were found by various authors using varying approaches (including, we believe, results based on human intervention). This contrasts with our results produced by a single algorithmic heuristic approach that does not involve any human intervention. The results are shown in Table \[tablef\]. In that table we show the number of unit squares ($n$) and the value of the best solution (maximum number of unit squares packed) as found by our heuristic. We also show the replication at which we first encountered the best solution shown, as well as the total time (in seconds) over all five replications. Considering Table \[tablef\] we can see that for 13 of the 16 problems considered our heuristic succeeds in finding the best known solution by packing all $n$ unit squares into the given circular container.

  -------------------- --------------------------------- ------------- ----------------
  Number of            Best solution                      Replication   Total time (s)
  unit squares ($n$)   (number of unit squares packed)
  -------------------- --------------------------------- ------------- ----------------
  1                    1                                  1             0.0
  2                    2                                  1             0.2
  3                    3                                  1             0.3
  4                    4                                  1             0.7
  5                    5                                  1             3.6
  7                    7                                  1             140.2
  9                    9                                  1             49.2
  10                   10                                 1             158.5
  11                   10                                 1             2619.5
  12                   12                                 1             120.2
  14                   14                                 1             460.6
  16                   16                                 5             4800.7
  18                   18                                 2             3153.6
  21                   21                                 5             4961.4
  26                   23                                 1             8756.0
  30                   27                                 1             18984.5
  -------------------- --------------------------------- ------------- ----------------

  : Computational results: unit square packing[]{data-label="tablef"}

Conclusions and future work {#Sec:Conclusions}
===========================

In this paper we have formulated the problem of packing unequal rectangles/squares into a fixed size circular container as a mixed-integer nonlinear program. We showed how we can eliminate a nonlinear maximisation term that arises in one of the constraints in our formulation and indicated the amendments that can be made to the formulation when considering packing squares so as to maximise the number of squares packed. We discussed how to amend our formulation to deal with the case where unequal rectangles can be rotated through ninety degrees. A formulation space search heuristic was presented and computational results given for test problems involving up to 30 rectangles/squares, with these test problems being made publicly available for future workers.

In terms of future work we plan to investigate changes to our formulation, for example by making use of McCormick cuts to replace products of variables.
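As an indication of the kind of reformulation this would involve (the standard McCormick construction, stated here for completeness rather than as something we have tested computationally, and with $w_{ij}$ as illustrative notation): a product such as $\alpha_i\alpha_j$ appearing in Equation (\[eb2\]) could be replaced by a new variable $w_{ij}$ together with the linear inequalities $$\begin{aligned}
& w_{ij} \geq 0, \quad w_{ij} \geq \alpha_i + \alpha_j - 1, \quad w_{ij} \leq \alpha_i, \quad w_{ij} \leq \alpha_j
\end{aligned}$$ which enforce $w_{ij}=\alpha_i\alpha_j$ whenever $\alpha_i$ and $\alpha_j$ take zero-one values, thereby removing one source of nonlinearity from the formulation.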
[Figures omitted: TikZ diagrams of the packing solutions found by the heuristic, each showing numbered rectangles/squares placed inside a circular container of radius 4cm.]
3.7567882507 , -1.3732192748) – ( 2.3194623454 , -1.3732192748) – ( 2.3194623454 , .6434938171) ; at ( 3.0381252981 , -.3648627288) [ 19r ]{}; ( .0688138787 , -.8702316402) – ( 1.8236885305 , -.8702316402) – ( 1.8236885305 , -3.0763597739) – ( .0688138787 , -3.0763597739) – ( .0688138787 , -.8702316402) ; at ( .9462512046 , -1.9732957070) [ 25r ]{}; **Acknowledgments** {#acknowledgments .unnumbered} =================== The first author has a grant support from the programme UNAM-DGAPA-PAPIIT-IA106916 [13]{} Achterberg T. SCIP: Solving constraint integer programs. Mathematical Programming Computation 2009;1(1):1–41. Amirgaliyeva Z, Mladenović N, Todosijević R, Urosević D. Solving the maximum min-sum dispersion by alternating formulations of two different problems. European Journal of Operational Research 2017;260(2):444–459. Andrade R, Birgin EG. Symmetry-breaking constraints for packing identical rectangles within polyhedra. Optimization Letters 2013;7(2):375–405. Beasley JE. OR-Library: distributing test problems by electronic mail. Journal of the Operational Research Society 1990;41(11):1069–1072. Birgin EG. Applications of nonlinear programming to packing problems. In Anderssen RS, Broadbridge P, Fukumoto Y, Kajiwara K, Takagi T, Verbitskiy E, Wakayama M (eds). Applications + Practical Conceptualization + Mathematics = fruitful Innovation. Mathematics for Industry, vol 11. Springer, Tokyo, pp 31–39 (2016). Birgin EG, Lobato RD. Orthogonal packing of identical rectangles within isotropic convex regions. Computers & Industrial Engineering 2010;59(4):595–602. Birgin EG, Lobato RD, Morabito R. An effective recursive partitioning approach for the packing of identical rectangles in a rectangle. Journal of the Operational Research Society 2010;61(2):306–320. Birgin EG, Martínez JM, Mascarenhas WF, Ronconi DP. Method of sentinels for packing items within arbitrary convex regions. Journal of the Operational Research Society 2006;57(6):735–756. Birgin EG, Martínez JM, Nishihara FH, Ronconi DP. Orthogonal packing of rectangular items within arbitrary convex regions by nonlinear optimization. Computers & Operations Research 2006;33(12):3535–3548. Bortfeldt A. A reduction approach for solving the rectangle packing area minimization problem. European Journal of Operational Research 2013;224(3):486–496. Brimberg J, Drezner Z, Mladenović N, Salhi S. A new local search for continuous location problems. European Journal of Operational Research 2014;232(2):256–265. Bussieck MR, Vigerske S. MINLP solver software. In J.J. Cochran, L.A. Cox Jr., P. Keskinocak, J.P. Kharoufeh and J.C. Smith (editors), Wiley Encyclopaedia of Operations Research and Management Science, Wiley, New York, 2011. Updated version available from http://www2.mathematik.hu-berlin.de/${\sim}$stefan/minlpsoft.pdf Last accessed February 15 2018. Butenko S, Yezerska O, Balasundaram B. Variable objective search. Journal of Heuristics 2013;19(4):697–709. Caprara A, Lodi A, Martello S, Monaci M. Packing into the smallest square: worst-case analysis of lower bounds. Discrete Optimization 2006;3(4):317–326. Cassioli A, Locatelli M. A heuristic approach for packing identical rectangles in convex regions. Computers & Operations Research 2011;38(9):1342–1350. Christofides N. Optimal cutting of two-dimensional rectangular plates. In CAD 74, Proceedings of the International Conference on Computers in Engineering and Building Design, 25-27 September 1974, Imperial College, London, UK 1974;1–10. Delorme M, Iori M, Martello S. 
Logic based Benders’ decomposition for orthogonal stock cutting problems. Computers & Operations Research 2017;78:290–298. Dowsland KA, Dowsland WB. Packing problems. European Journal of Operational Research 1992;56(1):2–14. Duarte A, Pantrigo JJ, Pardo EG, Sánchez-Oro J. Parallel variable neighbourhood search strategies for the cutwidth minimization problem. IMA Journal of Management Mathematics 2016;27(1):55–73. Friedman E. Squares in circles. http://www2.stetson.edu/${\sim}$efriedma/squincir/ Last accessed February 15 2018. Hansen P, Mladenović N, Brimberg J, Perez JAM. Variable neighborhood search, in Gendreau M. and Potvin J.-Y. (eds), Handbook of Metaheuristics, Springer. International Series in Operations Research & Management Science 2010;146:61–86. Hansen P, Mladenović N, Todosijević T, Hanafi S. Variable neighborhood search: basics and variants. EURO Journal on Computational Optimization 2017;5(3):423–454. Hertz A, Plumettaz M, Zufferey N. Variable space search for graph coloring. Discrete Applied Mathematics 2008;156(13):2551–2560. Hertz A, Plumettaz M, Zufferey N. Corrigendum to “Variable space search for graph coloring” \[Discrete Appl. Math. 156 (2008) 2551-2560\]. Discrete Applied Mathematics 2009;157(7):1335–1336. Hinostroza I, Pradenas L, Parada V. Board cutting from logs: optimal and heuristic approaches for the problem of packing rectangles in a circle. International Journal of Production Economics 2013;145(2):541–546. Huang E, Korf KE. Optimal rectangle packing: an absolute placement approach. Journal of Artificial Intelligence Research 2012;46:47–87. Huang WQ, Chen DB, Xu RC. A new heuristic algorithm for rectangle packing. Computers & Operations Research 2007;34(11):3270–3280. Kochetov Y, Kononova P, Paschenko M. Formulation space search approach for the teacher/class timetabling problem. Yugoslav Journal of Operations Research 2008;18(1):1–11. Korf KE, Moffitt MD, Pollack ME. Optimal rectangle packing. Annals of Operations Research 2010;179:261–295. Leung JYT, Tam TW, Wong CS, Young GH, Chin FYL. Packing squares into a square. Journal of Parallel and Distributed Computing 1990;10(3):271–275. Li K, Cheng KH. Complexity of resource allocation and job scheduling problems in partitionable mesh connected systems. Proceedings of the First Annual IEEE Symposium of Parallel and Distributed Processing. IEEE Computer Society, Silver Spring, MD, 1989, pp. 358—365. Li ZQ, Wang XF, Tan JY, Wang YS. A quasiphysical and dynamic adjustment approach for packing the orthogonal unequal rectangles in a circle with a mass balance: satellite payload packing. Mathematical Problems in Engineering Volume 2014 (2014), Article ID 657170. Liu DQ, Teng HF. An improved BL-algorithm for genetic algorithm of the orthogonal packing of rectangles. European Journal of Operational Research 1999;112(2):413–420. Lodi A, Martello S, Monaci M. Two-dimensional packing problems: A survey. European Journal of Operational Research 2002;141(2):241–252. López CO, Beasley JE. A heuristic for the circle packing problem with a variety of containers. European Journal of Operational Research 2011;214(3):512–525. López CO, Beasley JE. Packing unequal circles using formulation space search. Computers & Operations Research 2013;40(5):1276–1288. López CO, Beasley JE. A note on solving MINLP’s using formulation space search. Optimization Letters 2014;8(3):1167–1182. López CO, Beasley JE. A formulation space search heuristic for packing unequal circles in a fixed size circular container. 
European Journal of Operational Research 2016;251(1):65–73. Maag V, Berger M, Winterfeld A, Kufer KH. A novel non-linear approach to minimal area rectangular packing. Annals of Operations Research 2010;179(1):243–260. Maher SJ, Fischer T, Gally T, Gamrath G, Gleixner A, Gottwald RL, Hendel G, Koch T, Lübbecke ME, Miltenberger M, Müller B, Pfetsch ME, Puchert C, Rehfeldt D, Schenker S, Schwarz R, Serrano F, Shinano Y, Weninger D, Witt JT, Witzig J. The SCIP Optimization Suite 4.0. ZIB Report 17-12 (March 2017, Revised September 2017). Available from\ https://opus4.kobv.de/opus4-zib/files/6217/scipoptsuite-401.pdf Last accessed February 15 2018. Martello S, Monaci M. Models and algorithms for packing rectangles into the smallest square. Computers & Operations Research 2015;63:161–171. Mladenović N, Plastria F, Urošević D. Reformulation descent applied to circle packing problems. Computers & Operations Research 2005;32(9):2419–2434. Mladenović N, Plastria F, Urošević D. Formulation space search for circle packing problems. In “Engineering Stochastic Local Search Algorithms. Designing, Implementing and Analyzing Effective Heuristics”, Proceedings of the International Workshop, SLS 2007, Brussels, Belgium, September 6-8, 2007. Lecture Notes in Computer Science volume 4638, 2007, pages 212–216. Murata H, Fujiyoshi K, Nakatake S, Kajitani Y. VLSI module placement based on rectangle-packing by the sequence-pair. IEEE Transactions on Computer-Aided Design of Integrated Circuits and Systems 1996;15(12):1518–1524. Pardo EG, Mladenović N, Pantrigo JJ, Duarte A. Variable formulation search for the cutwidth minimization problem. Applied Soft Computing 2013;13(5):2242–2252. Pedroso JP, Cunha S, Tavares JN. Recursive circle packing problems. International Transactions in Operational Research 2016;23(1-2):355–368. Picouleau C. Worst-case analysis of fast heuristics for packing squares into a square. Theoretical Computer Science 1996;164(1-2):59–72. SCIP: Solving constraint integer programs. Available from http://scip.zib.de/ Last accessed February 15 2018. Sweeney PE, Paternoster ER. Cutting and packing problems: a categorized, application-orientated research bibliography. Journal of the Operational Research Society 1992;43(7):691–706. Vigerske S. Private communication, April 2017. Vigerske S, Gleixner A. SCIP: Global optimization of mixed-integer nonlinear programs in a branch-and-cut framework. ZIB Report 16-24 (May 2016). Available from\ https://opus4.kobv.de/opus4-zib/frontdoor/index/index/docId/5937 Last accessed February 15 2018. Wascher G, Haußner H, Schumann H. An improved typology of cutting and packing problems. European Journal of Operational Research 2002;183(3):1109–1130. Wu YL, Huang WQ, Lau SC, Wong CK, Young GH. An effective quasi-human based heuristic for solving the rectangle packing problem. European Journal of Operational Research 2007;141(2):341–358. [^1]: [email protected] [^2]: [email protected]; [email protected]
{ "pile_set_name": "ArXiv" }
---
abstract: 'We discuss the results of a global fit to precision data in supersymmetric models. We consider both gravity- and gauge-mediated models. As the superpartner spectrum becomes light, the global fit to the data typically results in larger values of $\chi^2$. We indicate the regions of parameter space which are excluded by the data. We discuss the additional effect of the $B(B\rightarrow X_s\gamma)$ measurement. Our analysis excludes chargino masses below $M_Z$ in the simplest gauge-mediated model with $\mu>0$, with stronger constraints for larger values of $\tan\beta$.'
address:
- 'Stanford Linear Accelerator Center, Stanford University, Stanford, California 94309, USA'
- 'Institute for Particle Physics, University of California, Santa Cruz, California 95064, USA'
author:
- 'Damien M. Pierce and Jens Erler'
title: '$\chi^2$ Analysis of Supersymmetric Models[^1]'
---

Introduction
============

Low energy measurements can serve as useful probes of higher energy scales, because the virtual effects of heavy particles influence low energy observables. Hence, low energy measurements can constrain possible new physics scenarios. The most striking example of this effect was the “virtual top-quark discovery”. When the top-quark mass was first measured through direct production at the Tevatron, the precision electroweak data had already constrained the mass with about the same central value and uncertainty [@DMP:top].

In the standard model some observables are sensitive to the square of the top-quark mass and the logarithm of the Higgs boson mass. Hence, both these masses can be constrained by precision data. In supersymmetric models, some observables are sensitive to the supersymmetric masses. Just as with $m_t$ and $M_H$, there are values of supersymmetric parameters in conflict with measurements.

We will discuss the results of a global fit to precision data in supersymmetric models. Before discussing the supersymmetric case, we review the results of a global fit within the standard model. This serves as a useful barometer for comparison with the supersymmetric models. Also, the supersymmetric models reduce to the standard model as the supersymmetric mass scale becomes large. The supersymmetric corrections decouple as $M_Z^2/M_{\rm SUSY}^2$ (or faster). The only remnant of supersymmetry in the large $M_{\rm SUSY}$ limit is the light Higgs boson mass (which remains a prediction of the model).

The observables
===============
  ------------------------------------------------------------- ------------------------ ---------- ---------
                                                                 Measurement              Standard   Pull
                                                                                          Model
  $M_Z$ \[GeV\]                                                  91.1863 $\pm$ 0.0019     91.1862    0.0
  $\Gamma_Z$ \[MeV\]                                             2494.7 $\pm$ 2.6         2496.9     $-$0.9
  $\sigma_h$ \[nb\]                                              41.489 $\pm$ 0.055       41.467     0.4
  $R_e$                                                          20.756 $\pm$ 0.029       20.757     0.0
  $R_\mu$                                                        20.795 $\pm$ 0.029       20.757     1.0
  $R_\tau$                                                       20.831 $\pm$ 0.029       20.802     0.5
  $A_{\rm FB}^e$                                                 0.0161 $\pm$ 0.0010      0.0162     0.0
  $A_{\rm FB}^\mu$                                               0.0165 $\pm$ 0.0010      0.0162     0.2
  $A_{\rm FB}^\tau$                                              0.0204 $\pm$ 0.0010      0.0162     2.3
  ${\cal A}_\tau(\tau)$                                          0.1401 $\pm$ 0.0067      0.1469     $-$1.0
  ${\cal A}_e(\tau)$                                             0.1382 $\pm$ 0.0076      0.1469     $-$1.1
  $\sin^2\theta^{\rm lept}_{\rm eff} (\langle Q_{FB}\rangle)$    0.2322 $\pm$ 0.0010      0.2315     0.7
  $R_b$                                                          0.2177 $\pm$ 0.0011      0.2158     1.7
  $R_c$                                                          0.1722 $\pm$ 0.0053      0.1723     0.0
  $A_{\rm FB}^b$                                                 0.0985 $\pm$ 0.0022      0.1030     $-$2.1
  $A_{\rm FB}^c$                                                 0.0735 $\pm$ 0.0048      0.0736     0.0
  ${\cal A}_b$                                                   0.897 $\pm$ 0.047        0.935      $-$0.8
  ${\cal A}_c$                                                   0.623 $\pm$ 0.085        0.667      $-$0.5
  $A_{LR}$                                                       0.1548 $\pm$ 0.0033      0.1469     2.4
  ${\cal A}_\mu$                                                 0.102 $\pm$ 0.034        0.147      $-$1.3
  ${\cal A}_\tau$                                                0.195 $\pm$ 0.034        0.147      1.4
  $Q_W$(Cs)                                                      $-$72.11 $\pm$ 0.93      $-$73.11   1.1
  $Q_W$(Tl)                                                      $-$114.77 $\pm$ 3.65     $-$116.7   0.5
  $M_W$ \[GeV\]                                                  80.402 $\pm$ 0.076       80.375     0.4
  $m_t$ \[GeV\]                                                  175.6 $\pm$ 5.0          173.0      0.5
  $\Delta\alpha_{\rm had}$                                       0.028037 $\pm$ 0.000654  0.02797    0.1
  ------------------------------------------------------------- ------------------------ ---------- ---------

In the standard model we take as inputs the muon decay constant, $G_\mu$, the $Z$-boson mass, $M_Z$, the top quark mass, $m_t$, the Higgs boson mass, $M_H$, the electromagnetic coupling, $\alpha$, and the strong coupling, $\alpha_s$. The last two couplings are taken in the $\overline{\rm MS}$ scheme at the $Z$-scale. Given these six inputs[^2] we have predictions for all other observables in the standard model. We consider the list of observables below, and find the values of the five inputs[^3] which minimize the total $\chi^2$. In this way we find the best fit values of the input parameters and the standard model predictions for all the observables. The observables we include in our $\chi^2$ are

- Line-shape and lepton asymmetries[^4]. These are the $Z$-mass, the $Z$-width, the peak hadronic cross-section, the ratio of the hadronic width to the leptonic widths, and the leptonic forward-backward asymmetries: $M_Z$, $\Gamma_Z$, $\sigma_{\rm had}$, $R_{e,\mu,\tau}$, $A^{FB}_{e,\mu,\tau}$.

- $\tau$ polarization. The $\tau$ decay analysis yields measurements of the $\tau$ and $e$ left-right asymmetries: ${\cal A}_{\tau,e}(\tau)$.

- Light quark charge asymmetry, $\langle Q_{FB}\rangle$, which yields a measurement of $\sin^2\!\theta^{\rm lept}_{\rm eff}$.

- $b$ and $c$ quark results. These are the ratios of the heavy quark widths to the total hadronic width, and the heavy quark forward-backward asymmetries (polarized at the SLC): $R_{b,c}$, $A^{FB}_{b,c}$, ${\cal A}_{b,c}$.

- Leptonic left-right asymmetries (total and forward-backward): $A_{LR}$, ${\cal A}_{\mu,\tau}$.

- Atomic parity violation weak charges: $Q_W$(Cs) and $Q_W$(Tl).

- $W$-boson mass, top-quark mass, and the light quark contribution to $\alpha$: $M_W$, $m_t$ and $\Delta\alpha_{\rm had}$.

There are 26 observables included here. With 5 parameters in the fit, we are left with 21 degrees of freedom.

The standard model fit
======================

In Table 1 we list the results of the standard model fit.
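The bookkeeping behind the fit can be made concrete with a short numerical sketch (a minimal illustration only, written here in Python; it uses a handful of rows of Table 1 rather than the full set of 26 observables, and it ignores the correlations mentioned in the footnotes; the pull and the $\chi^2$ are defined in the text that follows):

```python
from scipy.stats import chi2

# A few illustrative rows of Table 1: (measurement, error, standard model prediction)
rows = {
    "R_b":  (0.2177, 0.0011, 0.2158),
    "A_LR": (0.1548, 0.0033, 0.1469),
    "A_b":  (0.897,  0.047,  0.935),
    "M_W":  (80.402, 0.076,  80.375),
}

for name, (measured, error, predicted) in rows.items():
    pull = (measured - predicted) / error    # pull = (measurement - prediction)/error
    print(f"{name:5s} pull = {pull:+.1f}")   # +1.7, +2.4, -0.8, +0.4, as in Table 1

# Summing the squared pulls of all 26 observables gives chi^2 = 29.57 for the best fit;
# with 21 degrees of freedom the goodness of fit is the upper-tail probability:
print(chi2.sf(29.57, df=21))                 # ~0.10, i.e. about 10%
```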
We list the measured values with the errors[^5] [@DMP:LEPEWWG; @DMP:SLD; @DMP:APV; @DMP:mt; @DMP:mw; @DMP:EJ], the standard model predictions, and the pull, which is defined to be the difference between the measured value and the standard model prediction, divided by the error. If we sum the squares of the pulls we obtain the $\chi^2$ of the best fit, which in this case is 29.57. With 21 degrees of freedom, this corresponds to a probability of 10.1%. This means that if the standard model does describe the data, in a random variate (a set of measured values with central values distributed randomly in a Gaussian fashion) the probability that the $\chi^2$ is greater than 29.57 is 10.1%. This 10.1% probability may sound low, but we consider it to be reasonable. Remember that the “best” we could hope for is 50% (higher than that would suggest that the data fits the predictions “too well”). In the days when $R_b$ was 3.5$\sigma$ off, the goodness of the fit was much less than 1%.

The best fit values of some inputs are $$m_t = 173\pm5\ {\rm GeV}\ ,$$ $$\alpha_s = 0.122 \pm 0.003\ ,$$ $$M_H = 93 ^{+104}_{-57}\ {\rm GeV}\ .$$ We show contours of constant $\chi^2$ in the $m_t-M_H$ plane in Fig. 1. The contours correspond to 68% and 90% confidence. We see that $M_H$ is constrained to be less than 400 GeV at 95% CL.

The supersymmetric analysis
===========================

There are two approaches to $\chi^2$ analyses in supersymmetric models which one might consider. In the first approach one tries to repair the discrepancies seen between the data and the standard model [@DMP:prec.mssm]. For example, the standard model prediction for $R_b$ used to be 3.5$\sigma$ off. One could find regions of supersymmetric parameter space where this discrepancy was repaired [@DMP:Rb]. Now, however, there are no large ($3\sigma$) discrepancies, and the largest deviations cannot be repaired by supersymmetry. This approach is no longer useful, since the supersymmetric corrections cannot reduce the $\chi^2$ significantly, and one has to pay the price of the smaller number of degrees of freedom.

In the other approach, one notices that there are significant regions of parameter space where the supersymmetric corrections make the fit worse. Consider the plot[^6] of $\chi^2$ vs. $M_{\rm SUSY}$ in Fig. 2. At large $M_{\rm SUSY}$ the $\chi^2$ approaches the standard model value (with the light Higgs mass determined as a function of the supersymmetric parameters). At smaller $M_{\rm SUSY}$, the $\chi^2$ typically rises; the unsuppressed supersymmetric radiative corrections can result in a terrible fit. Here we focus on this approach, elucidating the regions of parameter space where the $\chi^2$ is so large that those points can be ruled out.

Notice there are two competing effects which will continue to determine the utility of this approach. As the data becomes more precise, the smaller errors will lead to larger values of $\chi^2$ (assuming the discrepancies are real). At the same time, as more data is accumulated, the limits on the supersymmetric mass spectrum are increased. One must then consider larger values of the supersymmetric masses, and this leads to smaller values of $\chi^2$.

Three supersymmetric models
---------------------------

We will show the results of the $\chi^2$ analysis in three supersymmetric models. The “minimal supergravity” model [@DMP:msugra] is perhaps the most commonly considered high scale model.
Here we take as boundary conditions at the GUT scale (the scale where $g_1=g_2$) a universal scalar mass $M_0$, a universal gaugino mass $M_{1/2}$, and a universal trilinear scalar coupling $A_0$. We also take as an input the ratio of vacuum expectation values of the two Higgs doublets, $\tan\beta\equiv v_2/v_1$, and the sign of the $\mu$-term. Consider two assumptions which are necessary in order to arrive at the “minimal supergravity” model. First, one must assume a flat Kähler metric. Second, one assumes the universality of the scalar masses is maintained from the Planck scale to the GUT scale. Both of these assumptions seem artificial and unrealistic, especially the latter. However, they are useful for simplicity and economy, and in particular they guarantee that the model is free of potentially disastrous FCNC problems. Also, the running of the parameters between the Planck scale and the GUT scale is model dependent, so it makes sense that the minimal model assumes no running. Models with gauge-mediated (GM) supersymmetry breaking [@DMP:gmsb] comprise a class which is automatically free of FCNC problems. Here, the supersymmetry breaking is communicated from the hidden sector to the visible sector through the interactions of the gauge fields and the messenger fields. In the minimal model the superpotential contains a singlet which acquires both a vev $X$ and an $F$-term $F_X$. The singlet is coupled to the messenger fields, which are in a vector-like representation under the standard model gauge group. The supersymmetric spectrum is proportional to $\Lambda\equiv F_X/X$, with dependence on the messenger field representation. We consider two models, a model with a [ ]{}messenger sector and a model with a [ ]{}messenger sector. The boundary conditions for the soft masses are applied at the messenger mass scale, $M$. Again, we take $\tan\beta$ and the sign of $\mu$ as inputs. In both the supergravity and gauge-mediated models we impose radiative electroweak symmetry breaking [@DMP:ewsb]. Starting with a common positive Higgs boson mass-squared at the GUT scale or the messenger scale, we evolve the Higgs masses down to the weak-scale using the renormalization group equations (RGE’s) [@DMP:rge's]. Because of the large top-quark Yukawa coupling, the mass-squared of the Higgs which couples to the top, $m_{H_2}^2$, is driven negative in the vicinity of the electroweak scale. This signals the breaking of electroweak symmetry, and requiring this to occur allows us to solve for the heavy Higgs boson mass and the Higgsino mass as a function of the input parameters: $$\begin{aligned} m_A^2 &=& {1\over\cos2\beta}\left(m_{H_2}^2-m_{H_1}^2\right)-M_Z^2 \\ \mu^2 &=& {1\over2} \biggl[ \tan2\beta\left( m_{H_2}^2\tan\beta - m_{H_1}^2 \cot\beta \right) \\ && \qquad\qquad\qquad\qquad\qquad- M_Z^2\biggr]\nonumber\end{aligned}$$ In the gauge-mediated models we implicitly assume that whatever mechanism is responsible for the generation of the $B$ and $\mu$ terms does not give rise to contributions to the soft scalar masses. The determination of $\chi^2$ {#DMP:proc} ----------------------------- The overview of the $\chi^2$ analysis is as follows: 1. Pick starting values for $(M_Z,\ m_t,\ \alpha,\ \alpha_s)$. 2. Pick a random point in supersymmetric parameter space. 3. For fixed $(M_Z,\ m_t,\ \alpha,\ \alpha_s)$ solve the supersymmetry model by iteration. Here we have two-sided boundary conditions. 
We know the gauge and Yukawa couplings and $\tan\beta$ at the weak scale and the soft parameters at the high scale. We include full one-loop corrections in the evaluation of the gauge and Yukawa couplings, and in the Higgs sector (both the light Higgs boson mass and electroweak symmetry breaking). 4. Compute $\chi^2$. Here we include the full one-loop supersymmetric corrections to every observable, with two caveats: - We use the oblique approximation for the evaluation of atomic parity violation weak charges - The SUSY box diagrams are neglected in $Z$-pole observables 5. Minimize $\chi^2$ with respect to $(M_Z, m_t, \alpha, \alpha_s)$ for the fixed set of supersymmetric corrections. 6. If not converged, go to step 3. 7. Apply current limits on the superpartner and Higgs boson mass spectrum from direct searches. If this fails, disregard this point. We include the current mass limits from CDF, D0 [@DMP:cdfd0], and LEP II [@DMP:lep2]. LEP II has produced new limits on the chargino mass, the slepton masses, the Higgs boson masses, and the light top-squark mass. Because the spectrum in high scale models is correlated, the CDF and D0 gluino and squark mass limits are typically irrelevant. For example, in the [ ]{}gauge-mediated model, after imposing the limits on the chargino, Higgs bosons, and sleptons, we find that the squark and gluino masses are larger than 260 GeV. The direct search limits are 230 and 180 GeV, respectively [@DMP:cdfd0]. The oblique approximation ------------------------- We can give a concise description of the overall magnitude and relevance of the supersymmetric corrections by considering the oblique approximation. Most of the precision observables involve gauge boson exchange, and hence they receive universal (i.e. process and flavor-independent) corrections from the gauge boson self-energies. With $M_Z$, $G_\mu$, and $\alpha$ as inputs, there are three independent linear combinations of gauge-boson self-energies which appear in physical observables in the lowest order of a derivative expansion. In some cases the full corrections are dominated by the oblique corrections. This might be expected, since the oblique corrections include contributions from every non-singlet superpartner, so they are enhanced by the number of generations and/or the number of colors. The non-oblique corrections (the fermion wave-function, vertex, and box corrections) arise from a limited set of diagrams, since the loops are constrained by the external fermion quantum numbers. We parametrize the oblique corrections by $S$, $T$ and $U$ [@DMP:stu]. These are given by the expressions [@DMP:stu; @form] $$\begin{aligned} S&=& \biggl[\cos^2\theta_W\left(F_{ZZ} - F_{\gamma\gamma}\right)\\ &&\qquad- {\cos\theta_W\over\sin\theta_W}\cos2\theta_W F_{\gamma Z}\biggr] \times {4\sin^2\theta_W\over\alpha}\nonumber\\ T&=& \biggl[{\Pi_{WW}(0)\over M_W^2} - {\Pi_{ZZ}(0)\over M_Z^2}\\ &&\qquad\qquad -2{\sin\theta_W\over\cos\theta_W}{\Pi_{\gamma Z}(0) \over M_Z^2}\biggr] \times {1\over\alpha}\nonumber\\ U&=&\biggl[F_{WW} - \cos^2\theta_WF_{ZZ} - \sin^2\theta_WF_{\gamma\gamma} \\ &&\qquad\qquad -\sin2\theta_WF_{\gamma Z}\biggr] \times{4\sin^2\theta_W\over\alpha}\nonumber\end{aligned}$$ where $F_{ij}=(\Pi_{ij}(M_j^2)-\Pi_{ij}(0))/M_j^2$ (except $F_{\gamma\gamma}=\Pi_{\gamma\gamma}(M_Z^2)/M_Z^2$). The most important of these parameters is $T$. In Fig. 3 we show the contributions to $T$ from the various supersymmetric sectors in the supergravity model. 
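(For orientation, the combination defining $T$ is simple to assemble once the self-energies are known; the following minimal sketch uses purely illustrative placeholder inputs, not the one-loop self-energies entering our fit.)

```python
import math

def T_parameter(Pi_WW_0, Pi_ZZ_0, Pi_gammaZ_0, MW, MZ, alpha, sin2_thetaW):
    """Assemble T from the q^2 = 0 self-energies, following the combination
    quoted above (self-energies in GeV^2, masses in GeV)."""
    sw = math.sqrt(sin2_thetaW)
    cw = math.sqrt(1.0 - sin2_thetaW)
    return (Pi_WW_0 / MW**2
            - Pi_ZZ_0 / MZ**2
            - 2.0 * (sw / cw) * Pi_gammaZ_0 / MZ**2) / alpha

# Placeholder inputs only: self-energies of a few GeV^2 give T of order 0.1,
# the size of the supersymmetric contributions shown in Fig. 4.
print(T_parameter(Pi_WW_0=5.0, Pi_ZZ_0=1.0, Pi_gammaZ_0=0.0,
                  MW=80.4, MZ=91.19, alpha=1/128.0, sin2_thetaW=0.231))
```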
Each point in the scatter plots is a best fit at the randomly chosen point in supersymmetric parameter space, as described in Section \[DMP:proc\]. Fig. 3(a) shows the chargino/neutralino contribution vs. the light chargino mass, (b) shows the stop/sbottom contribution vs. the heavy stop mass, (c) shows the slepton contribution vs. the left-handed selectron mass, and (d) shows the supersymmetric Higgs boson contribution vs. the CP-odd Higgs boson mass. We note that all the sectors give positive contributions to $T$ (also to $U$). We see that the first three sectors contribute with the same order of magnitude (at most about 0.06, 0.12, and 0.07, respectively). The Higgs sector and the contributions of the first two generation squarks each contribute at most about 0.015. If we add all these contributions together, we obtain the plots in Fig. 4. Here we show the total supersymmetric contributions to $S$, $T$ and $U$ vs. the light chargino mass. The cancellation between the slepton and chargino sectors results in typically smaller values for $S$. The SUSY contributions to $S$, $T$ and $U$ are in the ranges ($-0.05,0.1$), (0,0.2), and (0,0.09). For chargino masses above 300 GeV the decoupling results in suppressed contributions. We illustrate the relevance of these corrections in Figs. 5. Here, in the $T$, $S$ plane, we show the 68% and 95% CL contours found by varying $M_Z$, $m_t$, $\alpha$, $\alpha_s$, and $U$, with $M_H$ fixed to its best fit value, in the standard model. On top of these contours we show scatter plots of the supersymmetric contributions to $S$ and $T$. In Fig. 5(a) the minimal supergravity scatter plot is shown, in 5(b) the [ ]{}gauge-mediated model scatter plot is shown, and the [ ]{}GM model results are shown in 5(c). The point (0,0) is a part of each supersymmetric scatter plot, since this corresponds to the decoupled region of parameter space, where $M_{\rm SUSY}$ is large. As we decrease $M_{\rm SUSY}$, the scatter moves up and to the right or left. One can get a feeling for the overall magnitude of the supersymmetric corrections here. The direct limits on the supersymmetric mass spectrum have significantly constrained the magnitude of the supersymmetric corrections. For example, in the [ ]{}gauge-mediated model the contributions to $S$ and $T$ fall almost entirely inside the 68% contour. Full one-loop analysis ---------------------- To rule out points in parameter space we need to consider the full one-loop supersymmetric corrections, not just the oblique corrections. The oblique approximation works well for some observables, but poorly for others. We illustrate this in Fig. 6. In Fig. 6(a) we show the $W$-boson mass vs. $M_{\rm SUSY}$ in the oblique approximation and the full one-loop result, and they are seen to be in good agreement (here we work in the supergravity model, define $M_{\rm SUSY} = M_0 = M_{1/2} = A_0$, and set $\tan\beta=2$ and $\mu>0$). We show the correction in units of “pull” (that is, we divide the correction by the experimental error). In Fig. 6(b) we show the full and oblique $\sin^2\theta^{\rm lept}_{\rm eff}$. Clearly the non-oblique corrections are significant in this case. In the full one-loop supersymmetric analysis we have included an external constraint on the strong coupling, $\alpha_s=0.118 \pm0.003$, which we obtained by combining all except the $Z$-lineshape data [@DMP:alphas]. The extent of the complete one-loop corrections for each observable is shown in Fig. 
7, where the horizontal lines indicate the range of values of each observable in the entire supersymmetric parameter space. The top line corresponds to the supergravity model, the middle line to the [ ]{}GM model, and the bottom line to the [ ]{}GM model. The small vertical line shows the value of the pull in the standard model. Looking at this plot the general impression is that the supersymmetric corrections are unable to provide a significantly more satisfactory description of the data than the standard model. Some of the observables are seen to be irrelevant in the fit (i.e. $R_c,\ {\cal A}_c,\ {\cal A}_b,\ {\cal A}_\mu,\ {\cal A}_\tau,\ Q_W({\rm Cs})$ and $Q_W({\rm Tl})$; the supersymmetric corrections to these observables are small relative to the experimental uncertainty). Of the observables with sizable corrections, some can reduce the SM discrepancies, ($R_b$, $A_{LR}$), while others can increase the SM discrepancies (${\cal A}_{e,\tau}(\tau)$, $A^{FB}_b$). There are interesting correlations among the corrections to different observables. For example, the positive corrections to $R_b$ (which reduce the SM discrepancy) are accompanied by positive corrections to $A^{FB}_b$ (which lead to a larger discrepancy). We illustrate this in Fig. 8, which shows the $A^{FB}_b$ pull vs. the $R_b$ pull in the supergravity model. The minimum $\chi^2$ found in our random scan over parameter space is 29.6 for 19 degrees of freedom in the supergravity model. For both GM models, we find the minimum $\chi^2$ values of 30.3 for 20 degrees of freedom. These correspond to goodness of fits of 5.8% and 6.5%, respectively. We can compare these numbers with the standard model goodness of fit of 10.9%. We see that the marginal reduction in $\chi^2$ is more than compensated for by the increase in the number of input parameters. What we are really interested in is the set of points with large values of $\chi^2$. Not counting the supersymmetry parameters as fit parameters, we deem a point in supersymmetric parameter space excluded if the goodness of the fit is less than 5%. In Fig. 9 we show the plots of $\chi^2$ vs. the input parameters in the supergravity model, with $\mu>0$. All points above the upper horizontal line are excluded at the 95% confidence level. Fig. 10 shows the same plot with $\mu<0$. With $\mu>0$ there is significantly more parameter space ruled out. We see that in the $\mu>0$ ($\mu<0$) case, there are no points excluded if $M_{1/2}> 155 \ (160)$ GeV, or $M_0>160\ (100)$ GeV, or $\tan\beta<2.2\ (3)$. The excluded parameter space forms a complicated hyper-region. We plot the full one-loop $\chi^2$ vs. $\Lambda$ in the [ ]{}and [ ]{}GM models in Fig. 11, with $\mu>0$. We see that in the [ ]{}model values of $\Lambda<30$ TeV are excluded. In the [ ]{}case, $\Lambda<12$ TeV is excluded. In the [ ]{}GM model with $\mu<0$ there are no points excluded by this analysis. The $B\rightarrow X_s\gamma$ constraint ======================================= We now specially consider a very important observable. The CLEO measurement [@DMP:cleo] of the rare decay $B\rightarrow X_s\gamma$ yields the 90% confidence interval $1.0\times10^{-4} < B(B\rightarrow X_s\gamma) < 4.2\times10^{-4}$. This measurement imposes a significant constraint on supersymmetric models [@DMP:bsg]. The charged Higgs loops and chargino loops give the largest contributions. The chargino contribution contains a term proportional to $\mu\tan\beta$, and it can be much larger than the standard model amplitude, and can be of either sign. 
Hence, the $B\rightarrow X_s\gamma$ rate in supersymmetric models can be much larger or much smaller than the standard model prediction. This leads to very large values of $\chi^2$ in some regions of parameter space (e.g. $\mu>0$ and large $\tan\beta$). This is illustrated in Fig. 12, where we show the $\chi^2$ before and after considering the $b\rightarrow s\gamma$ constraint, in the [ ]{}GM model, with $\mu>0$. We see from Fig. 12(b) that we can exclude chargino masses below $M_Z$ with the $b\rightarrow s\gamma$ constraint. For larger values of $\tan\beta$, this constraint becomes stronger. We show in Fig. 13 the lower bounds on the lightest neutralino and lightest chargino masses vs. $\tan\beta$. There are corresponding bounds on the other superpartner masses. There are no such bounds in the case $\mu<0$. In the supergravity model the $b\rightarrow s\gamma$ constraint does not yield such strong limits, because the parameter space has more freedom. Nevertheless, the additional region of parameter space which is excluded after including the $b\rightarrow s\gamma$ measurement is significant. In particular, the $\mu>0$, large $\tan\beta$ region is severely constrained. Conclusions =========== Global fits to the world’s precision data provide significant constraints on supersymmetric models. We gave an encapsulated view of the supersymmetric corrections by examining the oblique set. We then indicated the amount of parameter space which is excluded based on a full one-loop analysis. We found it important to include as many observables as possible in the fit, since different models, or different regions of parameter space in a given model, are more or less sensitive to different observables. We showed the added sensitivity after including the $b\rightarrow s\gamma$ measurement in the list of observables. The large $\tan\beta$, $\mu>0$ region of parameter space was shown to be severely constrained. Because the supersymmetric corrections decouple and the standard model with a light Higgs boson is consistent with the data, most of the supersymmetric parameter space is consistent with the data. Some regions of parameter space with light superpartners are excluded, but, on the other hand, there are points in parameter space with very light particles which are consistent with the data. In fact, the point in the supergravity model parameter space with the smallest $\chi^2$ in our scan includes a light right-handed top-squark (55 GeV). So who knows what the next experiments will find? Acknowledgements {#acknowledgements .unnumbered} ================ D.M.P. thanks the CERN theory group and the CERN computing center staff for generous hospitality. [9]{} J. Erler, P. Langacker, Phys. Rev. D52 (1995) 441. A. Böhm for the ALEPH, DELPHI, L3 and OPAL Collaborations, talk presented at [*32nd Rencontres de Moriond: Electroweak Interactions and Unified Theories*]{}, Les Arcs, France, March 1997. SLD Collaboration: K. Abe [*et al.*]{}, Phys. Rev. Lett. 78 (1997) 17, 78 (1997) 2075 and 79 (1997) 804; P.C. Rowson, talk presented at [*32nd Rencontres de Moriond: Electroweak Interactions and Unified Theories*]{}, Les Arcs, France, March 1997. C. S. Wood [*et al.*]{}, Science 275 (1997) 1759; N. H. Edwards, S. J. Phipp, P. E. G. Baird, S. Nakayama, Phys. Rev. Lett. 74 (1995) 2654; P. A. Vetter, D. M. Meekhof, P. K. Majumder, S. K. Lamoreaux, E. N. Fortson, Phys. Rev. Lett. 74 (1995) 2658. R. Raja for the D0 and CDF Collaborations, preprint FERMILAB–CONF–97–194–E, [hep-ex/9706011]{}; D0 Collaboration: B. 
Abbott [*et al.*]{}, preprint FERMILAB–PUB–97–172–E, [hep-ex/9706014]{}. CDF Collaboration: F. Abe [*et al.*]{}, Phys. Rev. Lett. 65 (1990) 2243; [*ibid.*]{} 75 (1995) 11; Phys. Rev. D43 (1991) 2070; [*ibid.*]{} D52 (1995) 4784; D0 Collaboration: S. Abachi [*et al.*]{}, Phys. Rev. Lett. 77 (1996) 3309, and preprint FERMILAB–CONF–97–222–E, [hep-ex/9706028]{}. S. Eidelman, F. Jegerlehner, Z. Phys. C67 (1995) 585. G.L. Kane, Robin G. Stuart, James D. Wells, Phys. Lett. B354 (1995) 350; P.H. Chankowski, S. Pokorski, Phys. Lett. B366 (1996) 188; W. de Boer, A. Dabelstein, W. Hollik, W. Mösle, U. Schwickerath, [hep-ph/9609209]{} and Z. Phys. C75 (1997) 627. M. Boulware, D. Finnell, Phys. Rev. D44 (1991) 2054; J.D. Wells, C. Kolda, G.L. Kane, Phys. Lett. B338 (1994) 219; X. Wang, J.L. Lopez, D.V. Nanopoulos, Phys. Rev. D52 (1995) 4116; D. Garcia, J. Sola, Phys. Lett. B354 (1995) 335; P.H. Chankowski, S. Pokorski, Nucl. Phys. B475 (1996) 3; J. Ellis, J.L. Lopez, D.V. Nanopoulos, Phys. Lett. B397 (1997) 88 and B372 (1996) 95. A. Chamseddine, R. Arnowitt, P. Nath, Phys. Rev. Lett. 49 (1982) 970; R. Barbieri, S. Ferrara, C. Savoy, Phys. Lett. B119 (1982) 343; L.J. Hall, J. Lykken, S. Weinberg, Phys. Rev. D27 (1983) 2359. M. Dine, W. Fischler, M. Srednicki, Nucl. Phys. B189 (1981) 575; S. Dimopoulos, S. Raby, Nucl. Phys. B192 (1981) 353; C. Nappi, B. Ovrut, Phys. Lett. B113 (1982) 175; L. Alvarez-Guamé, M. Claudson, M. Wise, Nucl. Phys. B207 (1982) 96. K. Inoue, A. Kakuto, H. Kamatsu, S. Takeshita, Prog. Theo. Phys. 68 (1982) 927; L. Alvarez-Gaumé, J. Polchinski and M.B. Wise, Nucl. Phys. B221 (1983) 495; J. Ellis, J.S. Hagelin, D.V. Nanopoulos, K. Tamvakis, Phys. Lett. B125 (1983) 275; L.E. Ibañez, C. Lopez, Nucl. Phys. B233 (1984) 511; L.E. Ibañez, C. Lopez, C. Muñoz, Nucl. Phys. B250 (1985) 218. M.E. Machacek, M.T. Vaughn, Nucl. Phys. B222 (1983) 83; [*ibid.*]{} B236 (1984) 221; [*ibid.*]{} B249 (1985) 70; I. Jack, Phys. Lett. B147 (1984) 405; Y. Yamada, Phys. Rev. D50 (1994) 3537; I. Jack, D.R.T. Jones, Phys. Lett. B333 (1994) 372; S. Martin and M. Vaughn, Phys. Lett. B318 (1993) 331; [*ibid.*]{}, Phys. Rev. D50 (1994) 2282. R. Culbertson, proceedings of the 5th International Conference on Supersymmetries in Physics (SUSY 97), Philadelphia, Pennsylvania, May 27-31, 1997. F. Cerutti, proceedings of the 5th International Conference on Supersymmetries in Physics (SUSY 97), Philadelphia, Pennsylvania, May 27-31, 1997. M.E. Peskin, T. Takeuchi, Phys. Rev. Lett. 65 (1990) 964; [*ibid.*]{} Phys. Rev. D46 (1992) 381. P.H. Chankowski, A. Dabelstein, W. Hollik, W.M. Mösle, S. Pokorski, J. Rosiek, Nucl. Phys. B417 (1994) 101. For a review see P. N. Burrows, Acta Phys. Polon. B28 (1997) 701. CLEO Collaboration: M.S. Alam [*et al.*]{}, Phys. Rev. Lett. 74 (1995) 2885. S. Bertolini, F. Borzumati, A. Masiero, G. Ridolfi, Nucl. Phys. B353 (1991) 591; F. Borzumati, Z. Phys. C63 (1994) 291; V. Barger, M.S. Berger, P. Ohmann, R.J.N. Phillips, Phys. Rev. D51 (1995) 2438; F. Gabbiani, E. Gabrielli, A. Masiero, L. Silvestrini, Nucl. Phys. B477 (1996) 321; D. Choudhury, F. Eberlein, A. Konig, J. Louis, S. Pokorski, Phys. Lett. B342 (1995) 180; J. Lopez, D. Nanopoulos, X. Wang, A. Zichichi, Phys. Rev. D51 (1995) 147; A. Ali, G.F. Giudice, T. Mannel, Z. Phys. C 67 (1995) 417; J.L. Hewett, J.D. Wells, Phys. Rev. D55 (1997) 5549; N.G. Deshpande, B. Dutta, S. Oh, Phys. Rev. D56 (1997) 519; H. Baer, M. Brhlik, Phys. Rev. D55 (1997) 3201. [^1]: Talk given by D.M.P. 
at the 5th International Conference on Supersymmetries in Physics (SUSY 97), Philadelphia, Pennsylvania, May 27-31, 1997. D.M.P. is supported by Department of Energy contract DE–AC03–76SF00515. [^2]: We need to specify the remaining fermion masses as well, but the predictions for the observables we consider are not very sensitive to these inputs. [^3]: Because of the small error, we take $G_\mu$ as a fixed input. [^4]: We include the correlations among these nine observables. [^5]: The data is current as of spring 1997. [^6]: This plot corresponds to the “minimal supergravity” model with the universal soft parameters set to $M_{\rm SUSY}$, $\tan\beta=2$ and $\mu>0$. Some region of the curve has superpartners lighter than the current limits.
{ "pile_set_name": "ArXiv" }
---
abstract: 'Cylindrical multishell structure is one of the prevalent atomic arrangements in nanowires. Because it is multishell, well-defined atomic periodicity is hardly realized in it: the periodic units of the individual shells generally do not match except in very few cases, which poses a challenge to understanding its physical properties. Here we show that moiré patterns generated by superimposing atomic lattices of individual shells are decisive in determining its electronic structures. Double-walled carbon nanotubes, as an example, are shown to have spectacular variations in their electronic properties from metallic to semiconducting and further to insulating states depending on their moiré patterns, even when they are composed of only semiconducting nanotubes with nearly equal energy gaps and diameters. Thus, aperiodic multishell nanowires can be classified as new one-dimensional moiré crystals with distinct electronic structures.'
author:
- Mikito Koshino
- Pilkyung Moon
- 'Young-Woo Son'
bibliography:
- 'dwnt.bib'
title: 'Incommensurate double-walled carbon nanotubes as one-dimensional moiré crystals'
---

[^1] [^2] [^3]

Introduction
============

When repetitive structures are overlaid against each other, a new superimposed moiré pattern emerges and is observed in various macroscopic phenomena [@oster]. Recent progress in stacking two-dimensional crystals [@geim] enables such patterns to occur at the atomic scale, showing their distinct quantum effects [@yankowitz; @ponomarenko; @dean; @hunt]. Even in one dimension, this atomic pattern is realized naturally in multishell organic and inorganic tubular-shaped nanowires [@iijima; @ge; @tenne]. Among them, the double-walled carbon nanotubes (DWNTs) formed by two concentric single-walled carbon nanotubes (SWNTs) are the simplest multishell nanotube structures [@saito]. The electronic structure of a SWNT, a basic building block of DWNTs, depends on the way a single graphene layer is rolled along a specific chiral vector into a seamless cylindrical shape. The chiral vector, ${\bf C}=n{\bf a}_1 +m{\bf a}_2$, or equivalently a set of integers $(n,m)$, uniquely determines the electronic structure of a SWNT, where ${\bf a}_1$ and ${\bf a}_2$ are the primitive vectors of the hexagonal lattice of graphene \[Fig. 1(d)\]. They are metallic if $|n-m|$ is a multiple of three and otherwise semiconducting [@saito; @ando; @charlier_review]. This simple rule can be obtained by reducing or quantizing one dimension of the two-dimensional massless Dirac energy bands of graphene. In spite of such a clear rule, its extension to double-walled structures is far from trivial [@shen; @ando; @charlier_review]. Ever since their discovery [@iijima], direct [*ab initio*]{} or empirical calculations have been performed to obtain the energy bands of DWNTs only when the two single-walled nanotubes have a common periodicity along the tube axis [@saito; @ando; @charlier_review; @shen; @saito2; @charlier; @okada]. Very few DWNTs, however, have commensurate atomic structures, and the rest do not have a well-defined periodicity, which poses a significant challenge to understanding their electronic properties. This situation also holds for other inorganic one-dimensional multishell tubular structures with several different atomic elements [@tenne].
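As a concrete illustration of the $(n,m)$ rule quoted above, the short Python sketch below (added here for illustration; it is not part of the original paper) classifies a SWNT as metallic or semiconducting and estimates its diameter from the chiral indices. The lattice constant $a \approx 0.246$ nm and the list of example tubes are our own assumed inputs.

```python
import numpy as np

A = 0.246  # graphene lattice constant in nm (assumed standard value)

def swnt_properties(n, m):
    """Classify an (n,m) SWNT and estimate its diameter via the zone-folding rule."""
    circumference = A * np.sqrt(n**2 + n * m + m**2)   # |C| = a*sqrt(n^2 + n*m + m^2)
    diameter = circumference / np.pi
    kind = "metallic" if (n - m) % 3 == 0 else "semiconducting"
    return kind, diameter

# example tubes that appear later in the text
for n, m in [(35, 19), (40, 24), (47, 15), (26, 3), (27, 3), (35, 3)]:
    kind, d = swnt_properties(n, m)
    print(f"({n},{m}): {kind}, diameter ~ {d:.2f} nm")
```

For the pairs combined into DWNTs later in the text, the computed diameter differences correspond to intertube spacings of roughly 0.31-0.35 nm, close to the graphite interlayer distance, which is consistent with the combinations chosen there.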
In the literature, incommensurate DWNTs were studied numerically in terms of the electronic structure [@lambin2000electronic; @ahn] and the transport properties [@roche2001conduction; @yoon2002quantum; @uryu2004electronic; @uryu2005electronic], and it is generally believed that the interlayer coupling does not strongly modify the energy bands of the individual SWNTs. On the other hand, there has been rapid progress in stacking various two-dimensional crystals and in understanding their electronic properties [@geim]. The most notable example among them is twisted bilayer graphene (TBLG), where one graphene layer is overlaid on top of another with a rotational stacking fault [@lopes; @latil; @hass; @shallcross; @kindermann]. These bilayer structures exhibit moiré patterns whose periodicity is much larger than that of the unit cell of graphene. When one layer is rotated continuously with respect to the other from zero to 60 degrees, the two hexagonal lattices have an exact common supercell only for a few discrete rotation angles, while no well-defined periodic unit exists for the infinitely many other choices of angle [@shallcross; @macdonald; @falko; @moon2]. The formation of a moiré pattern in TBLGs, however, does not require an exact matching of atomic positions between the two layers into a common supercell, and its periodicity changes continuously as the angle varies [@shallcross; @macdonald; @falko; @moon2]. Recent theoretical [@moon2] and experimental [@havener] studies demonstrate that the electronic structure of TBLG is dictated not by the exactly matched atomic supercell but by the periodicity of the moiré superlattice. Therefore, successful descriptions of the electronic structures of TBLGs without commensurability validate the effective theory [@macdonald; @falko; @moon2; @moon3] based on the Bloch wave expansion with respect to the moiré lattice in momentum space. This motivates us to explore a possible dimensional reduction from TBLGs with moiré patterns to one-dimensional structures which can be mapped onto DWNTs exactly. Using the effective theory and the atomic structure mapping, we uncover that the moiré pattern plays a decisive role in determining the electronic structures of DWNTs without any commensurability and that the resulting properties are far beyond a simple sum of the electronic bands of the two constituent nanotubes.

Mapping from BLG to DWNT
========================

![image](fig_schematics.ps){width="0.8\hsize"}

We begin by describing the atomic structure mapping procedure from bilayer graphene (BLG) to DWNT. This involves a rotation (its operator form is $\mathcal R$) and a subsequent uniaxial contraction ($\mathcal M$) of one layer with respect to the other in BLG. The upper layer is designated for the inner tube with the chiral vector ${\bf C}=n_1 {\bf a}_1 +n_2 {\bf a}_2$ and the lower for the outer with ${\bf C}'=n'_1{\bf a}_1 +n'_2{\bf a}_2$ \[Figs. 1(a) and 1(d)\]. First, the two different chiral vectors for the inner and outer SWNTs are aligned by rotating the lower layer, resulting in a usual TBLG with a moiré pattern \[Figs. 1(b) and 1(e)\]. Then, the lower layer is contracted uniaxially along ${\bf C}$ to match the two chiral vectors exactly \[Figs. 1(c) and 1(f)\]. The resulting new primitive vectors for the lower layer become $\tilde{\bf a}_i={\mathcal {MR}}{\bf a}_i$ $(i=1,2)$.
Corresponding reciprocal lattice vectors ${\bf b}_i$ and $\tilde{\bf b}_i$ for the upper and lower layers can be defined to satisfy ${\bf a}_i \cdot {\bf b}_j =\tilde{\bf a}_i \cdot \tilde{\bf b}_j =2\pi\delta_{ij}$ $(i,j=1,2)$. Exactly the same atomic structure can be obtained by unfolding a DWNT into a bilayer graphene nanoribbon and by subsequently shrinking the width of the outer ribbon down to that of the inner one \[Figs. 1(g) to 1(i)\]. Therefore, the modified TBLG structure matches the atomic structure of a DWNT with a periodic boundary condition along $\bf C$ as shown in Fig. 1.

![image](fig_lattice_and_bz.ps){width="1.\hsize"}

The mismatch between the lattice periods of the upper and lower layers in the modified TBLG gives rise to the moiré superlattice pattern \[Figs. 2(a)-2(c)\]. In this structure, an arbitrary position $\bf r$ in the lower layer is displaced by the mapping by $\boldsymbol{\delta} ({\bf r})=({\mathcal I}-{\mathcal R}^{-1}{\mathcal M}^{-1}){\bf r}$, where $\mathcal I$ is the identity operator. The periodic vectors (${\bf L}^{\text M}_i$) of the emergent moiré pattern can be obtained from the condition $\boldsymbol{\delta}({\bf L}^{\text M}_i )={\bf a}_i$ and are given by ${\bf L}^{\text M}_i=({\mathcal I}- {\mathcal R}^{-1}{\mathcal M}^{-1})^{-1}{\bf a}_i$ $(i=1,2)$. The corresponding reciprocal vectors satisfying ${\bf G}^{\text M}_i\cdot {\bf L}^{\text M}_j = 2\pi\delta_{ij}$ are given by $${\bf G}^{\text M}_i=({\mathcal I}-{\mathcal M}^{-1}{\mathcal R}){\bf b}_i \quad (i=1,2). \label{eq_g_def}$$ We can immediately show that ${\bf G}^{\text M}_i\cdot {\bf C}=2\pi (n_i -n'_i)$ $(i=1,2)$, so that the moiré period is commensurate with the chiral vector ${\bf C}$, as it should be. The periodic boundary condition for the DWNT forces the two-dimensional momentum space to be quantized into one-dimensional lines perpendicular to ${\bf C}$ with intervals of $2\pi/|{\bf C}|$.

![image](fig_armchair_zigzag.ps){width="0.95\hsize"}

Effective Hamiltonian
=====================

With these conditions on the momentum space of the modified TBLG, we now construct the effective Hamiltonian for low energy electrons. The mapped lower layer for the outer tube has a distorted hexagonal Brillouin zone (BZ), while the upper layer for the inner one has the usual BZ of graphene. Figures 2(a)-2(c) show the actual lattice structures and BZs for each of the DWNTs studied in the later sections. The low energy electrons can be described by effective Hamiltonians around each corner of the intralayer BZ: ${\bf K}_\xi=-\xi (2{\bf b}_1+{\bf b}_2)/3$ for the upper layer and $\tilde{\bf K}_\xi=-\xi(2\tilde{\bf b}_1+\tilde{\bf b}_2)/3={\mathcal M}^{-1}{\mathcal R}{\bf K}_\xi$ for the lower one, where $\xi=\pm1$ denotes time-reversal partners. Near the corners, the intralayer Hamiltonians for the upper and lower layer (layer 1 and 2 hereafter) can be written as ${\mathcal H}_1 ({\bf k})\simeq -\hbar v({\bf k}-{\bf K}_\xi)\cdot (\xi\sigma_x, \sigma_y)$ and ${\mathcal H}_2 ({\bf k})\simeq-\hbar v[{\mathcal R}^{-1}{\mathcal M}({\bf k}-\tilde{\bf K}_\xi)]\cdot (\xi\sigma_x, \sigma_y)$, respectively, where ${\bf k}=(k_x,k_y)$ is the Bloch wave vector in the intralayer momentum space, $\sigma_x$ and $\sigma_y$ are the Pauli matrices acting on the two sublattices of the upper ($A_1, B_1$) and lower ($A_2, B_2$) layers, and $v$ is the electron velocity of graphene.
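To make the mapping explicit, the following minimal numerical sketch (ours, not part of the original paper) builds the rotation $\mathcal R$ and the uniaxial contraction $\mathcal M$ for a given inner@outer pair, evaluates ${\bf G}^{\text M}_i=({\mathcal I}-{\mathcal M}^{-1}{\mathcal R}){\bf b}_i$ of Eq. (\[eq\_g\_def\]), and verifies the commensurability relation ${\bf G}^{\text M}_i\cdot {\bf C}=2\pi (n_i -n'_i)$ stated above. The choice of primitive vectors and the value $a \approx 0.246$ nm are our assumptions; the identity itself does not depend on them.

```python
import numpy as np

A = 0.246                                             # lattice constant in nm (assumed)
a1 = A * np.array([1.0, 0.0])                         # assumed primitive-vector convention
a2 = A * np.array([0.5, np.sqrt(3) / 2])
b1 = (2 * np.pi / A) * np.array([1.0, -1 / np.sqrt(3)])   # a_i . b_j = 2*pi*delta_ij
b2 = (2 * np.pi / A) * np.array([0.0, 2 / np.sqrt(3)])

def rot(t):
    return np.array([[np.cos(t), -np.sin(t)], [np.sin(t), np.cos(t)]])

def moire_G(inner, outer):
    """Moire reciprocal vectors G^M_i = (I - M^{-1} R) b_i for a DWNT inner@outer."""
    C  = inner[0] * a1 + inner[1] * a2                # inner chiral vector
    Cp = outer[0] * a1 + outer[1] * a2                # outer chiral vector
    phi, phip = np.arctan2(C[1], C[0]), np.arctan2(Cp[1], Cp[0])
    R = rot(phi - phip)                               # align C' with the direction of C
    M = rot(phi) @ np.diag([np.linalg.norm(C) / np.linalg.norm(Cp), 1.0]) @ rot(-phi)
    T = np.eye(2) - np.linalg.inv(M) @ R              # I - M^{-1} R
    return C, T @ b1, T @ b2

for inner, outer in [((35, 19), (40, 24)), ((26, 3), (35, 3))]:
    C, G1, G2 = moire_G(inner, outer)
    print(inner, outer,
          np.dot(G1, C) / (2 * np.pi),                # should equal n_1 - n'_1
          np.dot(G2, C) / (2 * np.pi))                # should equal n_2 - n'_2
```

For (35,19)@(40,24) both products should come out at $-5$, and for (26,3)@(35,3) at $-9$ and $0$, reproducing the integers $n_i-n'_i$.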
The low energy electrons of each layer interact through the interlayer coupling, such that the total Hamiltonian of the modified TBLG is written in the basis of $(A_1, B_1, A_2, B_2)$ as $${\mathcal H}_\xi = \left( \begin{array}{cc} {\mathcal H}_1 ({\bf k})& U^\dagger \\ U & {\mathcal H}_2 ({\bf k}) \end{array} \right), \label{eq_H}$$ where $U$ consists of the interlayer coupling matrix elements $\langle {\bf k}', X'_{l'}|T|{\bf k}, X_l\rangle$, with $|{\bf k}, X_l\rangle$ an intralayer Bloch wave basis state, $X$ and $X'$ either $A$ or $B$, $l$ and $l'$ either 1 or 2, and $T$ the interlayer coupling Hamiltonian. In the following, we consider a situation where the moiré period is much greater than the atomic scale, i.e., $|{\bf G}^\text{M}_i| \ll 2\pi/a$. Then the interlayer matrix elements can be explicitly written in a quite simple form in terms of the three Fourier wave components $1$, $e^{i\xi {\bf G}^\text{M}_1\cdot {\bf r}}$ and $e^{i\xi ({\bf G}^\text{M}_1 + {\bf G}^\text{M}_2)\cdot {\bf r}}$ as $$\begin{aligned} && U = \begin{pmatrix} U_{A_2 A_1} & U_{A_2 B_1} \\ U_{B_2 A_1} & U_{B_2 B_1} \end{pmatrix} = u_0 (d) \Biggl[ \begin{pmatrix} 1 & 1 \\ 1 & 1 \end{pmatrix} + \nonumber\\ && \quad \begin{pmatrix} 1 & \omega^{-\xi} \\ \omega^{\xi} & 1 \end{pmatrix} e^{i\xi\Vec{G}^{\rm M}_1\cdot\Vec{r}} + \begin{pmatrix} 1 & \omega^{\xi} \\ \omega^{-\xi} & 1 \end{pmatrix} e^{i\xi(\Vec{G}^{\rm M}_1+\Vec{G}^{\rm M}_2)\cdot\Vec{r}} \Biggr], \nonumber\\ \label{eq_U}\end{aligned}$$ where $u_0$ is the coupling parameter depending on the intertube distance $d$, and $\omega = \exp(2\pi i/3)$ (see Appendix \[app\_interlayer\_hamiltonian\] for the derivation). We can infer that the effect of the interlayer coupling will be significant when the distance between the two $K$-points of the layers, $\Delta {\bf K}_\xi \equiv \tilde{\bf K}_\xi -{\bf K}_\xi =\xi (2{\bf G}^\textrm{M}_1 +{\bf G}^\textrm{M}_2)/3$, is close to one of the three Fourier wave components 0, $\xi {\bf G}^\text{M}_1$ or $\xi ({\bf G}^\text{M}_1 + {\bf G}^\text{M}_2)$. This condition actually corresponds to the strong coupling case referred to in the next section. We note that the effective Hamiltonian in Eq. (\[eq\_H\]) shares the essential features with those describing other two-dimensional moiré crystals such as TBLG as well as graphene on hBN. The only difference arises from the definition of the moiré reciprocal vectors ${\bf G}^\text{M}_i$ \[Eq. (\[eq\_g\_def\])\], where ${\mathcal M}={\mathcal I}$ for TBLG [@moon2], while ${\mathcal M}$ is an equibiaxial expansion (unlike the uniaxial one in the present case) for graphene on hBN [@moon3]. In spite of such an apparent similarity, the DWNT is not a rolled-up version of two-dimensional moiré crystals, because there are two degrees of freedom, ${\mathcal M}$ and ${\mathcal R}$, depending on the choice of the inner and outer SWNTs, giving a variation in the relative angles and magnitudes of the moiré reciprocal vectors. In TBLG, for example, ${\bf G}^\text{M}_1$ and ${\bf G}^\text{M}_2$ are always related by a 120$^\circ$ rotation due to the three-fold rotational symmetry, and then $\Delta {\bf K}_\xi$ never coincides with any of the three Fourier wave components.
In a DWNT, as shown in the next section, a wider parameter space allows this condition to be met, as well as other distinct situations that are hardly realized in two-dimensional bilayer systems. In Eq. (\[eq\_U\]), we neglect the dependence on the relative offset between the atomic positions in the two nanotubes, assuming that the two tubes share the same in-plane atomic position at some particular point. In incommensurate DWNTs, the relative offset corresponds to a shift of the origin of the coordinates and does not change the electronic structure. In commensurate DWNTs such as the zigzag-zigzag or armchair-armchair tubes, on the other hand, it should be noted that the band structure generally does depend on the offset, and this leads to a noticeable difference particularly in small DWNTs.[@kwon1998electronic]

![image](fig_strong_map.ps){width="0.8\hsize"}

Armchair and zigzag DWNTs
=========================

By numerically solving the eigenvalue problem of Eq. (\[eq\_H\]) under the quantization condition ${\bf k}\cdot {\bf C}=2\pi N$ ($N$ is an integer), we can obtain the energy-momentum relationship of electrons in DWNTs with and without commensurability. First, the well-known results for commensurate DWNTs are reproduced by using our method (Fig. 3). In the case of a DWNT having an $(n,n)$ SWNT inside an $(m,m)$ one \[hereafter $(n,n)$@$(m,m)$ DWNT\], the calculated band structures from our continuum model agree well with previous results from [*ab initio*]{} methods [@charlier_review] \[Fig. 3(a)\]. For a $(n,0)$@$(m,0)$ DWNT, the agreement between the results from both methods is also very good \[Figs. 3(b) and 3(c)\]. In the former case, the low energy band structures deform greatly, such that the two linearly crossing bands are pushed upward and downward by the intertube interactions, while in the latter no significant deformation can be noted. This sharp contrast can be understood by checking the coupling condition considered before. In the former case, $\Delta {\bf K}_\xi$ exactly coincides with $\xi {\bf G}^\textrm{M}_1$, so that all combinations of $(n,n)$ SWNTs are always in the strong coupling condition. In the latter case, we have ${\bf G}^\textrm{M}_2=0$ and $\Delta{\bf K}_\xi=(2/3)\xi{\bf G}^\textrm{M}_1$, so that $\Delta{\bf K}_\xi$ does not coincide with any of 0, $\xi {\bf G}^\text{M}_1$ or $\xi ({\bf G}^\text{M}_1 + {\bf G}^\text{M}_2)$, and the system is thus in the weak coupling condition.

![image](fig_spectral_weight.ps){width="0.7\hsize"}

General Incommensurate DWNTs
============================

Strong coupling condition
-------------------------

The effective continuum model and the criteria for the strong coupling also work for incommensurate and chiral DWNTs. By measuring the distance between $\Delta{\bf K}_\xi$ and the three Fourier wavenumbers while varying ${\bf C}$ and ${\bf C}'$, we can find all possible combinations of SWNTs that make DWNTs with strong intertube coupling. After some algebra, the criteria for strong coupling are reduced to the simple conditions that (i) ${\bf C}-{\bf C}'$ is parallel to the armchair direction, and (ii) ${\bf C}$ and ${\bf C}'$ are nearly parallel to each other (See Appendix \[app\_conditions\] for the derivation). As an example, we choose the semiconducting (35,19) SWNT for the inner shell and then search for semiconducting outer SWNTs showing strong or weak intertube coupling.
Figure 4(a) shows the distance between $\Delta{\bf K}_\xi$ and $\xi {\bf G}^\text{M}_1$ as a function of ${\bf C}'$ with ${\bf C}$ fixed to (35,19), where the darker color indicates smaller distance. The strong coupling region actually extends along the armchair direction, as expected from the criteria discussed above. For the outer tube, we take the (40,24) SWNT, which is in the strong coupling condition, and the (47,15) SWNT, which is off from it; the intertube distance is close to graphite’s interlayer spacing in both cases. For the (35,19)@(40,24) DWNT, the atomic structure of the corresponding modified TBLG and its BZ are shown in Figs. 2(a) and 2(d), respectively. We see that ${\bf G}^\text{M}_1$ is indeed very close to the displacement between the two $K$-points in Fig. 2(d). The calculated energy band structure is drawn in the projected and extended zone scheme in Fig. 5(a) (See Appendix \[app\_calculating\_band\] for the calculation method). Since the (35,19) tube has an energy band gap of 0.18 eV and the (40,24) tube one of 0.15 eV (and the curvature effect is too small to close the gap [@okada]), one may expect that the DWNT composed of the two tubes will have an energy gap. However, the resulting band structure shows the characteristics of metallic energy bands \[Fig. 5(a)\]. The lowest energy bands of the decoupled nanotubes indeed mix together very strongly, and the final low energy-momentum dispersions are quite different from the original ones. In the case of the (35,19)@(47,15) DWNT \[Fig. 2(b)\], the energy-momentum dispersion is nothing but a simple sum of those of the two tubes with a slight energy shift \[Fig. 5(b)\], because this combination is off the strong coupling condition \[Fig. 2(e)\]. The corresponding density of states (DOS) for each case is displayed in Figs. 5(a) and 5(b), showing a sharp contrast between the two coupling conditions, although these two DWNTs have almost the same spectra in the absence of the intertube coupling. In the strong coupling regime, the interlayer Hamiltonian $U$ links the SWNT states at almost the same energy and thus leads to an energy shift linear in $u_0$. In general situations, on the other hand, the two states connected by $u_0$ belong to different energies with a typical difference $\Delta E \sim \hbar v |{\bf G}^{\rm M}_i|$. When $u_0 \ll \Delta E$, the energy shift becomes second order, $\sim u_0^2/ \Delta E$, and this is the case in Fig. 5(b). We can obtain further insight from an analytic expression for the energy gap in the strongly coupled case. The low energy spectrum of a strongly coupled DWNT is well approximated by two Dirac cones separated by $\Delta{\bf K}_\xi$, which are directly coupled by one of the three Fourier components, 0, $\xi {\bf G}^\text{M}_1$ or $\xi ({\bf G}^\text{M}_1 + {\bf G}^\text{M}_2)$. Suppose that the lowest energy bands of the decoupled inner and outer SWNTs with respect to each band center are expressed (here $\hbar v = 1$) as $E=\pm\left[m^2_\text{i}+k^2\right]^{1/2}$ and $E=\pm\left[m^2_\text{o}+ k^2\right]^{1/2}$ with energy gaps of $2|m_\text{i}|$ and $2|m_\text{o}|$, respectively. When the two semiconducting SWNTs have similar diameters, we can approximate $m_\text{i}, m_\text{o} \approx m$, and then the energy bands in the presence of the intertube coupling $u_0$ are approximately written as four hyperbolas (See Appendix \[app\_two-mode\]).
If $\Delta{\bf K}_\xi\simeq\xi{\bf G}^\textrm{M}_1$, the four branches are given by $$E(k) =-u_0 \pm \left[(m-m_D(u_0))^2+(k+k_D(u_0))^2 \right]^{1/2}$$ and $$E(k)=+u_0\pm\left[(m+m_D(u_0))^2+(k-k_D(u_0))^2\right]^{1/2},$$ where $m_D(u_0)=u_0 \xi \cos(\phi+60^\circ)$, $k_D(u_0)=u_0 \xi \sin(\phi+60^\circ)$, and $\phi$ is the angle from the $x$-axis to ${\bf C}$. The energy gap is found to be $\Delta E = 2(|m|-u_0)$, and it vanishes when $u_0 > |m|$. From these expressions, we find that the intertube interactions can indeed modify the semiconducting energy bands of the bare SWNTs [@ywson] into metallic ones in the strong coupling condition. In Fig. 5, we chose relatively large DWNTs (i.e., with small energy gaps) such that $u_0 > |m|$, in order to actually demonstrate the gap closing. Smaller DWNTs also have large band shifts in the strong coupling condition, while they remain semiconducting when the energy gaps of the constituent SWNTs are larger than $2u_0$.

Localized insulating condition
------------------------------

Besides the strong and weak coupling regimes, another classification is possible for the electronic structures of incommensurate DWNTs. In Figs. 2(c) and 2(f), we display the modified TBLG atomic structure and BZ for the (26,3)@(35,3) DWNT, and its energy-momentum dispersion and DOS are shown in Fig. 5(c). The two constituent SWNTs are both semiconducting, and their chiral vectors are almost parallel to the zigzag $(n,0)$ direction. Unlike the previous two cases, we observe a number of flat bands in both the conduction and valence energy bands, and the corresponding DOS also shows such characteristics \[Fig. 5(c)\]. The flat bands occur because electronic states at contiguous $k$-points on the same layer are hybridized by the intertube coupling $U$. Then an electron on each tube feels a periodic effective potential with a very long spatial period, and bound states appear at every minimum of the effective potential. The system is then viewed as a series of weakly connected quantum dots, and it offers a unique situation in which identical quantum dots are arranged regularly at a precise period over a macroscopic length. Since the matrix $U$ couples the different layers, we need a second order process $U^\dagger G U$ or $U G U^\dagger$ ($G$ is the Green’s function of the decoupled SWNTs) to connect the $k$-points on the same layer, and such a process has the Fourier components $\pm{\bf G}^\text{M}_1$, $\pm{\bf G}^\text{M}_2$ and $\pm({\bf G}^\text{M}_1+{\bf G}^\text{M}_2)$. Therefore, the flat band localization condition requires that one of ${\bf G}^\text{M}_1$, ${\bf G}^\text{M}_2$ or ${\bf G}^\text{M}_1+{\bf G}^\text{M}_2$ be very small, but not exactly zero. In the case of the (26,3)@(35,3) DWNT, $|{\bf G}^\text{M}_2|$ is merely about $0.014/a$, which corresponds to a spatial period of about $450a\sim 110\,\mathrm{nm}$. Similarly to the strong coupling case, the criteria for the flat band are reduced to the simple conditions that (i) ${\bf C}-{\bf C}'$ is parallel to the zigzag direction, and (ii) ${\bf C}$ and ${\bf C}'$ are nearly parallel (See Appendix \[app\_conditions\] for the derivation). Figure 4(b) shows the length of ${\bf G}^\text{M}_2$ as a function of ${\bf C}'$ with ${\bf C}$ fixed to $(26,3)$, where the flat band region actually extends along the zigzag direction.
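The two geometric regimes can also be checked numerically for the specific examples of this section. The sketch below (ours, using the same assumed lattice convention as in the earlier sketch) computes, for $\xi=+1$, the distance from $\Delta{\bf K}_\xi$ to the nearest of the three Fourier components, which is small in the strong coupling regime, and the length of the shortest moiré reciprocal vector, which is small but nonzero in the localized insulating regime.

```python
import numpy as np

A = 0.246                                             # lattice constant in nm (assumed)
a1, a2 = A * np.array([1.0, 0.0]), A * np.array([0.5, np.sqrt(3) / 2])
b1 = (2 * np.pi / A) * np.array([1.0, -1 / np.sqrt(3)])
b2 = (2 * np.pi / A) * np.array([0.0, 2 / np.sqrt(3)])

def rot(t):
    return np.array([[np.cos(t), -np.sin(t)], [np.sin(t), np.cos(t)]])

def regime_measures(inner, outer):
    """Return (strong, flat) diagnostics in units of 1/a for a DWNT inner@outer."""
    C  = inner[0] * a1 + inner[1] * a2
    Cp = outer[0] * a1 + outer[1] * a2
    phi, phip = np.arctan2(C[1], C[0]), np.arctan2(Cp[1], Cp[0])
    M = rot(phi) @ np.diag([np.linalg.norm(C) / np.linalg.norm(Cp), 1.0]) @ rot(-phi)
    T = np.eye(2) - np.linalg.inv(M) @ rot(phi - phip)
    G1, G2 = T @ b1, T @ b2
    dK = (2 * G1 + G2) / 3                            # Delta K_xi for xi = +1
    strong = min(np.linalg.norm(dK - g) for g in (0 * G1, G1, G1 + G2))
    flat = min(np.linalg.norm(g) for g in (G1, G2, G1 + G2))
    return strong * A, flat * A                       # measured in units of 1/a

examples = [((35, 19), (40, 24)),   # strong coupling in the text
            ((35, 19), (47, 15)),   # weak coupling in the text
            ((26, 3), (35, 3))]     # localized insulating; text quotes |G^M_2| ~ 0.014/a
for inner, outer in examples:
    s, f = regime_measures(inner, outer)
    print(inner, outer, f"strong measure = {s:.3f}/a, flat measure = {f:.3f}/a")
```

With these conventions only the first pair should give a small $\Delta{\bf K}$ mismatch, and only the third should give a short moiré vector, roughly the $0.014/a$ quoted above, while both measures remain much larger for the weakly coupled pair.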
From the last consideration, we can conclude that DWNTs with two semiconducting SWNTs can be classified into three categories, e.g., strong coupling near armchair-armchair DWNTs, localized insulating coupling near zigzag-zigzag ones and weak coupling cases otherwise. ![image](fig_spectral_weight_metal.ps){width="0.7\hsize"} DWNTs including metallic SWNTs ------------------------------ Our theory is not limited to semiconducting DWNTs. Three coupling conditions still hold as well when either or both of the two SWNTs are metallic. Figure \[fig\_spectral\_weight\_metal\](a) shows the spectrum of (18,15)@(23,20) DWNT in the strong-coupling condition, where the low energy linear bands of decoupled metallic tubes are repelled away without gap opening, similar to the armchair-armchair DWNT. The insulating localized states are also possible for DWNTs composed of metallic SWNTs, where the metallic behavior of the original SWNTs is completely lost due to the formation of the bound states at the moiré potential extrema. Figures \[fig\_spectral\_weight\_metal\](b) and \[fig\_spectral\_weight\_metal\](c) show the spectra for a DWNT consisting of two metallic SWNTs \[(27,3)@(36,3)\], and for one consisting of a metallic SWNT and a semiconducting SWNT \[(27,3)@(35,3)\], respectively, both in the localized insulating condition. We observe the formation of flat bands in both cases, but in much greater energy range in (c) than in (b). The significant difference comes from the different interlayer spacing $d$, which gives the different interlayer coupling $u_0(d)\approx 0.07$eV ($d\approx 0.351$nm) in the former and 0.25 eV ($d\approx 0.312$nm) in the latter. As the effective potential is the second order in $u_0$, a change of the magnitude $u_0$ results in a significant difference in the energy region where the flat bands are formed. Actually the localized insulating condition strongly interferes with the condition for each SWNT to be metallic or semiconducting. We can show that a metallic-metallic DWNT in the localized insulating condition appears only when $|\Vec{C}-\Vec{C}'|\approx 3ma$ with integer $m$, and thus we have only a choice of $|\Vec{C}-\Vec{C}'|\approx 9a$ near the graphite interlayer spacing, which is actually the case of Fig. \[fig\_spectral\_weight\_metal\](b). Finally, let us mention the relationship and differences between one-dimensional and two-dimensional moiré crystals. In the two-dimensional TBG, the strong interlayer coupling occurs only when the rotation angle is sufficiently small, where the band dispersion of the low-lying levels are significantly suppressed[@trambly2010localization; @morell2010flat; @luican2011single; @macdonald; @moon1]. This situation corresponds to the strong coupling and localized insulating regimes in incommensurate DWNTs, since the moire reciprocal lattice vectors ${\bf G}^{\rm M}_i$ and the K-point difference $\Delta{\bf K}_\xi$ are all tiny and simultaneously satisfy the two conditions (for strong coupling and localized insulating) while not separately. On the other hand, the uniqueness of one-dimensional moiré crystal is in that the flexibility of moiré pattern due to more degrees of freedom realizes the two different coupling conditions independently, leading to a variety of situations which do not have explicit counterparts in two-dimensional systems. For example, the strong coupling condition just requires $\Delta {\bf K}_\xi \sim {\bf G}^{\rm M}_i$, but ${\bf G}^{\rm M}_i$ is not necessarily small. Therefore, as seen in Fig. 
5(a), we only have strong band repulsion due to the strong interlayer mixing, but the spectrum is not chopped into flat bands because ${\bf G}^{\rm M}_i$ is not small. On the other hand, the localized insulating regime actually requires that some ${\bf G}^{\rm M}_i$ is tiny, but not necessarily close to $\Delta {\bf K}_\xi$. As a result, we have flat bands, but the hybridization between layers 1 and 2 is not strong, as seen in Fig. 5(c).

Conclusions
===========

It is now evident that a combination of SWNTs with almost the same physical properties, such as diameter and energy gap, can end up as very different DWNTs depending on the interlayer moiré interference. Therefore, all the criteria for incommensurate and chiral DWNTs considered hitherto will dramatically influence their optical absorption, photoluminescence, electric transport and Raman scattering, which have been used for characterizing and understanding their physical properties [@saito; @ando; @charlier_review; @shen; @endo; @shimamoto]. Considering that the moiré pattern is present for almost all possible one-dimensional multishell tubular structures with several different atomic elements [@tenne], our theoretical framework is not limited to multishell carbon nanotubes. Moreover, our study puts forth a new classification of nanotubes as the first example of one-dimensional moiré crystals and provides a firm ground for utilizing the superb technological merits of DWNTs [@shen; @endo; @shimamoto].

[*Note added*]{}: After preparing this paper, we became aware of a recent paper [@liu2014van] having a result that overlaps with a part of our theory.

M. K. was supported by JSPS Grant-in-Aid for Scientific Research No. 24740193 and No. 25107005. P. M. was supported by New York University Shanghai Start-up Funds, and appreciates the support from East China Normal University for providing research facilities. Y.-W.S. was supported by the NRF funded by the MSIP of the Korean government (CASE, 2011-0031640 and QMMRC, No. R11-2008-053-01002-0). Computations were supported by the CAC of KIAS.

Interlayer Hamiltonian {#app_interlayer_hamiltonian}
======================

Here we derive the interlayer coupling matrix $U$ in the effective Hamiltonian of the DWNT, Eq. (\[eq\_H\]) in the main text. We assume that the moiré superlattice period is much larger than the lattice constant. The local lattice structure is then approximately viewed as a non-rotated bilayer graphene shifted by a displacement vector $\GVec{\delta}$, which slowly depends on the position $\Vec{r}$ as $$\boldsymbol{\delta} ({\bf r})=({\mathcal I}-{\mathcal R}^{-1}{\mathcal M}^{-1}){\bf r} \label{eq_delta_r}$$ as argued in the main text. Similarly to the two-dimensional moiré superlattice [@moon2; @moon3], the interlayer Hamiltonian of the DWNT is obtained by replacing $\GVec{\delta}$ with $\GVec{\delta}(\Vec{r})$ in the Hamiltonian of non-rotated bilayer graphene with a constant $\GVec{\delta}$. Let us consider a non-rotated bilayer graphene with a constant in-plane displacement $\GVec{\delta}$ and interlayer spacing $d$. We define $\Vec{a}_1$ and $\Vec{a}_2$ as the lattice vectors of graphene, and $\Vec{b}_1$ and $\Vec{b}_2$ as the corresponding reciprocal lattice vectors. We model the system with the tight-binding model for $p_z$ atomic orbitals.
The Hamiltonian is written as $$\begin{aligned} H = -\sum_{\langle i,j\rangle} t(\Vec{R}_i - \Vec{R}_j) |\Vec{R}_i\rangle\langle\Vec{R}_j| + {\rm H.c.}, \label{eq_Hamiltonian_TBG}\end{aligned}$$ where $\Vec{R}_i$ and $|\Vec{R}_i\rangle$ represent the lattice point and the atomic state at site $i$, respectively, and $t(\Vec{R}_i - \Vec{R}_j)$ is the transfer integral between the sites $i$ and $j$. We adopt a Slater-Koster parametrization [@slater1954simplified] $$\begin{aligned} && -t(\Vec{R}) = V_{pp\pi}\left[1-\left(\frac{\Vec{R}\cdot\Vec{e}_z}{d}\right)^2\right] + V_{pp\sigma}\left(\frac{\Vec{R}\cdot\Vec{e}_z}{d}\right)^2, \nonumber \\ && V_{pp\pi} = V_{pp\pi}^0 e^{- (R-a_0)/r_0}, \quad V_{pp\sigma} = V_{pp\sigma}^0 e^{- (R-d_0)/r_0}, \nonumber\\ \label{eq_transfer_integral}\end{aligned}$$ where $\Vec{e}_z$ is the unit vector perpendicular to the graphene plane, $a_0 = a/\sqrt{3} \approx 0.142\,\mathrm{nm}$ is the distance of neighboring $A$ and $B$ sites on monolayer, and $d_0 \approx 0.335\,\mathrm{nm}$ is the interlayer spacing if bulk graphites. Other parameters are typically $V_{pp\pi}^0 \approx -2.7\,\mathrm{eV}$, $ V_{pp\sigma}^0 \approx 0.48\,\mathrm{eV}$ and $r_0 \approx 0.045\,\mathrm{nm}$. [@moon2] We define the Bloch wave basis of a single layer as $$\begin{aligned} && |\Vec{k},X_l\rangle = \frac{1}{\sqrt{N}}\sum_{\Vec{R}_{X_l}} e^{i\Vec{k}\cdot\Vec{R}_{X_l}} |\Vec{R}_{X_l}\rangle, %\nonumber \\ %&& |\Vec{k},B_l\rangle = %\frac{1}{\sqrt{N}}\sum_{\Vec{R}_{B_l}} e^{i\Vec{k}\cdot\Vec{R}_{B_l}} %|\Vec{R}_{B_l}\rangle, \label{eq_bloch_base}\end{aligned}$$where $X = A,B$ is the sublattice index, $l= 1, 2$ is the layer index, and $N$ is the number of monolayer’s unit cell in the whole system. The interlayer matrix element is then written as $$\begin{aligned} && U_{A_2A_1}(\Vec{k},\GVec{\delta}) \equiv \langle \Vec{k},A_2| H |\Vec{k},A_1\rangle = u(\Vec{k},\GVec{\delta}), \nonumber\\ && U_{B_2B_1}(\Vec{k},\GVec{\delta}) \equiv \langle \Vec{k},B_2| H |\Vec{k},B_1\rangle = u(\Vec{k},\GVec{\delta}), \nonumber\\ && U_{B_2A_1}(\Vec{k},\GVec{\delta}) \equiv \langle \Vec{k},B_2| H |\Vec{k},A_1\rangle = u(\Vec{k},\GVec{\delta} - \GVec{\tau}_1), \nonumber\\ && U_{A_2B_1}(\Vec{k},\GVec{\delta}) \equiv \langle \Vec{k},A_2| H |\Vec{k},B_1\rangle = u(\Vec{k},\GVec{\delta} + \GVec{\tau}_1), \label{eq_interlayer_U}\end{aligned}$$ where $$\begin{aligned} u(\Vec{k},\GVec{\delta}) = \sum_{n_1,n_2} - t(n_1 \Vec{a}_1 + n_2 \Vec{a}_2 + d\Vec{e}_z + \GVec{\delta}) \nonumber\\ \hspace{20mm} \times \exp\left[-i\Vec{k}\cdot(n_1 \Vec{a}_1 + n_2 \Vec{a}_2 + \GVec{\delta}) \right].\end{aligned}$$ Here $\GVec{\tau}_1 = (-\Vec{a}_1+2\Vec{a}_2)/3$ is a vector connecting the nearest $A$ and $B$ sublattices, and $\Vec{e}_z$ is the unit vector perpendicular to the graphene plane. Since the function $u(\Vec{k},\GVec{\delta})$ is periodic in $\GVec{\delta}$ with periods $\Vec{a}_1$ and $\Vec{a}_2$, it is Fourier transformed as, $$\begin{aligned} u(\Vec{k},\GVec{\delta}) = -\sum_{m_1,m_2} \tilde{t}(m_1\Vec{b}_1+m_2\Vec{b}_2+\Vec{k}) \nonumber\\ \hspace{20mm} \times \exp[ i(m_1\Vec{b}_1+m_2\Vec{b}_2)\cdot \GVec{\delta} ], \label{eq_ukq_2}\end{aligned}$$ where $\tilde{t}(\Vec{q})$ is the in-plane Fourier transform of $t(\Vec{R})$ defined by $$\begin{aligned} \tilde{t}(\Vec{q}) = \frac{1}{S} \int t(\Vec{R}+ d\Vec{e}_z) e^{-i \Vec{q}\cdot \Vec{R}} d\Vec{R}, \label{eq_tilde_t}\end{aligned}$$ with $S = |\Vec{a}_1\times\Vec{a}_2|$, and the integral in $\Vec{R}$ is taken over an infinite two-dimensional space. 
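As a numerical cross-check (added by us; it is not part of the original appendix), $u_0(d)=\tilde{t}({\bf K}_\xi)$ can be evaluated directly by reducing the two-dimensional Fourier integral to a Hankel transform, using the Slater-Koster parameters quoted above. In this sketch the direction cosine is taken as $\Vec{R}\cdot\Vec{e}_z/|\Vec{R}|$, the standard Slater-Koster form; at these interlayer distances the $\pi$ term is a small correction either way. The magnitude obtained at $d=0.334$ nm should come out close to the value of about 0.11 eV quoted below for the graphite interlayer spacing.

```python
import numpy as np
from scipy.integrate import quad
from scipy.special import j0

# Slater-Koster parameters from the text (energies in eV, lengths in nm)
V_PI0, V_SIGMA0 = -2.7, 0.48
A = 0.246                              # graphene lattice constant (assumed)
A0, D0, R0 = A / np.sqrt(3), 0.335, 0.045
S = np.sqrt(3) / 2 * A**2              # unit-cell area |a1 x a2|
K = 4 * np.pi / (3 * A)                # |K| of graphene

def t_inter(r, d):
    """Transfer integral t for two pz orbitals: in-plane offset r, vertical offset d."""
    R = np.hypot(r, d)
    cos2 = (d / R) ** 2                # squared direction cosine
    v_pi = V_PI0 * np.exp(-(R - A0) / R0)
    v_sigma = V_SIGMA0 * np.exp(-(R - D0) / R0)
    return -(v_pi * (1.0 - cos2) + v_sigma * cos2)

def u0(d):
    """u0(d) = t~(K): the 2D Fourier transform reduced to a Hankel transform."""
    val, _ = quad(lambda r: t_inter(r, d) * j0(K * r) * r, 0.0, 2.0, limit=200)
    return 2 * np.pi / S * val         # integrand is negligible beyond ~2 nm

print(abs(u0(0.334)))                  # magnitude should be close to ~0.11 eV
```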
In the present tight-binding model, $t(\Vec{R})$ exponentially decays in $R \, \gsim \, r_0$, so that the Fourier transform $\tilde{t}(\Vec{q})$ decays in $q \, \gsim \, 1/r_0$. In Eq. (\[eq\_ukq\_2\]), therefore, we only need to take a few Fourier components within $|m_1\Vec{b}_1+m_2\Vec{b}_2+\Vec{k}| \,\lsim\, O(1/r_0)$. ![ Dependence of $u_0$ on interlayer spacing $d$. []{data-label="fig_u0"}](fig_u0.ps){width="0.9\hsize"} In the following we only consider the electronic states near $\Vec{K}_\xi$ point, and then we can approximate $u(\Vec{k},\GVec{\delta})$ with $u(\Vec{K}_\xi,\GVec{\delta})$. Eq. (\[eq\_ukq\_2\]) then becomes $$\begin{aligned} u(\Vec{K}_\xi,\GVec{\delta}) \approx u_0 \left[ 1 + e^{i \xi \Vec{b}_1 \cdot \mbox{\boldmath \scriptsize $\delta$}} + e^{i \xi (\Vec{b}_1 + \Vec{b}_2)\cdot \mbox{\boldmath \scriptsize $\delta$}} \right],\end{aligned}$$ with $$\begin{aligned} u_0 = \tilde{t}(\Vec{K}_\xi).\end{aligned}$$ Note that $u_0$ depends on interlayer spacing $d$ through $\tilde{t}(\Vec{q})$ in Eq. (\[eq\_tilde\_t\]). In the present choice of the tight-binding parameters we have $u_0 = 0.11$eV at the graphite interlayer spacing, $d = 0.334\,\mathrm{nm}$. The second largest Fourier component is $\tilde{t}(2\Vec{K}_\xi) \approx 0.0016$eV, and is safely neglected. Unlike the graphite system, DWNTs can have wide range of $d$ between $0.29\, \mathrm{nm}$ and $0.41\, \mathrm{nm}$ [@villalpando2010tunable; @pfeiffer2006tube; @villalpando2008raman; @ren2002morphology]. In this range, $u_0$ also varies widely from $0.33\, \mathrm{eV}$ to $0.017\, \mathrm{eV}$, as we plot in Fig. \[fig\_u0\]. By replacing $\GVec{\delta}$ with $\GVec{\delta}(\Vec{r})$ in Eq. (\[eq\_delta\_r\]), we obtain the interlayer Hamiltonian of the DWNT, Eq. (\[eq\_U\]). Here we used the relation $ \Vec{b}_i \cdot \GVec{\delta}(\Vec{r}) = \Vec{G}^{\rm M}_i\cdot\Vec{r}. %= \Vec{b}_i \cdot ({\mathcal I}-{\mathcal R}^{-1}{\mathcal M}^{-1}){\bf r} %= [({\mathcal I}-{\mathcal R}^{-1}{\mathcal M}^{-1})^\dagger b_i]\cdot\Vec{r} %= [({\mathcal I}-{\mathcal M}^{-1}{\mathcal R})\Vec{b}_i]\cdot\Vec{\bf r} $ Conditions for strong coupling case and flat band case {#app_conditions} ====================================================== We derive the condition for the two chiral vectors $\Vec{C}$ and $\Vec{C}'$ to give the strong coupling case and the flat band case. The strong interlayer coupling occurs when $\Delta \Vec{K}_\xi = \xi(2\Vec{G}_1^M + \Vec{G}_2^M)/3$ is close to either of $0$, $\xi\Vec{G}_1^M$ or $\xi(\Vec{G}_1^M+\Vec{G}_2^M)$ (see the main text). The condition is written as $$\begin{aligned} \begin{cases} 2 \Vec{G}_1^M + \Vec{G}_2^M \approx 0 & {\rm or} \\ \Vec{G}_1^M - \Vec{G}_2^M \approx 0 & {\rm or} \\ \Vec{G}_1^M + 2\Vec{G}_2^M \approx 0. \end{cases}\end{aligned}$$ Using $\Vec{G}_i^M = ({\mathcal I}-{\mathcal M}^{-1}{\mathcal R})\Vec{b}_i$, this is rewritten as $$\begin{aligned} \begin{cases} ({\mathcal I}-{\mathcal M}^{-1}{\mathcal R})(2 \Vec{b}_1 + \Vec{b}_2) \approx 0 & {\rm or} \\ ({\mathcal I}-{\mathcal M}^{-1}{\mathcal R})(\Vec{b}_1 - \Vec{b}_2) \approx 0 & {\rm or} \\ ({\mathcal I}-{\mathcal M}^{-1}{\mathcal R})(\Vec{b}_1 + 2\Vec{b}_2) \approx 0. \end{cases} \label{eq_cond_for_strong}\end{aligned}$$ Since the vectors $2 \Vec{b}_1 + \Vec{b}_2$, $\Vec{b}_1 - \Vec{b}_2$, and $\Vec{b}_1 + 2\Vec{b}_2$ are parallel to zigzag direction (i.e., $0$, $2\pi/3$, $-2\pi/3$ from the $x$-axis), the condition Eq. 
(\[eq\_cond\_for\_strong\]) is simplified to $$\begin{aligned} ({\mathcal I}-{\mathcal M}^{-1}{\mathcal R})\Vec{x} \approx 0\end{aligned}$$ for $\Vec{x}$ parallel to a zigzag direction. ![Geometry to explain the strong coupling condition (see the text). []{data-label="fig_strong_cond"}](fig_strong_cond.ps){width="0.95\hsize"} Let us consider the amplitude of $({\mathcal I}-{\mathcal M}^{-1}{\mathcal R})\Vec{x}$ as a function of $\Vec{x}$. We introduce a rotated coordinate system $(x',y')$ with $x'$-axis set to parallel to $\Vec{C}$. Then we can write $$\begin{aligned} M = \begin{pmatrix} C/C' & 0 \\ 0 & 1 \end{pmatrix}, \quad R = \begin{pmatrix} \cos\theta & -\sin\theta \\ \sin\theta & \cos\theta \end{pmatrix}.\end{aligned}$$ For $\Vec{x}=(\cos\varphi,\sin\varphi)$, we have $$\begin{aligned} && |({\mathcal I}-{\mathcal M}^{-1}{\mathcal R})\Vec{x}|^2 = \left(\frac{C'}{C}\cos(\varphi +\theta)-\cos\varphi\right)^2 \nonumber \\ &&\hspace{30mm} +(\sin(\varphi +\theta)-\sin\varphi)^2. \label{eq_1-mr}\end{aligned}$$ The first term in the right hand side vanishes when $\Vec{x}$ is perpendicular to $\Vec{C}-\Vec{C'}$. This is geometrically explained in Fig. \[fig\_strong\_cond\], where we actually see ${\rm OH}=C'\cos(\varphi +\theta) = C\cos\varphi$ when $\Vec{x} \perp \Vec{C}-\Vec{C'}$. The second term becomes small when $\theta$ is small, i.e. $\Vec{C}$ and $\Vec{C}'$ are nearly parallel. Therefore, we have strong intertube coupling when (i) $\Vec{C}-\Vec{C'}$ is parallel to the armchair direction (i.e., perpendicular to the zigzag direction), and (ii) $\Vec{C}$ and $\Vec{C'}$ are nearly parallel. On the other hand, the flat band case takes place when either of $\Vec{G}_1^M$, $\Vec{G}_2^M$ or $\Vec{G}_1^M+\Vec{G}_2^M$ is very close to zero, but not exactly zero. In a similar manner, the condition is rewritten as $$\begin{aligned} \begin{cases} ({\mathcal I}-{\mathcal M}^{-1}{\mathcal R}) \Vec{b}_1 \approx 0 & {\rm or} \\ ({\mathcal I}-{\mathcal M}^{-1}{\mathcal R}) \Vec{b}_2 \approx 0 & {\rm or} \\ ({\mathcal I}-{\mathcal M}^{-1}{\mathcal R})(\Vec{b}_1 + \Vec{b}_2) \approx 0. \end{cases} \label{eq_cond_for_flat}\end{aligned}$$ Since the vectors $\Vec{b}_1$, $\Vec{b}_2$, and $\Vec{b}_1 + \Vec{b}_2$ are now parallel to the armchair direction, the flat band condition is obtained by replacing “zigzag” and “armchair” in the previous argument for the strong coupling condition. Therefore, we have a flat band DWNT when (i) $\Vec{C}-\Vec{C'}$ is parallel to the zigzag direction, and (ii) $\Vec{C}$ and $\Vec{C'}$ are nearly parallel. Calculating band structures of chiral DWNTs {#app_calculating_band} =========================================== ![ Band structures near ${\bf K}_+$ valley of DWNTs with (a) (35,19)@(40,24) (strong coupling), (b) (35,19)@(47,15) (weak coupling), and (c) (26,3)@(35,3) (flat band). The solid (black) lines represent the energy bands of the DWNT in the presence interlayer coupling, while the dotted red (blue) lines are those of inner (outer) SWNTs. We do not show the subgroups which contain no energy bands in the given range. []{data-label="fig_band_structures"}](fig_band_structures.ps){width="0.9\hsize"} We present the details of the band calculation for DWNT with the effective continuum theory. Every eigenstate is labeled by the Bloch wave number $\Vec{k}$ defined on the cutting lines ${\bf k}\cdot {\bf C}=2\pi N$ ($N$ is integer) inside the two-dimensional Brillouin zone spanned by $\Vec{G}_1^{\rm M}$ and $\Vec{G}_2^{\rm M}$. 
Since ${\bf G}^{\text M}_i\cdot {\bf C}=2\pi (n_i -n'_i)$, the cutting lines are categorized into $n_r$ different subgroups where $n_r={\rm GCD}(n_1-n'_1,n_2-n'_2)$. To obtain the energy spectrum, we take the $k$-points of $\Vec{q}=\Vec{k} +m_1\Vec{G}_1^{\rm M}+m_2\Vec{G}_2^{\rm M}$ ($m_1,m_2$: integers) in the region $|\Vec{q}-(\Vec{K}_\xi+\tilde{\Vec{K}}_\xi)/2| < k_{\rm max}$ with a sufficiently large wave-cutoff $k_{\rm max}$, and numerically diagonalize the Hamiltonian within the limited wave space. Figure \[fig\_band\_structures\] shows the band structures of $\xi=+$ valley calculated for DWNTs studied in the main text; (a) (35,19)@(40,24), (b) (35,19)@(47,15), and (c) (26,3)@(35,3). Here the energy bands are separately plotted for each of $n_r$ subgroups, while we omitted the subgroups which contain no energy bands in the given range. The solid curves represent the energy bands of the DWNT with the interlayer coupling, and the dotted and dashed curves are those of independent SWNTs without coupling. We actually see the band gap closing in the strong coupling case \[Fig. \[fig\_band\_structures\](a)\] and the flat low-energy bands in the flat band case \[Fig. \[fig\_band\_structures\](c)\] as argued in the main text. In Figs. \[fig\_spectral\_weight.ps\] and \[fig\_spectral\_weight\_metal\] in the main text, we presented the spectral function in the extended zone scheme instead of the complex band structure folded into the first Brillouin zone. This is defined as $$A(\Vec{k},\varepsilon) = \sum_\alpha \sum_{X,l} |\langle \alpha | \Vec{k}, X_l\rangle|^2 \delta(\varepsilon-\varepsilon_\alpha),$$ where $|\alpha\rangle$ and $\varepsilon_\alpha$ are the eigenstate and the eigenenergy, respectively, $X = A,B$ is the sublattice index, $l= 1, 2$ is the layer index, and $|\Vec{k}, X_l\rangle$ is the plane wave basis defined by Eq. (\[eq\_bloch\_base\]). The spectral function is defined on the cutting lines ${\bf k}\cdot {\bf C}=2\pi N$ on the infinite two-dimensional reciprocal space, and not limited to the reduced Brillouin zone. Figures \[fig\_spectral\_weight.ps\] and \[fig\_spectral\_weight\_metal\] are obtained by taking summation of the spectral functions over different cutting lines near a single $K_\xi$ point, and projecting it on a single $k$-axis. Two-mode approximation in strong interlayer coupling condition {#app_two-mode} ============================================================== ![ Lowest energy bands of DWNT in strong coupling condition. The surface plot shows the energy dispersion of the modified TBLG, Eq. (\[eq\_twomode\_soln\_2D\]). The black lines show the one-dimensional dispersion Eq. (\[eq\_E1D\_two\_modes\_approx\]) along the quantization line of semiconducting DWNT (see text). []{data-label="fig_twomode_3D"}](fig_twomode_3D.ps){width="0.65\hsize"} ![image](fig_twomode_1D.ps){width="0.7\hsize"} Here we derive an approximate analytic expression of the low energy spectrum of DWNTs in the strong coupling condition. We consider the strong coupling case of $\Delta\Vec{K}_\xi = \xi\Vec{G}_1^{\rm M}$, and apply the two-mode approximation for the two Dirac cones of layer 1 and 2 which are directly coupled by one of the three Fourier components, $\xi {\bf G}^\text{M}_1$, in the interlayer Hamiltonian. 
The effective Hamiltonian is written as $$\begin{aligned} \mathcal{H}_\mathrm{low} = \begin{pmatrix} \mathcal{H}_1(\Vec{k}) & U^\dagger \\ U & \mathcal{H}_2(\Vec{k}) \end{pmatrix}, \label{eq_twomode_full}\end{aligned}$$ where $$\begin{aligned} && \mathcal{H}_1(\Vec{k}) \simeq - \hbar v(\Vec{k}-\Vec{K}_\xi)\cdot (\xi\sigma_x, \sigma_y), \nonumber\\ && \mathcal{H}_2(\Vec{k}) \simeq - \hbar v[\mathcal{R}^{-1}\mathcal{M}(\Vec{k}-{\Vec{K}}_\xi-\Delta{\Vec{K}}_\xi)]\cdot (\xi\sigma_x, \sigma_y), \nonumber\\ && U = u_0 \begin{pmatrix} 1 & \omega^{-\xi} \\ \omega^{\xi} & 1 \end{pmatrix}e^{i\xi\Vec{G}^{\rm M}_1\cdot\Vec{r}}.\end{aligned}$$ The two Dirac cones are separated by $\Delta\Vec{K}_\xi$, and they are exactly merged by the Fourier component $e^{i\xi\Vec{G}^{\rm M}_1\cdot\Vec{r}}$ since $\Delta\Vec{K}_\xi = \xi\Vec{G}_1^{\rm M}$. By applying a unitary transformation $\mathcal{H}'_\mathrm{low} = V^{\dagger}\mathcal{H}_\mathrm{low} V$ with $V={\rm diag}(1,1,e^{i\xi\Vec{G}^{\rm M}_1\cdot\Vec{r}},e^{i\xi\Vec{G}^{\rm M}_1\cdot\Vec{r}})$, Eq. (\[eq\_twomode\_full\]) is simplified to $$\begin{aligned} \mathcal{H}'_\mathrm{low} = \begin{pmatrix} \mathcal{H}'(\Vec{k}) & U'^\dagger \\ U' & \mathcal{H}'(\Vec{k}) \end{pmatrix}, \label{eq_twomode_simple}\end{aligned}$$ with $$\begin{aligned} & \mathcal{H}'(\Vec{k}) = - \hbar v\Vec{k}\cdot (\xi\sigma_x, \sigma_y), \nonumber\\ & U' = u_0 \begin{pmatrix} 1 & \omega^{-\xi} \\ \omega^{\xi} & 1 \end{pmatrix},\end{aligned}$$ where the wave number $\Vec{k}$ is measured relative to $\Vec{K}_\xi$. and we use the approximation $\mathcal{R}^{-1}\mathcal{M}\Vec{k}\approx \Vec{k}$ assuming that $\mathcal{R}^{-1}\mathcal{M}$ is close to the identity matrix, i.e., $\Vec{C}$ and $\Vec{C}'$ sufficiently close to each other. The above equation gives the energy dispersions of two shifted Dirac cones $$\begin{aligned} && E^\pm_1(\Vec{k}) = -u_0 \pm \hbar v |\Vec{k}-\Vec{k}_0|, \nonumber\\ && E^\pm_2(\Vec{k}) = u_0 \pm \hbar v |\Vec{k}+\Vec{k}_0|, \label{eq_twomode_soln_2D}\end{aligned}$$ where $\Vec{k} = (k_x, k_y)$ and $$\begin{aligned} \Vec{k}_0 \equiv \frac{u_0\xi}{\hbar v} \begin{pmatrix} \cos(-60^\circ) \\ \sin(-60^\circ) \end{pmatrix}.\end{aligned}$$ The surface plot in Fig. \[fig\_twomode\_3D\] shows the dispersion Eq. (\[eq\_twomode\_soln\_2D\]), where we see that the two shifted Dirac cones touch on a single line $E = - \hbar v {\bf k}\cdot{\bf k}_0/|{\bf k}_0|$. The lowest energy bands of DWNTs along the quantization line closest to $\Vec{K}_\xi$ are given as $$\begin{aligned} \Vec{k} = k \begin{pmatrix} -\sin\phi \\ \cos\phi \end{pmatrix} + m \begin{pmatrix} \cos\phi \\ \sin\phi \end{pmatrix},\end{aligned}$$ where $\phi$ is the angle from $x$-axis to $\Vec{C}$, $k$ is one-dimensional wave number along the tube axis, $m = 2\pi\nu\xi/(3C)$ and $\nu = 2n_1+n_2$ (in modulo of 3) is either of 0, 1 or $-1$. This gives four branches of one-dimensional energy bands $$\begin{aligned} && E^\pm_1(k) = -u_0 \pm \hbar v \sqrt{(m-m_D(u_0))^2 +(k+k_D(u_0))^2}, \nonumber\\ && E^\pm_2(k) = u_0 \pm \hbar v \sqrt{(m+m_D(u_0))^2 +(k-k_D(u_0))^2}, \nonumber\\ \label{eq_E1D_two_modes_approx}\end{aligned}$$ where $$\begin{aligned} && m_D(u_0) \equiv \frac{u_0\xi}{\hbar v} \cos(\phi+60^\circ), \nonumber\\ && k_D(u_0) \equiv \frac{u_0\xi}{\hbar v} \sin(\phi+60^\circ).\end{aligned}$$ In Fig. \[fig\_twomode\_3D\], we plot the energy dispersion Eq. 
(\[eq\_E1D\_two\_modes\_approx\]) for the case of $\nu=1$ with black curves, which can be recognized as the intersect between the shifted Dirac cones and $k$-space quantization plane. The energy band gap of DWNT is determined by the conduction band minimum of $E_1$ $$E^{(c)} = -u_0 + \hbar v |m-m_D(u_0)|,$$ and the valence band maximum of $E_2$ $$E^{(v)} = u_0 - \hbar v |m+m_D(u_0)|.$$ The difference $$\begin{aligned} &\Delta E = E^{(c)} - E^{(v)} \nonumber\\ &= -2u_0 + \hbar v |m-m_D(u_0)| + \hbar v |m+m_D(u_0)| \label{eq_energy_difference}\end{aligned}$$ shows that the DWNT can have a finite gap of $$\Delta E = 2(\hbar v |m|-u_0)$$ only when $\hbar v |m| \ge u_0$. Compared to the gap in the absence of interlayer interaction, $2\hbar v|m|$, we can see that the interlayer interaction in DWNT reduces the gap of the system by $2u_0$ in a strong coupling condition. Figure \[fig\_twomode\_1D\](a) shows the numerically calculated band dispersions in the extend zone scheme for (35,19)@(40,24) DWNT, plotted along the quantization line closest to $\Vec{K}_\xi$. We can see a good consistency with the analytic expression in Fig. \[fig\_twomode\_1D\](b), which is calculated by Eq. (\[eq\_E1D\_two\_modes\_approx\]). Besides, since armchair-armchair DWNT is one example of the strong coupling condition, its energy dispersion \[Fig. 3(a) in the main text\] $$\begin{aligned} && E^\pm_1(k) = -u_0 \pm (\hbar vk + u_0\xi), \nonumber\\ && E^\pm_2(k) = u_0 \pm (\hbar vk - u_0\xi), %E_{\xi}(k) = \{ %- u_0 - \hbar vk - u_0\xi, %- u_0 + \hbar vk + u_0\xi, %\nonumber\\ %u_0 - \hbar vk + u_0\xi, %u_0 + \hbar vk - u_0\xi %\}\end{aligned}$$ is also reproduced by setting $\phi = 30^\circ$ and $\nu = 0$ in Eq. (\[eq\_E1D\_two\_modes\_approx\]). [^1]: All authors contributed to the manuscript extensively. [^2]: All authors contributed to the manuscript extensively. [^3]: All authors contributed to the manuscript extensively.
{ "pile_set_name": "ArXiv" }
--- abstract: | Recently by using quantized Berry phases, a prescription for a local characterization of [*gapped*]{} topological insulators is given[@Hatsugai06a]. One requires the ground state is gapped and is invariant under some anti-unitary operation. A spin liquid which is realized as a unique ground state of the Heisenberg spin system with frustrations is a typical target system, since pairwise exchange couplings are always time-reversal invariants even with frustrations. As for a generic Heisenberg model with a finite excitation gap, we locally modify the Hamiltonian by a continuous $SU(2)$ twist only at a specific link and define the Berry connection by the derivative. Then the Berry phase evaluated by the entire many-spin wavefunction is used to define the local topological order parameter at the link. We numerically apply this scheme for several spin liquids and show its physical validity. address: ' Department of Applied Physics, University of Tokyo ' author: - Y Hatsugai title: | Quantized Berry Phases\ for a Local Characterization of Spin Liquids\ in Frustrated Spin Systems [^1] --- Topological Orders ================== In a modern condensed matter physics, a concept of the symmetry breaking has a fundamental importance. At a sufficiently low temperature, most of classical systems show some ordered structure which implies that the symmetry at the high temperature is spontaneously lost or reduced. This is the spontaneous symmetry breaking which is usually characterized by using a [*local* ]{} order parameter as an existence of the long range order. States of matter in a classical system are mostly characterized by this order parameter with the symmetry breaking. Even in a quantum system, the local order parameter and the symmetry breaking play similar roles and they form a foundation of our physical understanding. Typical examples can be ferromagnetic and Neel orders in spin systems. Recent studies in decades have revealed that this symmetry breaking may not be always enough to characterize some of important quantum states[@wen89; @Hatsugai04e]. Low dimensionality of the system and/or geometrical frustrations come from the strong correlation can prevent from a formation of the local order. Especially with a quantum fluctuation, there may happen that a quantum ground state without any explicit symmetry breaking is realized even in the zero temperature. Such a state is classified as a quantum liquid which mostly has an energy gap (may not be always). Typical example of this quantum liquids is the Haldane spin chain and the valence bond solid (VBS) states[@Haldane83-c; @Affleck87-AKLT]. Also some of the frustrated spin systems and spin-Peierls systems can belong to this class[@Rokhasar88; @Read91-LN; @Sondi01]. To characterize these quantum liquids, a concept of a topological order can be useful[@wen89; @Hatsugai04e]. It was proposed to characterize quantum Hall states which are typical quantum liquids with energy gaps. There are many clearly different quantum states but they do not have any local order parameter associated with symmetry breaking. Then topological quantities such as a number of degenerate ground states and the Chern numbers as the Hall conductance are used to characterize the quantum liquids. We generalize the idea to use the topological quantities such as the Chern numbers for the characterization of the generic quantum liquids[@Hatsugai04e]. This is a global characterization. 
When we apply this to spin systems with the time-reversal symmetry (TR), the Chern number is vanishing in most cases. Recently we propose an alternative for the system with the TR invariance by the quantized Berry phases[@Hatsugai06a]. Although, the Berry phases can take any values generically, the TR invariance of the ground state guarantees a quantization of the Berry phases which enables us to use them as local topological order parameters. In the present article, we use it for several spin systems with frustrations and verify the validity. Although the geometrical frustration affects the standard local order substantially, it does not bring any fundamental difficulties for the topological characterizations as shown later. It should be quite useful for characterizations for general quantum liquids[@Hatsugai06a]. Finally we mention on the energy spectra of the systems with classical or topological orders. There can be interesting differences between the standard order and the topological order. As for energy spectra, we have two situations when the symmetry is spontaneously broken. If the spontaneously broken symmetry is continuous, there exists a gapless excitation as a Nambu-Goldstone mode. On the other hand, the symmetry is discrete, the ground states are degenerate and above these degenerate states, there is a finite energy gap. Note that when the system is finite (with periodic boundary condition), the degeneracy is lifted by the small energy gap, $e^{-L^d/\xi}$, where $L$, $d$ and $\xi$ are a linear dimension of the finite system, dimensionality and a typical correlation length. For the topological ordered states with energy gaps, we may expect degeneracy of the ground states depending on the geometry of the system (topological degeneracy). When the system is finite, we expect edge states generically[@Hatsugai93b]. It implies the topological degeneracy is lifted by the energy gaps of the order $e^{-L/\xi}$. Local Order Parameters of Quantum Liquids ========================================= After the first discovery of the fractional quantum Hall states, the quantum liquids have been recognized to exist quite universally in a quantum world where quantum effects can not be treated as a correction to the classical description and the quantum law itself takes the wheel to determine the ground state. The resonating valence bond (RVB) state which is proposed for a basic platform of the high-$T_C$ superconductivity is a typical example[@Anderson87]. The RVB state of the Anderson can be understood as a quantum mechanical collection of [*local*]{} spin singlets. When it becomes mobile under the doping, the state is expected to show superconductivity. Original ideas of this RVB go back to the Pauling’s description of benzene compounds where the quantum mechanical ground state is composed of [*local bonding states (covalent bonds)* ]{} where the basic variables to describe the state is not electrons localized at sites but the bonding states on links[@Pauling]. This is quite instructive. That is, in both of the Anderson’s RVB and the Pauling’s RVB, basic objects to describe the quantum liquids are quantum mechanical objects as a [*singlet pair*]{} and a [*covalent bond*]{}[@Hatsugai06a]. The “classical” objects as small magnets (localized spins) and electrons at site never play major roles. The constituents of the liquids themselves do not have a classical analogue and purely quantum mechanical objects. 
Based on this view point, it is natural to characterize these quantum objects, the singlet pairs and the covalent bonds, as working variables of the [*local*]{} quantum order parameters. It is to be compared with the conventional order parameter (a magnetic order parameter is defined by a local spin as a working variable). From these observations, we proposed to use quantized Berry phases to define local topological order parameters[@Hatsugai06a]. ( We only treat here the singlet pairs as the topological order parameters. As for the local topological description by the covalent bonds, see ref.\[1\].) For example, there can be many kinds of quantum dimer states for frustrated Heisenberg models, such as column dimers, plaquette dimers, etc. As is clear, one can not find any classical local order parameters to characterize them. However, our topological order parameters can distinguish them as different phases not by just a crossover. Quantized Berry Phases for the Topological Order Parameters of Frustrated Heisenberg Spins ========================================================================================== Frustration among spins prevent from forming a magnetic order and their quantum ground states tend to belong to the quantum liquids without any symmetry breaking. Since they do not have any local order parameters, even if they have apparent different physical behaviors, it is difficult to make a clear distinction as a phase not just as a crossover. We apply the general scheme in the reference \[1\] to classify these frustrated spin systems. Defining quantized Berry phases as $0$ or $\pi$, the spin liquids are characterized locally reflecting their topological order. We can distinguish many topological phases which are separated by local quantum phase transitions (local gap closings). We consider following spin $1/2$ Heisenberg models with general exchange couplings, $ H = \sum_{ij} {J }_{ij}{{\mbox{\boldmath $S$}} _i} \cdot {\mbox{\boldmath $S$}} _j $. [*We allow frustrations among spins.* ]{} We assume the ground state is [*unique and gapped*]{}. To define a local topological order parameter at a specific link $ \langle ij \rangle $, we modify the exchange by making a local $SU(2)$ twist $\theta $ only at the link as $$\begin{aligned} J_{ij}{\mbox{\boldmath $S$}} _i \cdot {\mbox{\boldmath $S$}} _j &\to & J_{ij} \big( \frac {1}{2} ( e^{- i\theta } S_{i+} S_{j-} + e^{ i \theta} S_{i-}S_{j+} ) + S_{iz}S_{jz} \big).\end{aligned}$$ Writing $x=e^{i\theta}$, we define a parameter dependent Hamiltonian $H(x)$ and its normalized ground state $|\psi(x) \rangle $ as $H(x) |\psi(x) \rangle =E(x) | \psi(x) \rangle $, $\langle {\psi} | {\psi} \rangle= 1$. Note that this Hamiltonian is invariant under the time-reversal (TR) $\Theta_T$, $ \Theta_{ T} ^{-1} H(x) \Theta_{ T} = H(x) $[@tri]. Also note that by changing $\theta:0\to 2\pi$, we define a closed loop $C$ in the parameter space of $x$. Now we define the Berry connection as $ {A}_\psi = \langle {\psi} | d {\psi} \rangle = \langle {\psi} | \frac {d }{d x} \psi\rangle dx $. Then the Berry phase along the loop $C$ is defined as $ i{\gamma } _C ({A}_\psi )= \int_C {A}_\psi $[@berry84]. Besides that the system is gapped, we further assume [*the excitation gap is always finite*]{} (for $^\forall x$), to ensure the regularity of the ground state[@Hatsugai04e]. This may not be alway true, since the gap can collapse by the local perturbation as an appearance of localized states (edge states)[@Hatsugai93b]. 
Note that by changing a phase of the ground state as $| {\psi}(x) \rangle =| {\psi}^\prime(x) \rangle e^{i\Omega(x)} $, the Berry connection gets modified as $A_\psi= {A}_\psi^\prime + i d {\Omega} $ [@berry84; @Hatsugai04e]. It is a gauge transformation. Then the Berry phase, ${\mbox{\boldmath $\gamma $}}_C $, also changes. This implies that the Berry phase is not well defined without specifying the phase of the ground state (the gauge fixing). It can be fixed by taking a single-valued reference state $|\phi \rangle $ and a gauge invariant projection into the ground state $ P = | \psi \rangle \langle \psi |= | \psi^\prime \rangle \langle \psi^\prime| $ as $ |{\psi}_\phi \rangle = {P} |{\phi} \rangle /\sqrt{N_\phi }$, $N_\phi = \|{P} |\phi \rangle \|^2 = |\eta_\phi| ^2$, $\eta_\phi= \langle \psi | \phi \rangle $[@Hatsugai04e; @Hatsugai06a]. We here require the normalization, $N_\phi$, to be finite. When we use another reference state $| \phi^\prime \rangle $ to fix the gauge, we have $ |\psi_\phi \rangle =| \psi_{\phi^\prime} \rangle e^{i \Omega },\ {\Omega} = {\rm arg}\, ( {\eta}_\phi - {\eta}_{\phi'} ) $. Due to this gauge transformation, the Berry phase gets modified as $ {\gamma } _{C} ({A}_{\psi_\phi} ) = {\gamma } _{C} ({A}_{\psi_{\phi^\prime}} )+ \Delta, \quad \Delta_{}= \int_C d {\Omega} $. Since the reference states $|\phi \rangle $ and $|\phi' \rangle $ are single-valued on $C$, the phase difference $\Omega $ can only wind by $\Delta=2\pi M_C $ with some integer $M_C$. Generically this implies that the Berry phase has a gauge-invariant meaning only up to this integer ambiguity, as $$\begin{aligned} \gamma _C & \equiv & -i \int_C\,{A},\quad {\rm mod}\, 2\pi \end{aligned}$$ Under TR, the Berry phase transforms as $ \gamma _C (A_\psi) = \sum_J C_J^* d C_J=-\sum_J C_J d C_J^*= -\gamma _C(A_{\Theta\psi}) $ since $\sum_J|C_J|^2=1$[@Hatsugai06a]. Therefore, to be compatible with the gauge ambiguity, the Berry phase of the unique TR-invariant ground state, $|\psi \rangle \propto \Theta| \psi \rangle $, satisfies $\gamma _C (A_\psi) \equiv -\gamma _C(A_{\psi})\ ( {\rm mod}\, 2\pi)$. Then it is required to [*be quantized*]{} as $$\begin{aligned} \gamma _C(A_\psi) &=& 0, \pi \ ({\rm mod}\ 2\pi ).\end{aligned}$$ These quantized Berry phases have a topological stability, since small perturbations cannot modify them unless the gauge becomes singular. Here we note that the Berry phase of the singlet pair for the two-site problem is $\pi$[@Hatsugai06a]. Now let us take any dimer covering of all sites ${\cal D}=\{\langle ij \rangle \}$ ($\#{\cal D}=N/2$, where $N$ is the total number of sites) and assume that the interaction is nonzero only on these dimer links; then the Berry phases of $\pi$ pick up the dimer pattern $\cal D$. Now imagine an adiabatic process to include interactions across the dimers. Due to the topological stability of the quantized Berry phase, they cannot be modified unless the dimer gap collapses. This dimer limit presents a non-trivial pattern of quantized Berry phases and shows their usefulness as [*local order parameters of singlet pairs*]{}. To demonstrate the validity of the quantized Berry phases, we have diagonalized the Heisenberg Hamiltonians numerically by the Lanczos algorithm and calculated the quantized Berry phases explicitly. The first numerical examples are the Heisenberg chains with alternating exchanges. When the exchanges are both antiferromagnetic as $J_A>0$ and $J_{A'}>0$, it is a spin-Peierls (dimerized) chain.
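As an illustration of this numerical procedure, the following is a minimal sketch — plain exact diagonalization with `numpy` in place of the Lanczos algorithm, a four-site ring, and illustrative couplings $J_A=1$, $J_{A'}=0.2$; all of these are assumptions, not the parameters used here — of how the quantized Berry phase defined above can be evaluated for such an alternating-exchange chain. The ground state is followed along a discretized loop of the twist $\theta$ at a chosen link, and $\gamma_C$ is read off from the gauge-invariant product of successive overlaps.

```python
# Minimal sketch: quantized Berry phase of a twisted link in a small
# alternating-exchange Heisenberg ring (exact diagonalization, not Lanczos).
import numpy as np

Sz = np.diag([0.5, -0.5])
Sp = np.array([[0.0, 1.0], [0.0, 0.0]])   # S_+
Sm = Sp.T                                  # S_-

def site_op(op, i, n):
    """Embed a single-site operator at site i of an n-site chain."""
    out = np.array([[1.0]])
    for k in range(n):
        out = np.kron(out, op if k == i else np.eye(2))
    return out

def hamiltonian(n, couplings, twisted_link, theta):
    """couplings: {(i, j): J_ij}; the SU(2) twist theta acts only on twisted_link."""
    H = np.zeros((2 ** n, 2 ** n), dtype=complex)
    for (i, j), J in couplings.items():
        ph = np.exp(1j * theta) if (i, j) == twisted_link else 1.0
        H += J * (0.5 * (np.conj(ph) * site_op(Sp, i, n) @ site_op(Sm, j, n)
                         + ph * site_op(Sm, i, n) @ site_op(Sp, j, n))
                  + site_op(Sz, i, n) @ site_op(Sz, j, n))
    return H

def berry_phase(n, couplings, link, steps=60):
    thetas = np.linspace(0.0, 2 * np.pi, steps, endpoint=False)
    states = [np.linalg.eigh(hamiltonian(n, couplings, link, th))[1][:, 0]
              for th in thetas]            # unique gapped ground state assumed
    states.append(states[0])               # close the loop C
    prod = 1.0 + 0j
    for u, w in zip(states[:-1], states[1:]):
        prod *= np.vdot(u, w)              # gauge-invariant overlap product
    return -np.angle(prod)                 # defined mod 2*pi

# Alternating ring: strong bonds (0,1),(2,3); weak bonds (1,2),(3,0).
J = {(0, 1): 1.0, (1, 2): 0.2, (2, 3): 1.0, (3, 0): 0.2}
for link in J:
    print(link, round(berry_phase(4, J, link), 3))  # ~ +-pi on strong, ~0 on weak links
```

For such a strongly dimerized ring the excitation gap stays open along the whole loop, so the overlaps never vanish and the printed phases come out quantized, reproducing the $\pi$/$0$ pattern reported below.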
In this case, the Berry phases are $\pi$ on the links with the strong exchange couplings and $0$ on the ones with the weak couplings (Fig.\[f:1D\]). This is expected from the adiabatic principle and the quantization. When one of them is negative as $J_A>0$ and $J_{F}<0$, the calculated Berry phases are $\pi$ for the antiferromagnetic links and $0$ for the ferromagnetic ones. It is independent of the ratio $J_A/J_F$. Since the strong ferromagnetic limit is equivalent to the spin-$1$ chain, this is consistent with the topologically nontrivial structure of the Haldane phases. Further analysis on the $S=1$ systems will be published elsewhere.

![ One dimensional Heisenberg models with alternating exchange interactions with periodic boundary condition (left). Numerically evaluated distribution of the quantized Berry phases (right). $J_A, J_{A'}>0$ and $J_F<0$. The results are independent of the system size. (We have checked the consistency of the results for various possible system sizes.) \[f:1D\] ](1D.eps){width="1.0\linewidth"}

![ One dimensional Heisenberg models with NN and NNN exchanges (left) with periodic boundary condition. Numerically evaluated distribution of the quantized Berry phases (right). (a), (b) and (c): three different exchange configurations of $J=1$ and $ J'=2$. \[f:tri\] ](triangle12-1and2.eps){width="1.0\linewidth"}

The next numerical examples are spin chains with nearest-neighbor (NN) and next-nearest-neighbor (NNN) exchanges, forming a ladder of triangles (Fig.\[f:tri\]). These are typical systems with frustration. (a) and (b) are two different but specific configurations in which one may adiabatically connect the system with different dimer coverings by the strong-coupling bonds. In these cases, the quantized Berry phases are $\pi$ for the strong-coupling links and $0$ for the remaining links. This is consistent with the adiabatic principle. We note here that it is difficult to make a qualitative distinction between the two quantum liquids by conventional methods. However, we have made a clear distinction between them as two different topological phases. The present scheme is not only valid for these simple situations but also useful for generic situations. For example, for the system in Fig.\[f:tri\](c), we cannot apply the adiabatic principle straightforwardly. However, the quantized Berry phases show nontrivial behavior and make a clear distinction: the phase of (c) is topologically different from those of (a) and (b), as an independent phase and not just as a crossover. A local quantum phase transition separates them by a gap closing. As is now clear, the present scheme is quite powerful for a local characterization of topological quantum insulators.

Part of the work was supported by Grant-in-Aids for Scientific Research (Grant No. 17540347) and on Priority Areas (Grant No. 18043007) from MEXT, and the Sumitomo Foundation.

[99]{}

Y. Hatsugai, preprint, cond-mat/0603230.

L. Pauling, Proc. Nat. Acad. Sci. 39, 551 (1953).

The TR is defined as $\Theta_T=K \otimes_j(i \sigma _{jy})$, as the anti-unitary operation ($K$: complex conjugation).
It operates for a state $ |G \rangle = \sum_{J=\{\sigma _1,\cdots,\sigma_N\}} C_J | \sigma _1,\sigma _2,\cdots,\sigma _N \rangle $, ($\sigma _i=\pm 1$) as $ \Theta_{ T} |G \rangle = \sum C_J^* (-)^{\sum_{i=1}^N (1+\sigma _i)/2} | -\sigma _1,\cdots,-\sigma _N \rangle $. Then the spins transform as $ ^\forall j,\ {\mbox{\boldmath $S$}}_j \to \Theta_{ T} ^{-1} {\mbox{\boldmath $S$}}_j \Theta_{ T} = - {\mbox{\boldmath $S$}} _j $, and ${\mbox{\boldmath $S$}}_i\cdot {\mbox{\boldmath $S$}}_j $ is a TR invariant.

Y. Hatsugai, Phys. Rev. Lett. 71, 3697 (1993).

[^1]: Submitted for the proceedings of the conference Highly Frustrated Magnetism 2006 (HFM2006), June 29, 2006. http://www.kobe-u.ac.jp/hfm2006/
{ "pile_set_name": "ArXiv" }
--- abstract: 'We know extensions of first order logic by quantifiers of the kind , with new axioms and appropriate semantics. Related are operations such as , Hilbert’s $\varepsilon$-operator, Church’s $\lambda$-notation, minimization and similar ones, which also bind a variable within some expression, the meaning of which is however partly defined by a translation into the language of first order logic. In this paper a generalization is presented that comprises arbitrary variable-binding symbols as non-logical operations. The axiomatic extension is determined by new equality-axioms; models allocate functionals to variable-binding symbols. The completeness of this system of the so-called [*functional logic of 1st order*]{} will be proved.' author: - | Schönbrunner Josef\ Institut für Logistik der Universität Wien\ Universitätsstraße 10/11, A-1090 Wien (Austria)\ e-mail a8121dab@helios.edvz.univie.ac.at nocite: - '[@*]' - '[@goedel]' title: | **Completeness Proof of Functional Logic,\ A Formalism with Variable-Binding Nonlogical Symbols\ ** --- **Mathematics Subject Classification:** 03C80, 03B99. Introduction ============ Functional logic is a generalization of first order predicate logic with different kinds of objects, obtained by adding the following new features: 1\. The division of expressions into the categories of sentences and individuals (i.e. [*formulas* ]{} and [*terms*]{}) is weakened: along with a differentiation of the sorts of terms, formulas shall also be treated as a sort. Thus the classification of the symbolic entities into [*logical connectives*]{}, [*predicate symbols*]{}, [*function symbols* ]{} loses its significance, as membership in one of them depends only on the signature (i.e. the number and sorts of the argument-places and the sort of the resulting expression). The sentential sort (formulas) retains its special role and will be referred to as $\prop$. Thus the signature of a binary connective is ‘$\prop(\prop,\prop)$’, that of an $n$-ary predicate symbol ‘$\prop({\alpha_1,...,\alpha_n})$’, that of an $n$-ary function symbol ‘$\gamma({\alpha_1,...,\alpha_n})$’ and that of a constant symbol ‘$\gamma$’, if each $\alpha_i$ and ‘$\gamma$’ are sorts. Not to be found in predicate logic are symbolic entities whose argument-places are mixed, partly of sort $\prop$ and partly of another object-sort. These do not fit into any of the categories of [*logical connectives*]{}, [*predicate symbols*]{} or [*function symbols* ]{} mentioned above. An example is the expression ‘$?(E,a,b)$’ denoting an object “$a$ if $E$, $b$ otherwise”, which is built up by a symbolic entity ‘?’ of the signature ‘$\alpha(\prop,\alpha,\alpha)$’. 2\. In a formalized theory of predicate logic, expressions such as $\clabst{x}{\E}$, $\clabst{x\in M}{\E}$, $\iota x({\E})$, $\varepsilon x(\E)$, $\mu x(\E)$, $\mathop{\mu\,x}\limits_{x<b}(\E)$, $\int_a^b e \cdot dx$ are characterized only by an external rule of translation into the language of the theory. In functional logic, however, such expressions can be generated internally by symbolic entities that bind variables. This is the essential extension of this formalism.
In standardized symbolisation a symbolic entity ‘op’ of the resulting sort $\gamma$ with $k$ argument places of signature $ (\alfa_i \mathbin,\sqv\beta_i)\komma \sqv\beta_i=(\beta\indij)_\berj {\scriptstyle(1\le i\le k)} $ is linked with the generation rule by which\ is an expression of sort $\gamma$, if each $a_i$ is an expression of sort $\alfa_i$ and each $\defsqidetj q$ is a sequence of variables of sorts-sequence $\defsqidetj\beta$. The case $r_i=0$ means that the optional part, which is written as $\mathop{\hbox{$[\mathinner{\ldotp\ldotp\ldotp\ldotp\ldotp}]$}} \limits_{\smash{if\ r_i>0}}$, is to be dropped. This case applies to [*logical connectives*]{}, [*predicate symbols* ]{} and [*function symbols* ]{} in all argument-places $i$, only quantifiers have a signature with $r_1=1$. Putting the template $\mathop{\hbox{$[\mathinner{\ldotp\ldotp\ldotp\ldotp\ldotp}]$}} \limits_{\smash{if\ r_i>0} }$ around something is used to consider both cases $r_i=0$ as well as $r_i>0$. If $r_i>0$ the brackets can be erased and ‘$\mathop{[(\sqv q_i):]}\limits_{\smash{if\ r_i>0}} a_i$’ stands for ‘$(\sqv q_i):a_i$’, which is an abbreviation of $\obl{(q_{i\,1},\ldots,q_{i\,r_i}):a_i}$. $\obl{q_{i,j}}$ are the binding variables to $\obl{a_i}$. \#1: [.]{} [**Examples:**]{} 1. The extension of a formal Peano-system by axioms like $$\apl_k((x_1,\ldots,x_k):e,a_1,\ldots,a_k)= \Subs{e}{x_1\zweildt x_k}{a_1\zweildt a_k} ,\hfill$$ $$\PR(a,(y,z):b,n)= \felse(n\mathord=0,\>a,\>\apl_2((y,z):b,\>n-1, \>\PR(a,(y,z):b,n-1)))$$ allows the representation of each primitive recursive function by a single term. (In the above schemes of axioms $e,a,b,a_i$ range over arbitrary terms and $n$ is a number variable.) If all free variables of $a$ and $b$ which are not members of $\encurs{\obl y, \obl z}$ are in $\encurs{\obl{u_1},...,\obl{u_m}}$, then the term $\obl{\PR(a,(y,z):b,n)}$ can be associated with a $m+1-$ary function of arguments $u_1,...,u_m,n$, defined by primitive recursion from [*base-function* ]{} $\enangle{u_1,...,u_m} \mapsto a$ and [*iteration-function* ]{} $\enangle{u_1,...,u_m,y,z} \mapsto b$. 2\. Quantifiers to variables of different sorts must be distinguished, the signature of $\obl{\GQ^{\alpha}}$ is $\obl{\prop((\alfa):\prop)}$. In standardized manner, a formula $\obl{\AQ{x^{\alfa}} E}$ would be\ $\obl{\GQ^{\alpha}((x^{\alfa}):E)}$. 3\. A standardized version of expressing “the least $x$ less than $b$ such that $E$ if one exists, or $b$ if none exists” (usually symbolized by $\obl{\mathop{\mu\,x}\limits_{x<\rmp b}\,\E}$) is $\obl{\mathop\mu_<(\rmp b,(x):\,\E)}$, the signature of $\obl{\mu_<}$ being $\obl{\nu(\nu,(\nu):\prop)}$ if $\nu$ is the sort of natural Numbers. A standardized symbolic language and an ideal language for application with the same expressional ability are different. The first should be simple in order to avoid unnecessary expense in metatheoretic treatment. With regard to application this simplicity can be disadvantageous. For instance in predicate calculus one symbol cannot be used with different signatures depending on the sorts of arguments it appears with. Such multiple use of a symbol became popular in programming languages, when looking at overloaded versions of procedure-names. Application of formal logics could profit from such a technique, too. 
For instance, consider sorts $\alpha,\beta$ and a class of models such that the range of $\beta$ is a substructure of the range of $\alpha$, if the signature of a symbol w.r.t a certain argument-place is of sort $\alpha$, then any term of sort $\beta$ also fits into that place. “overloading of symbols” may yield simpler axiom-schemes. Yet it requires change from the notion of [*symbol* ]{} to that of [*symbolic entity* ]{} (= symbol + signature). As a basis of meta-linguistic reference we shall take the standardized form. Results derived on this basis can easily be transferred into more flexible symbolism for practical use. Non standardized usages of writing such as that w.r.t. quantifiers shall be retained like alias clauses in our object language. Instead of overloading the various ‘$\GQ^\alpha$’ into one ‘$\GQ$’ and various ‘$\overset{\alpha}=$’ to ‘=’ we stipulate: ‘$\GQ x$’ stands for ‘$\GQ^\alpha x$’ if $\obl x \in \VAR_{\alpha}$ and ‘$a=b$’ stands for ‘$a \overset{\gamma}= b$’ if $\obl a, \obl b \in \F_\gamma$. As to the logical axioms, the usual schemes of predicate calculus may be adapted, but [*binding of variables* ]{} (significant to the axioms) is performed by symbols other than quantifiers, too. One part of the equality axioms become $$ \hspace*{-7mm} \ifBonnerQ :::::::::: {\textstyle\GQ_{z\biind i1}...\GQ_{z\biind i{r_i}}} \else (\forall z\biind i1)...(\forall z\biind i{r_i})\; \fi (a_i\vSubs{x_i}{z_i} \stackrel{\alpha_i}= b_i\vSubs{y_i}{z_i}) \limp \opex xa \stackrel{\gamma}= \opex yb$$ where $\Vec{x_i} \equiv \xtup{x}i{r_i}$, similarly $\Vec{y_i}, \Vec{z_i}$, and where $a_i\vSubs{x_i}{z_i}$ designates the expression obtained from $a_i$ by replacing each [*free occurrence* ]{} of $x\biind ij$ by $z\biind ij$ (for [$1 \le j \le r_i$]{}) and $\opex yb$ differs from $\opex xa$ only by the $i$-th argument. Note: (1) if $r_i=0$, then the above sequence of universal quantifiers becomes empty; (2) If $\gamma=\prop$ is the sentential sort, then $\stackrel{\prop}=$ is to be identified with $\leftrightarrow$ (=logical equivalence). The main problem is introducing appropriate semantics to which the calculus is complete. Let “op” be a [*symbol* ]{} with $k>0$ argument places, at least one of them provides binding variables i.e. $r_i>0$ for some $\indto ik$. At first consideration we suppose an interpretation-structure to assign to ‘op’ the functional $$\textstyle \M(\oblt{op}):\Prod_{i:1\zweildt k}V_i\>\to\>\M_{\gamma}\>,\mkern 22mu V_i = \hbox{\footnotesize$ \begin{cases} \M_{\alpha_i} & \text{if \enspace $r(i)=0$} \\ \FigMap & \text{if \enspace $r(i)>0$} \end{cases}$}$$ ($\M_{\gamma}$ is the range of $\gamma$ and $\operatorname{Map}(\rmp X,\rmp Y)=\clabst{f}{f:\rmp X\to\rmp Y}=Y^X $). But this turns out to fix too much, as assignment only to a part of the functions of $\FigMap$ will be relevant for evaluation of expressions. Nothing beyond that partial assignment you may expect to come out from the [*syntactic information of a consistent theory*]{}. To overcome this problem a certain restriction of the argument ranges $V_i$ will help. The notion of a structure $\M$ must therefore be extended by a new component which assigns a selected set $\M_{\gamma}^{\vec{\sigma}} \subseteq \text{Map}(\Prodim\M_{\sigma_i},\M_{\gamma})$ to each sequence of sorts $\gamma,\vec{\sigma}$. The selected sets are characterized by some [*closure qualities* ]{} similar to those that apply to the set of [(primitive-) recursive functions]{}, for instance constant functions and projections are to be included. 
In a trivial way, however, we find an extension $\BM$ of $\M$ so that $\BM_{\gamma}^{\vec{\sigma}}=\text{Map}(\Prodim\M_{\sigma_i},\M_{\gamma})$ and the [*interpretations of expressions* ]{} by $\M$ and by $\BM$ coincide as well as the [*semantic consequences* ]{} $\M\satq$ and$\BM\satq$. To construct a model of a [*consistent formal theory* ]{} the method of extension to a [*complete Henkin Theory* ]{} as in Henkin’s Proof of the Completeness Theorem s. [@henkin; @shoenf] is still applicable. Survey ====== As basic structure of a $1^{\text{st}}$ order functional logic language we define the $\lfi-$signature. Then a standardized language is specified that determines the notion of an expression $\obl{e}$ of sort $\gamma$. This is defined inductively by a characteristic syntactic relation of $\obl{e}$ to a symbol ‘op’ (the root of $\obl{e}$), argument expressions $\obl{a_i}$ and possibly variables $\obl{v\biind ij}$ binding $\obl{a_i}$. As this relation shall frequently appear as a background premise within definitions and proofs constantly using the same arguments $\obl e$,‘op’,‘$a_i$’ and $\obl{v\biind ij}$, we introduce the abbreviation . The definition of a $\lfi-$structure is based on the notion of a $\lfi-$signature according to features discussed in the introduction. We shall only consider logic with [*fixed equality* ]{} base on [*normal structure semantics*]{}. To derive semantics for the language from the notion of structure based on a signature, that is to establish an interpretation of the expressions (of various sorts), the usual definition as a map from [*variables-assignments* ]{} to the domain of the sort the expression belongs to is not suitable. Instead of it now an expression $\obl{e}$ will be evaluated according to a $\lfi-$structure $\M$ by assigning a mapping on the set of the so called perspectives of $\obl e$ consisting of all finite sequences of variables, such that all free variables of $\obl{e}$ appear within that sequence. Let $\gamma$ be the resulting sort of $\obl e$. The evaluation of $\obl e$ based on $\M$ maps the empty sequence $\enangle{}$ into a member of the range $\M_{\gamma}$ of $\gamma$, provided that $\enangle{}$ is a [*perspective* ]{} of $\obl e$ (i.e. if $\obl e$ has no free variable) and it maps a non-empty perspective $\enangle{\obl{u_1},\ldots,\obl{u_m}}$ of $\obl{e}$ into a function of $\M_{\sigma_1}\times\ldots\times \M_{\sigma_m}\longrightarrow \M_{\gamma}$, if $\sigma_j$ is the sort of the variable $\obl{u_j} \enspace \scriptstyle(j=1,\ldots m)$. The definition will be [*inductive* ]{} based on the background-assumption of . As to the [*axiomatization*]{}, the [*logical axioms* ]{} differ in shape from [predicate logic ]{} only a little with regard to [*equality logic*]{}. But we must also take into account an extension of some notions which are basic to formulate axioms of logic, namely the notions of free and bound variables, substitution and substitutability. The [*axioms system* ]{} together with the [*rules* ]{} [Modus Ponens ]{} and [Generalization ]{} establishes the [*calculus of Functional Logic*]{}. The extension of this calculus by individual [*nonlogical axioms* ]{} is called a [*functional logic theory*]{}. A $\lfq-$structure-model of a [*consistent functional logic theory* ]{} can be constructed as in predicate logic from an extension of that theory which inherits consistency, admits examples and is complete. 
([*admitting examples* ]{} is related to the existence of terms $t$ for each formula $\varphi$ with at most one free variable $x$, so that $\EQ{x}\,\varphi \limp \varphi\psub xt$ is a theorem; we associate this theorem to designate $t$ as an example, if $\EQ{x\,\varphi}$ is true. ) In Henkin’s proof this is achieved in two steps: The 1st extension produces a [*theory* ]{} that admits examples by addition of constant symbols and [*special axioms* ]{} (s. [@henkin; @shoenf; @barwise]). Consistency continues as this extension is [*conservative* ]{} (each theorem of the extended theory, if restricted to the original language, is also provable within the original theory). The 2nd extension by Lindenbaum’s theorem enlarges the set of nonlogical axioms without changing the language. Both extensions can easily be adapted to functional logic. The definition of a [**]{}, which shall prove to be a model of the constructed extension to a [*closed Henkin Theory*]{} and hence also a model of the original theory, also relies on a so called [*norm function*]{} that assigns a representative to each [*closed expression* ]{} within a [*congruence class*]{}. This class will be defined by the congruence relation, that applies to $\obl{a}$ and $\obl{b}$ iff $\obl{a=b}$ is a theorem of the [*extended theory*]{}. As we suppose [*completeness* ]{} of the [*extended theory*]{}, there are exactly two [*congruence classes*]{} of expressions of sort $\prop$ ; hence we choose the constants $\falsum$ and $\verum$ (representing [*true* ]{} or [*false* ]{} respectively) as values of the [*norm function*]{} of formulae. Upon the set of [*norms* ]{} (i.e values of the [*norm function*]{}), which is a subset of [*closed expressions* ]{} to each [*sort*]{} as [*base-range*]{}, we then define our so called [*term-structure*]{}. The model quality of this structure will be obtained as an immediate consequence of a theorem (by specialization). The claim of this theorem is that the [*evaluation* ]{} of an expression $\obl e$ by the [*term-structure* ]{} $\CM$ is a function which assigns to each [*perspective* ]{} a mapping from a cartesian product of certain ranges $\zweildt\CM_{\sigma_i}\zweildt$ to $\CM_{\gamma}$, which can be described exclusively by application of [*multiple substitution* ]{} (variables by terms) from $\obl e$ and application of the [*norm function*]{}. The validity of a formula (=expression of sort $\prop$) within a model means that its interpretation maps one (and implicitly all) non-empty [*perspectives* ]{} into a constant function of value $\M(\verum)$. In case of a closed formula this implies that the [*empty perspective* ]{} is assigned the value $\M(\verum)$. If $\CM$ takes the place of $\M$, $\M(\verum)$ changes into $\verum$ ($=\CM(\verum)$). By applying the preceding theorem to an $\obl{e}$ of sort $\prop$ and taking into account that [*equality of the sort* ]{} $\prop$ and [*logical equivalence*]{} become one and the same ($\obl{\eqs{\prop}}=\obl{\leftrightarrow}$), you easily conclude the equivalence of $\obl{e}$ [*being valid in the term-model*]{} and [*being deducible in the extended theory*]{}. As we refer to an [*extension*]{}, the restriction of $\CM$ to the language of the original theory is also a model of this theory. This confirms the [*satisfiability* ]{} of that theory on the assumption of its [*consistency*]{}. 
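As a rough illustration of this strategy in a drastically simplified setting — classical propositional logic rather than the functional logic treated here, with consistency tested semantically, which is equivalent by soundness and completeness of the propositional calculus — the following sketch performs a Lindenbaum-style extension of a consistent finite set of formulas and reads a satisfying valuation, the analogue of the term-model, off the complete extension. All names and the sample theory are illustrative only.

```python
# Toy Lindenbaum extension + "term model" for propositional logic.
from itertools import product

# Formulas: an atom is a string; ('not', f) and ('imp', f, g) are compounds.
def eval_f(f, val):
    if isinstance(f, str):
        return val[f]
    if f[0] == 'not':
        return not eval_f(f[1], val)
    if f[0] == 'imp':
        return (not eval_f(f[1], val)) or eval_f(f[2], val)
    raise ValueError(f)

def consistent(fmlas, atoms):
    # Semantic stand-in for syntactic consistency (equivalent classically).
    return any(all(eval_f(f, dict(zip(atoms, bits))) for f in fmlas)
               for bits in product([False, True], repeat=len(atoms)))

def lindenbaum(fmlas, atoms):
    # Decide each atom while preserving consistency; deciding the atoms is
    # enough here to read off a valuation (a full Lindenbaum extension would
    # enumerate all formulas of the language).
    ext = list(fmlas)
    for a in atoms:
        ext.append(a if consistent(ext + [a], atoms) else ('not', a))
    return ext

def term_model(ext, atoms):
    # An atom is true exactly when it belongs to the complete extension.
    return {a: a in ext for a in atoms}

atoms = ['p', 'q']
theory = [('imp', 'p', 'q'), 'p']
model = term_model(lindenbaum(theory, atoms), atoms)
assert all(eval_f(f, model) for f in theory)   # the model satisfies the theory
```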
[\ ]{} =0em \#1 \#2 $$\displaylines{\hspace*{-1.4em}\rm#1\hfill \\[1.5\baselineskip]#2}$$ \#1[$\rm#1$]{} \#1\#2[[\#1\#2]{}]{} \#1[{\#1}]{} \#1 [ ]{} Signature and Language ====================== \[Df.Fnl\] $\fnlsigs$: The notion of is determined by the following key-components: : sorts; : symbolic operations; : sorts for which variables and quantification are provided. : variables; : signature map, for characterizes as a symbolic operation to generate expressions of sort $\gamma$ from $n$ argument-expressions of sort $\alpha_i$, that might be bound by  variables of sorts . Significant for the notion to be defined is also a distinguished sort $\prop$ (the type of formulae) and distinguished elements of  : $ \moqt\verum\kom \moqt\falsum\kom \moqt\neg\kom \moqt\rightarrow\kom \moqt\wedge\kom \moqt\vee\kom \moqt\leftrightarrow\kom \moqt{\GQ^\alfa} \kom \moqt{\PQ^\alfa} $ (for each $\alfa$ of ) and $\moqt{\eqs\alfa}$ (for ) with fixed values relative to . In formalized manner now we stipulate all characterizations of this definition as follows: $\fnlsigs \;\boldsymbol{\lleqv}\; \text{Conjunction of the following attributes}: $\ $$\begin{aligned} \quad& S = (\SRT_S,\SOP_S,\VSRT_S,\VAR_S,\sig_S) \skp \VSRT_S \incl \SRT_S \skp \VAR_S\cap \SOP_S=\emptyset {\\&}\sig_S\colon(\SOP_S \cup \VAR_S) \to \SRT_S \times \textstyle \bigcup\limits_m(\,\SRT^m \times {\VSRT^*}^m\,) {\\&}(\GQ \enqt v \in \VAR_S)\ \sig_S\enqt v \in \VSRT_S\times\enbrace\enangle\empty^2 \skp \SOP_S \cup \VAR_S \quad \text{can be well-ordered} \footnotemark[1] {\\&}(\GQ \alfa \in \VSRT_S)\ \sparenth{big}{ \clabst{\enqt v \in \VAR_S}{ {\sig_S\enqt v=(\alfa,\emptyseq,\emptyseq)}} \quad \mbox{is enumerable}} {{\\[\medskipamount]}&}\prop \in \SRT_S \skp \Junklst \in \SOP_S {\\&}(\GQ \alfa \in \VSRT_S) \quad \moqt{\GQ\nolimits^\alfa}, \moqt{\PQ\nolimits^\alfa} \in \SOP_S \qquad\qquad (\GQ \alfa \in \SRT_S) \quad \moqt{\eqs\alfa} \in \SOP_S {{\\[\medskipamount]}&}\parbox{.94\textwidth}{ $\sig_S$ for the distinguished members of $\SOP_S$ is specified by a circumscription $\ustypeS$, \enspace (s. auxiliary notations below) \enspace:} \end{aligned}$$ $$\begin{array}{*{5}{c|}c} \text{op} &\verum, \falsum &\lnot &\limp,\land,\lor,\leqv &\GQ^\alfa, \PQ^\alfa &\eqs\alfa \\ \hline \ustypeS\text{`op'} &\obl{\prop} &\obl{(\prop)\prop} &\obl{(\prop,\prop)\prop} &\obl{((\alfa)\prop)\prop} &\obl{(\alfa,\alfa)\prop} \\ &&&&\mbox{\footnotesize$\for \alpha\in\VSRT_S$} &\mbox{\footnotesize$\for \alpha\in\ISRT_S$} \\ \hline \end{array}$$ (dependent components) to a given $\fnlsigs$:\ $^{\renewcommand{\arraystretch}{1.2} \newcommand{\vsigalpha}{(\alfa,\emptyseq,\emptyseq)} } \begin{Array}{lrclc>{$\quad}r<{$}} \AQu{\alpha\in\SRT_S} &\CSOP_S^{\alpha} &= &\clabst{\oblt{c}\in\SOP_S}{\sig_S\oblt{c}=\vsigalpha} &\aliasgl \CSOP_{\alpha} &constants \\ \AQu{\alfa \in \VSRT_S} &\VAR^\alfa_S &= &\clabst{\obl{v} \in \VAR_S}{{\sig_S\obl{v}=\vsigalpha}} &\aliasgl \VAR_{\alpha} &variables% v. 
Typ $\alpha$ \end{Array} \\ \begin{array}[t]{rl} \AQu{\vec{\sigma} = \enangle{\sigma_i}_{\indto il} \in \VSRT\stern} &\VAR^{\vec{\sigma}}_S \aliasgl \VAR_{\vec{\sigma}} = \Prod_{\indto il} \VAR_{\sigma_i} = \\ &= \clabst{\vec{u}}{%-------------- \vec{u} = \enangle{u_i}_{\indto il} \land \AQu{\indto il} \obl{u_i} \in \VAR_{\sigma_i} } \end{array} $\ For $\sig_S$ we use a circumscription that is more convenient for application: $$\begin{aligned} \ustypeS \colon \SOP_S &\cup \VAR_S \to (\SRT_S\cup \enbrace{\enqt(,\enqt,,\enqt)})\stern \qquad \AQu{\enqt{\text{op}} \in \SOP_S \cup \VAR_S} \\ \sigsoop S &= \stdsigx \quad \text{ iff } \\ \ustypeS\;\oblt{op} &= \begin{cases} \enqt\gamma &\falls m\mathbin=0 \\ \enqt{(\theta_1,\zweildt,\theta_m)\gamma} &\falls m\mathbin>0 \end{cases} \qquad \enqt{\theta_i}= \begin{cases} \enqt{\alfa_i} &\falls r_i=0 \\ \enqt{(\beta_{i\;1},\zweildt,\beta_{i\;r_i})\alfa_i} &\falls r_i>0 \end{cases} \end{aligned}$$ ** Subscript $S$ will be omitted ($\SRT$ for $\SRT_S, ... ,\mathop{\VAR_\alfa} \\ \text{ for } \mathop{\VAR^\alfa_S}$) if only one $\fnlsig$ is considered. $\alpha,\beta,\gamma,...,\alpha_i,\beta_{ij},...$ denote members of $\SRT$. $u,...,z,\enspace u_i,...,v_{ij},...$ denote members of $\VAR$. Symbols with an arrow-accent refer to a finite sequence and writing such symbols one after the other denotes the concatenation of the sequences (if $\vec{p}=\enangle{\tup pk}$ and $\vec{q}=\enangle{\tup ql}$ then $\vec{p}\vec{q}=\enangle{\tup pk, \tup ql}$). If such a symbol e.g. $\vec{u}$ appears inside a quoted string, as for instance , it denotes the string $\obl{u_1,\ldots,u_l}$, which is the concatenation of each $\obl{u_i}$ with $\obl{,}$ interspearsed. We shall always assume $\vec{\alpha}=\indtupel{\alpha}im \komma \VVec{\beta}=\indtupel{\vec{\beta}}im \komma \vec{\beta_i}=\enangle{\beta\biind ij}_{\indto j{r_i}} $. $\enspace{\parenth{big}{\F_S^\gamma}_{\gamma \in \SRT_S}}$ \[Sprache\] Let $S$ be a [$\fnlsig$]{}. 
The [*standardized language* ]{} of $S$ is introduced as a mapping on $\SRT_S$ by stipulating for each [$\gamma \in \SRT_S$]{} the set $\F_S^\gamma$ of expressions of sort $\gamma$ inductively as follows (by above clause (1)$\F_\gamma \aliasgl \F_S^\gamma$): $$\enqt{e} \in \F_\gamma \leqv (\EQ{\oblt{op},m,\gamma,\vec{\alpha},\VVec{\beta},\vec{a},\VVec{v}}) ( \Syntass)$$ where  abbreviates the conjunction of the following formulae: $$\begin{array}{l} \obl{\op} \in \SOP \cup \VAR \spa \sigoop = \sigopx \\ \vec{\alpha} \in \SRT^m \spa \VVec{\beta} \in \tprod{\indto im}{\VSRT^{r_i}} \spa \mbox{($\vec{\alpha},\VVec{\beta}$ rely on above I.(6))} \\ \vec{a} = \enangle{\obl{a_i}}_{\indto im} \in \textstyle\Prod\limits_{\indto im} \F_{\alpha_i} \spa \mbox{(each $\obl{a_i} \in \F_{\alpha_i}$)} \qquad \VVec{v} = \enangle{\vec{v}_i}_{\indto im} \in \Prod\limits_{\indto im} \VAR_{\vec{\beta_i}} \\ \AQ{i}\; \parenth{Big}{\vec{v_i} = \enangle{\obl{v\biind ij}}_{\indto j{r_i}} \land \AQu{1\le j < k \le r_i}\obl{v\biind ij}\neq\obl{v\biind ik} } \iftrue \\ \obl e = \begin{cases} \obl{\op} & \falls m=0 \\ \obl{\opex va} & \falls m>0 \end{cases} \else \\[3pt] if $m=0$ \enspace then\enspace $\obl e = \obl{\op}$ \qquad if $m\neq 0$\enspace then\enspace $\obl e = \obl{\opex va}$ \fi \end{array}$$ $$\begin{aligned} \obl{\op} &\in \SOP \cup \VAR \\ \vec{\alpha} &= \enangle{\alpha_i}_{\indto im} \in \SRT^* \\ \vec{a} &= \enangle{\obl{a_i}}_{\indto im} \in \textstyle\Prod\limits_{\indto im} \F_{\alpha_i} \end{aligned} \begin{aligned} \sigoop &= \sigopx \\ \VVec{\beta} &= \enangle{\vec{\beta}_i}_{\indto im} \in \VSRT^{**} \\ \bigl( &\leqv(\AQ{\indto im})\enspace \obl{a_i} \in \F_{\alpha_i} \bigr) \vphantom{\Prod\limits_{\indto im} \F_{\alpha_i}} \end{aligned}$$ $$\begin{aligned} {}&\VVec{v} = \enangle{\vec{v}_i}_{\indto im} = \enangle{\enangle{\obl{v\biind ij}}_{\indto j{r_i}}}_{\indto im} \komma \text{each } \obl{v\biind ij} \in \VAR_{\beta\biind ij} \komma \text{if } j\neq k \text{ then } \obl{v\biind ij} \neq \obl{v\biind ik} \\[3pt] \empty& \text{if } m=0 \text{ then } \obl e = \obl{\op} \qquad \text{if } m\neq 0 \text{ then } \obl e = \obl{\opex va} \end{aligned}$$ $$\begin{aligned} {}&\enangle{\VVec{v}} = \enangle{\vec{v}_i}_{\indto im} = \enangle{\enangle{\obl{v\biind ij}}_{\indto j{r_i}}}_{\indto im} \\ \empty&(\AQ{\indto im}) \begin{Array}^{\renewcommand{\arraystretch}{1.2}}[t]{ll} (\AQ{1 \le j \le r_i}) & \obl{v\biind ij} \in \VAR_{\beta\biind ij} \\ (\AQ{1 \le j < k \le r_i}) \enspace & \obl{v\biind ij} \neq \obl{v\biind ik} \end{Array} \\ \empty&\obl e = \begin{cases} \obl{\op} & \falls m=0 \\ \obl{\opex va} & \falls m>0 \end{cases} \end{aligned}$$ This definition characterizes the expression $\obl e$ as a chain of symbols which is produced by a symbolic operation $\oblt{op}$ either exclusively (constant or variable) or together with argument-expressions $\obl{a_i}\quad (\indto im)$ possibly accompanied by binding variables $\vec{v_i}$ ($\obl{e} \in \F_{\gamma}$ is composed of smaller expressions $\obl{a_i} \in \F_{\alpha_i}$). In predicate logic binding variables $\vec{v_i}$ are only provided for the two quantifiers, but expressions which are built up by another symbol $\obl{\op}$ are either of shape $\obl{\op}$ or $\obl{\op(a_1,\zweildt,a_m)}$. Even in application of functional logic binding variables will be rare and never appear in front of more than one argument of a symbolic operation. 
The above definition is a prerequisite to almost all remaining conceptions of this article, always refering to the formula abbreviated by . $\F_S = \bigcup\limits_{\gamma\in\SRT}\F_{\gamma} \qquad \sparenth{normal}{\vec{\sigma}\in\SRT^{\ell}} \quad \F_{\vec{\sigma}} = \Prod_{\indto i{\ell}} \F_{\sigma_i} $ Semantics of Functional Logic ============================= \[Strukt\] $\FnlSMG SM$ : 1. $\fnlsigs=(\SRT,\SOP,\VSRT,\VAR,\sig)$ 2. $\M$ is a mapping defined on $\SRT \cup (\SRT \x \VSRT\stern) \cup \SOP$. This mapping assigns elements of $\SRT$ to corresponding ranges, members of $\SOP$ to symbol-interpretations (i.e. corresponding elements of or functions on such ranges or functionals in case of symbols that bind variables). To ordered pairs of $\SRT \x \VSRT\stern$ it assigns those components which determine the classes of functions admitted as arguments of the letter functionals. 3. $\AQu{\alpha \in \SRT}\enspace \M(\alpha) \aliasgl \M_{\alpha} \neq \emptyset$ and we automatically extend $\M$ to $\SRT\stern$: $ \AQu{\vec{\sigma}= \enangle{\sigma_i}_{\indto i\ell} \in \SRT\stern} \enspace \M(\vec{\sigma})\aliasgl\M_{\vec{\sigma}} \defgl \Prod_{\indto i\ell} \M_{\sigma_i} $ 4. (\_[\_i]{}\^[\_i]{} (\_i, \_i) ) \ if $m=0$ then $\M\obl{\op}\in\M_{\gamma}$, otherwise $ \Map \M\obl{\op} :\Prod_{\indto im}\M_{\alpha_i}^{\vec{\beta}_i} ->\M_{\gamma}. $\ 5. For arbitrary $\gamma\in\SRT \komma \vec{\sigma}=\enangle{\sigma_i}_{\indto i\ell} \in \VSRT^\ell$ 1. [rcccl]{} =0&(,) &= &(,) &= \_\ &gt;0&(,) &&(\_,\_) &\_\^[\_]{}\ \ alias notation: $\M_\gamma^{\vec{\sigma}} \defgl \M(\gamma,\vec{\sigma})$ 2. [ : ]{} 3. [@ rcccl@r ]{} & \^\_ && &\_\^ &\ & \^\_j && [\_i]{}[x\_j]{} &\_[\_j]{}\^ &\ 4. [l]{}\ \_\^ \_ [()]{} \_\^ 5. \ $m>0 \komma \ell>0$\ $ \AQu{\indto im} \quad % \begin{array}[t]{l} \reob{g}_i \in \M_{\alpha_i}^{\vec{\sigma}\verkett\vec{\beta}_i} \quad \text{ and introducing the auxiliary notation:} \\ \reob{h}_i = \begin{cases} \reob{g}_i &\text{ if } r_i=0 \\ \funkd{\M\vec{\sigma}}% {\text{Map}(\M\vec{\beta}_i\komma \M\alpha_i)}% {\revec{y}}% {{\reob{g}_i}_{\revec{y}} = {\umklm[]{% \revec{z} \mapsto \reob{g}_i(\revec{y}\verkett\revec{z})}} } &\text{ if } r_i>0 \quad \mbox{ (3.3 implies $\reob{h}_i\revec{y}\in\M_{\alpha_i}^{\vec{\beta}_i}$) } \end{cases} % \end{array} $\ \ $ \funkd{\M\vec{\sigma}}{\M\gamma}% {\revec{y}}{\M\obl\op(\reob{h}_1(\revec{y}),\dots,\reob{h}_m(\revec{y}))} \in \M_{\gamma}^{\vec{\sigma}} $ (composition) 6. [:]{} 7. \_ = = = and\ $ \enangle{\M_{\prop}, \M\obl{\curlywedge}, \M\obl{\curlyvee}, \M\obl{\neg}, \M\obl{\wedge}, \M\obl{\vee} } = \AlgB = \enangle{\mathbb{B}, \AlgOp0, \AlgOp1, \AlgOp{\com}, \AlgOp{\sqcap}, \AlgOp{\sqcup}} $ forms a Boolean algebra with two elements, $\M\obl{\|\rightarrow\|}$ and $\M\obl{\|\leftrightarrow\|}$ are represented by the (dependent) truth-operations $\sqimp_{\AlgB}$ und $\sqbipf_{\AlgB}$. $\M\obl{\GQ^{\alpha}}$ and $\M\obl{\PQ^{\alpha}}$ are defined for $\alpha\in\VSRT$ as follows:\ $ \M\obl{\GQ^{\alpha}}\komma \M\obl{\PQ^{\alpha}} \colon \M_{\prop}^{\enangle{\alpha}} \rightarrow \M_{\prop} $ for each $\theta \in \M_{\prop}^{\enangle{\alpha}}$ we stipulate\ if $(\boldsymbol{\GQ}\;\reob x \in \M_{\alpha}) \enspace \theta\enangle{\reob x} = \AlgOp1 $ then $\M\obl{\GQ^{\alpha}}(\theta)=\AlgOp1$ otherwise $\M\obl{\GQ^{\alpha}}(\theta)=\AlgOp0$;\ if $(\boldsymbol{\PQ}\;\reob x \in \M_{\alpha}) \enspace \theta\enangle{\reob x} = \AlgOp1 $ then $\M\obl{\PQ^{\alpha}}(\theta)=\AlgOp1$ otherwise $\M\obl{\PQ^{\alpha}}(\theta)=\AlgOp0$. 8. 
\_ \_ 1 = 0 To extend a [*structure* ]{} $\M$ into an [*interpretation* ]{} of the language, i.e. to find an [*evaluation* ]{} of [*expressions* ]{} $\Expr_{\gamma}$ another approach than that based on [*variables-assignments* ]{} as in [*predicate logic* ]{} is required. The following definitions are prerequisites for the new approach. *\[persp\] $\Map\persp:\Expr_S->\PM(\VAR\stern).$ (Let $\fnlsigs \komma \Expr_S=\bigcup\limits_{\gamma\in\SRT} \Expr_{\gamma}$)\ For $\obl{e}\in\Expr_S$, $\persp \obl{e}$ denotes the set of all $\enangle{\obl{u_i}}_{\indto i\ell} \in \VAR\stern$ such that all free variables of $\obl e$ are in $\clabst{\obl{u_i}}{1\le i \le \ell}$.* We shall need a more technical approach in defining this conception using [*syntactic induction*]{}. If  (Def. ) is assumed, then $\persp\obl{e}$ depends on $\persp\obl{a_i}$ as follows: $$\begin{array}^{\Zzwi} {>{\enspace}l|c||>{\enspace}l} \multicolumn{2}{l}{\text{{\bf cases}}} & \persp\obl{e} = \\ \hline {m = 0} &\oblt{op}\in\VAR &\clabst{\enangle{\obl{u_i}}_{\indto i\ell} \in \VAR\stern}{ \EQu{\indto j\ell}\; \obl{\op}=\obl{u_j}} \\ \cline{2-3} &\oblt{op}\notin\VAR &\VAR\stern \\ \hline \multicolumn{1}{l}{\enspace m > 0} &&\clabst{\vec{u}\in\VAR\stern}{ \AQu{\indto im}\;\vec{u}\verkett\vec{v_i}\in\persp\obl{a_i}} \vphantom{\Big\vert} \\ \hline \end{array}$$ *\[Lp\] (Let $\fnlsigs \komma \vec{u}\in\VAR\stern$)* \_\[\] = \_\[\] = \_[i]{} \_[\_i]{}\[\] $\Expr_{\gamma}[\vec{u}]$ is the set of expressions of $\Expr_{\gamma}$ whose free variables are among $\clabst{\obl{u_i}}{\indto i\ell}$ if $\vec{u} = \enangle{\obl{u_i}}_{\indto i\ell}$. $\Expr_{\gamma}[]$ therefore is the set of [*closed*]{} $\gamma-$[*expressions*]{}. (for $\fnlsigs$): $\Expr_{\gamma} = \bigcup\limits_{\vec{u}\in\VAR\stern} \Expr_{\gamma}[\vec{u}]$ \[pGP\] (for $\fnlsigs \quad \vec{u}= \enangle{\obl{u_i}}_{\indto i\ell} \in \VAR_{\vec{\sigma}} \quad \vec{\sigma}\in\VSRT^* $): $$\enqt{e} \in \Expr_\gamma[\vec{u}] \leqv (\EQ{\oblt{op},m,\gamma,\vec{\alpha},\VVec{\beta},\vec{a},\VVec{v}}) (\pSyntass)$$ where   (=perspective G.P.) can be obtained from   by modification of two conditions: if we change $\oblt{op} \in \SOP \cup \VAR$ into $\oblt{op} \in \SOP \cup \\ \cup \clabst{\obl{u_i}}{\indto i\ell}$ and $\vec{a} \in \Expr_{\vec{\alpha}}$ into $\vec{a} \in \Prod_ {\indto im}\Expr_{\alpha_i}[\vec{u}\verkett\vec{v_i}] $ (each $\obl{a_i} \in \F_{\alpha_i}[\vec{u}\verkett\vec{v_i}]$). *\ Let $\FnlSM SM \komma \gamma\in\SRT \komma \obl{e}\in\Expr_{\gamma} $ and  be assumed. The evaluation $\obl{e}_{\M}$ of $\obl{e}$ is defined to be a function on $\persp\obl{e}$. Let $ \vec{u}=\enangle{\obl{u_i}}_{\indto i\ell} \in \persp\,\obl{e} \komma \AQu{\indto i\ell} \usig\obl{u_i}=\obl{\sigma_i} \komma \vec{\sigma}=\enangle{\sigma_i}_{\indto im} $. 
Then $\obl{e}_{\M}(\vec{u})$ is defined inductively:* $$ \begin{array}^{\Zzwi \newcommand{\mzw}[1]{\multicolumn{2}{c||}{#1}}} {c|c|c||>{\enspace}l} \multicolumn{3}{c}{\text{{\bf cases}}} & \obl{e}_{\M}(\vec{u}) = \\ \hline \ell=0 &\mzw{m=0} &=\M\oblt{op} \\ \cline{2-4} \begin{sizemath}{\small} (\vec{u}=\enangle{}) \end{sizemath} &\mzw{m>0} &=\M\oblt{op} \parenth{big}{\enangle{\obl{a_i}_{\M}(\vec{v_i})}_{\indto im}} \\ \hline \ell>0 &m=0 &\oblt{op} \in \VAR &= \text{pj}_k^{\vec{\sigma}} = \funkd{\M\vec{\sigma}}{\M\sigma_k}{\enangle{\reob{x}_i}_i}{\reob{x}_k} \quad \begin{Array}^{\footnotesize}{l} \text{where } k= \\ \mathop{\operatorname{max}j}\limits_{\indto j\ell}(\obl{u_j}=\oblt{op}) \end{Array} \\ \cline{3-4}&&& \\[-2ex] &&\oblt{op} \in \SOP &=\text{cst}^{\vec{\sigma}}_{\M\oblt{op}} = \funkd{\M\vec{\sigma}}{\M\gamma}{\revec{x}}{\M\oblt{op}} \\[3ex] \cline{2-4} &\mzw{}& \\[-2ex] &\mzw{m>0}&= \funkd{\M_{\vec{\sigma}}}{\M_{\gamma}} {\revec{x}}{\M\oblt{op}(\enangle{\reob{h}_i(\revec{x})}_{\indto im})} \quad \parbox{18mm}{\footnotesize with $\reob{h}_i$ defined below by *) } \\ \hline \end{array}$$ $ \text{\small *)}\enspace \begin{Array}[t]{l} (\text{case }\ell>0,m>0)\AQu{\indto im} \reob{h}_i\colon\M_{\vec{\sigma}}\to\M_{\alpha_i}, \enspace \text{if } r_i = 0 \colon \reob{h}_i\colon\revec{x} \mapsto \obl{a_i}_{\M}(\vec{u})(\revec{x}) \\ \text{if } r_i > 0 \text{ then } \reob{h}_i\colon\revec{x} \mapsto \parenth{big}{\obl{a_i}_{\M}(\vec{u}\vec{v_i})}_{\revec{x}} = \funkd{\M_{\vec{\beta_i}}}{\M_{\alpha_i}}% {\revec{y}}{\obl{a_i}_{\M}(\vec{u}\vec{v_i})(\revec{x}\revec{y})}. \end{Array} $ $ \obl e \in \Expr_{\gamma}[\vec{u}] \land \vec{\sigma} \in \VSRT\stern \land \vec{u} \in \VAR_{\vec{\sigma}} \limp \obl{e}_{\M}(\vec{u}) \in \M^{\vec{\sigma}}_{\gamma} $ [(+ Remark)]{} This  is already required for the argument expressions $\obl{a_i}$ of the preceding definition (\[Interp\]) to assert that $\enangle{\reob{h}_i(\revec{x})}_{\indto im}$ belongs to the domain of $\M\oblt{op}$ (this assertion also requires (3.3) of \[Strukt\] [*def.*]{}). Conditions \[Strukt\](3) imply that the above [**]{} propagates from the $\obl{a_i}$ to $\obl e$; so [*syntactic induction* ]{} ensures its validity and any circularity of \[Interp\] [*def.*]{} that might result from presupposing it (for $a_i$) is avoided as well. *\[eq.eval\] If $\M,{\frak{N}}\in \fnlstruct \spa \AQu{\gamma\in\SRT} \M_{\gamma} = {\frak{N}}_{\gamma} $ and\ $\AQu{\oblt{op}\in\SOP \kom \sigoop=\stdsig} \AQu{\revec{h}\in \Prodim\M^{\vec{\beta_i}}_{\alpha_i} \cap \Prodim{\frak{N}}^{\vec{\beta_i}}_{\alpha_i} } \M\oblt{op}(\revec{h})={\frak{N}}\oblt{op}(\revec{h}) $\ (for $m=0,\revec{h}=\emptyseq:\quad \M\oblt{op}(\revec{h})={\frak{N}}\oblt{op}(\revec{h})$) then $\AQu{\obl{e}\in\bigcup\limits_{\gamma\in\SRT}\Expr_{\gamma}} \obl e_{\M} = \obl e_{{\frak{N}}} $* syntactic induction on $\obl{e}$ [Conclusion]{}*\[FullStr\] If $\overline{\M}$ is characterized by $\AQ{\gamma}\;\BM_{\gamma}=\M_{\gamma} \quad \AQ{\gamma,\vec{\sigma}}\; \BM_{\gamma}^{\vec{\sigma}}={\BM_{\gamma}}^{\BM_{\vec{\sigma}}} $\ and $\AQ{\oblt{op}}\,\parenth{big}{\Prodim\M_{\alpha_i}^{\vec{\beta_i}}} \restr \BM\oblt{op} = \M\oblt{op} $ then $\AQu{\obl{e}\in\bigcup\limits_{\gamma\in\SRT}\Expr_{\gamma}} \obl{e}_{\BM}=\obl{e}_{\M} $.*
{ "pile_set_name": "ArXiv" }
--- abstract: 'A vacancy defect is described by a Frenkel–Kontorova model with a discommensuration. This vacancy can migrate when it interacts with a moving breather. We establish that the width of the interaction potential must be larger than a threshold value in order that the vacancy can move forward. This value is related to the existence of a breather centred at the particles adjacent to the vacancy.' address: - 'Grupo de Física No Lineal. Departamento de Física Aplicada I. ETSI Informática. Universidad de Sevilla. Avda. Reina Mercedes, s/n. 41012-Sevilla (Spain)' - 'Department of Mathematics. Heriot-Watt University. Edinburgh EH14 4AS (UK)' author: - J Cuevas - C Katerji - JFR Archilla - JC Eilbeck - FM Russell date: 'June 16, 2003' title: Influence of moving breathers on vacancies migration --- Keywords: Discrete breathers, Mobile breathers, Intrinsic localized modes, Defects. PACS: 63.20.Pw, 63.20.Ry, 63.50.+x, 66.90.+r Introduction ============ The interaction of moving localized excitations with defects is presently a subject of great interest and can be connected with certain phenomena observed in crystals and biomolecules. Recently, Sen *et al* [@SAR00] have observed that, when a silicon crystal is irradiated with an ion beam, the defects are pushed towards the edges of the sample. The authors suggest that mobile localized excitations called quodons, which are created in atomic collisions, are responsible for this phenomenon. The interpretation is that the quodons are moving discrete breathers that can appear in 2D and 3D lattices and move following a quasi-one-dimensional path [@MER98]. The interaction of moving breathers with defects is currently of much interest [@CPAR02b; @BSS02; @KMN02] (for a review on the concept of discrete breather see, e.g. [@FW98]). In this paper, we consider a simple one-dimensional model in order to study how a moving breather can cause a lattice defect to move. This study is new in the sense that most studies that consider the interaction of moving discrete breathers with defects assume that the position of the latter is fixed and cannot move through the lattice. The defect that we consider is a lattice vacancy, which is represented by an empty well or anti-kink in a Frenkel–Kontorova model [@FF96]. The aim of this paper is to determine under which conditions the vacancy moves towards the ends of the chain. This is a preliminary step towards reproducing the phenomenon observed in [@SAR00] in higher-dimensional lattices. We have observed, as will be explained in detail in Section \[sec:numres\], that different phenomena can occur: the vacancy can move forwards or backwards or remain at rest, and the breather can be reflected, refracted or trapped. This is quite a different scenario from the continuous case, in which the vacancy (or anti-kink) only moves backwards and the breather is always refracted [@KM89]. The model ========= In order to study the migration of vacancies, we consider a Hamiltonian Frenkel–Kontorova model with an anharmonic interaction potential [@BK98]: $$H=\sum_n\frac{1}{2}\dot x_n^2+V(x_n)+C\,W(x_{n+1}-x_n).$$ The dynamical equations are: $$\label{eq:dyn} F(\{x_n\})\equiv \ddot x_n+V'(x_n)+C\,[W'(x_n-x_{n-1})-W'(x_{n+1}-x_n)]=0,$$ where $\{x_n\}$ are the absolute coordinates of the particles; $V(x)$ is the on–site potential, which is chosen to be of the sine-Gordon type: $$V(x)=\frac{L^2}{4\pi^2}\left(1-\cos\frac{2\pi x}{L}\right),$$ with $L$ being the period of the lattice. The choice of a periodic potential allows us to represent a vacancy easily.
Thus, if we denote the vacancy site as ${n_{\mathrm{v}}}$ (see figure \[fig:FK\]), the displacements of the particles with respect to their equilibrium position are: $$\left\{ \begin{array}{ll} u_n=x_n-nL & n<{n_{\mathrm{v}}}\\ \\ u_n=x_n-(n+1)L & n>{n_{\mathrm{v}}}.\end{array} \right.$$ ![Scheme of the Frenkel–Kontorova model with sine-Gordon on–site potential. The balls represent the particles, which interact through a Morse potential. The vacancy is located at the site ${n_{\mathrm{v}}}$.[]{data-label="fig:FK"}](figfk.eps){width="\singlefig"} The interaction potential $W(x)$ is of the Morse type: $$W(x)=\frac{1}{2b^2}[\exp(-b(x-a))-1]^2,$$ where $a$ is the distance between neighboring minima of the interaction potential. In order to avoid discommensurations, we have chosen $L=a=1$. The parameter $b$ is a measure of the inverse of the width of the interaction potential well. The interaction between particles is stronger when $b$ decreases. The $1/b^2$ factor allows a Taylor expansion of $W(x)$ at $x=a$ independent of $b$ up to second order and, in consequence, the curvature at the bottom of the interaction potential $CW(x)$ depends only on $C$. The reason for the choice of this potential is twofold. On the one hand, it represents a way of modelling the interaction between atoms in a lattice so that the larger the distance between particles, the weaker the interaction between them becomes. On the other hand, if a harmonic interaction potential were chosen, apart from being unphysical in this model, the movement of the breather would involve a great amount of phonon radiation, making it impossible to perform the study developed in this paper. Throughout this paper, the results correspond to a breather frequency ${\omega_{\mathrm{b}}}=0.9$. Values of ${\omega_{\mathrm{b}}}\in[0.9,1)$ lead to qualitatively similar results. Values of ${\omega_{\mathrm{b}}}\lesssim0.9$ have not been chosen as moving breathers do not exist [@MARIN]. Numerical Results {#sec:numres} ================= Preliminaries ------------- In order to investigate the migration of vacancies in our model, we have launched a moving breather towards the vacancy located at the site ${n_{\mathrm{v}}}$. This moving breather has been generated using a simplified form of the marginal mode method [@CAT96; @AC98], which basically consists of adding to the velocity of a stationary breather a perturbation which breaks its translational symmetry, and letting it evolve in time. In these simulations, a damping term for the particles at the edges has been introduced in order that the effects of the phonon radiation be minimized. ------------------------------------------------------------------------------------------------------------------------ ------------------------------------------------------------------------------------------------------------------------ ![Energy density plot of the interaction moving breather–vacancy. The vacancy is located at ${n_{\mathrm{v}}}=0$.
Note that the vacancy moves backwards (left) and, in the case of the figure to the right, the breather passes through the vacancy.[]{data-label="fig:sim1"}](simb1.eps "fig:"){width="\middlefig"} ![Energy density plot of the interaction moving breather–vacancy. The vacancy is located at ${n_{\mathrm{v}}}=0$. Note that the vacancy moves backwards (left) and, in the case of the figure to the right, the breather passes through the vacancy.[]{data-label="fig:sim1"}](simb2.eps "fig:"){width="\middlefig"} ------------------------------------------------------------------------------------------------------------------------ ------------------------------------------------------------------------------------------------------------------------ The initial perturbation, $\{\vec V_n\}$, has been chosen as $\vec V=\lambda(\ldots,0,-1/\sqrt{2},0,1/\sqrt{2},0,\ldots)$, where the nonzero values correspond to the neighboring sites of the initial center of the breather. This choice of the perturbation allows it to be independent of the parameters of the system, $b$ or $C$. If the pinning mode were chosen as an initial perturbation, it would depend on the parameters of the system. Breather–vacancy interaction ---------------------------- When a moving breather reaches the site occupied by the particle adjacent to the vacancy, i.e., the location ${n_{\mathrm{v}}}-1$, it can jump to the vacancy site or remain at rest. If the former takes place, the vacancy moves backwards. However, if the interaction potential is wide enough, the particle at the ${n_{\mathrm{v}}}+1$ site can feel the effect of the moving breather at the ${n_{\mathrm{v}}}-1$ site and it can also move towards the vacancy site. In this last case, the vacancy moves forwards. Figures \[fig:sim1\] and \[fig:sim2\] illustrate both phenomena. It is interesting that the vacancy can migrate along several sites before stopping if the interaction between particles is strong enough (see Figure \[fig:sim3\]). The largest jumps we have detected are of eleven sites. There is no apparent correlation between the characteristics of the moving breather, e.g. its kinetic energy and its phase (which has no obvious definition but depends on the initial distance between the breather and the vacancy and the initial velocity of the breather). As an example, Figure \[fig:phase\] shows the vacancy jumps corresponding to different values of the translational kinetic energy of the breather. We have not been able to detect any pattern. The same plot with respect to the breather distance to the vacancy has a similar appearance.
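For concreteness, the following is a minimal sketch of the model just described — the chain of Eq. (\[eq:dyn\]) with the sine-Gordon on-site potential, the Morse coupling and a vacancy realized as an empty well — integrated with a bare velocity-Verlet loop. The parameter values, chain length and time step are illustrative assumptions, and several ingredients of the actual simulations are deliberately omitted: there is no edge damping, no relaxation of the vacancy to the exact static (anti-kink) configuration, and no moving breather is launched.

```python
# Minimal sketch of the Frenkel-Kontorova chain of Eq. (eq:dyn) with a vacancy.
import numpy as np

L = a = 1.0           # lattice period and Morse minimum distance (L = a = 1)
b, C = 1.0, 0.5       # inverse width of the Morse well and coupling (illustrative)

def Vp(x):            # V'(x) for V(x) = (L^2 / 4 pi^2) (1 - cos(2 pi x / L))
    return (L / (2.0 * np.pi)) * np.sin(2.0 * np.pi * x / L)

def Wp(r):            # W'(r) for W(r) = (1 / 2 b^2) (exp(-b (r - a)) - 1)^2
    e = np.exp(-b * (r - a))
    return -(e - 1.0) * e / b

def accel(x):
    """Force of Eq. (eq:dyn) on each particle; free ends, no damping."""
    r = np.diff(x)                 # bond lengths x_{n+1} - x_n
    f = -Vp(x)
    f[1:]  -= C * Wp(r)            # - C W'(x_n - x_{n-1})
    f[:-1] += C * Wp(r)            # + C W'(x_{n+1} - x_n)
    return f

# Particles at the bottoms of the wells n*L, leaving the well n_v empty.
# (Approximate initial condition: in practice one would first relax this
# configuration to the static vacancy solution.)
n_wells, n_v = 41, 20
x = np.array([n * L for n in range(n_wells) if n != n_v], dtype=float)
v = np.zeros_like(x)

dt = 0.05
for _ in range(4000):              # velocity Verlet
    a0 = accel(x)
    x += v * dt + 0.5 * a0 * dt * dt
    v += 0.5 * (a0 + accel(x)) * dt
```

Launching a breather on top of this configuration then amounts to superimposing the stationary breather solution and the symmetry-breaking velocity perturbation $\vec V$ described above.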
![Energy density plot showing the interaction of the moving breather with the vacancy. The latter is located at ${n_{\mathrm{v}}}=0$. Note that, in the figure to the left, the breather is reflected and the vacancy remains at rest, while in the figure to the right, the vacancy moves forwards.[]{data-label="fig:sim2"}](simre.eps "fig:"){width="\middlefig"} ![Energy density plot showing the interaction of the moving breather with the vacancy. The latter is located at ${n_{\mathrm{v}}}=0$. Note that, in the figure to the left, the breather is reflected and the vacancy remains at rest, while in the figure to the right, the vacancy moves forwards.[]{data-label="fig:sim2"}](simf1.eps "fig:"){width="\middlefig"}

Numerical simulations show that the occurrence of the three different cases depends highly on the relative phase of the incoming breather and the particles adjacent to the vacancy. However, some conclusions can be extracted: 1) The incident breather always loses energy; 2) The breather can be reflected, trapped (with emission of energy) or refracted by the vacancy, in analogy to the interaction between a moving breather and a mass defect [@CPAR02b]; 3) The refraction of the breather (i.e. the breather passes through the vacancy) can only take place if the vacancy moves backwards, i.e. the particle to the left of the vacancy jumps one site in the direction of the breather. An explanation of this fact is that the particles to the right of the vacancy, in order to support a moving breather, need a strong interaction which cannot be provided by the interaction across a vacancy site, because the corresponding distance lies in the soft part of the Morse potential.
![Left: energy density plot of the interaction moving breather–vacancy. The vacancy is located at ${n_{\mathrm{v}}}=0$. It can travel several sites along the lattice and eventually stops. Right: detail of the center of the plot showing the variables.[]{data-label="fig:sim3"}](simtv.eps "fig:"){width="\middlefig"} ![Left: energy density plot of the interaction moving breather–vacancy. The vacancy is located at ${n_{\mathrm{v}}}=0$. It can travel several sites along the lattice and eventually stops. Right: detail of the center of the plot showing the variables.[]{data-label="fig:sim3"}](coord.eps "fig:"){width="\middlefig"}

Numerical simulations
---------------------

As mentioned earlier, the moving breather–vacancy interaction is highly phase-dependent in a non-obvious way. That is, the interaction depends on the velocity of the breather and on the distance between the breather and the vacancy. Consequently, a systematic study of the state of the moving breather and the vacancy after the interaction cannot be performed.

![(Left) Number of sites that the vacancy jumps after its interaction with a moving breather. (Right) Zoom on a part of the left figure. Note that there is no apparent correlation.[]{data-label="fig:phase"}](phase1.eps "fig:"){width="\middlefig"} ![(Left) Number of sites that the vacancy jumps after its interaction with a moving breather. (Right) Zoom on a part of the left figure. Note that there is no apparent correlation.[]{data-label="fig:phase"}](phase2.eps "fig:"){width="\middlefig"}

Therefore, we have performed a great number of simulations, each consisting of launching a single breather towards the vacancy site. In particular, we have chosen 1000 breathers following a Gaussian distribution of the perturbation parameter $\lambda$ with mean value $0.13$ and variance $0.03$, for different values of the parameters $b$ and $C$. Figure \[fig:rands\] shows the probabilities that the vacancy remains at its original site, or that it jumps backwards or forwards, for $C=0.5$ and $C=0.4$. Figure \[fig:randav\] shows the mean values of the number of vacancy jumps for the forward and backward movement as a function of the inverse potential width $b$.

![Probability that the vacancy remains at its site (squares), moves backwards (circles) or moves forwards (triangles), for a Gaussian distribution of $\lambda$, as a function of the inverse potential width $b$.[]{data-label="fig:rands"}](rands5.eps "fig:"){width="\middlefig"} ![Probability that the vacancy remains at its site (squares), moves backwards (circles) or moves forwards (triangles), for a Gaussian distribution of $\lambda$, as a function of the inverse potential width $b$.[]{data-label="fig:rands"}](rands4.eps "fig:"){width="\middlefig"}

![Mean value of the number of vacancy jumps for the backward (circles) and forward (triangles) movements as a function of the inverse potential width $b$. The results correspond to the simulations performed to obtain Figure \[fig:rands\].[]{data-label="fig:randav"}](randav5.eps "fig:"){width="\middlefig"} ![Mean value of the number of vacancy jumps for the backward (circles) and forward (triangles) movements as a function of the inverse potential width $b$. The results correspond to the simulations performed to obtain Figure \[fig:rands\].[]{data-label="fig:randav"}](randav4.eps "fig:"){width="\middlefig"}

An important consequence can be extracted from this figure. There are two different regions of values for the parameter $b$, separated by a critical value $b_0(C)$. For $b>b_0(C)$, the probability that the vacancy moves forwards is almost zero, whereas for $b<b_0(C)$, this probability is significant. For example, $b_0(C=0.5)\approx0.70$ and $b_0(C=0.4)\approx0.55$. Figure \[fig:unifs\] represents this dependence for a uniform distribution of $\lambda\in(0.10,0.16)$, and shows the occurrence of the same phenomenon. Thus, this result seems to be independent of how $\lambda$ is distributed.

![Probability that the vacancy remains at its site (squares), moves backwards (circles) or moves forwards (triangles), for a uniform distribution of $\lambda$.[]{data-label="fig:unifs"}](unifs.eps){width="\singlefig"}

Analysis of some results. Vacancy breather bifurcation.
-------------------------------------------------------

The non-existence of forward vacancy migration can be explained through a bifurcation. If we analyze the spectrum of the Jacobian of the dynamical equations (\[eq:dyn\]), defined by $\mathcal{J}\equiv\partial_xF(\{x_n\})$, bifurcations can be detected. A necessary condition for the occurrence of a bifurcation is that an eigenvalue of $\mathcal{J}$ becomes zero. Figure \[fig:jacrand\] shows the dependence of the eigenvalues closest to zero with respect to $b$ for $C=0.5$ and $C=0.4$. It can be observed that, in both cases, there is an eigenvalue that crosses zero in $b\in(0.65,0.70)$ for $C=0.5$ and in $b\in(0.50,0.55)$ for $C=0.4$. These values agree with the points where the probability of a forward jump vanishes.
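Since the dynamical equations (\[eq:dyn\]) are only referenced here, the following sketch is necessarily generic: it assumes a user-supplied force function $F(\{x_n\})$ for the chain and estimates the spectrum of $\mathcal{J}=\partial_x F$ by finite differences, which is the kind of calculation needed to track an eigenvalue crossing zero as $b$ is varied. The harmonic force used below is only a placeholder for the actual sine-Gordon/Morse forces of the model.

```python
import numpy as np

def jacobian_eigenvalues(force, x0, h=1e-6):
    """Eigenvalues of the finite-difference Jacobian J_ij = dF_i/dx_j at x0."""
    n = len(x0)
    jac = np.zeros((n, n))
    for j in range(n):
        dx = np.zeros(n)
        dx[j] = h
        jac[:, j] = (force(x0 + dx) - force(x0 - dx)) / (2.0 * h)
    return np.sort(np.linalg.eigvals(jac).real)

def placeholder_force(x, C=0.5):
    # Harmonic nearest-neighbor coupling with periodic boundaries (placeholder only).
    return C * (np.roll(x, 1) - 2.0 * x + np.roll(x, -1))

print(jacobian_eigenvalues(placeholder_force, np.zeros(16))[:4])
```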
![Dependence of the Jacobian eigenvalues with respect to $b$. It can be observed that one eigenvalue changes its sign, and another is constant and close to zero. The first one is responsible for the bifurcation studied in the text, while the second one indicates the quasi-stability necessary for breather mobility [@CAGR02].[]{data-label="fig:jacrand"}](jacrand5.eps "fig:"){width="\middlefig"} ![Dependence of the Jacobian eigenvalues with respect to $b$. It can be observed that one eigenvalue changes its sign, and another is constant and close to zero. The first one is responsible for the bifurcation studied in the text, while the second one indicates the quasi-stability necessary for breather mobility [@CAGR02].[]{data-label="fig:jacrand"}](jacrand4.eps "fig:"){width="\middlefig"}

These bifurcations are related to the disappearance of the entities we call *vacancy breathers*. They are defined as breathers centered at a site neighboring the vacancy, e.g. the ${n_{\mathrm{v}}}-1$ or ${n_{\mathrm{v}}}+1$ sites. It can be observed (figure \[fig:amprand\]) that, for $b$ below the bifurcation value, vacancy breathers do not exist.

![Amplitude maxima of a vacancy breather versus $b$. It can be observed that the vacancy breather disappears at the bifurcation point (see figure \[fig:jacrand\]). This is related to the vanishing of the forward movement probability.[]{data-label="fig:amprand"}](amprand5.eps "fig:"){width="\middlefig"} ![Amplitude maxima of a vacancy breather versus $b$. It can be observed that the vacancy breather disappears at the bifurcation point (see figure \[fig:jacrand\]). This is related to the vanishing of the forward movement probability.[]{data-label="fig:amprand"}](amprand4.eps "fig:"){width="\middlefig"}

Conclusions
===========

In this paper, we have observed that a moving breather can force a vacancy defect to move forwards, to move backwards, or to remain at its site. We have also analyzed the influence of the width of the coupling potential and of the coupling strength on the possibility of movement of a vacancy after the collision with a moving breather. We have observed that the width of the potential must be larger than a threshold value for the vacancy to be able to move forwards. This behaviour is relevant because experiments in crystals show that the defects are pushed towards the edges. We have also established that the non-existence of a breather centered at the sites adjacent to the vacancy is a necessary condition for the forward vacancy movement. The incident breathers can be trapped, in the sense that the energy becomes localized at the vacancy next-neighbors, which radiate, and eventually the energy spreads through the lattice. They can also be transmitted or reflected. The transmission can only occur if the vacancy moves backwards. The moving breather always loses energy, but there is no clear correlation between the vacancy and breather behaviours.

The authors acknowledge Prof. F Palmero, from the GFNL of the University of Sevilla, for valuable suggestions. They also acknowledge partial support under the European Commission RTN project LOCNET, HPRN-CT-1999-00163. J Cuevas also acknowledges an FPDI grant from ‘La Junta de Andalucía’.

[10]{} P Sen, J Akhtar, and FM Russell. MeV ion-induced movement of lattice disorder in single crystalline silicon. , 51:401, 2000. JL Marín, JC Eilbeck, and FM Russell. Localized moving breathers in a 2-[D]{} hexagonal lattice. , 248:225, 1998. J Cuevas, F Palmero, JFR Archilla, and FR Romero. Moving discrete breathers in a [K]{}lein–[G]{}ordon chain with an impurity. , 35:10519, 2002. I Bena, A Saxena, and JM Sancho. Interaction of a discrete breather with a lattice junction. , 65:036617, 2002. PG Kevrekidis, BA Malomed, HE Nistazakis, DJ Frantzeskakis, A Saxena, and AR Bishop. Scattering of a solitary pulse on a local defect or breather. , 66:193, 2002. S Flach and CR Willis. Discrete breathers. , 295:181, 1998. LM Floría and JJ Mazo. Dissipative dynamics of the [F]{}renkel-[K]{}ontorova model. , 45:505, 1996. YuS Kivshar and BA Malomed. Dynamics of solitons in nearly integrable systems. , 61:763, 1989. OM Braun and YuS Kivshar. Nonlinear dynamics of the [F]{}renkel–[K]{}ontorova model. , 306:1, 1998.
JL Marín. dissertation, University of Zaragoza, Department of Condensed Matter, June 1997. Ding Chen, S Aubry, and GP Tsironis. Breather mobility in discrete $\phi^4$ lattices. , 77:4776, 1996. S Aubry and T Cretegny. Mobility and reactivity of discrete breathers. , 119:34, 1998. J Cuevas, JFR Archilla, YuB Gaididei, and FR Romero. Moving breathers in a [DNA]{} model with competing short- and long-range dispersive interactions. , 163:106, 2002.
--- abstract: 'We consider non-relativistic systems in quantum mechanics interacting through the Coulomb potential, and discuss the existence of bound states which are stable against spontaneous dissociation into smaller atoms or ions. We review the studies that have been made of specific mass configurations and also the properties of the domain of stability in the space of masses or inverse masses. These rigorous results are supplemented by numerical investigations using accurate variational methods. A section is devoted to systems of three arbitrary charges and another to molecules in a world with two space-dimensions.' author: - 'E.A.G. Armour' - 'J.-M. Richard' - 'K. Varga' bibliography: - 'stabrev.bib' date: 'Last update , by JMR' title: | STABILITY OF FEW-CHARGE SYSTEMS\ IN QUANTUM MECHANICS --- We would like to thank our collaborators on the topics covered by this review, W. Byers Brown, S. Fleck, A. Krikeb, A. Martin and Tai T. Wu, for their encouragement and useful advice. E.A.G.A. thanks EPSRC (UK) for support for this research through grants GR/L29170 and GR/R26672. K.V. is supported by OTKA grants (Hungary) T029003 and T037991 and he is sponsored by the U.S. Department of Energy under contract DE-AC05-00OR22725 with the Oak Ridge National Laboratory, managed by UT-Battelle, LLC. J.-M. R. benefitted from the hospitality of IPNL, Université de Lyon, where part of this work was done.
--- abstract: 'We present PyXtal, a new package based on the Python programming language, used to generate structures with specific symmetry and chemical compositions for both atomic and molecular systems. This software provides support for various systems described by point, rod, layer, and space group symmetries. With only the inputs of chemical composition and symmetry group information, PyXtal can automatically find a suitable combination of Wyckoff positions with a step-wise merging scheme. Further, when the molecular geometry is given, PyXtal can generate organic crystals of different dimensionalities with molecules occupying both general and special Wyckoff positions. Optionally, PyXtal also accepts user-defined parameters (e.g., cell parameters, minimum distances). In general, PyXtal serves two purposes: (1) it can be used to generate custom structures, and (2) it can be interfaced with existing structure prediction codes that require the generation of random symmetric structures. In addition, we provide several utilities that facilitate the analysis of structures, including symmetry analysis, geometry optimization, and simulations of powder X-ray diffraction. Full documentation of PyXtal is available at <https://pyxtal.readthedocs.io>.' address: 'Department of Physics and Astronomy, University of Nevada Las Vegas, Las Vegas, Nevada 89154, USA' author: - Scott Fredericks - Dean Sayre - Qiang Zhu bibliography: - 'ref.bib' title: 'PyXtal: a Python Library for Crystal Structure Generation and Symmetry Analysis' --- Symmetry; Crystallography; Structure Prediction; Wyckoff sites; Global optimization

[**PROGRAM SUMMARY**]{}\
[*Program Title:*]{} PyXtal\
[*Licensing provisions:*]{} MIT [@1]\
[*Programming language:*]{} Python 3\
[*Nature of problem:*]{} Knowledge of structure at the atomic level is the key to understanding materials' properties. Typically, the structure of a material can be determined either from experiment (such as X-ray diffraction, spectroscopy, microscopy) or from theory (e.g., enhanced sampling, structure prediction). In many cases, the structure needs to be solved iteratively by generating a number of trial structure models satisfying some constraints (e.g., chemical composition, symmetry, and unit cell parameters). Therefore, it is desirable to have a computational code able to generate such trial structures in an automated manner.\
[*Solution method:*]{} The PyXtal package is able to generate many possible random structures for both atomic and molecular systems with all possible symmetries. To generate a trial structure, the algorithm can either pick the symmetry sites at random, from low to high multiplicities, or use sites that are predefined by the user. For molecules, the algorithm can automatically detect the molecules' symmetry and place them into special Wyckoff positions while satisfying the compatible site symmetry. With support for the symmetry operations of point, rod, layer and space groups, PyXtal is suitable for the computational modeling of zero-, one-, two-, and three-dimensional systems.\
 \
[0]{} <https://opensource.org/licenses/MIT>

Introduction {#intro}
============

Knowing the atomic structure is the key to understanding the properties of materials. Ideally, the full atomic structure can be experimentally determined through single crystal X-ray diffraction.
If a single crystal sample is not available, only partial structural information can be extracted from various characterizations, such as powder X-ray diffraction/absorption, Raman spectroscopy, nuclear magnetic resonance, and electron microscopy. Based on this partial structural information (e.g., symmetry, unit cell), a number of trial structures are constructed and optimized at their corresponding thermodynamic conditions. The simulated pattern for each relaxed structure is then compared with the observed one. By doing this iteratively, the structure can be finally resolved. It has been previously demonstrated that structures can be predicted computationally using first-principles [@Oganov-Book-2011; @Oganov2019]. The basic idea of computational structure prediction is to guess the correct crystal structure under specific conditions by computationally sampling a wide range of possible structures via different global optimization techniques (e.g., random search [@Pickard-JPCM-2011], metadynamics [@Martonak-PRL-2003], basin hopping [@LJClusters], evolutionary algorithms [@Oganov-JCP-2006; @XtalOpt], particle swarm optimization [@Wang-PRB-2010]). After many attempts, the most energetically stable structure found is the one most likely to exist. For structure determination from either partial experimental information or pure computation, a number of trial structures are needed. It is generally believed that by beginning with already-symmetric structures, fewer attempts are needed to find the global energy minimum [@Pickard-JPCM-2011; @Lyakhov-CPC-2013]. For inorganic crystals, symmetry constraints have been encoded in many computational structure prediction codes such as AIRSS [@Pickard-JPCM-2011], USPEX [@Lyakhov-CPC-2013], CALYPSO [@CALYPSO] and XtalOpt [@XtalOpt]. For a given crystal with symmetry, the atomic positions are classified by Wyckoff positions (WP) [@1.2]. Two approaches are used to place the atoms into the Wyckoff sites so that the structure satisfies the desired symmetry. One is to pre-generate a set of WPs and then add atoms to these sites [@CALYPSO; @randspg]. The other is to place atoms to the most general WPs, and then merge them to the special sites if there exist close atomic pairs [@Lyakhov-CPC-2013; @zhu2012]. This will be repeated until the desired stoichiometry is achieved. The development of new computational tools has allowed the structures of many new and increasingly complex materials to be anticipated [@Oganov2019]. For the prediction of organic crystals, the role of symmetry is even more pronounced. In the periodically conducted Blind Tests of organic crystal structure prediction organized by the Cambridge Crystallographic Data Centre [@reilly:2016:6th_blind_test], most research groups attempted to reduce the structure generation to a limited range of space group choices with one molecule in the asymmetric unit ($Z^\prime$). This is based on a statistical analysis that most organic crystals tend to crystallize in only a few space groups with $Z^\prime$ = 1 [@Baur-1992]. Currently, there exist a few free packages [@upack; @molpack] which allow the generation of random molecular crystals with $Z^\prime$ = 1. Combined with modern structure search algorithms (including quasi-random [@Case-JCTC-2016], parallel tempering [@Neumann-ANIE-2008], genetic algorithms [@Curtis-JCTC-2018], and evolutionary methods [@zhu2012]), one can perform an extensive search for the plausible structures. 
Their energies can then be evaluated with different energy models from the empirical to ab-initio level. A recent blind test [@reilly:2016:6th_blind_test] has shown that the combination of effective structure generation and energy ranking scheme can predict not only the structure of simple rigid molecules, but also the molecules representing real-life challenges. Despite the fact that many programs have their own built-in functions to generate crystals with specific space groups or clusters with specific point groups, most of these functions are implemented in the main packages and cannot work in a standalone manner. To our knowledge, there is only one open source code (Randspg [@randspg]) which provides the interface to generate 3D atomic crystal structures. Similarly, most molecular crystal generators only support molecules occupying the general WPs with $Z^\prime$ = 1, except for the recent development of Genarris 2.0 [@tom2019genarris] which is able to deal with structures having a non-integer value of $Z^\prime$ (meaning molecules can occupy special WPs). So far, there is no single code which enables the generation of molecular crystals with arbitrary $Z^\prime$, varying from fractional to multiple integers. While 90% of organic crystals in the Cambridge Structure Database (CSD) have $Z^\prime$=1, recent advances in experimental polymorph searching and crystal engineering highlight the rich variety of multi-component crystals (co-crystals, salts, solvates, etc) as well as crystal structures with multiple molecules in the asymmetric unit. For instance, many well-studied molecules, including aspirin [@Shtukenberg-CGD-2017], resorcinol [@zhu:2016:resorscinol], coumarin [@Shtukenberg-CC-2017], glycine [@Xu-ANIE-2017], DDT [@kahr:2017:DDT_polymorphs], and ROY [@Tan-FD-2018], were found to adopt crystal structures with $Z^\prime>1$. Lastly, neither Randspg nor Genarris supports the generation of low dimensional crystals, which require explicit consideration of layer/rod/point-group (instead of space-group) symmetry operations. Collectively, these cases motivated us to develop a standalone Python program called PyXtal which can be used for customized structure generation for different-dimensional systems including atomic clusters and 1D/2D/3D atomic/molecular crystals. In sections \[algo\] and \[dependence\], we will detail the algorithms and the software dependencies. The basic useages of PyXtal will be introduced in Section \[usage\], followed by two example studies using PyXtal in the context of structure prediction in Section \[examples\]. Finally, we summarize the features of PyXtal and conclude the manuscript in Section \[conclusion\]. Algorithms {#algo} ========== PyXtal adopts the following algorithm to generate a trial structure. First, the user inputs their choice of dimension (0, 1, 2, or 3), symmetry group, stoichiometry, and relative volume of the unit cell. Optionally, additional parameters may be chosen which constrain the unit cell and maximum inter-atomic distance tolerances. This is implemented through the *pyxtal.crystal.random\_crystal* and *pyxtal.molecular\_crystal.molecular\_crystal* Python classes. Next, PyXtal checks if the stoichiometry is compatible with the choice of symmetry group. If the check passes, trial structure generation begins. Figure \[fig:Flowchart\] shows a flowchart of the algorithm. ![PyXtal Structure Generation Flowchart. 
Generation is based on inputs from the user.[]{data-label="fig:Flowchart"}](Flowchart.png){width="25.00000%"}

Each remaining step has a maximum number of attempts. If the generation attempt fails at any point, the algorithm will revert progress for the current step and try again until the maximum limit of attempts is encountered. This ensures that the algorithm stops in a reasonable amount of time, while still giving each generated parameter a chance for success. For certain inputs, structure generation may take many attempts or fail after the maximum number of attempts. Typically, these failures indicate that the input parameters are not likely to produce a realistic structure without fine-tuning the atomic positions. In such cases, a larger unit cell volume or a smaller distance tolerance may prevent failure. Below we discuss the technical details implemented during structure generation.

Wyckoff Compatibility Checking
------------------------------

Before generating a trial structure, PyXtal performs a WP compatibility check. Since WPs in different space groups have different multiplicities, this is a required step that ensures compatibility between a stoichiometry and its assigned space group. For example, consider the space group *Pn-3n* (\#222), which has a minimum WP of 2a, followed by 6b. To create a crystal structure with 4 atoms in the unit cell for this symmetry group, the combination of Wyckoff positions must add up to 4. Here, this is not possible. The position 2a cannot be repeated, because it falls on the exact coordinates (1/4, 1/4, 1/4) and (3/4, 3/4, 3/4). A second set of atoms in the 2a position would overlap the atoms in the first position, which is physically impossible. Thus, from our previous discussion, it is necessary to check the input stoichiometry against the WPs of the desired space group. PyXtal implements this by iterating through all possible combinations of WPs within the constraints of the given stoichiometry. As soon as a valid combination is found, the check returns True. Otherwise, if no valid combination is found, the check returns False and the generation attempt raises a warning. Some space groups allow valid combinations of WPs, but do not permit many (or any) positional degrees of freedom within the structure. It may also be the case that the allowed combinations result in atoms which are too close together. In these cases, PyXtal will attempt generation as usual: it will continue to search for a compatible structure until the maximum limit is reached, or until a successful generation occurs. In the event that structure generation repeatedly fails for a given combination of space group and stoichiometry, the user should make note and avoid the combination going forward.
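To make the combinatorial nature of this check explicit, the following minimal sketch (an illustration of the idea, not the PyXtal implementation, and restricted to a single species) tests whether a target atom count can be written as a sum of Wyckoff multiplicities, reusing a position only if it has free coordinates:

```python
def is_compatible(n_atoms, multiplicities, has_freedom):
    """Check whether n_atoms can be decomposed into Wyckoff multiplicities.

    multiplicities : list of ints, largest first (e.g. [6, 2] for 6b and 2a)
    has_freedom    : parallel list of bools; False marks a fixed special
                     position, which may be used at most once.
    """
    def search(remaining, start, used):
        if remaining == 0:
            return True
        for i in range(start, len(multiplicities)):
            m = multiplicities[i]
            if m > remaining or (not has_freedom[i] and i in used):
                continue
            if search(remaining - m, i, used | {i}):
                return True
        return False

    return search(n_atoms, 0, frozenset())

# Space group Pn-3n (#222), keeping only its two smallest positions 6b and 2a
# (both treated as fixed special points for this illustration): 4 atoms per
# cell cannot be accommodated, while 8 atoms (6b + 2a) can.
print(is_compatible(4, [6, 2], [False, False]))   # False
print(is_compatible(8, [6, 2], [False, False]))   # True
```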
Lattice Generation
------------------

The first step in PyXtal's structure generation is the choice of unit cell. Depending on the symmetry group, a specific type of lattice must be generated. For all crystals, the conventional cell choice is used to avoid ambiguity. Lattice information can be pre-defined by the user, in either vector form ($a$, $b$, $c$, $\alpha$, $\beta$, $\gamma$) or in the form of a 3$\times$3 matrix. If lattice information is not provided, PyXtal will attempt to estimate the volume based on the chemical composition, resulting in the generation of a random unit cell which satisfies the input constraints. The most general case is the triclinic cell, from which other cell types can be obtained by applying certain constraints.

To generate a triclinic cell, 3 real numbers are randomly chosen (using a Gaussian distribution centered at 0) as the off-diagonal values for a 3$\times$3 shear matrix. By treating this shear matrix as a cell matrix, one obtains 3 lattice angles. For the lattice vector lengths, a random 3-vector between (0,0,0) and (1,1,1) is chosen (using a Gaussian distribution centered at (0.5,0.5,0.5)). The relative values of the x, y, and z coordinates are used for $a$, $b$, and $c$ respectively, and scaled based on the required volume. For other cell types, any free parameters are obtained using the same methods as for the triclinic case, and then constraints are applied. In the tetragonal case, for example, all angles are fixed to 90 degrees, so only a random vector is needed to generate the lattice constants.

For low-dimensional systems, not all three unit cell axes are periodic. Therefore, the algorithm must be altered slightly, as described below.

For the 2D case, we choose $c$ to be the non-periodic axis by default. For layer groups 3-7 ($P112$, $P11m$, $P11a$, $P112/m$, $P112/a$), $c$ is also the unique axis; for all other layer groups, $a$ is the unique axis. The length of $c$ (the crystal's "thickness") is an optional parameter which can be specified by the user. If no thickness is given, the algorithm will automatically compute a random value based on a Gaussian distribution centered at the cube root of the estimated volume. In other words, $c$ will have the same length as the other axes on average.

For the 1D case, $c$ is the periodic axis by default. For rod groups 3-7 ($P221$, $Pm11$, $Pc11$, $P2/m11$, $P2/c11$), $a$ is the unique axis; for all other rod groups, $c$ is the unique axis. Instead of choosing a value for the thickness, we constrain the unit cell based on the cross-sectional area of the *a-b* plane. This area can be either specified by the user or generated randomly. As with the 2D and 3D cases, there is no preference for any axis to be longer or shorter than the others unless specified by the user.

For 0D clusters, we constrain the atoms to lie within either a sphere or an ellipsoid, depending on the point group. For spherically or polyhedrally symmetric point groups ($C_1$, $C_i$, $D_2$, $D_{2h}$, $T$, $T_h$, $O$, $T_d$, $O_h$, $I$, $I_h$), we define a sphere centered on the origin. For all other point groups (which have a unique rotational axis), we define an ellipsoid with its $c$-axis aligned with the rotational axis. The $a$- and $b$-axes are always of equal length to ensure rotational symmetry about the $c$-axis. The relative lengths for the ellipsoidal axes are chosen in the same way as for the 3D tetragonal case. In order for the 0D case to be compatible with the 1D, 2D, and 3D cases, we encode the spheres and ellipsoids as lattices (a cubic lattice for a sphere, or a tetragonal lattice for an ellipsoid). Then, when generating atomic coordinates, we check whether the randomly chosen point lies within the sphere or ellipsoid. If not, we simply retry until it does.
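As an illustration of the triclinic case described above (a simplified sketch, not the PyXtal source; the widths of the Gaussian distributions are arbitrary choices here), the angles can be read off a random shear matrix and the three lengths rescaled to the target volume:

```python
import numpy as np

def random_triclinic_cell(volume, shear_sigma=0.2):
    """Return (a, b, c, alpha, beta, gamma) for a random triclinic cell."""
    # Angles: fill the off-diagonal of a shear matrix and treat it as a cell matrix.
    cell = np.eye(3)
    cell[0, 1], cell[0, 2], cell[1, 2] = np.random.normal(0.0, shear_sigma, 3)

    def angle(u, v):
        return np.degrees(np.arccos(np.dot(u, v) / (np.linalg.norm(u) * np.linalg.norm(v))))

    alpha, beta, gamma = angle(cell[1], cell[2]), angle(cell[0], cell[2]), angle(cell[0], cell[1])

    # Lengths: random ratios around (0.5, 0.5, 0.5), then rescale to the target volume.
    a, b, c = np.clip(np.random.normal(0.5, 0.1, 3), 0.2, 0.8)
    ca, cb, cg = np.cos(np.radians([alpha, beta, gamma]))
    shape = np.sqrt(1.0 - ca**2 - cb**2 - cg**2 + 2.0 * ca * cb * cg)
    scale = (volume / (a * b * c * shape)) ** (1.0 / 3.0)
    return a * scale, b * scale, c * scale, alpha, beta, gamma

print(random_triclinic_cell(volume=160.0))
```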
Wyckoff Position Selection and Merging
--------------------------------------

The central building block for crystals in PyXtal is the WP. Once a space group and lattice are chosen, WPs are inserted one at a time to add structure. In PyXtal, we closely follow the algorithm provided in Ref. [@Lyakhov-CPC-2013] to place the atoms in different WPs. In general, PyXtal starts with the largest available WP, which is the general position of the symmetry group. If the number of atoms required is equal to or greater than the size of the general position, the algorithm proceeds. If fewer atoms are needed, the next largest WP (or set of WPs) is chosen, in order of descending multiplicity. This is done to ensure that larger positions are preferred over smaller ones; this reflects the greater prevalence of larger multiplicities seen in nature.

Once a WP is chosen, a random 3-vector between (0,0,0) and (1,1,1) is created. We call this the generating point for the WP. Using the closest projection of this vector onto the WP (the WP being a periodic set of points, lines, or planes), one obtains a set of coordinates in real space (the atomic positions for that WP). Then, the distances between these coordinates are checked. If the atom-atom distances are all greater than a pre-defined limit, the WP is kept and the algorithm continues. If any of the distances are too small, it is an indication that the WP would not occur with the chosen generating point. In this case, the coordinates are merged together into a smaller WP, if possible. This merging continues until the atoms are no longer too close together (see Figure \[fig:WyckoffMerging\]). To merge into a smaller position, the original generating point is projected into each of the remaining WPs. The WP with the smallest translation between the original point and the transformed point is chosen, provided that (1) the new WP is a subset of the original one, and (2) the new points are not too close to each other. If the atoms are still too close together after all possible mergings, the WP is discarded and another attempt is made.

![Wyckoff Position Merging Example. Shown are possible mergings of the general position 8c of the 2D point group 4mm. Moving from 8c to 4b (along the solid arrows) requires a smaller translation than from 8c to 4a (along the dashed arrows). Thus, if the atoms in 8c were too close together, PyXtal would merge them into 4b instead of 4a. The atoms could be further merged into position 1o by following the arrows shown in the bottom right image.[]{data-label="fig:WyckoffMerging"}](merge.pdf){width="45.00000%"}

Once a WP is successfully filled, the inter-atomic distances between the current WP and the already-added WPs are checked. If all distances are acceptable, the algorithm continues. More WPs are then added as needed until the desired number of atoms is reached. At this point, either a satisfactory structure has been generated, or the generation has failed. If the generation fails, then choosing either smaller distance tolerances or a larger volume factor may increase the chances of success. However, altering these quantities too drastically may result in less realistic crystals. Common sense and system-specific considerations should be applied when adjusting these parameters.

Distance Checking
-----------------

To produce structures with realistic bonds and bond lengths, the generated atoms should not be too close together. In PyXtal, this means that, by default, two atoms should be no closer than the covalent bond length between them. However, for a given application, the user may decide that shorter or longer cutoff distances are appropriate. For this reason, PyXtal has a custom *tolerance matrix* class which allows the user to define the distances allowed between any two atomic species. There are also options to use the metallic bond lengths, or to simply scale the allowed distances by some factor.
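As a simple illustration of such a species-pair cutoff table (a generic sketch with rounded covalent radii, not the interface of PyXtal's tolerance-matrix class):

```python
# Approximate covalent radii in Angstrom (rounded values, for illustration only).
COVALENT_RADII = {"H": 0.31, "C": 0.76, "N": 0.71, "O": 0.66, "Si": 1.11}

def make_tolerance_matrix(species, factor=1.0):
    """Minimum allowed separation for every ordered pair of species."""
    return {(s1, s2): factor * (COVALENT_RADII[s1] + COVALENT_RADII[s2])
            for s1 in species for s2 in species}

tolerances = make_tolerance_matrix(["C", "H"], factor=1.0)
print(tolerances[("C", "H")])   # 1.07: a C-H contact shorter than this is rejected
```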
Because crystals have periodic symmetry, any point in a crystal actually corresponds to an infinite lattice of points. Likewise, any separation vector between two points actually corresponds to an infinite number of separation vectors. For the purposes of distance checking, only the shortest of these vectors are relevant. When a lattice is non-Euclidean, the problem of finding shortest distances with periodic boundary conditions is non-trivial, and the general solution can be computationally expensive [@LatticeProblem]. So instead, PyXtal uses an approximate solution based on assumptions about the lattice geometry: For any two given points, PyXtal first considers only the separation vector which lies within the “central” unit cell spanning between (0,0,0) and (1,1,1). For example, if the original two (fractional) points are (-8.1, 5.2, -4.8) and (2.7, -7.4, 9.3), one can directly obtain the separation vector (-10.8, 12.6, -14.1). This vector lies outside of the central unit cell, so we translate by the integer-valued vector (11.0, -12.0, 15.0) to obtain (0.2, 0.6, 0.9), which lies within the central unit cell. PyXtal also considers those vectors lying within a 3$\times$3$\times$3 supercell centered on the first vector. In this example, these would include (1.2, 1.6, 1.9), (-0.8, -0.4, -0.1), (-0.8, 1.6, 0.9), etc. This gives a total of 27 separation vectors to consider. After converting to absolute coordinates (by dotting the fractional vectors with the cell matrix), one can calculate the Euclidean length of each of these vectors and thus find the shortest distance. Note that this does not work for certain vectors within some highly distorted lattices (see Figure \[fig:SkewedUnitCell\]). Often the shortest Euclidean distance is accompanied by the shortest fractional distance, but whether this is the case or not depends on how distorted the lattice is. However, because randomly generated lattices in PyXtal are required to have no angles smaller than 30 degrees or larger than 150 degrees, this is not an issue. ![Distorted Unit Cell. Due to the cell’s high level of distortion, the closest neighbors for a single point lie more than two unit cells away. In this case, the closest point to the central point is located two cells to the left and one cell diagonal-up. To find this point using PyXtal’s distance checking method, a 5$\times$5$\times$5 unit cell will be created. For this reason, a limit is placed on the distortion of randomly generated lattices.[]{data-label="fig:SkewedUnitCell"}](skew.png){width="40.00000%"} For two given sets of atoms (for example, when cross-checking two WPs in the same crystal), one can calculate the shortest inter-atomic distances by applying the above procedure for each unique pair of atoms. This only works if it has already been established that both sets on their own satisfy the needed distance requirements. Thanks to symmetry, one need not calculate every atomic pair between two WPs. For two WPs, A and B, it is only necessary to calculate either (1) the separations between one atom in A and all atoms in B, or (2) one atom in B and all atoms in A. This is because the symmetry operations which duplicate a point in a WP also duplicate the separation vectors associated with that point. This is also true for a single WP; for example, in a Wyckoff position with 16 points, only 15 (the number of pairs involving one atom) distance calculations are needed, as opposed to 120 (the total number of pairs). This can significantly speed up the calculation for larger WPs. 
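The 27-image search described above can be written compactly. The sketch below (an illustration, independent of the PyXtal source) returns the shortest Euclidean separation between two fractional points, under the same assumption that the cell is not too strongly distorted:

```python
import numpy as np

def shortest_distance(frac1, frac2, cell):
    """Shortest Cartesian separation between two fractional points.

    cell: 3x3 matrix whose rows are the lattice vectors. A 3x3x3 block of
    periodic images is assumed to be sufficient (i.e., the cell is not
    extremely distorted).
    """
    d = np.array(frac2, dtype=float) - np.array(frac1, dtype=float)
    d -= np.round(d)   # bring the separation close to the central cell
    images = np.array([[i, j, k] for i in (-1, 0, 1)
                                 for j in (-1, 0, 1)
                                 for k in (-1, 0, 1)])
    vectors = (d + images) @ np.asarray(cell)   # convert to Cartesian coordinates
    return float(np.min(np.linalg.norm(vectors, axis=1)))

cell = [[4.0, 0.0, 0.0], [0.0, 5.0, 0.0], [1.0, 0.0, 6.0]]
print(shortest_distance([-8.1, 5.2, -4.8], [2.7, -7.4, 9.3], cell))
```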
For a single WP, it is necessary to calculate the distances for each unique atom-atom pair, but also, for each atom by itself, the distances corresponding to the lattice vectors (i.e., between the atom and its own periodic images). Since the lattice is the same for all atoms in the crystal, this check only needs to be performed on a single atom of each species. For atomic crystals, this just means ensuring that the generated lattice vectors are sufficiently long.

For molecules, the process is slightly more complicated. Depending on the molecule's orientation within the lattice, the inter-atomic distances can change. Additionally, one must calculate the distances not just between molecular centers, but between every unique atom-atom pair. This increases the number of needed calculations in rough proportion to the square of the size of the molecules. As a result, this is typically the largest time cost for the generation of molecular crystals. The issue of checking the lattice is also dependent on molecular orientation. Thus, the lattice must be checked for every molecular orientation in the crystal. To do this, the atoms in the original molecule are checked against the atoms in periodically translated copies of the molecule. Here, standard atom-atom distance checking is used.

While several approximate methods for inter-molecular distance checking exist, their performance is highly dependent on the molecular shape and the number of atoms. The simplest method is to model the molecule as a sphere, in which case only the center-center distances are needed. This works well for certain molecules, like buckminsterfullerene, which have a large number of atoms and are approximately spherical in shape. But a spherical model works poorly for irregularly shaped molecules like benzene (see Figure \[fig:BenzeneBox\]), which may have short separations along the perpendicular axis, but must be further apart along the planar axes. We provide spherical distance checking as an option for the user, but direct atom-atom distance checking is used by default.

![Dependence of shortest distances on molecular orientation. Rotation of the molecules about the $a$ or $b$ (but not the $c$) axes would cause the benzene molecules to overlap. PyXtal checks for overlap whenever a molecular orientation is altered.[]{data-label="fig:BenzeneBox"}](BenzeneBox.png){width="45.00000%"}

Molecular Orientations
----------------------

In crystallography, atoms are typically assumed to be spherically symmetric point particles with no well-defined orientation. Since the object occupying a crystallographic WP is usually an atom, it is further assumed that the object's symmetry group contains the WP's site symmetry as a subgroup. If this is the case, the only remaining condition for occupation of a WP is the location within the unit cell. However, if the object is instead a molecule, then the WP compatibility is also determined by orientation and shape. To handle the general case, one must ensure that the object is (1) sufficiently symmetric, and (2) oriented such that its symmetry operations are aligned with the Wyckoff site symmetry. The result is that objects with different point group symmetries are only compatible with certain WPs. For a given molecule and WP, one can find all valid orientations as follows:

1\. Determine the molecule's point group and point group operations. This is currently handled by Pymatgen's built-in *PointGroupAnalyzer* class [@pymatgen], which produces a list of symmetry operations for the molecule.

2\. Associate an axis with every symmetry operation.
For a rotation or improper rotation, we use the rotational axis. For a mirror plane, we use an axis perpendicular to the plane. Note that inversional symmetry does not add any constraints, since the inversion center is always located at the molecule’s center of mass. 3\. Choose up to two non-collinear axes from the site symmetry and calculate the angle between them. Find all conjugate operation pairs (with the same order and type) in the molecular point symmetry with the same angle between the axes, and store the rotation which maps the pairs of axes onto each other. For example, if the site symmetry were mmm, then we could choose two reflectional axes, say the x- and y- axes or the y- and z- axes. Then, we would look for two reflection operations in the molecular symmetry group. If the angle between these two operation axes is also 90 degrees, we would store the rotation which maps the two molecular axes onto the Wyckoff axes. We would also do this for every other pair of reflections with 90 degrees separating them. 4\. For a given pair of axes, there are two rotations which can map one onto the other. There is one rotation which maps the first axis directly onto the second, and another rotation which maps the first axis onto the opposite of the second axis. Depending on the molecular symmetry, the two resulting orientations may or may not be symmetrically equivalent. So, using the list of rotations calculated in step 3, remove redundant orientations which are equivalent to each other. 5\. For each found orientation, check that the rotated molecule is symmetric under the Wyckoff site symmetry. To do this, simply check the site symmetry operations one at a time by applying each operation to the molecule and checking for equivalence with the untransformed molecule. 6\. For the remaining valid orientations, store the rotation matrix and the number of degrees of freedom. If two axes were used to constrain the molecule, then there are no degrees of freedom. If one axis is used, then there is one rotational degree of freedom, and we store the axis about which the molecule may rotate. If no axes are used (because there are only point operations in the site symmetry), then there are three (stored internally as two) degrees of freedom, meaning the molecule can be rotated freely in 3 dimensions. PyXtal performs these steps for every WP in the symmetry group and stores the nested list of valid orientations. When a molecule must be inserted into a WP, an allowed orientation is randomly chosen from this list. This forces the overall symmetry group to be preserved since symmetry-breaking WPs do not have any valid orientations to choose from. The above algorithm is particularly useful to generate molecular crystals with non-integer number of molecules in the asymmetric unit, which occur frequently for molecules with high point group symmetry. One important consideration is whether a symmetry group will produce inverted copies of the constituent molecules. In many cases, a chiral molecule’s mirror image will possess different chemical or biological properties [@chirality]. For pharmaceutical applications in particular, one may not want to consider crystals containing mirror molecules. By default, PyXtal does not generate crystals with mirror copies of chiral molecules. The user can choose to allow inversion if desired. Dependencies {#dependence} ============ All of the code is written in Python 3. Like many other Python packages, it relies on several external libraries. 
Numpy [@numpy], Scipy [@scipy] and Pandas [@pandas] are required for the general purposes of scientific computing and data processing. In addition, two materials science libraries, Pymatgen [@pymatgen] and Spglib [@spglib], are used to facilitate the symmetry analysis. Optionally, the code provides an interface with Openbabel [@openbabel] if the user wants to import molecules from file formats other than the plain xyz format. An ASE [@ASE] interface is also enabled if the user wants to do further structure analysis, such as structure manipulation or geometry optimization, based on ASE.

Example Usages {#usage}
==============

PyXtal can be used either as a command-line executable or as a stand-alone library in Python scripts. Below we introduce the basic usages in brief.

Command line utilities
----------------------

Currently, several utilities are available to access the different functionalities of PyXtal. They include:

1. pyxtal\_symmetry
2. pyxtal\_atom
3. pyxtal\_molecule
4. pyxtal\_test

First, the user is advised to run pyxtal\_test to quickly check whether all modules are working correctly after the installation. The rest of the utilities are designed for different analysis purposes. The pyxtal\_symmetry utility allows one to easily access the symmetry information for a given symmetry group, using either the group name or the international number.

    $ pyxtal_symmetry -s 36

    -- Space group # 36 (Cmc2_1)--
    8b site symm: 1
    x, y, z
    -x, -y, z+1/2
    x, -y, z+1/2
    -x, y, z
    x+1/2, y+1/2, z
    -x+1/2, -y+1/2, z+1/2
    x+1/2, -y+1/2, z+1/2
    -x+1/2, y+1/2, z
    4a site symm: m..
    0, y, z
    0, -y, z+1/2
    1/2, y+1/2, z
    1/2, -y+1/2, z+1/2

pyxtal\_atom and pyxtal\_molecule can be used to directly generate one trial structure based on the given symmetry group and chemical composition. Below we give example commands to generate different types of symmetric objects:

1. a random C60 cluster with $I_h$ point group symmetry;
2. a trial diamond structure with *Fd-3m* space group symmetry;
3. a crystal of two C60 molecules per primitive unit cell with $Cmc2_1$ symmetry.

    $ pyxtal_atom -e C -n 60 -d 0 -s Ih
    $ pyxtal_atom -e C -n 2 -s 227
    $ pyxtal_molecule -e C60 -n 2 -s 36

The generated structures are saved to text files, in cif format for crystals and xyz format for clusters.

PyXtal as a Library
-------------------

PyXtal allows the user to generate random crystal structures with given symmetry constraints. There are several parameters which can be specified, but only a few are necessary. Below is an example script to generate 100 random clusters of 36 carbon atoms.

``` {.python caption="A Python script to generate 100 random C36 clusters"}
from pyxtal.crystal import random_cluster
from random import choice

pgs = range(1, 33)
clusters = []
for i in range(100):
    run = True
    while run:
        # keep trying random point groups until a valid cluster is produced
        pg = choice(pgs)
        cluster = random_cluster(pg, ['C'], [36])
        if cluster.valid:
            clusters.append(cluster)
            run = False
```

With the generated structures, one can perform further analysis such as geometry optimization and powder X-ray diffraction pattern simulation. PyXtal also provides preliminary modules for such tasks. Alternatively, the trial structures can easily be adapted to the structural objects of other libraries such as ASE [@ASE] or Pymatgen [@pymatgen], or be dumped to text files in cif, xyz or POSCAR format. More examples can be found in the online documentation <https://pyxtal.readthedocs.io>.
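For a 3D crystal, the analogous entry point is the *pyxtal.crystal.random\_crystal* class introduced in Section \[algo\]. The snippet below assumes the same calling convention as the cluster example, with a space group number in place of the point group and a relative volume factor as the final argument; argument names and defaults may differ between versions, so the online documentation should be consulted.

```python
from pyxtal.crystal import random_crystal

# Space group Fd-3m (#227) with 8 carbon atoms in the conventional cell and a
# relative volume factor of 1.0 (argument order assumed to mirror the cluster
# example above).
crystal = random_crystal(227, ['C'], [8], 1.0)
if crystal.valid:
    print("Generated a trial diamond-like structure.")
```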
Applications {#examples}
============

The primary purpose of developing PyXtal is to provide more likely trial structures for solving the structure determination problem. It can be useful in at least two cases. First, one can generate trial structures based on partial information determined from experiment (e.g., unit cell, symmetry, composition). Second, it can be used to determine the ground state structure in a first-principles manner based on global optimization. It has been shown [@Oganov-JCP-2006; @CALYPSO; @randspg] that by beginning with already-symmetric structures, fewer attempts are needed to find the global energy minimum. To demonstrate the general utility of pre-symmetrization, we performed a number of benchmarks for different systems. Below we give two examples of global structure searches: low-energy Lennard-Jones (LJ) clusters and carbon/silicon allotropes.

Clusters with empirical Lennard-Jones potential
-----------------------------------------------

Finding the ground state of LJ clusters of a given size is an established benchmark for global optimization methods [@LJClusters]. Here, it is shown that local optimization, combined with randomly generated symmetric clusters, is sufficient to solve the problem for small LJ cluster sizes. For the purposes of this benchmark, we focus on three cluster sizes, namely 38, 55, and 75. For each cluster size, 20,000 structures were generated: 10,000 with no pre-defined symmetry, and 10,000 with symmetry chosen randomly from among PyXtal's 56 built-in point groups[^1]. A potential of $4(\frac{1}{r^{12}} - \frac{1}{r^6})$ was assigned to each atom-atom pair. Each structure was locally optimized using the conjugate gradient (CG) method in SciPy's *optimize.minimize* function [@scipy].

As shown in Figure \[fig:LJ\], the ground state was found much more frequently when the initial structures possessed some point group symmetry. With pre-symmetrization, the ground state was found 278 times for size-38 clusters, 73 times for size 55, and 1 time for size 75. Without pre-symmetrization, the ground state was not found at all. Although the numbers of hits on the ground states may change from run to run, the statistical trend still holds. In addition, while the ground state is found more frequently with pre-symmetrization, the average energy is higher. This is because pre-symmetrization spans the possible structure space more effectively, while purely random structures are more clustered around a specific energy range.

![Energy distribution for Lennard-Jones clusters with sizes of (a) 38, (b) 55 and (c) 75. The insets are the corresponding ground state geometries.[]{data-label="fig:LJ"}](LJ-2.pdf){width="45.00000%"}
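The local relaxation step of this benchmark can be sketched as follows (illustrative only: a random, unsymmetrized starting geometry is used here, whereas the benchmark starts from PyXtal-generated clusters, and no cutoffs or restarts are applied):

```python
import numpy as np
from scipy.optimize import minimize

def lj_energy(flat_coords):
    """Total energy with the pair potential 4*(1/r^12 - 1/r^6)."""
    pos = flat_coords.reshape(-1, 3)
    energy = 0.0
    for i in range(len(pos) - 1):
        r = np.linalg.norm(pos[i + 1:] - pos[i], axis=1)
        energy += np.sum(4.0 * (r**-12 - r**-6))
    return energy

np.random.seed(0)
start = 2.0 * np.random.rand(38 * 3)          # 38 atoms at random positions
result = minimize(lj_energy, start, method='CG')
# For reference, the known global minimum for N = 38 is about -173.93;
# a single local relaxation usually ends in a higher-lying local minimum.
print("relaxed energy:", result.fun)
```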
Each structure was optimized using the PBE-GGA functional [@PBE-PRL-1996] as implemented in the VASP code [@VASP3; @VASP4], following a multi-step strategy that goes from low, to normal, to accurate precision. The final geometries were then recalculated with an energy cutoff of 600 eV and a $k$-point spacing of 0.15. For carbon, the expected structures of diamond and graphite were found frequently in each run, as well as lonsdaleite, $sp^3$ carbon with various ring topologies, and various multi-layer graphite-like structures. Similarly, our simulations on silicon yielded the cubic diamond ground-state structure for each of the runs with different numbers of atoms per primitive unit cell, demonstrating that adding symmetry constraints is beneficial for quickly identifying low-energy structures with high symmetry. Moreover, it is again interesting to analyze the energy distribution of the randomly generated structures, as shown in Figure \[fig:boxplot\]. For both carbon and silicon, the energy landscape appears to be narrower for size-2 primitive cells. Beyond about 4 atoms per primitive cell, the cell size appears to have little influence on the energy distribution. This again suggests that pre-symmetrization is an effective means to prevent the clustering of glassy structures found in purely random generation for large systems [@Lyakhov-CPC-2013]. Therefore, pre-symmetrization is a better choice for global energy optimization. In addition, pre-symmetrization can provide a more diverse dataset for training machine learning force fields [@Deringer-PRL-2018; @Boron-PRB-2019].

Conclusion
==========

In this manuscript, we present the software package PyXtal. The core features of PyXtal have been highlighted, with further documentation available online[^2]. In PyXtal, the symmetry constraints are further refined in two main ways. The first is a merging algorithm [@zhu2012] which controls the distribution of WPs through statistical means. The second is a new algorithm for placing molecules into special WPs. This allows more realistic and complex structures to be generated without reducing the global symmetry. PyXtal is not a complete structure prediction package; it only generates trial structures with a given symmetry group. Other tools exist which perform structure generation and the other steps of the CSP process [@Lyakhov-CPC-2013; @Pickard-JPCM-2011; @XtalOpt; @CALYPSO]. The main goals in developing PyXtal are: 1) to develop a free, open-source Python package for the materials science community, 2) to handle the generation of symmetric structures described by different symmetry groups from 0D to 3D, 3) to handle molecular WPs in a generalized manner, and 4) to provide a tool to look up symmetry information from the database. We also demonstrated that using pre-symmetrized structures as starting seeds can effectively improve the success rate of finding the low-energy configuration. As such, PyXtal can be interfaced with other structure prediction codes which require the generation of trial structures. The source code and development information are available on the GitHub page at <https://github.com/qzhu2017/PyXtal>. At the time of writing, the code is at version 0.0.2 and is expected to be updated frequently. Further development and application of the mathematical background should enable more complex structure types to be studied in the future.
Acknowledgments {#acknowledgments .unnumbered}
===============

We acknowledge the NSF (I-DIRSE-IL: 1940272) and NASA (80NSSC19M0152) for their financial support. Computing resources were provided by XSEDE (TG-DMR180040).

[^1]: This includes 32 crystallographic point groups and 24 non-crystallographic point groups. The full symmetry information can be accessed with the command *pyxtal\_symmetry -d 0*.

[^2]: <https://pyxtal.readthedocs.io>
--- abstract: 'For the filtering of peaks in periodic signals, we specify polynomial filters that are optimally localized in space. The space localization of functions $f$ having an expansion in terms of orthogonal polynomials is thereby measured by a generalized mean value ${\varepsilon}(f)$. Solving an optimization problem including the functional ${\varepsilon}(f)$, we determine those polynomials out of a polynomial space that are optimally localized. We give explicit formulas for these optimally space localized polynomials and determine in the case of the Jacobi polynomials the relation of the functional ${\varepsilon}(f)$ to the position variance of a well-known uncertainty principle. Further, we will consider the Hermite polynomials as an example on how to get optimally space localized polynomials in a non-compact setting. Finally, we investigate how the obtained optimal polynomials can be applied as filters in signal processing.' author: - 'Wolfgang Erb [^1]' date: '5.08.2010' title: | Optimally space localized polynomials\ with applications in signal processing --- [**AMS Subject Classification**]{}(2000): 42C05, 92C55, 94A12, 94A17\ [**Keywords: orthogonal polynomials, space localization, filtering of peaks in signals, uncertainty principles**]{} Introduction ============ For the detection of peaks in mass spectrometry data, window functions are often used to preprocess the incoming signals, for instance, to perform a baseline correction or to filter out disturbing higher frequencies (cf. [@Nguyen2006], [@Yang2009]). In the following, we consider $2\pi$-periodic signals $f$ filtered by a convolution with a window function $h$, i.e. the filtered signal $F_h f$ is given by $$\label{equation-filteroperator} F_h f(t) := (f \ast h)(t) = \frac{1}{2\pi}\int_{-\pi}^{\pi} f(s) h(t-s) ds.$$ For peak detection purposes, two properties of the window function $h$ are important. First of all, the function $h$ should be localized in frequency in order to filter out the high frequencies. On the other hand, the window $h$ should also be well localized in space such that the convolution operator $F_h$ is still able to determine the peaks of the signal $f$. However, the uncertainty principle states that it is impossible that the function $h$ is well-localized both in space and frequency (see, for instance, [@Erb2010], [@FollandSitaram1997], [@Gröchenig2003]). Therefore, in search for an optimal filter $h$ for peak detection, one has always a trade off between denoising the signal $f$ and determining the position of the peaks of $f$. The main objective of this article is to investigate the space localization of polynomial filters. If the window function $h$ is a trigonometric polynomial of degree $n$, the frequency domain of $h$ is bounded. Therefore, a polynomial filter $h$ in a peak detection process automatically performs a low-pass filtering of the signal $f$. It remains to analyze the space localization of $h$. In this regard, one has to specify how space localization of a function $f$ is measured. Beside the trigonometric setting, we will consider general systems of orthogonal polynomials in this article. For functions $f$ having an expansion in orthogonal polynomials, we will use a functional ${\varepsilon}(f)$ to measure the space localization of $f$. We will study an optimization problem including this functional ${\varepsilon}(f)$ that allows us to construct polynomials and band-limited functions that are optimally localized in the space domain. 
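For intuition, the generalized mean value ${\varepsilon}(f)$ can be approximated numerically by Gauss-Jacobi quadrature. The following minimal sketch does this for the Jacobi weight $w(x)=(1-x)^{\alpha}(1+x)^{\beta}$; the test functions are chosen purely for illustration and are only meant to convey how ${\varepsilon}(f)$ measures concentration at the right boundary of the interval.

``` {.python}
# Minimal numerical sketch: approximate eps(f) = int_{-1}^1 x |f(x)|^2 w(x) dx
# by Gauss-Jacobi quadrature, after normalizing f so that ||f||_w = 1.
import numpy as np
from scipy.special import roots_jacobi

alpha, beta = -0.5, -0.5                      # Chebyshev weight, as an example
nodes, weights = roots_jacobi(60, alpha, beta)

def eps(f):
    vals = np.abs(f(nodes)) ** 2
    norm2 = np.sum(weights * vals)            # ||f||_w^2
    return np.sum(weights * nodes * vals) / norm2

print(eps(lambda x: np.ones_like(x)))    # 0 by symmetry of the Chebyshev weight
print(eps(lambda x: np.exp(5 * x)))      # close to 1: mass concentrated near x = 1
```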
In particular, for Jacobi polynomials, it will turn out that the functional ${\varepsilon}(f)$ is related to the position variance ${\operatorname{var}}_S(f)$ of a well-known uncertainty principle and that the optimally space localized polynomials minimize also the term for the position variance ${\operatorname{var}}_S(f)$. By now, research in this direction has been done mainly for trigonometric polynomials on the unit circle and spherical harmonics on the unit sphere ${{ \mathbb S}}^d$. In [@Rauhut2005], Rauhut used the position variance of the Breitenberger uncertainty principle to construct optimally localized polynomials on the unit circle. On the unit sphere, the works of Mhaskar et al. [@MhaskarNarkovichPrestinWard2000] and La[í]{}n Fern[á]{}ndez [@Fernandez2007] led to optimally space localized polynomials and polynomial wavelets on the unit sphere ${{ \mathbb S}}^d$. One aim of this article is to extend these results to orthogonal expansions on the interval $[-1,1]$. These more general results are then used to construct new window functions for peak filtering purposes in the trigonometric setting. In the following, we will mainly consider the Hilbert space $L^2([-1,1],w)$ with the inner product $$\langle f,g \rangle_{w} = \int_{-1}^1 f(x) \overline{g(x)} w(x) dx,$$ where the weight function $w$ is a nonnegative continuous function on $[-1,1]$. Then, for $f \in L^2([-1,1],w)$, we define the functional ${\varepsilon}(f)$ by $$\label{equation-meanvalue} {\varepsilon}(f) := \int_{-1}^{1} x |f(x)|^2 w(x) dx.$$ Exactly this generalized mean value ${\varepsilon}(f)$ is the starting point for our investigations. In the following first section, we will see that the mean value ${\varepsilon}(f)$ determines a measure for the localization of the function $f$ on the boundary values of the interval $[-1,1]$. Then, we will study the functional ${\varepsilon}(f)$ for polynomial subspaces of $L^2([-1,1],w)$. In particular, in Theorem \[Theorem-optimalpolynomial\] and in Corollary \[Corollary-explicitformoptimalpolynomials\], we will give representations of those polynomials ${\mathcal{P}_n}$ that maximize ${\varepsilon}(f)$, i.e. those polynomials that are optimally localized at the right hand boundary of the interval $[-1,1]$ with respect to ${\varepsilon}(f)$. In the third section, we will emphasize on the Jacobi polynomials. In Theorem \[Theorem-minequivalenttomax\], it will be shown that the position variance ${\operatorname{var}}_S(f)$ of an uncertainty principle for Jacobi expansions is minimized by the polynomials ${\mathcal{P}_n}$. In the fourth section, we will consider the Hermite polynomials on the real line and determine optimally space localized polynomials in this setting. The obtained results are an example of how it is possible to generalize the theory of Section \[Section-optimallylocalizedpolynomials\] and \[Section-explicitexpressions\] to a non-compact setting. Finally, in the last section we will turn back to the peak filtering application mentioned at the beginning. We will investigate how the optimally space localized polynomials of this article are related to well-known polynomial window functions and how they can be applied as filters for the peak detection of a signal. Optimally space localized polynomials {#Section-optimallylocalizedpolynomials} ===================================== We start out by introducing particular polynomial subspaces of $L^2([-1,1],w)$. 
Therefore, we denote by $\{p_l\}_{l=0}^\infty$ the family of polynomials that are orthonormal on $[-1,1]$ with respect to the inner product $\langle \cdot, \cdot \rangle_w$. Further, we assume that the polynomials $p_l$ of degree $l$ are normalized such that the coefficient of $x^l$ is positive. Then, the family $\{p_l\}_{l=0}^\infty$ defines a complete orthonormal set in the Hilbert space $L^2([-1,1],w)$ (cf. [@Szegö Section 2.2]). \[definition-polynomialspacesJacobi\] As subspaces of the Hilbert space $L^2([-1,1],w)$, we consider the following three polynomial spaces: 1. The space spanned by the polynomials $p_l$, $0 \leq l \leq n$: $${\Pi_n}:= \left\{P:\;P(x)= \sum_{l=0}^n c_l p_l(x),\;c_0, \ldots, c_n \in {{\mathbb C}}\right\}.$$ 2. The space spanned by the polynomials $p_l$, $m \leq l \leq n$: $${\Pi_{n}^m}:= \left\{P:\;P(x)= \sum_{l=m}^n c_l p_l(x),\;c_m, \ldots, c_n \in {{\mathbb C}}\right\}.$$ 3. The space spanned by a polynomial ${\displaystyle}{\mathcal{R}}(x) = p_m(x)+\sum_{l=0}^{m-1} e_l p_l(x)$ of degree $m$ and the polynomials $p_l$, $m+1 \leq l \leq n$: $${\Pi_n^{\mathcal{R}}}:= \left\{P:\;P(x) = c_m {\mathcal{R}}(x) + \sum_{l=m+1}^n c_l p_l(x),\;c_m, \ldots, c_n \in {{\mathbb C}}\right\}.$$ Further, we define the unit spheres of the spaces ${\Pi_n}$, ${\Pi_{n}^m}$ and ${\Pi_n^{\mathcal{R}}}$ as $$\begin{aligned} {{\mathbb S}_n}&:= \left\{P \in {\Pi_n}: \; \|P\|_{w} = 1\right\}, \\ {{\mathbb S}_{n}^m}&:= \left\{P \in {\Pi_{n}^m}: \; \|P\|_{w} = 1\right\}, \\ {{\mathbb S}_n^{\mathcal{R}}}&:= \left\{P \in {\Pi_n^{\mathcal{R}}}: \; \|P\|_{w} = 1\right\}.\end{aligned}$$ Clearly, ${\Pi_{n}^m}\subset {\Pi_n}$ and ${\Pi_n^{\mathcal{R}}}\subset {\Pi_n}$. In the literature, the spaces ${\Pi_{n}^m}$ are sometimes called wavelet spaces and considered in a more general theory on polynomial wavelets and polynomial frames, see [@MhaskarPrestin2005] and the references therein. For special choices of ${\mathcal{R}}$, the polynomials in the spaces ${\Pi_n^{\mathcal{R}}}$ play an important role in the theory of polynomial approximation. In particular, if polynomial reproduction is requested, a common choice for the polynomial ${\mathcal{R}}$ is the Christoffel-Darboux kernel of degree $m$ (see [@FilbirMhaskarPrestin2009], [@Mhaskar]). The standardization $e_m = 1$ for the highest expansion coefficient of the polynomial ${\mathcal{R}}$ causes no loss of generality and is a useful convention for the upcoming calculations. The first goal of this section is to study the localization of the polynomials in the spaces ${\Pi_n}$, ${\Pi_{n}^m}$ and ${\Pi_n^{\mathcal{R}}}$ at the right hand boundary of the interval $[-1,1]$ and to determine those polynomials that are in some sense best localized. As an analyzing tool for the localization of a function $f \in L^2([-1,1])$ at the point $x = 1$, we consider the mean value ${\varepsilon}(f)$ as defined in . If $\|f\|_{w} = 1$, then $-1 < {\varepsilon}(f) < 1$, and the more the mass of the $L^2$-density $f$ is concentrated at the boundary point $x=1$, the closer the value ${\varepsilon}(f)$ gets to $1$. Therefore, the value ${\varepsilon}(f)$ can be interpreted as a measure on how well the function $f$ is localized at the right hand boundary of the interval $[-1,1]$. We say that $f$ is localized at $x=1$ if the value ${\varepsilon}(f)$ approaches $1$. Now, our aim is to find those elements of the polynomial spaces ${\Pi_n}$, ${\Pi_{n}^m}$ and ${\Pi_n^{\mathcal{R}}}$ that are optimally localized at the boundary point $x=1$. 
In particular, we want to solve the following optimization problems: $$\begin{aligned}
{\mathcal{P}_n}&= \arg\max_{P \in {{\mathbb S}_n}} {\varepsilon}(P), \label{optimalpolynomiala}\\
{\mathcal{P}_n^m}&= \arg\max_{P \in {{\mathbb S}_{n}^m}} {\varepsilon}(P), \label{optimalpolynomialb}\\
{\mathcal{P}_n^{\mathcal{R}}}&= \arg\max_{P \in {{\mathbb S}_n^{\mathcal{R}}}} {\varepsilon}(P). \label{optimalpolynomialc}\end{aligned}$$ Since the linear spaces ${\Pi_n}$, ${\Pi_{n}^m}$ and ${\Pi_n^{\mathcal{R}}}$ are finite-dimensional, the unit spheres ${{\mathbb S}_n}$, ${{\mathbb S}_{n}^m}$ and ${{\mathbb S}_n^{\mathcal{R}}}$ are compact subsets and the functional ${\varepsilon}$ is bounded and continuous on the respective polynomial space. Hence, it is guaranteed that solutions of the three optimization problems above exist. The optimization problem for ${\mathcal{P}_n}$ has a well-known solution, which can be found in [@Mhaskar Theorem 1.3.3]. The solutions for ${\mathcal{P}_n^m}$ and ${\mathcal{P}_n^{\mathcal{R}}}$ given below can be considered as novel. The functional ${\varepsilon}(f)$ is by no means the only possible way to measure the space localization of a function. In the literature, there exist various other measures of this kind. In the Landau-Pollak-Slepian theory (cf. [@FollandSitaram1997], [@Landau1985], [@Slepian1983]) a similar optimization problem is investigated. Results in terms of polynomial and exponential growth of polynomials can be found in the articles [@FilbirMhaskarPrestin2009] and [@IvanovPetrushevXu2010]. Results concerning the Shannon information entropy of orthogonal polynomials are summarized in the survey article [@AptekarevDehesaMartinezFinkelshtein2010]. In order to describe the optimal polynomials, we need the notion of associated and of scaled co-recursive associated polynomials. First of all, we know that the orthonormal polynomials $p_l$ satisfy the following three-term recurrence relation (cf. [@Gautschi Section 1.3.2]) $$\begin{aligned}
\label{equation-recursionorthonormal}
b_{l+1} p_{l+1}(x) &= (x - a_l) p_l(x) - b_l p_{l-1}(x), \quad l=0,1,2,3, \ldots \\
p_{-1}(x) &= 0, \qquad p_0(x) = \frac{1}{b_0}, \notag\end{aligned}$$ with coefficients $a_l \in {{\mathbb R}}$ and $b_l > 0$. \[definition-associatedJacobi\] For $m \in {{\mathbb N}}$, the associated polynomials $p_l(x,m)$ on the interval $[-1,1]$ are defined by the shifted recurrence relation $$\begin{aligned}
\label{equation-recursionassociatedsymmetric}
b_{m+l+1}\, p_{l+1}(x,m) &= (x - a_{m+l})\, p_l(x,m) - b_{m+l}\, p_{l-1}(x,m), \quad l=0,1,2, \ldots , \\
p_{-1}(x,m) &= 0, \qquad p_0(x,m) = 1. \notag\end{aligned}$$ Further, for $\gamma \in {{\mathbb R}}$ and $\delta \geq 0$, we define the scaled co-recursive associated polynomials $p_l(x,m,\gamma,\delta)$ on $[-1,1]$ by the three-term recurrence relation $$\begin{aligned}
\label{equation-recursionassociatedscaledsymmetric}
b_{m+l+1}\, p_{l+1}(x,m,\gamma,\delta) &= (x - a_{m+l})\, p_l(x,m,\gamma,\delta) - b_{m+l}\, p_{l-1}(x,m,\gamma,\delta), \notag\\
& \qquad l =1,2,3, \ldots , \\
p_0(x,m,\gamma,\delta) &= 1, \quad p_{1}(x,m,\gamma,\delta) = \frac{\delta x - a_m - \gamma}{b_{m+1}}. \notag\end{aligned}$$ The three-term recurrence relation of the co-recursive associated polynomials $p_{l+1}(x,m,\gamma,\delta)$ corresponds to the three-term recurrence relation of the associated polynomials $p_{l+1}(x,m)$ except for the formula of the initial polynomial $p_{1}(x,m,\gamma,\delta)$. For $m = 0$, $\gamma = 0$ and $\delta = 1$, we have the identities $p_l(x,0) = p_l(x,0,0,1) = b_0\, p_l(x)$.
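The recurrences above translate directly into a short evaluation routine. The following sketch evaluates $p_L(x,m)$ from the shifted recurrence for given coefficient sequences $(a_l)$ and $(b_l)$; the Chebyshev example at the end anticipates Example \[example-optimallocalizedchebyshev\] below, where the associated polynomials of the Chebyshev weight are identified with Chebyshev polynomials of the second kind (in their classical normalization $U_l$).

``` {.python}
# Sketch: evaluate the associated polynomials p_l(x, m) from the shifted
# three-term recurrence, given the recurrence coefficients a_l, b_l of the
# orthonormal polynomials p_l.
import numpy as np

def associated_poly(x, m, L, a, b):
    """Return p_L(x, m) via b_{m+l+1} p_{l+1} = (x - a_{m+l}) p_l - b_{m+l} p_{l-1}."""
    p_prev, p_cur = 0.0, 1.0              # p_{-1}(x, m) = 0,  p_0(x, m) = 1
    for l in range(L):
        p_next = ((x - a[m + l]) * p_cur - b[m + l] * p_prev) / b[m + l + 1]
        p_prev, p_cur = p_cur, p_next
    return p_cur

# Example: Chebyshev weight of the first kind, where a_l = 0, b_1 = 1/sqrt(2)
# and b_l = 1/2 for l >= 2; the associated polynomials with m = 1 coincide
# with the classical Chebyshev polynomials of the second kind U_l.
a = np.zeros(20)
b = np.zeros(20); b[1] = 1 / np.sqrt(2); b[2:] = 0.5

x = 0.3
print(associated_poly(x, m=1, L=2, a=a, b=b))   # p_2(x, 1)
print(4 * x ** 2 - 1)                           # U_2(x) for comparison
```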
The polynomials $p_l(x,m)$ and $p_l(x,m, \gamma, \delta)$ can be described with the help of the symmetric Jacobi matrix ${\mathbf{J}}_n^m$, $0 \leq m \leq n $, defined by $$\label{equation-Jacobimatrix}
{\mathbf{J}}_n^m = \left(\begin{array}{cccccc} a_m & b_{m+1} & 0 & 0 & \cdots & 0 \\ b_{m+1} & a_{m+1} & b_{m+2} & 0 & \cdots & 0 \\ 0 & b_{m+2} & a_{m+2} & b_{m+3} & \ddots & \vdots \\ \vdots & \ddots & \ddots & \ddots & \ddots & 0\\ 0 & \cdots & 0& b_{n-1} & a_{n-1} & b_{n} \\ 0 & \cdots & \cdots & 0 & b_{n}& a_n \end{array}\right).$$ If $m=0$, we write ${\mathbf{J}}_n$ instead of ${\mathbf{J}}_n^0$. Then, in view of the three-term recurrence formulas and , the polynomials $p_l(x,m)$ and $p_l(x,m,\gamma, \delta)$, $l \geq 1$, can be written as (cf. [@Ismail Theorem 2.2.4]) $$p_l(x,m) = \det(x \mathbf{1}_{l} - {\mathbf{J}}_{m+l-1}^m ), \label{equation-relation3termJacobimatrixassociated}$$ and $$p_l(x,m,\gamma,\delta) = \det \left( x \left(\begin{array}{cc} \delta & 0 \\ 0 & \mathbf{1}_{l-1} \end{array}\right) - {\mathbf{J}}_{m+l-1}^m - \left(\begin{array}{cc} \gamma & 0 \\ 0 & \mathbf{0}_{l-1} \end{array}\right)\right), \label{equation-relation3termJacobimatrixscaled}$$ where $\mathbf{1}_{l-1}$ denotes the $(l-1)$-dimensional identity matrix and $\mathbf{0}_{l-1}$ the $(l-1)$-dimensional zero matrix. Next, we give a characterization of the functional ${\varepsilon}(P)$ in terms of the expansion coefficients $c_l$ of the polynomial $P = \sum_{l=0}^n c_l p_l$. \[Lemma-characterizationofepsP\] For the polynomial $P(x) = {\displaystyle}\sum_{l=0}^n c_l p_l(x)$, we have $$\begin{aligned}
{\varepsilon}(P) &= {\mathbf{c}}^H {\mathbf{J}}_{n} {\mathbf{c}}, & \text{if} \quad P \in {\Pi_n}, \\
{\varepsilon}(P) &= \tilde{{\mathbf{c}}}^H {\mathbf{J}}_{n}^m \tilde{{\mathbf{c}}}, & \text{if} \quad P \in {\Pi_{n}^m}, \\
{\varepsilon}(P) &= \tilde{{\mathbf{c}}}^H {\mathbf{J}}_{n}^m \tilde{{\mathbf{c}}} + ({\varepsilon}({\mathcal{R}})-a_m) |c_m|^2, & \text{if} \quad P \in {\Pi_n^{\mathcal{R}}},\end{aligned}$$ with the coefficient vectors ${\mathbf{c}}= (c_0, \ldots, c_n)^T$ and $\tilde{{\mathbf{c}}} = (c_m, \ldots, c_n)^T$. Using the three-term recurrence formula (\[equation-recursionorthonormal\]) and the orthonormality relation of the polynomials $p_l$, we get for $P \in {\Pi_n}$ $$\begin{aligned}
{\varepsilon}(P) &= \int_{-1}^1 x \Big|\sum_{l=0}^n c_l p_l(x)\Big|^2 w(x) dx =\int_{-1}^1 \Big(\sum_{l=0}^n c_l x p_l(x)\Big) \overline{\Big(\sum_{l=0}^n c_l p_l(x)\Big)}w(x) dx \\
&= \int_{-1}^1 \Big(\sum_{l=0}^n c_l \big(b_{l+1} p_{l+1}(x) + a_l p_l(x)+ b_{l} p_{l-1}(x) \big)\Big) \overline{\Big(\sum_{l=0}^n c_l p_l(x)\Big)}w(x) dx \\
&= \sum_{l=0}^n a_l |c_l|^2 + \sum_{l=0}^{n-1}(b_{l+1} c_l \bar{c}_{l+1} + b_{l+1} \bar{c}_l c_{l+1}) = {\mathbf{c}}^H {\mathbf{J}}_{n} {\mathbf{c}}.\end{aligned}$$ If $c_0 = \ldots = c_{m-1} = 0$, we get the assertion for polynomials $P$ in the space ${\Pi_{n}^m}$. If $P \in {\Pi_n^{\mathcal{R}}}$, then $P$ has the representation $$P(x) = c_m \left( p_m(x)+\sum_{l=0}^{m-1} e_l p_l(x) \right)+ \sum_{l=m+1}^{n} c_l p_l(x),$$ where the polynomial ${\mathcal{R}}$ is given by ${\mathcal{R}}(x) = p_m(x)+\sum_{l=0}^{m-1} e_l p_l(x)$. Inserting this representation into the above formula for ${\varepsilon}(P)$ yields the identity ${\varepsilon}(P) = ({\varepsilon}({\mathcal{R}})-a_m)|c_m|^2 + \tilde{{\mathbf{c}}}^H {\mathbf{J}}_{n}^m \tilde{{\mathbf{c}}}$.
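As a quick numerical check of Lemma \[Lemma-characterizationofepsP\], the following sketch compares ${\varepsilon}(P)$ computed by Gauss-Jacobi quadrature with the quadratic form ${\mathbf{c}}^H {\mathbf{J}}_n {\mathbf{c}}$ for the Chebyshev weight of the first kind, whose recurrence coefficients are $a_l = 0$, $b_1 = 1/\sqrt{2}$ and $b_l = 1/2$ for $l \geq 2$; the random coefficient vector is of course only an example.

``` {.python}
# Numerical check of eps(P) = c^H J_n c for the Chebyshev weight w(x) = (1-x^2)^(-1/2).
import numpy as np
from scipy.special import roots_jacobi

n = 6
b = np.zeros(n + 1); b[1] = 1 / np.sqrt(2); b[2:] = 0.5   # a_l = 0 for this weight
J = np.diag(b[1:], 1) + np.diag(b[1:], -1)                # (n+1) x (n+1) Jacobi matrix

rng = np.random.default_rng(1)
c = rng.standard_normal(n + 1)
c /= np.linalg.norm(c)                                    # ||P||_w = 1

def P(x):
    """P(x) = sum_l c_l p_l(x) with the orthonormal Chebyshev polynomials."""
    theta = np.arccos(x)
    basis = [np.full_like(x, 1 / np.sqrt(np.pi))]
    basis += [np.sqrt(2 / np.pi) * np.cos(l * theta) for l in range(1, n + 1)]
    return sum(cl * pl for cl, pl in zip(c, basis))

x, w = roots_jacobi(80, -0.5, -0.5)
print("quadrature:", np.sum(w * x * P(x) ** 2))
print("c^H J_n c :", c @ J @ c)
```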
Using the characterization of ${\varepsilon}(P)$ in Lemma \[Lemma-characterizationofepsP\], we proceed to the solution of the optimization problems , and . \[Theorem-optimalpolynomial\] The solutions of the optimization problems , and are given by $$\begin{aligned} {\mathcal{P}_n}(x) &= \kappa_1 \sum_{l=0}^n p_l(\lambda_{n+1})\, p_l(x), \displaybreak[0] \label{equation-optimalpolynomialJacobia}\\ {\mathcal{P}_n^m}(x) &= \kappa_2 \sum_{l=m}^n p_{l-m}(\lambda_{n-m+1}^m,m)\, p_l(x), \displaybreak[0] \label{equation-optimalpolynomialJacobib}\\ {\mathcal{P}_n^{\mathcal{R}}}(x) &= \kappa_3 \left( {\mathcal{R}}(x) + \sum_{l=m+1}^n p_{l-m}(\lambda_{n-m+1}^{{\mathcal{R}}},m,\gamma_{{\mathcal{R}}},\delta_{{\mathcal{R}}}) p_l(x)\right), \label{equation-optimalpolynomialJacobic}\end{aligned}$$ where $p_l(x,m)$ and $p_l(x,m,\gamma_{{\mathcal{R}}},\delta_{{\mathcal{R}}})$ denote the associated and the scaled co-recursive associated polynomials as given in Definition \[definition-associatedJacobi\] with the shift term $\gamma_{{\mathcal{R}}} := {\varepsilon}({\mathcal{R}})-a_m$ and the scaling factor $\delta_{{\mathcal{R}}} := \|{\mathcal{R}}\|_{w}^2$.\ The values $\lambda_{n+1}$, $\lambda_{n-m+1}^m$ and $\lambda_{n-m+1}^{{\mathcal{R}}}$ denote the largest zero of the polynomials $p_{n+1}(x)$, $p_{n-m+1}(x,m)$ and $p_{n-m+1}(x,m,\gamma_{{\mathcal{R}}},\delta_{{\mathcal{R}}})$ in the interval $[-1,1]$, respectively. The constants $\kappa_1$, $\kappa_2$ and $\kappa_3$ are chosen such that the optimal polynomials lie in the respective unit sphere and are uniquely determined up to multiplication with a complex scalar of absolute value one. The maximal value of ${\varepsilon}$ in the respective polynomial space is given by $$\begin{aligned} M_{n} &:= \max_{P\in {{\mathbb S}_n}}{\varepsilon}(P) = \lambda_{n+1},\\ M_{n}^m &:= \max_{P\in {{\mathbb S}_{n}^m}}{\varepsilon}(P) = \lambda_{n-m+1}^{m},\\ M_n^{{\mathcal{R}}} &:= \max_{P\in {{\mathbb S}_n^{\mathcal{R}}}}{\varepsilon}(P) = \lambda_{n-m+1}^{{\mathcal{R}}}.\end{aligned}$$ We start out by determining the optimal solution ${\mathcal{P}_n^m}$ for the optimization problem . The formula for the optimal polynomial ${\mathcal{P}_n}$ follows as a special case if we set $m=0$. First of all, Lemma \[Lemma-characterizationofepsP\] states that the mean value ${\varepsilon}(P)$ of a polynomial $P(t) = \sum_{l=m}^n c_l p_l(x)$ can be written as ${\varepsilon}(P) = \tilde{{\mathbf{c}}}^H {\mathbf{J}}_{n}^m \tilde{{\mathbf{c}}}$ with the coefficient vector $\tilde{{\mathbf{c}}} = (c_m, \cdots, c_n)^T$. Thus, maximizing ${\varepsilon}(P)$ with respect to a normed polynomial $P \in {{\mathbb S}_{n}^m}$ is equivalent to maximize the quadratic functional $\tilde{{\mathbf{c}}}^H {\mathbf{J}}_{n}^m \tilde{{\mathbf{c}}}$ subject to $|\tilde{{\mathbf{c}}}|^2 = c_m^2 + c_{m+1}^2 + \cdots + c_n^2 = 1$. If $\lambda_{n-m+1}^{m}$ denotes the largest eigenvalue of the symmetric Jacobi matrix ${\mathbf{J}}_{n}^m$, we have $$\label{equation-extremalmatrix} \tilde{{\mathbf{c}}}^H {\mathbf{J}}_{n}^m \tilde{{\mathbf{c}}} \leq \lambda_{n-m+1}^{m}|\tilde{{\mathbf{c}}}|^2$$ and equality is attained for the eigenvectors corresponding to $\lambda_{n-m+1}^{m}$. Now, the largest eigenvalue of the Jacobi matrix ${\mathbf{J}}_{n}^m$ corresponds exactly with the largest zero of the associated polynomial $p_{n-m+1}(x,m)$ (cf. [@Gautschi Theorem 1.31]). 
Using the recursion formula (\[equation-recursionassociatedsymmetric\]) of the associated polynomials $p_{l}(x,m)$ with $c_m = 1$ the eigenvalue equation ${\mathbf{J}}_n^m \tilde{{\mathbf{c}}} = \lambda_{n-m+1}^{m} \tilde{{\mathbf{c}}}$ yields $$\begin{aligned} c_l = p_{l-m}(\lambda_{n-m+1}^{m},m), \quad l = m, \ldots n.\end{aligned}$$ Finally, we have to normalize the coefficients $c_l$, $m \leq l \leq n$, such that $|\tilde{{\mathbf{c}}}|^2 =1$. This is done by the absolute value of the constant $\kappa_2$. The uniqueness (up to a complex scalar with absolute value $1$) of the optimal polynomial ${\mathcal{P}_n}$ follows from the fact that the largest zero of $p_{n-m+1}(x,m)$ is simple (see [@Chihara Theorem 5.3]). The formula for $M_{n}^m$ follows directly from the estimate in (\[equation-extremalmatrix\]). We consider now the third polynomial space ${\Pi_n^{\mathcal{R}}}$. Lemma \[Lemma-characterizationofepsP\] states that in this case the mean value ${\varepsilon}(P)$ of $P(x) = c_m {\mathcal{R}}(x)+\sum_{l=m+1}^n c_l p_l(x)$ can be written as ${\varepsilon}(P) = \tilde{{\mathbf{c}}}^H {\mathbf{J}}_{n}^m \tilde{{\mathbf{c}}} + ({\varepsilon}({\mathcal{R}})-a_m) |c_m|^2$, with the coefficient vector $\tilde{{\mathbf{c}}} = (c_m, \cdots, c_n)^T$. Maximizing ${\varepsilon}(P)$ with respect to a polynomial $P \in {{\mathbb S}_n^{\mathcal{R}}}$ is therefore equivalent to maximize the quadratic functional $\tilde{{\mathbf{c}}}^H {\mathbf{J}}_{n}^m \tilde{{\mathbf{c}}} + ({\varepsilon}({\mathcal{R}})-a_m) |c_m|^2$ subject to $(\|{\mathcal{R}}\|_{w}^2-1) |c_m|^2+|\tilde{{\mathbf{c}}}|^2 = 1$. Using a Lagrange multiplier $\lambda$ and differentiating the Lagrange function, we obtain the identity $${\mathbf{J}}_n^m \tilde{{\mathbf{c}}} + \gamma_{{\mathcal{R}}} (c_m, 0 , \cdots, 0)^T = \lambda \big(\delta_{{\mathcal{R}}} c_m, c_{m+1}, \cdots, c_n\big)^T$$ as a necessary condition for the maximum, where $\gamma_{{\mathcal{R}}} = {\varepsilon}({\mathcal{R}})-a_m$ and $\delta_{{\mathcal{R}}} = \|{\mathcal{R}}\|_{w}^2$. By the equation , this system of equations is related to the three-term recursion formula (\[equation-recursionassociatedscaledsymmetric\]) of the scaled co-recursive associated polynomials $p_l(x,m,\gamma_{{\mathcal{R}}},\delta_{{\mathcal{R}}})$. In particular, the value $\lambda$ corresponds to a root of $p_{n-m+1}(x,m,\gamma_{{\mathcal{R}}},\delta_{{\mathcal{R}}})$. Moreover, the maximum of $\tilde{{\mathbf{c}}}^H {\mathbf{J}}_{n}^m \tilde{{\mathbf{c}}} + \gamma_{{\mathcal{R}}} |c_m|^2$ is attained for the largest root $\lambda = \lambda_{n-m+1}^{{\mathcal{R}}}$ of $p_{n-m+1}(x,m,\gamma_{{\mathcal{R}}},\delta_{{\mathcal{R}}})$ and the corresponding eigenvector $$\tilde{{\mathbf{c}}} = \kappa_3 \Big( 1, p_{1}(\lambda_{n-m+1}^{{\mathcal{R}}},m,\gamma_{{\mathcal{R}}},\delta_{{\mathcal{R}}}), \ldots, p_{n-m}(\lambda_{n-m+1}^{{\mathcal{R}}},m,\gamma_{{\mathcal{R}}},\delta_{{\mathcal{R}}}) \Big)^T,$$ where the constant $\kappa_3$ is chosen such that the condition $(\delta_{{\mathcal{R}}}-1) |c_m|^2+|\tilde{{\mathbf{c}}}|^2 = 1$ is satisfied. The uniqueness of the polynomial ${\mathcal{P}_n^{\mathcal{R}}}$ (up to a complex scalar of absolute value one) follows from the simplicity of the largest root $\lambda_{n-m+1}^{{\mathcal{R}}}$ of the polynomials $p_l(x,m,\gamma_{{\mathcal{R}}},\delta_{{\mathcal{R}}})$ (see [@Chihara Theorem 5.3]). 
From the above argumentation it is also clear that the maximal value $M_n^{{\mathcal{R}}}$ is precisely the largest eigenvalue $\lambda_{n-m+1}^{{\mathcal{R}}}$. Explicit expression for the optimally space localized polynomials {#Section-explicitexpressions} ================================================================= Our next goal is to find explicit expressions for the optimal polynomials ${\mathcal{P}_n}$, ${\mathcal{P}_n^m}$ and ${\mathcal{P}_n^{\mathcal{R}}}$ derived in Theorem \[Theorem-optimalpolynomial\]. To this end, we need a Christoffel-Darboux type formula for the associated polynomials $p_l(x,m)$ and $p_l(x,m,\gamma,\delta)$. \[Lemma-ChristoffelDarbouxassociated\] Let $p_l(x,m)$ and $p_l(x,m,\gamma,\delta)$ be the associated and the scaled co-recursive associated polynomials as defined in and . Then, for $x \neq y$, the following Christoffel-Darboux type formulas hold: $$\begin{aligned} \label{equation-ChristoffelDarbouxassociated} \sum_{k=m}^n & p_k(x) p_{k-m}(y,m) \\ &= b_{n+1}\frac{p_{n+1}(x) p_{n-m}(y,m)-p_{n-m+1}(y,m)p_n(x)}{x-y} + b_m \frac{p_{m-1}(x)}{x-y}, \notag \displaybreak[0] \\ \sum_{k=m}^n & p_k(x) p_{k-m}(y,m,\gamma,\delta) \label{equation-ChristoffelDarbouxscaledassociated} \\ &= b_{n+1}\frac{p_{n+1}(x) p_{n-m}(y,m,\gamma,\delta)-p_{n-m+1}(y,m,\gamma,\delta)p_n(x)}{x-y} \notag \\ & \hspace{1cm}+ \frac{p_m(x)((\delta-1) y-\gamma)}{x-y}+ b_m \frac{p_{m-1}(x)}{x-y}. \notag\end{aligned}$$ We follow the lines of the proof of the original Christoffel-Darboux formula (see [@Chihara Theorem 4.5]). By (\[equation-recursionorthonormal\]) and (\[equation-recursionassociatedsymmetric\]), we have for $k \geq m$ the identities $$\begin{aligned} x p_k&(x)p_{k-m}(y,m) \\&= b_{k+1} p_{k+1}(x)p_{k-m}(y,m)+ a_k p_k(x) p_{k-m}(y,m) + b_k p_{k-1}(x) p_{k-m}(y,m),\\ y p_k&(x)p_{k-m}(y,m) \\&= b_{k+1} p_{k}(x)p_{k-m+1}(y,m)+ a_k p_k(x) p_{k-m}(y,m) + b_k p_{k}(x) p_{k-m-1}(y,m).\end{aligned}$$ Subtracting the second equation from the first, we get $$\begin{aligned} (x-y)& p_k(x)p_{k-m}(y,m) \\ & = b_{k+1}\big(p_{k+1}(x)p_{k-m}(y,m) - p_{k}(x)p_{k-m+1}(y,m)\big) \\ & \quad - b_{k} \big(p_{k}(x)p_{k-m-1}(y,m) - p_{k-1}(x)p_{k-m}(y,m)\big).\end{aligned}$$ Let $$F_k(x,y) = b_{k+1} \frac{p_{k+1}(x)p_{k-m}(y,m) - p_{k}(x)p_{k-m+1}(y,m)}{x-y}.$$ Then, the last equation can be rewritten as $$p_k(x)p_{k-m}(y,m)= F_k(x,y) - F_{k-1}(x,y), \quad k \geq m,$$ where $F_{m-1}(x,y) = - b_m p_{m-1}(x)$. Summing the latter from $m$ to $n$, we obtain (\[equation-ChristoffelDarbouxassociated\]).\ Analogously, we get for the scaled co-recursive associated polynomials $$\begin{aligned} p_k(x)p_{k-m}(y,m,\gamma,\delta) &= G_k(x,y) - G_{k-1}(x,y), \quad k \geq m+1, \\ p_m(x)p_{0}(y,m,\gamma,\delta) &= p_m(x), \\\end{aligned}$$ where $$\begin{aligned} G_k(x,y) &= b_{k+1}\frac{p_{k+1}(x)p_{k-m}(y,m,\gamma,\delta) - p_{k}(x)p_{k-m+1}(y,m,\gamma,\delta)}{x-y}, \\ G_m(x,y) &= \frac{b_{m+1}p_{m+1}(x) - p_{m}(x)(\delta y-a_m - \gamma)}{x-y}, \quad k \geq m+1.\end{aligned}$$ Then, summing from $m$ to $n$, we get $$\begin{aligned} \sum_{k=m}^n p_k&(x)p_{k-m}(y,m,\gamma,\delta) = \sum_{k=m+1}^n (G_{k}(x,y)-G_{k-1}(x,y)) + p_m(x) \\ = & G_n(x,y) - \frac{ b_{m+1} p_{m+1}(x) + p_{m}(x)(\delta y-a_m - \gamma)}{x-y} + \frac{p_m(x)(x-y)}{x-y} \\ =& G_n(x,y) + \frac{p_m(x)((\delta-1) y-\gamma)}{x-y}+ b_m \frac{p_{m-1}(x)}{x-y}.\end{aligned}$$ Hence, we obtain formula . 
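Before stating the explicit expressions, here is a small numerical sketch of Theorem \[Theorem-optimalpolynomial\], again for the Chebyshev weight of the first kind: the coefficients of ${\mathcal{P}_n}$ are the entries of the eigenvector of ${\mathbf{J}}_n$ belonging to its largest eigenvalue, and the maximal value $M_n$ equals the largest zero $\lambda_{n+1} = \cos\frac{\pi}{2n+2}$ of $p_{n+1}$.

``` {.python}
# Sketch: optimal coefficients for the Chebyshev weight via the largest
# eigenvalue/eigenvector of the Jacobi matrix J_n (Theorem above).
import numpy as np

n = 6
b = np.zeros(n + 1); b[1] = 1 / np.sqrt(2); b[2:] = 0.5
J = np.diag(b[1:], 1) + np.diag(b[1:], -1)

evals, evecs = np.linalg.eigh(J)          # eigenvalues in ascending order
lam, c = evals[-1], evecs[:, -1]

print("M_n = max eps(P):", lam)
print("cos(pi/(2n+2))  :", np.cos(np.pi / (2 * n + 2)))
print("coefficients of the optimal polynomial:", np.round(c, 4))
```

Up to normalization (and an overall sign from the eigensolver), the printed coefficients coincide with $p_l(\lambda_{n+1})$, in accordance with the theorem.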
As a direct consequence of the Christoffel-Darboux type formulas in Lemma \[Lemma-ChristoffelDarbouxassociated\], we get the following explicit formulas for the optimal polynomials in Theorem \[Theorem-optimalpolynomial\]: \[Corollary-explicitformoptimalpolynomials\] The optimal polynomials ${\mathcal{P}_n}$, ${\mathcal{P}_n^m}$ and ${\mathcal{P}_n^{\mathcal{R}}}$ in Theorem \[Theorem-optimalpolynomial\] have the explicit form $$\begin{aligned} {\mathcal{P}_n}(x) &= \kappa_1 b_{n+1} \frac{p_{n+1}(x) p_{n}(\lambda_{n+1})}{x -\lambda_{n+1}}, \displaybreak[0]\\ {\mathcal{P}_n^m}(x) &= \kappa_2 \frac{ b_{n+1}p_{n+1}(x) p_{n-m}(\lambda_{n-m+1}^m,m)+ b_m p_{m-1}(x )}{x -\lambda_{n-m+1}^{m}}, \displaybreak[0]\\ {\mathcal{P}_n^{\mathcal{R}}}(x) &= \kappa_3 \left( {\mathcal{R}}(x) + \frac{b_{n+1} p_{n+1}(x) p_{n-m}(\lambda_{n-m+1}^{{\mathcal{R}}},m,\gamma_{{\mathcal{R}}},\delta_{{\mathcal{R}}})} {x -\lambda_{n-m+1}^{{\mathcal{R}}}} \right. \displaybreak[0] \\ & \hspace{2cm}+ \left. \frac{p_m(x )((\delta_{{\mathcal{R}}}-1) \lambda_{n-m+1}^{{\mathcal{R}}}-\gamma_{{\mathcal{R}}})+ b_m p_{m-1}(x )} {x -\lambda_{n-m+1}^{{\mathcal{R}}}}\right),\end{aligned}$$ where the constants $\kappa_1$, $\kappa_2$, $\kappa_3$ and the roots $\lambda_{n+1}$ $\lambda_{n-m+1}^{m}$ and $\lambda_{n-m+1}^{{\mathcal{R}}}$ are given as in Theorem \[Theorem-optimalpolynomial\]. Optimally space localized polynomials for Jacobi expansions {#Section-optimalpolynomials} =========================================================== In this section, we will see that in the case of the Jacobi polynomials the generalized mean value ${\varepsilon}(f)$ is related to an uncertainty principle and that the optimal polynomials ${\mathcal{P}_n}$, ${\mathcal{P}_n^m}$ and ${\mathcal{P}_n^{\mathcal{R}}}$ minimize the term ${\operatorname{var}}_S(f)$ for the position variance of the uncertainty principle. In the following, the weight function $w = w_{\alpha\beta}$ under consideration is the Jacobi weight function $$w_{\alpha\beta}(x) = (1-x)^{\alpha}(1+x)^{\beta}, \qquad x \in [-1,1], \quad \alpha, \beta \geq -\frac{1}{2}.$$ The corresponding orthonormal polynomials $p_n^{(\alpha,\beta)}(x)$ are called Jacobi polynomials and satisfy the differential equation $L_{\alpha\beta} p_n^{(\alpha,\beta)} = -n(n + \alpha + \beta + 1) p_n^{(\alpha,\beta)}$, where the second-order differential operator $L_{\alpha\beta}$ is given as (cf. [@Szegö Theorem 4.2.2.]) $$L_{\alpha\beta} f(x) = (1-x^2) \frac{d^2}{dx^2} f(x) + (\beta-\alpha + x (\alpha+\beta+2)) \frac{d}{dx} f(x).$$ The next theorem states a well-known uncertainty principle for functions having an expansion in terms of Jacobi polynomials. \[Theorem-uncertaintyJacobi\] Let $f\in C^2([-1,1]) \cap L^2([-1,1],w_{\alpha\beta})$ such that $\|f\|_{w_{\alpha\beta}} = 1$. Further, let $$(\alpha-\beta)+(\alpha+\beta+2){\varepsilon}(f) \neq 0.$$ Then, the following uncertainty inequality holds: $$\label{equation-uncertaintyJacobi} \frac{1-{\varepsilon}(f)^2}{|\frac{\alpha-\beta}{\alpha+\beta+2}+{\varepsilon}(f)|^2} \cdot \langle - L_{\alpha\beta} f, f \rangle_{w_{\alpha\beta}} > \frac{(\alpha+\beta+2)^2}{4}.$$ The constant $\frac{(\alpha+\beta+2)^2}{4}$ on the right hand side of $\eqref{equation-uncertaintyJacobi}$ is optimal. Theorem \[Theorem-uncertaintyJacobi\] has been proven for ultraspherical expansions in [@RoeslerVoit1997] and was generalized to the Jacobi case in [@LiLiu2003]. Hereby, the notation in [@LiLiu2003] differs slightly from the notation above. 
For a more detailed discussion of Theorem \[Theorem-uncertaintyJacobi\] see also [@Erb2010], [@ErbDiss], [@GohGoodman2004] and [@Selig2002]. The terms $$\begin{aligned} \label{equation-positionvarianceJacobi} {\operatorname{var}}_{S}(f) &:= \frac{1-{\varepsilon}(f)^2}{\big(\frac{\alpha-\beta}{\alpha +\beta+2}+ {\varepsilon}(f)\big)^2}, \\ {\operatorname{var}}_{F}(f) &:= \langle - L_{\alpha\beta} f, f \rangle_{w_{\alpha\beta}}\end{aligned}$$ in inequality are called the position and the frequency variance of the function $f$, respectively. As the generalized mean value ${\varepsilon}(f)$, also the position variance ${\operatorname{var}}_{S}(f)$ defines a measure for the localization of the function $f$ at the boundary points of the interval $[-1,1]$. In particular, the more mass of the $L^2$-density $f$ is concentrated at the boundary points, the smaller the position variance ${\operatorname{var}}_S(f)$ gets. The next Theorem shows that both measures are in principle equivalent. The only thing one has to take account of is that, in contrast to ${\varepsilon}(f)$, the position variance ${\operatorname{var}}_S(f)$ does not differ between the two boundary points. Therefore, one has to restrict the set of admissible functions in the optimization problem: $$\begin{aligned} {\mathcal{L}}_{n} &:= \{P \in {{\mathbb S}_n}: \;{\varepsilon}(P) > \lambda_1 \}, \\ {\mathcal{L}}_{n}^m &:= \{P \in {{\mathbb S}_{n}^m}: \;{\varepsilon}(P) > \lambda_1 \}, \\ {\mathcal{L}}_n^{{\mathcal{R}}} &:= \{P \in {{\mathbb S}_n^{\mathcal{R}}}: \;{\varepsilon}(P) > \lambda_1 \}, \\\end{aligned}$$ where $\lambda_1 = \frac{\beta-\alpha}{2+\alpha+\beta}$ corresponds to the sole root of the Jacobi polynomial $p_1^{(\alpha,\beta)}(x)$ of degree $1$. \[Theorem-minequivalenttomax\] If the sets ${\mathcal{L}}_{n}$, ${\mathcal{L}}_{n}^{m}$ and ${\mathcal{L}}_{n}^{{\mathcal{R}}}$ are nonempty, then $$\begin{aligned} \arg\min_{P \in {\mathcal{L}}_{n}} {\operatorname{var}}_{S}(P) &= \arg\max_{P \in {\mathcal{L}}_{n}} {\varepsilon}(P) = {\mathcal{P}_n},\\ \arg\min_{P \in {\mathcal{L}}_{n}^m} {\operatorname{var}}_{S}(P) &= \arg\max_{P \in {\mathcal{L}}_{n}^m} {\varepsilon}(P) = {\mathcal{P}_n^m},\\ \arg\min_{P \in {\mathcal{L}}_n^{{\mathcal{R}}}} {\operatorname{var}}_{S}(P) &= \arg\max_{P \in {\mathcal{L}}_n^{{\mathcal{R}}}} {\varepsilon}(P) = {\mathcal{P}_n^{\mathcal{R}}}.\end{aligned}$$ Hence, from all polynomials in the sets ${\mathcal{L}}_{n}$, ${\mathcal{L}}_{n}^m$, ${\mathcal{L}}_{n}^{\mathcal{R}}$, the optimal polynomials ${\mathcal{P}_n}$, ${\mathcal{P}_n^m}$, ${\mathcal{P}_n^{\mathcal{R}}}$ minimize the position variance ${\operatorname{var}}_S$. We consider the space variance ${\operatorname{var}}_{S}$ as a function of $\lambda = {\varepsilon}(f)$. We have $$\begin{aligned} {\operatorname{var}}_{S}(\lambda) &= \frac{1-\lambda^2}{(\lambda-\lambda_1)^2},\\ \frac{d {\operatorname{var}}_{S}}{d\lambda} (\lambda) &= \frac{-2(\lambda-\lambda_1)\lambda-2(1-\lambda^2)}{(\lambda-\lambda_1)^3} = \frac{-2(1-\lambda_1 \lambda)}{(\lambda-\lambda_1)^3}.\end{aligned}$$ Therefore, the derivative $\frac{d}{d\lambda} {\operatorname{var}}_{S}$ is strictly decreasing on the open interval $(\lambda_1,1)$ and strictly increasing on $(-1,\lambda_1)$. So, for $P \in {\mathcal{L}}_{n}, {\mathcal{L}}_{n}^m,{\mathcal{L}}_n^{{\mathcal{R}}}$, maximizing ${\varepsilon}(P)$ yields the same result as minimizing the position variance ${\operatorname{var}}_{S}(P)$. 
\[Remark-nonemptinessofsetsJacobi\] Whereas it can not be guaranteed that the sets ${\mathcal{L}}_{n}^m$ and ${\mathcal{L}}_n^{{\mathcal{R}}}$ are nonempty, the non-emptiness of the sets ${\mathcal{L}}_{n}$, $n \geq 1$, is a consequence of the interlacing property of the zeros of the Jacobi polynomials (cf. [@Szegö Theorem 3.3.2], [@Chihara Theorem 5.3]). Namely, this interlacing property implies that ${\varepsilon}({\mathcal{P}_n}) = \lambda_{n+1} > \lambda_n > \ldots > \lambda_1$. It can be shown that the uncertainty product ${\operatorname{var}}_S({\mathcal{P}_n}) \cdot {\operatorname{var}}_F({\mathcal{P}_n})$ in of the optimal polynomials ${\mathcal{P}_n}$ is uniformly bounded by a constant independent of the degree $n$. Hence, the polynomials ${\mathcal{P}_n}$ are not only well localized in space, but also in space and frequency. The quite technical proof can be found in [@ErbDiss]. \[example-optimallocalizedchebyshev\] As a final example, we consider the orthonormal Chebyshev polynomials $t_n$ of first kind, i.e., the Jacobi polynomials $p_n^{(\alpha,\beta)}$ with $\alpha = \beta = -\frac{1}{2}$ and the weight function $w_{\alpha\beta}(x) = 1$. The orthonormal Chebyshev polynomials are explicitly given as (see [@Gautschi p. 28-29]) $$t_0(x) = {\textstyle}\frac{1}{\sqrt{\pi}}, \quad t_n(x) = {\textstyle}\sqrt{\frac{2}{\pi}} \cos n t, \quad n \geq 1,$$ where $x = \cos t$. The largest zero of the Chebyshev polynomials $t_{n+1}$ is given by $\lambda_{n+1} = \cos \frac{\pi}{2n+2}$ (see [@Szegö (6.3.5)]). The normalized associated polynomials $t_n(x,m)$, $m \geq 1$, correspond to the Chebyshev polynomials $u_n$ of the second kind given by (see [@Gautschi p. 28-29]) $$u_n(x) = \sqrt{\frac{2}{\pi}} \frac{\sin (n+1) t}{\sin t}, \quad n \geq 0.$$ The largest zero of the polynomials $u_{n+1}$ is given by $\lambda_{n+1} = \cos \frac{\pi}{n+2}$. So, in the case of the Chebyshev polynomials of first kind, we get for the optimally space localized polynomials the formulas $$\begin{aligned} \mathcal{T}_n(x) &= \frac{\kappa_1}{\pi} \left( 1 + 2 \sum_{k=1}^n \cos\frac{k \pi }{2n+2} \cos kt\right) = \frac{\kappa_1}{\pi} \frac{\cos (n+1)t \, \cos \frac{n\pi}{2n+2}}{\cos t -\cos\frac{\pi}{2n+2}}. \label{equation-optimalchebyshev}\\ \mathcal{T}_n^m (x) &= \frac{2\kappa_2}{\pi } \left(\sum_{k=m}^n \frac{\sin\frac{(k-m+1) \pi }{n-m+2} }{\sin\frac{\pi }{n-m+2}} \cos kt \right) = \frac{2\kappa_2}{\pi}\frac{\cos (\frac{n-m+2 }{2} t) \cos (\frac{n+m }{2} t)}{ \cos t - \cos \frac{\pi }{n-m+2}}. \label{equation-optimalchebyshevwavelet}\end{aligned}$$ These optimal polynomials $\mathcal{T}_n$ and $\mathcal{T}_n^m(x)$ in combination with the Breitenberger uncertainty principle on the unit circle were intensively studied by Rauhut et al. in [@PrestinQuakRauhutSelig2003] and [@Rauhut2005]. Optimally space localized polynomials for Hermite expansions ============================================================ As an example of an orthogonal expansion in a non-compact setting we consider the Hermite polynomials on the real line. The aim of this section is to construct polynomials having an expansion in terms of the Hermite polynomials that are optimally localized at the point $x = 0$. In the following, we will see that most of the theory of the previous sections can be applied also in this case, although with slight modifications. The Hilbert space under consideration is now $L^2({{\mathbb R}},w_H)$ with the weight function $w_H(x) = e^{-x^2}$. 
The corresponding orthonormal polynomials $(h_l)_{l=0}^\infty$, defining an orthonormal basis of $L^2({{\mathbb R}},w_H)$, are called the (orthonormal) Hermite polynomials on ${{\mathbb R}}$. As in Definition , we introduce the polynomial spaces ${\Pi_n}$, ${\Pi_{n}^m}$ and ${\Pi_n^{\mathcal{R}}}$, and the corresponding unit spheres ${{\mathbb S}_n}$, ${{\mathbb S}_{n}^m}$ and ${{\mathbb S}_n^{\mathcal{R}}}$ for the Hermite polynomials $h_n$. The goal is, similar as in Section \[Section-optimallylocalizedpolynomials\], to find those polynomials from ${{\mathbb S}_n}$, ${{\mathbb S}_{n}^m}$ and ${{\mathbb S}_n^{\mathcal{R}}}$ that minimize the position variance $${\operatorname{var}}_S(f) := \int_{{{\mathbb R}}} x^2 |f(x)|^2 e^{-x^2} dx.$$ Since in the setting of the Hermite polynomials the calculations will be more complex, we will omit the case $P \in {\Pi_n^{\mathcal{R}}}$. Also, we will assume that $n$ and $m$ are even integers. The minimization problems then read as follows: $$\begin{aligned} {\mathcal{H}_n}&= \arg\min_{P \in {{\mathbb S}_n}} {\operatorname{var}}_S(P), \label{optimalhermitea}\\ {\mathcal{H}_n^m}&= \arg\min_{P \in {{\mathbb S}_{n}^m}} {\operatorname{var}}_S(P). \label{optimalhermiteb}\end{aligned}$$ To get the explicit solutions, we will make use of the Laguerre polynomials $p_l^{(\alpha)}$ that form an orthonormal basis of the Hilbert space $L^2([0,\infty),w_{\alpha})$ with the weight function $w_{\alpha}(x) = x^{\alpha} e^{-x}$, $\alpha > -1$. The orthonormal Laguerre polynomials $p_l^{(\alpha)}$ and the Hermite polynomials $h_l$ are correlated by the following two formulas (see [@Ismail Section 4.6]): $$\begin{aligned} h_{2l}(x) &= p_l^{(-1/2)}(x^2), & l = 0,1,2, \ldots, \label{equation-correlationhermiteLaguerreeven}\\ h_{2l+1}(x) &= x \, p_l^{(1/2)}(x^2), & l= 0,1,2, \ldots. \label{equation-correlationhermiteLaguerreodd}\end{aligned}$$ The next Lemma gives a characterization of the position variance ${\operatorname{var}}_S(P)$ in terms of the expansions coefficients of a polynomial $P$. \[Lemma-characterizationofvarS\] For a polynomial $P(x) = {\displaystyle}\sum_{l=0}^n c_l h_l(x)$, we get the formulas $$\begin{aligned} {\operatorname{var}}_S(P) &= {\textstyle}{\mathbf{c}}_e^H {\mathbf{J}}(-\frac{1}{2})_{\frac{n}{2}} {\mathbf{c}}_e + {\mathbf{c}}_o^H {\mathbf{J}}(\frac{1}{2})_{\frac{n}{2}-1} {\mathbf{c}}_o, & \text{if} \quad P \in {\Pi_n}, \\ {\operatorname{var}}_S(P) &= {\textstyle}\tilde{{\mathbf{c}}}_e^H {\mathbf{J}}(-\frac{1}{2})_{\frac{n}{2}}^{\frac{m}{2}} \tilde{{\mathbf{c}}}_e + \tilde{{\mathbf{c}}}_o^H {\mathbf{J}}(\frac{1}{2})_{\frac{n}{2}-1}^{\frac{m}{2}} \tilde{{\mathbf{c}}}_o, & \text{if} \quad P \in {\Pi_{n}^m},\end{aligned}$$ with the coefficient vectors $$\begin{aligned} {\mathbf{c}}_e &= (c_0, c_2, \ldots, c_n)^T, & {\mathbf{c}}_o &= (c_1, c_3, \ldots, c_{n-1})^T, \\ \tilde{{\mathbf{c}}}_e &= (c_m, c_{m+2}, \ldots, c_n)^T, & \tilde{{\mathbf{c}}}_o &= (c_{m+1}, c_{m+3}, \ldots, c_{n-1})^T,\end{aligned}$$ and the matrices ${\mathbf{J}}(-\frac{1}{2})_{\frac{n}{2}}^{\frac{m}{2}}$ and ${\mathbf{J}}(\frac{1}{2})_{\frac{n}{2}-1}^{\frac{m}{2}}$ corresponding to the Jacobi matrices of the associated Laguerre polynomials $p_l^{(-\frac{1}{2})}(x,\frac{m}{2})$ and $p_l^{(\frac{1}{2})}(x,\frac{m}{2})$, respectively. 
Using the correlations and between the Hermite and Laguerre polynomials as well as the three-term recurrence formulas (\[equation-recursionorthonormal\]) of the Laguerre polynomials $p_l^{(-\frac{1}{2})}(x)$ and $p_l^{(\frac{1}{2})}(x)$, we get for the polynomial $P(x) = {\displaystyle}\sum_{l=0}^n c_l h_l(x)$ the formula $$\begin{aligned} {\operatorname{var}}_S& (P) = \int_{{\mathbb R}}x^2 \Big|\sum_{l=0}^n c_l h_l(x)\Big|^2 e^{-x^2} dx \\ &= \int_{{\mathbb R}}\Big(\sum_{l=0}^{\frac{n}{2}} c_{2l} x^2 p_l^{(- \frac{1}{2})}(x^2)+ \sum_{l=0}^{\frac{n}{2}-1} c_{2l+1} x^3 p_l^{(\frac{1}{2})}(x^2)\Big) \overline{\Big(\sum_{l=0}^n c_l h_l(x)\Big)} e^{-x^2} dx \\ &= \int_{{\mathbb R}}\Big(\sum_{l=0}^{\frac{n}{2}} c_{2l} {\textstyle}\big(b_{l+1}^{(-\frac{1}{2})} p_{l+1}^{(-\frac{1}{2})}(x^2) + a_l^{(-\frac{1}{2})} p_l^{(-\frac{1}{2})}(x^2)+ b_{l}^{(-\frac{1}{2})} p_{l-1}^{(- \frac{1}{2})}(x^2) \big) \\ & \quad + \sum_{l=0}^{\frac{n}{2}-1} c_{2l+1} x \big( {\textstyle}b_{l+1}^{(\frac{1}{2})} p_{l+1}^{(\frac{1}{2})}(x^2) + a_l^{(\frac{1}{2})} p_l^{(\frac{1}{2})}(x^2)+ b_{l}^{(\frac{1}{2})} p_{l-1}^{(\frac{1}{2})}(x^2) \big)\Big) \overline{\Big(\sum_{l=0}^n c_l h_l(x)\Big)} e^{-x^2} dx \\ &= \int_{{\mathbb R}}\Big(\sum_{l=0}^{\frac{n}{2}} c_{2l}{\textstyle}\big(b_{l+1}^{(-\frac{1}{2})} h_{2l+2}(x) + a_l^{(-\frac{1}{2})} h_{2l}(x) + b_{l}^{(-\frac{1}{2})} h_{2l-2}(x) \big) \\ & \quad + \sum_{l=0}^{\frac{n}{2}-1} c_{2l+1} {\textstyle}\big(b_{l+1}^{(\frac{1}{2})} h_{2l+3}(x) + a_l^{(\frac{1}{2})} h_{2l+1}(x)+ b_{l}^{(\frac{1}{2})} h_{2l-1}(x) \big)\Big) \overline{\Big(\sum_{l=0}^n c_l h_l(x)\Big)} e^{-x^2} dx.\end{aligned}$$ Next, using the orthonormality relations of the Hermite polynomials $h_l$, we can conclude for $P \in {\Pi_n}$ $$\begin{aligned} {\operatorname{var}}_S(P) &= {\mathbf{c}}_e^H {\textstyle}{\mathbf{J}}(-\frac{1}{2})_{\frac{n}{2}} {\mathbf{c}}_e + {\mathbf{c}}_o^H {\mathbf{J}}(\frac{1}{2})_{\frac{n}{2}-1} {\mathbf{c}}_o.\end{aligned}$$ If $c_0 = \ldots = c_{m-1} = 0$, we get the assertion for the polynomials $P$ in ${\Pi_{n}^m}$. The solutions of the optimization problems and now read as follows. \[Theorem-optimalhermite\] The polynomials solving the minimization problems and are given by $$\begin{aligned} {\mathcal{H}_n}(x) &= \kappa_1 \sum_{l=0}^{\frac{n}{2}} p_l^{(-\frac{1}{2})}(\lambda_{\frac{n}{2}+1})\, h_{2l}(x), \displaybreak[0] \label{equation-optimalpolynomialhermitea}\\ {\mathcal{H}_n^m}(x) &= \kappa_2 \sum_{l=\frac{m}{2}}^\frac{n}{2} {\textstyle}p_{l-\frac{m}{2}}^{(-\frac{1}{2})}(\lambda_{\frac{n-m}{2}+1}^{\frac{m}{2}},\frac{m}{2})\, h_{2l}(x), \displaybreak[0] \label{equation-optimalpolynomialhermiteb}\end{aligned}$$ where $p_l^{(-\frac{1}{2})}(x,\frac{m}{2})$ denote the associated Laguerre polynomials with parameter $\alpha = -\frac{1}{2}$. The values $\lambda_{\frac{n}{2}+1}$ and $\lambda_{\frac{n-m}{2}+1}^{\frac{m}{2}}$ denote the smallest zero of the polynomials $p_{\frac{n}{2}+1}^{(-\frac{1}{2})}(x)$ and $p_{\frac{n-m}{2}+1}^{(-\frac{1}{2})}(x,\frac{m}{2})$, respectively. The constants $\kappa_1$ and $\kappa_2$ are chosen such that the optimal polynomials lie in the respective unit sphere and are uniquely determined up to multiplication with a complex scalar of absolute value one. 
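To illustrate the block structure in Lemma \[Lemma-characterizationofvarS\] numerically, one can build the two Jacobi blocks from the Laguerre recurrence coefficients; for the orthonormal Laguerre polynomials these are $a_l = 2l+\alpha+1$ and $b_l = \sqrt{l(l+\alpha)}$, a standard fact that we use here without proof. The minimal sketch below shows that the smaller of the two smallest eigenvalues comes from the $\alpha=-\frac{1}{2}$ block, in line with the theorem that follows.

``` {.python}
# Sketch: the two Jacobi blocks of the Lemma above for m = 0, built from the
# Laguerre recurrence coefficients a_l = 2l + alpha + 1, b_l = sqrt(l(l + alpha)).
import numpy as np

def laguerre_jacobi(alpha, size):
    l = np.arange(size)
    a = 2 * l + alpha + 1
    b = np.sqrt(l[1:] * (l[1:] + alpha))
    return np.diag(a) + np.diag(b, 1) + np.diag(b, -1)

n = 10                                            # even degree, m = 0
J_even = laguerre_jacobi(-0.5, n // 2 + 1)        # block acting on the even coefficients
J_odd = laguerre_jacobi(+0.5, n // 2)             # block acting on the odd coefficients

print("smallest eigenvalue, alpha = -1/2 block:", np.linalg.eigvalsh(J_even)[0])
print("smallest eigenvalue, alpha = +1/2 block:", np.linalg.eigvalsh(J_odd)[0])
```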
The minimal value of ${\operatorname{var}}_S(P)$ in the respective polynomial spaces is given by $$\begin{aligned} M_{n} &:= \min_{P\in {{\mathbb S}_n}}{\operatorname{var}}_S(P) = \lambda_{\frac{n}{2}+1},\\ M_{n}^m &:= \min_{P\in {{\mathbb S}_{n}^m}}{\operatorname{var}}_S(P) = \lambda_{\frac{n-m}{2}+1}^{\frac{m}{2}}.\end{aligned}$$ In the following, we will determine the optimal solution ${\mathcal{H}_n^m}$ for the minimization problem . The formula for the optimal polynomial ${\mathcal{H}_n}$ follows as a special case when $m=0$. By Lemma \[Lemma-characterizationofvarS\], the position variance ${\operatorname{var}}_S(P)$ of a polynomial $P(x) = \sum_{l=m}^n c_l h_l(x)$ can be written as ${\operatorname{var}}_S(P) = {\textstyle}\tilde{{\mathbf{c}}}_e^H {\mathbf{J}}(-\frac{1}{2})_{\frac{n}{2}}^{\frac{m}{2}} \tilde{{\mathbf{c}}}_e + \tilde{{\mathbf{c}}}_o^H {\mathbf{J}}(\frac{1}{2})_{\frac{n}{2}-1}^{\frac{m}{2}} \tilde{{\mathbf{c}}}_o$ with the coefficient vectors $\tilde{{\mathbf{c}}}_e$ and $\tilde{{\mathbf{c}}}_e$ given in Lemma \[Lemma-characterizationofvarS\]. Hence, minimizing ${\operatorname{var}}_S(P)$ with respect to a normed polynomial $P \in {{\mathbb S}_{n}^m}$ is equivalent to minimize the quadratic functional $${\textstyle}\tilde{{\mathbf{c}}}_e^H {\mathbf{J}}(-\frac{1}{2})_{\frac{n}{2}}^{\frac{m}{2}} \tilde{{\mathbf{c}}}_e + \tilde{{\mathbf{c}}}_o^H {\mathbf{J}}(\frac{1}{2})_{\frac{n}{2}-1}^{\frac{m}{2}} \tilde{{\mathbf{c}}}_o \quad \text{subject to} \quad |\tilde{{\mathbf{c}}}_e|^2+|\tilde{{\mathbf{c}}}_o|^2 = c_m^2 + c_{m+1}^2 + \cdots + c_n^2 = 1.$$ \[equation-minimizationproblemcoefficients\] The minimization problem has a block matrix structure with the two matrices ${\mathbf{J}}(-\frac{1}{2})_{\frac{n}{2}}^{\frac{m}{2}}$ and ${\mathbf{J}}(\frac{1}{2})_{\frac{n}{2}-1}^{\frac{m}{2}}$. To solve the problem, we have to determine the smallest eigenvalue of both matrices. The eigenvalues of the two matrices ${\mathbf{J}}(-\frac{1}{2})_{\frac{n}{2}}^{\frac{m}{2}}$ and ${\mathbf{J}}(\frac{1}{2})_{\frac{n}{2}-1}^{\frac{m}{2}}$ correspond exactly with the zeros of the Laguerre polynomials $p_{\frac{n-m}{2}+1}^{(-\frac{1}{2})}(x,\frac{m}{2})$ and $p_{\frac{n-m}{2}}^{(\frac{1}{2})}(x,\frac{m}{2})$. By a functional analytic method based on the Hellmann-Feynman Theorem (see [@Ismail1987], [@Ismail Section 7.4] and [@ErbTookos2010]) it follows that the smallest zero of the associated Laguerre polynomials $p_l^{(\alpha)}(x,m)$ is an increasing function of the parameter $\alpha$. This result together with the fact that the smallest eigenvalue of $p_{\frac{n-m}{2}+1}^{(-\frac{1}{2})}(x,\frac{m}{2})$ is strictly smaller than the one of $p_{\frac{n-m}{2}}^{(-\frac{1}{2})}(x,\frac{m}{2})$ (see the interlacing property of the orthogonal polynomials [@Szegö Theorem 3.3.2]) implies that the matrix ${\mathbf{J}}(-\frac{1}{2})_{\frac{n}{2}}^{\frac{m}{2}}$ is the one with the smallest eigenvalue. 
Therefore, if $\lambda_{\frac{n-m}{2}+1}^{\frac{m}{2}}$ denotes the smallest eigenvalue of the Jacobi matrix ${\mathbf{J}}(-\frac{1}{2})_{\frac{n}{2}}^{\frac{m}{2}}$, we get $$\label{equation-extremalmatrixHermite} \tilde{{\mathbf{c}}}_e^H {\mathbf{J}}(-\frac{1}{2})_{\frac{n}{2}}^{\frac{m}{2}} \tilde{{\mathbf{c}}}_e + \tilde{{\mathbf{c}}}_o^H {\mathbf{J}}(\frac{1}{2})_{\frac{n}{2}-1}^{\frac{m}{2}} \tilde{{\mathbf{c}}}_o \geq \lambda_{\frac{n-m}{2}+1}^{\frac{m}{2}} (|\tilde{{\mathbf{c}}}_e|^2+ |\tilde{{\mathbf{c}}}_o|^2)$$ and equality is attained for the eigenvectors corresponding to $\lambda_{\frac{n-m}{2}+1}^{\frac{m}{2}}$. Using the recursion formula (\[equation-recursionassociatedsymmetric\]) of the associated polynomials $p_{l}^{(-\frac{1}{2})}(x,\frac{m}{2})$ with $c_m = 1$ the eigenvalue equation ${\mathbf{J}}(-\frac{1}{2})_{\frac{n}{2}}^{\frac{m}{2}} \tilde{{\mathbf{c}}}_e = \lambda_{{\frac{n-m}{2}}+1}^{{\frac{m}{2}}} \tilde{{\mathbf{c}}_e}$ yields $$\begin{aligned} c_{2l} &= {\textstyle}p_{l-\frac{m}{2}}(\lambda_{\frac{n-m}{2}+1}^{\frac{m}{2}},\frac{m}{2}), & {\textstyle}l = \frac{m}{2}, \ldots, \frac{n}{2},\\ c_{2l+1} &= 0, & {\textstyle}l = \frac{m}{2}, \ldots, \frac{n}{2}-1.\end{aligned}$$ Finally, we have to normalize the coefficients $c_l$, $m \leq l \leq n$, such that $|\tilde{{\mathbf{c}}}_e|^2+|\tilde{{\mathbf{c}}}_o|^2 =1$. This is done by the absolute value of the constant $\kappa_2$. The uniqueness (up to a complex scalar with absolute value $1$) of the optimal polynomial ${\mathcal{P}_n}$ follows from the fact that the smallest zero of the polynomial $p_{\frac{n-m}{2}+1}^{(-\frac{1}{2})}(x,\frac{m}{2})$ is simple (see [@Chihara Theorem 5.3]). The formula for $M_{n}^m$ follows directly from the estimate in (\[equation-extremalmatrixHermite\]). Using once again the relation between the even Hermite polynomials and the Laguerre polynomials and the Christoffel-Darboux type formulas ot Lemma \[Lemma-ChristoffelDarbouxassociated\], we get the following explicit formulas for the optimal polynomials in Theorem \[Theorem-optimalhermite\]: \[Corollary-explicitformoptimalhermite\] The optimal polynomials ${\mathcal{H}_n}$ and ${\mathcal{H}_n^m}$ in Theorem \[Theorem-optimalhermite\] have the following explicit form: $$\begin{aligned} {\mathcal{H}_n}(x) &= \kappa_1 b_{n+1}^{(-\frac{1}{2})} p_{\frac{n}{2}}^{(-\frac{1}{2})}(\lambda_{\frac{n}{2}+1}) \frac{h_{n+1}(x)}{x^2 - \lambda_{\frac{n}{2}+1}}, \displaybreak[0]\\ {\mathcal{H}_n^m}(x) &= \kappa_2 \frac{ b_{n+1}^{(-\frac{1}{2})} p_{\frac{n-m}{2}}^{(-\frac{1}{2})}(\lambda_{\frac{n-m}{2}+1}^{\frac{m}{2}},\frac{m}{2}) h_{n+1}(x) + b_m^{(-\frac{1}{2})} h_{m-1}(x)}{x^2 -\lambda_{\frac{n-m}{2}+1}^{\frac{m}{2}}}, \displaybreak[0]\end{aligned}$$ where the constants $\kappa_1$, $\kappa_2$ and the roots $\lambda_{\frac{n}{2}+1}$, $\lambda_{{\frac{n-m}{2}}+1}^{{\frac{m}{2}}}$ are given as in Theorem \[Theorem-optimalhermite\]. Construction of polynomial filters for the detection of peaks in periodic signals ================================================================================= In this final section, we will give some examples on how the optimal polynomials of Section \[Section-optimalpolynomials\] can be applied as filters for the detection of peaks. To this end, we consider continuous $2\pi$-periodic signal functions $f \in L^2([-\pi,\pi))$ and trigonometric polynomial filters $h \in \Pi_n$. Our goal is to find trigonometric polynomials $h$ which are well suited to work out the peaks of the signal $f$. 
Since the filtering operator $F_h$ defined in acts as a convolution operator on $f$, most of the mass of the polynomial $h$ has to be concentrated at the point $t = 0$ in order to filter out the peaks of the signal $f$. If we further assume that $h$ is even, i.e., $h(t) = h(-t)$, then $h(\arccos(x))$ is defined on the interval $[-1,1]$ and is a polynomial of degree $n$ in the variable $x = \cos t$. Moreover, if the polynomial $h(\arccos(x))$ is localized at $x = 1$, then the trigonometric polynomial $h$ is localized at $t = 0$. Therefore, the optimally space localized polynomials ${\mathcal{P}_n}(\cos t)$ of Theorem \[Theorem-optimalpolynomial\] are natural choices for polynomial filters in peak analysis. Experimenting with different weight functions $w$ in Theorem \[Theorem-optimalpolynomial\] and Corollary \[Corollary-explicitformoptimalpolynomials\], it is possible to construct a whole bunch of well-localized polynomial filters with different properties. We give here just some easy examples using the Jacobi weight function $w_{\alpha\beta}$. For $\alpha = -\frac{1}{2}$, $\beta = -\frac{1}{2}$, and $\alpha = \frac{1}{2}$, $\beta = -\frac{1}{2}$, we get the two filter kernels $$\begin{aligned} \label{equation-optimalfilter1} h_n^{(1)}(t) &:= \mathcal{T}_n(\cos t ) = C_n^{(1)} \frac{\cos (n+1)t}{\cos t -\cos \frac{\pi}{2n+2}}, \\ h_n^{(2)}(t) &:= C_n^{(2)} \frac{\sin (n+\frac{3}{2})t}{\sin \frac{t}{2}} \frac{1}{\cos t -\cos \frac{\pi}{n+\frac{3}{2}}}, \label{equation-optimalfilter2}\end{aligned}$$ where the constants $C_n^{(1)}$ and $C_n^{(2)}$ denote normalizing factors such that the respective polynomials are normed in the $L^2$-norm. The optimal polynomial $h_n^{(1)}(t)$ was already computed in Example \[example-optimallocalizedchebyshev\]. In approximation theory, the trigonometric polynomials $h_n^{(1)}$ are known as Rogosinski kernels (cf. [@Lasser p. 112-114]), in signal analysis they are well-known as cosine windows (cf. [@Harris1978]). The Rogosinski filter $h^{(1)}_6$.\ ![The polynomial filters $h^{(1)}_6$ and $h^{(2)}_6$ of degree $6$.[]{data-label="Figure-optimalcosine"}](optimalm1o2cm1o2.png "fig:"){width="\textwidth"} The polynomial filter $h^{(2)}_6$\ ![The polynomial filters $h^{(1)}_6$ and $h^{(2)}_6$ of degree $6$.[]{data-label="Figure-optimalcosine"}](optimal1o2cm1o2.png "fig:"){width="\textwidth"}\ The filter $h_n^{(2)}(t)$ is computed in the same way as $h_n^{(1)}(t)$ using the explicit representation of the Chebyshev polynomials of third kind (for the definition, see [@Gautschi Section 1.5.1]). Figure \[Figure-optimalcosine\] illustrates that compared to the filter $h_n^{(1)}$ the trigonometric polynomial $h_n^{(2)}$ has a wider peak at $t = 0$ but less mass at the ends $t = \pi$ and $t = -\pi$. This is due to the fact that in the case of the filter $h_n^{(2)}$ we optimize over all polynomials $P \in \Pi_n$ with $$\int_{-1}^1 |P(x)|^2 (1-x)^{\frac{1}{2}}(1+x)^{-\frac{1}{2}} dx = 1,$$ i.e. the particular optimization problem favours polynomials that have more mass concentrated at $x = 1$. ![Filtering a noisy signal with the optimal polynomial filter $h_n^{(1)}$.[]{data-label="Figure-peakdetectionOP"}](peakdetectionOP.png "fig:"){width="\textwidth"}\ If the peaks of the signal $f$ lie on a low-frequency carrier signal or if some baseline correction has to be done, it is reasonable to additionally filter out the low frequencies of $f$. In this case, the polynomial filter $h$ has to be restricted to a frequency band $[m,n] \subset {{\mathbb N}}$. 
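To indicate how such a filter is used in practice, the following minimal sketch applies the kernel $h_n^{(1)}$ to a noisy periodic test signal by discretizing the convolution $F_h f = f \ast h$ on an equidistant grid; the test signal and all parameters are chosen only for illustration, and the normalization constant $C_n^{(1)}$ is approximated discretely.

``` {.python}
# Minimal sketch: peak filtering with the kernel h_n^(1) via circular convolution.
import numpy as np

def h1(t, n):
    """h_n^(1)(t) up to the normalizing constant C_n^(1)."""
    return np.cos((n + 1) * t) / (np.cos(t) - np.cos(np.pi / (2 * n + 2)))

N = 512
t = np.linspace(-np.pi, np.pi, N, endpoint=False)

rng = np.random.default_rng(0)
peaks = np.exp(-200 * (t - 0.5) ** 2) + 0.6 * np.exp(-200 * (t + 1.5) ** 2)
f = peaks + 0.1 * rng.standard_normal(N)       # noisy signal with two peaks

n = 12
h = h1(t, n)
h /= np.sqrt(np.mean(h ** 2))                  # discrete stand-in for C_n^(1)

# circular convolution: (F_h f)(t_k) = (1/2pi) int f(s) h(t_k - s) ds ~ (1/N) sum_j f_j h(t_k - t_j)
h0 = np.fft.ifftshift(h)                       # reorder so that the t = 0 sample comes first
F_h_f = np.real(np.fft.ifft(np.fft.fft(f) * np.fft.fft(h0))) / N

print("true main peak at t   =", t[np.argmax(peaks)])
print("filtered maximum at t =", t[np.argmax(F_h_f)])
```

The same discretization applies verbatim when the filter is restricted to a frequency band $[m,n]$, which we discuss next.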
The corresponding optimal filters are given in equation . In case of the Chebyshev polynomials $t_l(x)$ of first kind, the optimal band-limited polynomials $\mathcal{T}_n^{m}$ are given in Example \[example-optimallocalizedchebyshev\] as $$h_{n,m}^{(1)}(t) := \mathcal{T}_{n}^{m}(\cos t) = C_{n,m}^{(1)} \frac{\cos (\frac{n-m+2 }{2} t) \cos (\frac{n+m }{2} t)}{ \cos t - \cos \frac{\pi }{n-m+2}}.$$ ![Filtering a noisy signal with the optimal wavelet filter $h_{n,m}^{(2)}$.[]{data-label="Figure-peakdetectionOPw"}](peakdetectionOPw.png "fig:"){width="\textwidth"}\ Finally, we consider polynomial window functions $h$ with the additional property $h(\pi) = h(-\pi) = 0$. If $h$ is assumed to be a symmetric trigonometric polynomial, then $h$ has the form $h(t) = g(t) (1+\cos t)$, where $g$ is a symmetric trigonometric polynomial of degree $n-1$. Now, as above, we can play the same game with the polynomial $g$, i.e., we can insert optimally space localized polynomials as candidates for $g$. Using Jacobi weights with $\alpha = -\frac{1}{2}$, $\beta = \frac{3}{2}$ and $\alpha = \frac{1}{2}$, $\beta = \frac{1}{2}$, we can introduce the filter kernels $$\begin{aligned} \label{equation-optimalfilter3} h_n^{(3)}(t) &:= C_n^{(3)} \frac{p_{n}^{(-\frac{1}{2},\frac{3}{2})}(\cos t) (1+\cos t) }{\cos t -\lambda_{n}}, \\ h_n^{(4)}(t) &:= C_n^{(4)} \frac{\sin (n+1)t}{\sin t} \frac{1+\cos t}{\cos t -\cos \frac{\pi}{n+1}}, \label{equation-optimalfilter4}\end{aligned}$$ where $\lambda_n$ denotes the largest zero of the Jacobi polynomial $p_{n}^{(-\frac{1}{2},\frac{3}{2})}$. The filter $h_n^{(3)}$ is defined such that the functional $\int_{-1}^1 x |P(x)|^2 (1+x)^2 dx$ is maximized over all polynomials $P$ under the constraint $\int_{-1}^1 |P(x)|^2 (1+x)^2 dx = 1$. The filter polynomial $h_n^{(4)}$ is well-known in signal analysis under the name Hann window (see [@Harris1978]). In fact, computing the Fourier coefficients of $h_n^{(4)}$, one gets $\hat{h}_n^{(4)}(j) = c \cos^2(\frac{\pi j}{2n+2})$ for $j = -n, \ldots,n$. The polynomial window $h^{(3)}_6$.\ ![The polynomial filters $h^{(3)}_6$ and $h^{(4)}_6$ of degree $6$.[]{data-label="Figure-optimalHann"}](optimalm1o2c3o2.png "fig:"){width="\textwidth"} The Hann window $h^{(4)}_6$\ ![The polynomial filters $h^{(3)}_6$ and $h^{(4)}_6$ of degree $6$.[]{data-label="Figure-optimalHann"}](optimalhann.png "fig:"){width="\textwidth"}\ [10]{} , 6 (2010), 1355–1365. . Gordon and Breach, Science Publishers, New York, 1978. Uncertainty principles on compact [R]{}iemannian manifolds. , 2 (2010), 182–197. . Dissertation, Technische Universität München, 2010. vailable at *http://mediatum2.ub.tum.de/doc/976465*. Applications of the monotonicity of extremal zeros of orthogonal polynomials in interlacing and optimization problems. (2010). vailable at *www.helmholtz-muenchen.de/en/ibb/research/*. On a filter for exponentially localized kernels based on [J]{}acobi polynomials. (2009). doi: 10.1016/j.jat.2009.01.004. The uncertainty principle: a mathematical survey. , 3 (1997), 207–233. . Oxford University Press, Oxford, 2004. Uncertainty principles and asymptotic behavior. , 1 (2004), 69–89. In [*[Advances in Gabor analysis]{}*]{} (2003), H. Feichtinger and T. Strohmer, Eds., [Birkhäuser, Basel, Applied and Numerical Harmonic Analysis]{}, pp. 11–30. On the use of windows for harmonic analysis with the discrete [F]{}ourier transform. , 1 (1978), 51–83. The variation of zeros of certain orthogonal polynomials. (1987), 111–118. . Cambridge University Press, Cambridge, 2005. 
Sub-exponentially localized kernels and frames induced by orthogonal expansions. (2010), 361–397. Optimally space-localized band-limited wavelets on $\mathbb{S}^{q-1}$. , 1 (2007), 68–79. An overview of time and frequency limiting. In [*Fourier Techniques and Applications*]{} (1985), J. Price, Ed., Plenum, New York, pp. 201–220. . Marcel Dekker, New York, 1996. Uncertainty principles for [J]{}acobi expansions. , 2 (2003), 652–663. . World Scientific Publishing, Singapore, 1996. Polynomial frames on the sphere. , 4 (2000), 387–403. Polynomial frames: a fast tour. In [*Approximation Theory XI. Gatlinburg, 2004*]{} (2005), C. K. Chui, L. L. Schumaker, and M. Neamtu, Eds., Nashboro Press, Brentwood, TN, pp. 287–318. Gaborlocal: peak detection in mass spectrum by gabor filters and gaussian local maxima. (2008), 85–96. On the connection of uncertainty principles for functions on the circle and on the real line. , 4 (2003), 387–409. est time localized trigonometric polynomials and wavelets. , 1 (2005), 1–20. An uncertainty principle for ultraspherical expansions. (1997), 624–634. Uncertainty principles revisited. (2002), 164–176. (1983), 379–393. . American Mathematical Society, Providence, Rhode Island, 1939. Comparison of public peak detection algorithms for [MALDI]{} mass spectrometry data analysis. , 4 (2009). vailable at *http://www.biomedcentral.com/1471-2105/10/4*. [^1]: Institute of Mathematics, University of Lübeck, Wallstrasse 40, 23560 Lübeck, Germany. [email protected]
{ "pile_set_name": "ArXiv" }
--- abstract: 'We show that the multiplicity of a plane analytic $1-$form is a bound for the number of Puiseux exponents of a (formal or convergent) branch. This is true whether the associated foliation is dicritical or not.' address: 'Dpt. of Mathematics, Univ. of Oviedo, Oviedo, Spain.' author: - 'P. Fortuny Ayuso' title: On the number of Puiseux exponents of an invariant branch of a vector field --- Introduction. The Newton-Puiseux polygon ======================================== Among the problems related to the complexity of the invariant curves of a germ of singular analytic foliation in the plane ($1-$form or, equivalently, analytic vector field) —the most famous one being the Poicaré Problem, see [@Poincare2], [@Cerveau-Neto-1991] and [@Carnicer], for example— one of the open questions is whether the number of Puiseux exponents of such a curve can be bounded in terms of the local invariants of the singularity of the foliation. In this note we prove that the multiplicity of the singularity of the $1-$form is such a bound: an invariant branch can have at most as many Puiseux exponents as the minimum order of the coefficients of the $1-$form plus one. The main tool is the Newton-Puiseux polygon, whose construction can be found in [@Ince] and, more adapted to the modern notation, in [@CanoJ]. We give, in this introduction, the most concise summary of its construction, for the sake of completeness. Notice that we omit general arguments about existence and convergence because they are of no use to us. The Newton-Puiseux construction ------------------------------- In the most general case we shall need, we consider a formal $1-$form $$\label{eq:1-form} \omega = a(x,y) dx + b(x,y) dy$$ where $a(x,y)$ and $b(x,y)$ are power series in $y$ whose coefficients belong to some ring of formal power series $\mathbb{C}[[x^{1/n}]]$ for some $n\in \mathbb{N}$. We assume $a(0,0)=b(0,0)=0$ (i.e. the form is *singular*). Let $\Gamma=\sum f_kx^{k/m}$ be a formal power series with $k\in \mathbb{N}$ for $k\geq m$ and $m\in \mathbb{N}$ (i.e. $\Gamma$ is a Puiseux expansion of a formal branch transverse to $x=0$). We say that $\Gamma$ is *invariant* for $\omega$ if $$\textstyle a\big(x, \sum f_k x^{k/m}\big) + b\big(x, \sum f_kx^{k/m}\big)\big(\sum k f_kx^{k/m-1}\big) = 0.$$ Given $\omega$, we construct its *cloud of points* as the set $$\mathcal{C}(\omega) = \left\{ (i,j)\in \frac{\mathbb{Z}_{\geq -1}}{n} \times \mathbb{N}\,:\, a_{i,j}\neq 0 \mbox{ or } b_{i+1,j-1}\neq 0 \right\}$$ where $a(x,y)=\sum a_{ij}x^iy^j$ and $b(x,y)=\sum b_{ij}x^iy^j$. The *Newton Polygon* of $\omega$ is the following set: $$\mathcal{N}(\omega) = \mathrm{convex\ envelope} \left( \left\{ (i,j) + \mathbb{R}_{\geq 0}\times \mathbb{R}_{\geq 0} \,:\, (i,j) \in \mathcal{C}(\omega) \right\} \right).$$ For a rational number $\mu\in \mathbb{Q}$, with $\mu\geq 1$, let $L_{\mu}$ be the unique line of slope $-1/\mu$ (we say that $L_{\mu}$ has *co-slope* $\mu$) in $\mathbb{R}^2$ which meets $\mathcal{N}(\omega)$ only at its topological border and let $(\tau,0)$ be the point at which $L_{\mu}$ meets the $OX$ axis. Let $\omega_{\mu}=a(x, cx^{\mu} + \overline{y})dx + b(x, cx^{\mu} + \overline{y}) d (cx^{ \mu}+\overline{y})$ be the $1-$form corresponding to the change of variables $y= cx^{\mu} + \overline{y}$. 
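For readers who wish to experiment with this construction, the following is a small computational sketch (not taken from the literature) that builds the cloud of points $\mathcal{C}(\omega)$ and the vertices of the finite, negative-slope part of the border of $\mathcal{N}(\omega)$ from the coefficients of $a$ and $b$. The example $1-$form at the end is an arbitrary illustration, and rational exponents in $x$ are handled with exact fractions.

```python
from fractions import Fraction

def newton_cloud(a_coeffs, b_coeffs):
    # Cloud of points C(omega) for omega = a dx + b dy.  The dictionaries map
    # exponent pairs (i, j) of x^i y^j to (nonzero) coefficients; a term
    # b_{i+1, j-1} x^{i+1} y^{j-1} dy contributes the point (i, j).
    cloud = {(Fraction(i), j) for (i, j), c in a_coeffs.items() if c != 0}
    cloud |= {(Fraction(i) - 1, j + 1) for (i, j), c in b_coeffs.items() if c != 0}
    return cloud

def newton_polygon_vertices(cloud):
    # Vertices of the lower-left border of the convex envelope of
    # cloud + (first quadrant), ordered by increasing abscissa.
    best = {}
    for i, j in cloud:                      # keep only the lowest point per abscissa
        best[i] = min(j, best.get(i, j))
    pts = sorted(best.items())
    hull = []                               # monotone-chain lower convex hull
    for p in pts:
        while len(hull) >= 2:
            (ox, oy), (ax, ay) = hull[-2], hull[-1]
            if (ax - ox) * (p[1] - oy) - (ay - oy) * (p[0] - ox) <= 0:
                hull.pop()
            else:
                break
        hull.append(p)
    jmin = min(j for _, j in hull)          # drop vertices right of the lowest point
    cut = next(k for k, (_, j) in enumerate(hull) if j == jmin)
    return hull[:cut + 1]

# example: omega = (y^3 + x^2) dx + x y dy   (arbitrary choice)
a = {(0, 3): 1, (2, 0): 1}
b = {(1, 1): 1}
print(newton_polygon_vertices(newton_cloud(a, b)))   # vertices (0, 2) and (2, 0): one side of co-slope 1
```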
The following results are well-known [@CanoJ]: If there is an invariant curve whose Puiseux expansion starts with $cx^{\mu}$ then the Newton polygon $\mathcal{N}(\omega_{\mu})$ of $\omega_{\mu}$ meets $OX$ only at points with abscissa strictly greater than $\tau$, if at all. Considering the branch $\Gamma\equiv \sum_{k\geq m} f_kx^{k/m}$, we may define, recursively, $\omega_{m-1}=\dots=\omega_1=\omega_0=\omega$ and, for $k\geq m$: $$w_k = w_{k-1}(x, f_k x^{k/m} + \overline{y})$$ and, by recurrence, we know that if $\Gamma$ is invariant for $\omega$ then for all $k$, the line $L_{k/m}=L_{k/m}(\omega_{k-1})$ meets $OX$ strictly to the left of $\mathcal{N}(\omega_{k})$ (the following Newton polygon). Moreover, each time a coefficient $f_{k}$ gives rise to a side on the following polygon $\mathcal{N}(\omega_{k})$, the next coefficient $f_{k+1}$ comes from a point strictly lower than the previous one: \[lem:lower-side\] If $L_{k/m}$ meets $\mathcal{N}(\omega_{k})$ on a side, then the highest point of $L_{(k+1)/m}\cap \mathcal{N}(\omega_{k})$ is strictly lower than the highest point of $L_{k/m}\cap \mathcal{N}(\omega_{k-1})$. The multiplicity bounds the number of Puiseux exponents ======================================================= Let $\omega$ and $\Gamma$ be as above (admitting rational exponents in $x$, but with a common denominator). We define the $y-$order of $\omega$ as the the ordinate of the highest point $(i,j)\in \mathcal{N}(\omega)\cap L_1$, where $L_1$ is the line of co-slope $1$ meeting $\mathcal{N}(\omega)$ on its border. Consider $\omega_k$ for $k\in \mathbb{N}$, as above. For each $k\in \mathbb{N}$, let $q_k$ be the product of the denominators of the Puiseux exponents up of $\Gamma$ to $k/m$. The *multiplicity of $\omega$* is the smallest order of $a(x,y), b(x,y)$ plus one. \[the:bound-number-puiseux-exponents\] If the branch $\Gamma$, transverse to $x=0$, is invariant for $\omega$ and has $r$ Puiseux exponents, then the $y-$order of $\omega$ is at least $r$. As a consequence, the multiplicity of $\omega$ is at least the largest number of Puiseux exponents of an invariant branch. Before proceeding, we require two elementary results. \[lem:strict-decrease\] Let $P=(i,j)$ be the highest point of the line $L_{k/m}$ meeting the Newton polygon of $\omega_{k-1}$ at its border. Assume $f_{k}\neq 0$ and that $q_{k}=sq_{k-1}$ with $s>1$. If $j>1$, then the Newton polygon of $\omega_{k}$ contains both $(i,j)$ and $(i+(j-1) \frac{k}{m}, 1)$ if $s\geq j$ or $(i+(s-1) \frac{k}{m}, j-(s-1))$ otherwise. Let $t=\max\{1,j-(s-1)\}$. As $s>1$, the points $(i+l \frac{k}{m}, j-l)$ do not belong to the cloud of points of $\omega_{k-1}$, for $l=1,\dots, j-t$. The fact that $P$ is the highest vertex of the polygon in $L_{k/m}$ implies that either $a_{ij}\neq 0$ or $b_{i+1,j-1}\neq 0$. In any case, the coordinate change $y=f_{k}x^{k/m}+\overline{y}$ gives rise, for the point $(i+(j-t) \frac{k}{m}, t)$ *only* to the terms $$\label{eq:point-one-below-1} f_{k}^{j-t}\left(\binom{j}{j-t} a_{ij} + \frac{k}{m}\binom{j-1}{j-t-1} b_{i+1,j-1}\right)x^{i+(j-t)k/m}\overline{y}^{t}dx$$ and $$\label{eq:point-one-below-2} f_{k}^{j-t}\binom{j-1}{j-t}b_{i+1,j-1}x^{i+(j-t)k/m+1}\overline{y}^{t-1}d\overline{y}$$ (notice that $1\leq j-t\leq j-1$ and $j-t-2\geq 0$). In order for this point not to appear in the new Newton Polygon, both expressions must be $0$. 
We know that $f_{k}\neq 0$, so that necessarily, $b_{i+1,j-1}=0$, because must be $0$ and this implies that $a_{ij}=0$ in , which prevents $P$ from being in the Newton polygon of $\omega$, a contradiction. As a consequence, the highest point of $L_{(k+1)/m}$ is at height either $j-(s-1)$ or $1$. Because the segment joining $P=(i,j)$ and $Q=(i+t \frac{k}{m},j-t)$ in the previous proof has co-slope $k/m$, the only way to continue following $\Gamma$ as a solution of $\omega$, by Lemma \[lem:lower-side\] is either using a vertex which is $Q$ or lower, or a side which starts at $Q$ or lower, which gives the result. The theorem is now proved by recurrence. As $\Gamma$ is not tangent to the $OY$ axis, its Puiseux expansion $\Gamma\equiv \sum f_kx_k^{k/m}$ starts with $k\geq m$. Let $j$ be the $y-$order of $\omega=\omega_0=\dots=\omega_{m-1}$. By Lemma \[lem:strict-decrease\], the first Puiseux exponent $\mu_1 = k_1/m$ gives rise, in the next Newton polygon (that of $\omega_{k_1}$) to a side of co-slope $\mu_1$ whose lowest vertex has height strictly less than $j$ unless $j=1$. By recurrence, one sees that, if $r\geq j$, then the $j-1-$th Puiseux exponent gives rise (on the appropriate Newton polygon) to a side whose highest vertex is at height $1$. At this point, it is well known [@CanoJ] that only one more Puiseux exponent can appear, and we are done. As the $y-$order is less than or equal to the order of an analytic differential form, the consequence follows easily. [1]{} J. Cano. On the series defined by differential equations, with an extension of the [P]{}uiseux [P]{}olygon construction to these equations. , (13):103–117, 1993. M. Carnicer. The [Poincar[é]{}]{} [Problem]{} in the non-dicritical case. , 140:289–294, 1994. D. Cerveau and A. Lins Neto. Holomorphic foliations in [${\mathbb C}{\mathbb P}(2)$]{} having an invariant algebraic curve. , 41(4):883–903, 1991. E. L. Ince. . Dover, New York, 1956. H. Poincar[é]{}. Sur l’int[é]{}gration alg[é]{}brique des [é]{}quations diff[é]{}rentielles du premier ordre et du premier degr[é]{} ([I]{} and [II]{}). , 5 and 11:161–191 and 193–239, 1891,1897.
{ "pile_set_name": "ArXiv" }
--- author: - 'Sudip Vhaduri and Christian Poellabauer, ' bibliography: - 'reference\_short.bib' title: 'Summary: Multi-modal Biometric-based Implicit Authentication of Wearable Device Users' ---
{ "pile_set_name": "ArXiv" }
--- author: - 'Arindam Bhattacharya,' - 'Ian Moult,' - 'Iain W. Stewart,' - and Gherardo Vita bibliography: - 'bibliography.bib' title: Helicity Methods for High Multiplicity Subleading Soft and Collinear Limits --- Introduction {#sec:intro} ============ The factorization properties of multi-leg gauge theory amplitudes in the soft and collinear limits are essential for our theoretical understanding of these amplitudes, as well as for the calculation of multi-jet observables at hadron colliders. While the leading power soft and collinear limits have been extensively studied, very little is known about the subleading power factorization properties of multi-leg amplitudes, or multi-jet observables. There has recently been significant progress in understanding the structure of power corrections in the soft and collinear limits [@Manohar:2002fd; @Beneke:2002ph; @Pirjol:2002km; @Beneke:2002ni; @Bauer:2003mga; @Hill:2004if; @Mannel:2004as; @Lee:2004ja; @Bosch:2004cb; @Beneke:2004in; @Tackmann:2005ub; @Trott:2005vw; @Dokshitzer:2005bf; @Laenen:2008ux; @Laenen:2008gt; @Paz:2009ut; @Benzke:2010js; @Laenen:2010uz; @Freedman:2013vya; @Freedman:2014uta; @Bonocore:2014wua; @Larkoski:2014bxa; @Bonocore:2015esa; @Kolodrubetz:2016uim; @Bonocore:2016awd; @Moult:2016fqy; @Boughezal:2016zws; @DelDuca:2017twk; @Balitsky:2017flc; @Moult:2017jsg; @Goerke:2017lei; @Balitsky:2017gis; @Beneke:2017ztn; @Feige:2017zci; @Moult:2017rpl; @Chang:2017atu; @Boughezal:2018mvf; @Ebert:2018lzn; @Bahjat-Abbas:2018hpv], including the first all order resummation of power suppressed logarithms for collider observables with soft and collinear radiation [@Moult:2018jjd] and more recently for the case of threshold [@Beneke:2018gvs]. However, complete calculations of the all orders structure of power suppressed terms have so far focused on the case of two back-to-back jets, corresponding to color singlet production at the LHC, or dijet production in $e^+e^-$. Both for improving our theoretical understanding, as well as for practical applications for observables at the LHC, it is important to be able to extend these calculations to the multi-jet case. Compact expressions for multi-point amplitudes are typically expressed using the spinor-helicity formalism [@DeCausmaecker:1981bg; @Berends:1981uq; @Gunion:1985vca; @Xu:1986xb], and color ordering techniques [@Berends:1987me; @Mangano:1987xk; @Mangano:1988kk; @Bern:1990ux]. See e.g. [@Dixon:1996wi; @Dixon:2013uaa] for reviews. Due to the success of unitarity [@Bern:1994zx; @Bern:1994cg] and recursion [@Britto:2004ap; @Britto:2005fq] based techniques, a wealth of tree, one- and two-loop multi-point amplitudes are known in QCD. However, for the most part, this wealth of data has not been exploited in the study of subleading power corrections to collider observables. In this paper we provide a method to directly and efficiently compute subleading power logarithms for multi-jet event shape observables using known spinor amplitudes. First, we study the expansion of the two-particle collinear limit to subleading powers in terms of spinor helicity variables, providing a convenient parametrization in terms of standard kinematic variables. 
Then, we exploit consistency relations derived in soft collinear effective theory (SCET) [@Bauer:2000ew; @Bauer:2000yr; @Bauer:2001ct; @Bauer:2001yt; @Bauer:2002nz] to show that the leading logarithms at subleading power for a broad class of multi-jet event shape observables can be computed using only the two-particle collinear limit, to any order in $\alpha_s$. The two particle collinear limit is particularly convenient from the perspective of multi-jet calculations, since it avoids the complicated phase space integrals that appear in soft limits. We use several simple examples to show explicitly how this can be done in an efficient manner. These techniques should enable a rapid extension of the availability of power corrections to multi-jet processes. By extending to the multi-point case, we are also able to improve our theoretical understanding of subleading power corrections and factorization, since features that are specific to two back-to-back jet directions no longer apply. In particular, we observe that at subleading powers, generic multipoint amplitudes exhibit power law, instead of logarithmic, divergences. The proper treatment of these power law divergences in terms of distributions leads to derivatives of the parton distribution functions (PDFs) in hadron collider observables. An interesting feature about multi-point amplitudes is that these singularities arise already at the squared amplitude level, even if the corresponding phase space integrals are not themselves singular. This is a generic feature whose treatment at fixed order provides the first step towards understanding their all orders structure for generic amplitudes. An outline of this paper is as follows. In [Sec. \[sec:spinor\]]{} we discuss the parametrization of the two-particle collinear limit in spinor-helicity variables, showing how we can efficiently expand amplitudes to subleading powers in the collinear limit, and giving several concrete examples. In [Sec. \[sec:log\]]{} we show how we can use consistency relations derived in SCET to extract subleading power logarithms for multi-jet event shape observables from the two-particle collinear limit. We then discuss the treatment of power law divergences which appear in the power expansion of amplitudes. We conclude and provide an outlook for a number of applications of the techniques discussed here. Subleading Power Expansions of Spinors {#sec:spinor} ====================================== In this section, we describe in detail the subleading power expansion of spinor helicity variables, focusing on the behavior and parametrization of the two particle collinear limit at subleading powers. While soft limits have been studied extensively (see e.g. [@Strominger:2017zoo] for a recent review), subleading power collinear limits are much less well studied, and therefore parametrizations of spinors in these limits are less widely known in the literature. A convenient parametrization of the two particle collinear limit at subleading powers was given in [@Stieberger:2015kia; @Nandan:2016ohb]. In this section, we generalize this parametrization, and make it explicit in terms of the standard momenta that are useful for calculations of observables in the collinear limit. In [Sec. \[sec:log\]]{} we will apply this expansion to extract subleading power logarithms in event shape observables. Subleading Power Collinear Limit {#sec:toolbox} -------------------------------- Here we will consider the subleading power expansion of the two-particle collinear limit.
We assume that we have two particles with momenta $p_1$ and $p_2$ that are collinear along a direction $n$. It is convenient to work in lightcone coordinates, decomposing a given momentum $k$ as $(n\cdot k, {{\bar n}}\cdot k, k_\perp) \equiv (k^+, k^-, k_\perp)$. Here $\bar n$ is an auxiliary lightlike vector. As a concrete example we can take the vectors to be $n^\mu =(1,0,0,1)$ and ${{\bar n}}^\mu=(1,0,0,-1)$. We then define particles collinear to the $n$ direction to have the momentum scaling $$\begin{aligned} \label{eq:collinear} (k^-,k^+,k_{\perp})\sim Q(\lambda^0,\lambda^2,\lambda)\,,\end{aligned}$$ where $Q$ is some typical hard scale for the energy of the collinear radiation and $\lambda \ll 1$ is our power expansion parameter. Note that $\lambda$ is a scaling parameter that determines the size of various contributions, and hence does not itself show up in expanded amplitudes. With this momentum scaling, it is straightforward to expand amplitudes expressed in terms of standard Mandelstam invariants. However, we would also like to be able to expand amplitudes expressed in terms of spinor helicity variables. We follow the notation of [@Dixon:1996wi]. To expand particles $1$ and $2$ in the two particle collinear limit, we parametrize the full spinors as $$\begin{aligned} \label{eq:def} |1\rangle &=c\ |p\rangle - \epsilon s\ |r\rangle\,,\\ |2\rangle &=s\ |p\rangle + \epsilon' c\ |r\rangle\,, \nn\end{aligned}$$ where $p$ and $r$ are momenta along $n$ and $\bar{n}$ respectively, and the $|p\rangle$ term dominates. The parameters $c$ and $s$ are such that $c^2+s^2=1$, and $\epsilon$ and $\epsilon'$ are complex parameters involving small combinations of momenta in which we will expand, with $\epsilon,\epsilon'\sim \lambda$. Both $\epsilon$ and $\epsilon'$ are needed in order to take generic collinear limits. The special case with $\epsilon=\epsilon'$ corresponds to an additional kinematic restriction (discussed below), in which case [Eq. ]{} is identical to the decomposition in Refs. [@Stieberger:2015kia; @Nandan:2016ohb]. For square brackets we have the analogous decomposition $$\begin{aligned} \label{eq:sqdef} |1] &=c\ |p] - \epsilon^{*} s\ |r]\,,\\ |2] &=s\ |p] + \epsilon^{\prime *} c\ |r]\,, \nn\end{aligned}$$ where again the $|p]$ term dominates, and the $^*$ indicates complex conjugation. Inverting , one obtains $$\begin{aligned} \label{eq:sys1} |p\rangle &=\frac{\epsilon' c}{\epsilon' c^2 + \epsilon s^2}\: |1\rangle + \frac{s\epsilon}{\epsilon' c^2 + \epsilon s^2} \: |2\rangle \,, & |p] &=\frac{\epsilon^{\prime *} c}{\epsilon^{\prime *} c^2 + \epsilon^* s^2}\: |1] + \frac{s\epsilon^*}{\epsilon^{\prime *} c^2 + \epsilon^* s^2} \: |2] \,, \\ |r\rangle &= \frac{c}{\epsilon' c^2 + \epsilon s^2}\: |2\rangle -\frac{s}{\epsilon' c^2 + \epsilon s^2}\: |1\rangle \,, & |r] &= \frac{c}{\epsilon^{\prime *} c^2 + \epsilon^* s^2}\: |2] -\frac{s}{\epsilon^{\prime *} c^2 + \epsilon^* s^2}\: |1] \,. \nn\end{aligned}$$ Now, we solve for the quantities $p,r,c,s,\epsilon,\epsilon^{\prime}$ in terms of $p_1,p_2$. 
Without loss of generality, we shall take $n$ and $\bar{n}$ to be the four vectors $(1,\hat{n})$ and $(1,-\hat{n})$ and use the following representations for the spinors [@Dixon:1996wi] $$\begin{aligned} \label{eq:dirnot} |k\rangle&=\frac{1}{\sqrt{2}}\begin{pmatrix} \sqrt{k^-}\\ \sqrt{k^+}e^{i\phi_k}\\ \sqrt{k^-}\\ \sqrt{k^+}e^{i\phi_k} \end{pmatrix}\,,& |k]&=\frac{1}{\sqrt{2}}\begin{pmatrix} \sqrt{k^+}e^{-i\phi_k}\\ -\sqrt{k^-}\\ -\sqrt{k^+}e^{-i\phi_k}\\ \sqrt{k^-} \end{pmatrix}\,, & e^{i\phi_k}& =\frac{k^x+ik^y}{\sqrt{k^+ k^-}} \,.\end{aligned}$$ These correspond to using the Dirac basis of the gamma matrices, and by default we assume that the convention for spinor momentum labeling is always outgoing. Thus these momenta are positive for outgoing particles and negative for incoming particles.[^1] Solving [Eq. ]{} to obtain the $p,r,c,s,\epsilon,\epsilon^\prime$ we obtain $$\begin{aligned} p &= \left(p_1^-+p_2^-\right)\frac{n}{2}\,,& r &= \left(p_1^-+p_2^-\right)\frac{\bar{n}}{2}\,, \\ c &= \sqrt{\frac{p_1^-}{p_1^-+p_2^-}}\equiv\sqrt{x}\,,& s &= \sqrt{\frac{p_2^-}{p_1^-+p_2^-}}=\sqrt{1-x}\,, \nn \\ \epsilon &= -\sqrt{\frac{p_1^+}{p_2^-}}\ e^{i\phi_1} \equiv -\zeta\, e^{i\phi_1} \,,& \epsilon^\prime &= \sqrt{\frac{p_2^+}{p_1^-}}\ e^{i\phi_2} \equiv \zeta'\, e^{i\phi_2} \,, \nn\end{aligned}$$ where $e^{i\phi_j}=(p_j^x+ip_j^y)/\sqrt{p_j^+p_j^-}$ for $j=1,2$, and $\epsilon\sim\epsilon'\sim \lambda$ are the expansion parameters. Note that the scaling of collinear momenta makes it manifest that $p$, $r$, $c$, and $s$ are $\mathcal{O}(\lambda^0)$ quantities, and thus [Equation ]{}, allows us to safely expand amplitudes as a power series in $\epsilon$ and $\epsilon'$ which are the only ${\cal O}(\lambda)$ variables. One also observes the appearance of the energy fractions $x$ and $(1-x)$ of 1 and 2 respectively. The series thus obtained will be the limit of the amplitude when particles $1,2$ become collinear. For only final state collinear particles (or only initial state collinear particles), we can exploit the freedom of choosing $\hat n$ in order to make the total transverse momentum for all particles that are being taken in the collinear limit of [Eq. ]{}, to be zero. For two final state particles this implies $p_1^\perp = -p_2^\perp$. Here $p_1^->0$, $p_2^->0$, and we define the momentum fraction as $$\begin{aligned} x\equiv \frac{p_1^-}{p_1^-+p_2^-} \,,\end{aligned}$$ where $0\le x\le 1$. With these assumptions, $$\begin{aligned} c&=\sqrt{x} \,, &s& = \sqrt{1-x} \,, \\ e^{i\phi}&=e^{i\phi_2}=-e^{i\phi_1} \,, & \zeta &= \zeta' \,, &\epsilon &= \zeta\, e^{i\phi} = \epsilon^{\prime} \,. \nn\end{aligned}$$ In this case the exact spinor decompositions become simpler: $$\begin{aligned} \label{eq:deffinal} |1\rangle &=c\ |p\rangle - \epsilon s\ |r\rangle\,, & |p\rangle &=c\ |1\rangle + s\ |2\rangle\,, \\ |2\rangle &=s\ |p\rangle + \epsilon c\ |r\rangle\,, & \epsilon\: |r\rangle &=-s\ |1\rangle + c\ |2\rangle \nn \,, \\ |1] &=c\ |p] - \epsilon^{*} s\ |r]\,, & |p] &=c\ |1] + s\ |2]\,, \nn \\ |2] &=s\ |p] + \epsilon^{*} c\ |r]\,, & \epsilon^{*}\, |r] &=-s\ |1] + c\ |2] \,. \nn\end{aligned}$$ Since here $\epsilon'=\epsilon$, the expansion has also been reduced to a single small parameter $\epsilon$. Another interesting case is the collinear limit between an emission with outgoing momentum $p_1$ and an initial state particle with outgoing momentum $p_2$. 
Here $p_1^->0$ and $p_2^-<0$ and we can define the momentum fraction $(1-z)$ for the emission relative to the initial particle, via $$\begin{aligned} 1 - z = \frac{p_1^-}{-p_2^-} \,,\end{aligned}$$ where $0\le z \le 1$. In this case its natural to choose $\hat n$ so that we have $p_2^\perp=0$ (rather than the sum of the two $\perp$-momenta). With these assumptions we have $$\begin{aligned} c &= \sqrt{1-\frac{1}{z}} = i \sqrt{\frac{1}{z} -1} \,, &s & = \frac{1}{\sqrt{z}} \,, & \epsilon' & = \zeta' e^{i\phi_2} = 0 \,.\end{aligned}$$ We also have $|2\rangle = (1/\sqrt{z}) |p\rangle$, and $|2]=(1/\sqrt{z}) |p]$, so that these spinors are already aligned with the collinear direction. In this situation there is still an expansion for $|1\rangle$ and $|1]$ from [Eqs.  and ]{}, and once again, the expansion is in the single parameter $\epsilon$. Employing these parametrizations of the spinors, we can efficiently expand in the two particle collinear limits. The usual leading power simplification that arises for an amplitude in the collinear limit can be illustrated with the MHV four-point gluon amplitude. In the limit where $12$ are collinear we have $$\begin{aligned} \label{eq:MHVexpn} A(1^-2^+3^-4^+)_{1\parallel 2} = \frac{\langle 13\rangle^4} {\langle 12\rangle \langle 23 \rangle \langle 34 \rangle \langle 41 \rangle} \Big|_{1\parallel 2} &= \frac{c^3}{s\langle 12\rangle} \frac{ \langle p3\rangle^4} { \langle p3 \rangle \langle 34 \rangle \langle 4p \rangle} + \ldots \,,\end{aligned}$$ where the splitting function $c^3/(s\langle 12) \rangle) =c^3/(s\epsilon \langle pr\rangle) \sim \lambda^{-1}$ makes the displayed term ${\cal O}(\lambda^{-1})$. This result is valid for both outgoing and incoming particles. The terms in the ellipses in [Eq. ]{} are terms of higher power in the collinear limit. In the next few sections we illustrate results for subleading terms in the collinear limit through a couple of examples. Example: $H\to \bar qq \bar Q Q$ -------------------------------- As an illustrative example, we shall derive the subleading collinear limits for the process of decay of a color singlet into 4 partons, which has all particles outgoing. For concreteness and simplicity, we shall take the singlet to be a Higgs, and the 4 partons to be two quark-antiquark pairs with differing flavors. At tree-level, only the following helicity confugurations contribute [@Kauffman:1996ix]: $$\begin{aligned} \label{eq:hqqQQamp} A(1_q^+,2_{\bar{q}}^-;3_Q^+,4_{\bar{Q}}^-;5_H)=\frac{1}{2}\left(\frac{\langle 24\rangle^2}{\langle 12\rangle\langle 34\rangle}+\frac{[13]^2}{[12][34]}\right)\,,\\ A(1_q^+,2_{\bar{q}}^-;3_Q^-,4_{\bar{Q}}^+;5_H)=-\frac{1}{2}\left(\frac{\langle 23\rangle^2}{\langle 12\rangle\langle 34\rangle}+\frac{[14]^2}{[12][34]}\right)\,. \nn\end{aligned}$$ The conjugate helicity configurations can be obtained using parity. To illustrate the types of structures that the subleading power expansion yields we will consider two choices for the pairs of particles going collinear. In one case there will be no leading power collinear limit, and in the other case there is a leading power collinear, which gives a more complicated result. We begin by analyzing the behavior of the amplitude when quark 1 and antiquark 4 become collinear. This particular collinear limit has no leading power, $\mathcal{O}(\lambda^{-1})$, term in the amplitude since there is no spinor product with $14$ in the denominator of [Eq. ]{}. 
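Before expanding these amplitudes, the exact decomposition in [Eq. \[eq:deffinal\]]{} can be verified numerically with the explicit representation of [Eq. \[eq:dirnot\]]{}. The following is a minimal sketch (not part of the derivation; the momentum values are arbitrary, and only the angle spinors are checked).

```python
import numpy as np

def angle_spinor(km, kp, kx, ky):
    # |k> in the Dirac-basis representation above, from lightcone components
    # (k^- = km, k^+ = kp) and transverse momentum (kx, ky).
    phase = (kx + 1j * ky) / np.sqrt(kp * km) if kp * km > 0 else 1.0
    return np.array([np.sqrt(km), np.sqrt(kp) * phase,
                     np.sqrt(km), np.sqrt(kp) * phase]) / np.sqrt(2)

# two outgoing momenta collinear to n = (1,0,0,1), with p1_perp = -p2_perp (arbitrary values)
p1m, p2m, kx, ky = 3.0, 5.0, 0.2, -0.1
p1 = dict(km=p1m, kp=(kx**2 + ky**2) / p1m, kx=kx,  ky=ky)    # on shell: k^+ = k_perp^2 / k^-
p2 = dict(km=p2m, kp=(kx**2 + ky**2) / p2m, kx=-kx, ky=-ky)

P = p1m + p2m
x, zeta = p1m / P, np.sqrt(p2['kp'] / p1m)
phase2 = (p2['kx'] + 1j * p2['ky']) / np.sqrt(p2['kp'] * p2['km'])
c, s, eps = np.sqrt(x), np.sqrt(1 - x), zeta * phase2

sp_p = angle_spinor(P, 0.0, 0.0, 0.0)     # |p>, along n
sp_r = angle_spinor(0.0, P, 0.0, 0.0)     # |r>, along nbar
sp_1 = angle_spinor(**p1)
sp_2 = angle_spinor(**p2)

print(np.max(np.abs(sp_1 - (c * sp_p - eps * s * sp_r))))   # ~ 1e-16: decomposition is exact
print(np.max(np.abs(sp_2 - (s * sp_p + eps * c * sp_r))))   # ~ 1e-16
```

The same setup can be used to test expanded amplitudes order by order in $\epsilon$ against the exact expressions. We now return to the $1 \parallel 4$ limit of the amplitudes above, which has no leading power term.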
This makes extracting the next-to-leading behavior in the collinear limit straightforward, since one can just use the standard leading power expressions for the spinors, namely $$\begin{aligned} |1\rangle &=\sqrt{x}\ |p\rangle\,,& |4\rangle &=\sqrt{1-x}\ |p\rangle\,,\\ |1] &=\sqrt{x}\ |p]\,,& |4] &=\sqrt{1-x}\ |p]\,. \nn\end{aligned}$$ Substituting these into the amplitudes, we get the following expansion in $\lambda$ $$\begin{aligned} A(1_q^+,2_{\bar{q}}^-;3_Q^+,4_{\bar{Q}}^-;5_H)_{{{1 \parallel 4}}}&=0\times\mathcal{O}(\lambda^{-1})-\frac{1}{2}\left(\sqrt{\frac{x}{1-x}}\ \frac{\langle p2\rangle}{\langle p3\rangle}+\sqrt{\frac{1-x}{x}}\ \frac{[3p]}{[2p]} \right)+\mathcal{O}(\lambda)\,, \nn \\ A(1_q^+,2_{\bar{q}}^-;3_Q^-,4_{\bar{Q}}^+;5_H)_{{{1 \parallel 4}}}&=0\times\mathcal{O}(\lambda^{-1})+\frac{\langle 23\rangle^2}{2\sqrt{x(1-x)}\langle p2\rangle\langle p3\rangle}+\mathcal{O}(\lambda)\,. \end{aligned}$$ Note that these results are expressed entirely in terms of the collinear spinors $|p\rangle$, $|p]$, the momentum fraction $x$, and the spinors for the other directions. These subleading power expressions take a very simple form, due to the fact that there was no leading power term. In the first case where the quark and antiquark have opposite helicities, we see that the amplitude behaves like that for a scalar in the direction $p$. In the second case when they have the same helicity, it behaves like an amplitude for a particle with spin 1 along $p$. It would be interesting to understand this in more generality. Some work in this direction, involves representing subleading power collinear limits of gluon amplitudes in terms of mixed Einstein-Yang-Mills amplitudes [@Stieberger:2015kia]. For helicity configurations that do not have a leading power limit, it is also simple to get the power suppressed squared amplitude in the collinear limit to $\mathcal{O}(\lambda^2)$, since this comes only from the interference of the two $\mathcal{O}(\lambda)$ suppressed amplitudes. Neglecting any color structures, we find that the amplitude squared have the following subleading $\mathcal{O}(\lambda^0)$ terms: $$\begin{aligned} \label{eq:hqqQQsq} |A(1_q^+,2_{\bar{q}}^-;3_Q^+,4_{\bar{Q}}^-;5_H)|^2_{{{1 \parallel 4}}}&=0\times\mathcal{O}(\lambda^{-2})+0\times\mathcal{O}(\lambda^{-1})+\frac{\left[(1-x)\ s_{p2}+x\ s_{p3}\right]^2}{4\ x(1-x)s_{p2}s_{p3}}+\mathcal{O}(\lambda)\,, \nn \\ |A(1_q^+,2_{\bar{q}}^-;3_Q^-,4_{\bar{Q}}^+;5_H)|^2_{{{1 \parallel 4}}}&=0\times\mathcal{O}(\lambda^{-2})+0\times\mathcal{O}(\lambda^{-1})+\frac{s_{23}^2}{4\ x(1-x)s_{p2}s_{p3}}+\mathcal{O}(\lambda)\,.\end{aligned}$$ In this case, they involve only Mandelstam invariants with the direction $p$, as well as the momentum fraction $x$, but do not otherwise involve the substructure of the splitting. These can now be trivially integrated over the collinear phase space to obtain subleading power corrections for an event shape observable, as we will describe in . The previous limit was particularly simple due to the fact that it did not have a leading power term. To illustrate a slightly more complicated example, we examine the behavior of the amplitudes in (\[eq:hqqQQamp\]) when the $12$ quarks become collinear. This collinear limit has a leading power term, which is governed by the standard leading power collinear factorization. We must therefore keep all the subleading terms in the expansion of the spinors. 
In this case the required substitutions are $$\begin{aligned} \langle 12\rangle&=\zeta e^{i\phi}\langle pr\rangle\,,& [12]&=\zeta e^{-i\phi}[pr]\,, \\ \langle 1i\rangle&=\sqrt{x}\ \langle pi\rangle-\zeta e^{i\phi}\sqrt{1-x}\ \langle ri\rangle\,,& [1i]&=\sqrt{x}\ [pi]-\zeta e^{-i\phi}\sqrt{1-x}\ [ri],\ \text{for}\ i=3,4\,, \nn \\ \langle 2i\rangle&=\sqrt{1-x}\ \langle pi\rangle+\zeta e^{i\phi}\sqrt{x}\ \langle ri\rangle\,,& [2i]&=\sqrt{1-x}\ [pi]+\zeta e^{-i\phi}\sqrt{x}\ [ri],\ \text{for}\ i=3,4 \,. \nn\end{aligned}$$ Plugging these in to the amplitudes and expanding, we arrive at the following structure for the amplitudes $$\begin{aligned} A=A^{(0)}+A^{(1)}+A^{(2)}+\cdots\,,\end{aligned}$$ where the leading power term $A^{(0)}\sim\mathcal{O}(\lambda^{-1})$, and each successive term acquires a power in $\lambda$, so $A^{(n)}\sim\mathcal{O}(\lambda^{-1+n})$. The leading power amplitudes obey the well known factorization into a splitting function and a lower point amplitude $$\begin{aligned} \label{eq:LPsplitfact} A^{(0)}=\sum_{{h}=\pm}\text{Split}_{-{h}}(a,b;x) A_{n-1}(\ldots, p^{h}, \ldots)\,.\end{aligned}$$ where the tree level splitting amplitudes can be found summarized in Appendix II of Ref. [@Bern:1994zx]. For our example, the lower point amplitudes we require are $$\begin{aligned} A(p^+;3_q^+,4_{\bar q}^-;5_H)=\frac{[p3]^2}{[34]}\,, \qquad A(p^-;3_q^+,4_{\bar q}^-;5_H)=\frac{\langle p4\rangle^2}{\langle34\rangle}\,,\end{aligned}$$ and the relevant splitting functions are given by $$\begin{aligned} \text{Split}_{+}(q,\bar{q};x)=\frac{(1-x)}{\langle q\bar{q}\rangle}\,, \qquad \text{Split}_{-}(q,\bar{q};x)=\frac{x}{[q\bar{q}]}\,.\end{aligned}$$ We therefore have $$\begin{aligned} A^{(0)}(1_q^+,2_{\bar{q}}^-;3_Q^+,4_{\bar{Q}}^-;5_H)_{{{1 \parallel 2}}}&=\frac{(1-x)e^{-i\phi}\ \langle p4\rangle^2}{2 \zeta \langle pr\rangle\langle 34\rangle}+\frac{x\ e^{i\phi}[3p]^2}{2\zeta [rp][43]}\\ &=\frac{(1-x)e^{-i\phi}}{2\zeta \langle pr\rangle}\ A(p^-;3_Q^+,4_{\bar Q}^-;5_H)+\frac{x\ e^{i\phi}}{2\zeta [pr]}\ A(p^+;3_Q^+,4_{\bar Q}^-;5_H)\,,\nn\end{aligned}$$ and $$\begin{aligned} A^{(0)}(1_q^+,2_{\bar{q}}^-;3_Q^-,4_{\bar{Q}}^+;5_H)_{{{1 \parallel 2}}}&=-\bigg[\frac{(1-x)e^{-i\phi}\ \langle p3\rangle^2}{2 \zeta\langle pr\rangle\langle 34\rangle}+\frac{x\ e^{i\phi}[4p]^2}{2\zeta[rp][43]} \bigg] \\ &= \frac{(1-x)e^{-i\phi}}{2\zeta\langle pr\rangle}\ A(p^-;3_Q^-,4_{\bar Q}^+;5_H)+\frac{x\ e^{i\phi}}{2\zeta[pr]}\ A(p^+;3^-_{Q},4^+_{\bar Q}) \nn \,,\end{aligned}$$ which implies $$\begin{aligned} A^{(0)}(1_q^+,2_{\bar{q}}^-;3_Q^+,4_{\bar{Q}}^-;5_H)_{{{1 \parallel 2}}}&=\sum_{{h}=\pm}\text{Split}_{-{h}}(1_q^+,2_{\bar{q}}^-;x)\ A(p^{h};3_Q^+,4_{\bar Q}^-;5_H), \\ A^{(0)}(1_q^+,2_{\bar{q}}^-;3_Q^-,4_{\bar{Q}}^+;5_H)_{{{1 \parallel 2}}}&=\sum_{{h}=\pm}\text{Split}_{-{h}}(1_q^+,2_{\bar{q}}^-;x)\ A(p^{h};3_Q^-,4_{\bar Q}^+;5_H)\,,\nn\end{aligned}$$ as expected from [Eq. ]{}. More interesting are the subleading power terms. 
We find $$\begin{aligned} A^{(1)}(1_q^+,2_{\bar{q}}^-;3_Q^+,4_{\bar{Q}}^-;5_H)_{{{1 \parallel 2}}}&=\sqrt{x(1-x)}\bigg[\frac{\langle p4\rangle\langle r4\rangle}{\langle pr\rangle\langle 34\rangle}-\frac{[3p][3r]}{[rp][43]}\bigg]\,,\\ A^{(1)}(1_q^+,2_{\bar{q}}^-;3_Q^-,4_{\bar{Q}}^+;5_H)_{{{1 \parallel 2}}}&=-\sqrt{x(1-x)}\bigg[\frac{\langle p3\rangle\langle r3\rangle}{\langle pr\rangle\langle 34\rangle}-\frac{[4p][4r]}{[rp][43]}\bigg]\,, \nn \end{aligned}$$ and $$\begin{aligned} A^{(2)}(1_q^+,2_{\bar{q}}^-;3_Q^+,4_{\bar{Q}}^-;5_H)_{{{1 \parallel 2}}}&=\zeta \frac{xe^{i\phi}\ \langle r4\rangle^2}{2\langle pr\rangle\langle 34\rangle}+\zeta\frac{(1-x)\ e^{-i\phi}[3r]^2}{2[rp][43]}\,, \\ A^{(2)}(1_q^+,2_{\bar{q}}^-;3_Q^-,4_{\bar{Q}}^+;5_H)_{{{1 \parallel 2}}}&=-\zeta \frac{xe^{i\phi}\ \langle r3\rangle^2}{\langle pr\rangle\langle 34\rangle}-\zeta \frac{(1-x)\ e^{-i\phi}[4r]^2}{2[rp][43]}\,. \nn\end{aligned}$$ These amplitudes have an interesting structure. First, note that they depend on both the $p$ and $r$ directions. These $\cO(\lambda^2)$ suppressed amplitudes have the interesting feature that they factorize as $$\begin{aligned} \label{eq:sub_PT} A^{(2)}=\sum_{{h}=\pm}\text{Split}_{-{h}}(a,b;x) A_{n-1}(\ldots, r^{h}, \ldots)\,,\end{aligned}$$ namely onto a lower point amplitude but involving the residual vector $r$. It would be interesting to understand in general the factorization structure of these amplitudes, even at tree level. Some work in this direction at tree level has been done in [@Stieberger:2015kia; @Nandan:2016ohb]. It seems that this depends significantly on whether or not there exists a leading power collinear limit, since [Eq. ]{} does not hold for our earlier example. However, for our purposes, it is sufficient to be able to expand the spinor amplitudes to subleading power, whether or not a general formula can be constructed. To compute the cross section to $\cO(\lambda^2)$, we must now consider the different interference terms, which gives a more complicated result. Noting that the color structure is identical for all helicity configurations, we simply sum over all possible configurations to get: $$\sum_{\text{all configs}}|A|^2=(|A|^2)^{(0)}+(|A|^2)^{(1)}+(|A|^2)^{(2)}+\dots$$ where the order of the various terms here is given by $(|A|^2)^{(k)}\sim \mathcal{O}(\lambda^{-2+k})$. The leading term at $\mathcal{O}(\lambda^{-2})$ is given by: $$\begin{aligned} (|A|^2)^{(0)} &=\frac{(1-2x+2x^2)(s_{p3}^2+s_{p4}^2)-4x(1-x)s_{p3}s_{p4}}{2\zeta^2 s_{pr}s_{34}} \\ &+\frac{4x(1-x)\ \text{Re}(s_{p4}\langle p|3|r]e^{-i\phi}-s_{p3}\langle p|4|r]e^{-i\phi})^2}{\zeta^2 s_{pr}^2s_{34}^2} \,, \nn\end{aligned}$$ where $\text{Re}(z)$ denotes the real part of $z$. The factors $\langle p|k|r]e^{-i\phi}$ turn out to appear frequently in the squared amplitudes, so it is worth getting some intuition by evaluating it explicitly. We have that $$\begin{aligned} \langle p|k|r] &= \bra{p-}\gamma_{\mu}\ket{r-} k^{\mu} \,.\end{aligned}$$ Using the Dirac basis for the gamma matrices, we have that $$\begin{aligned} \ket{p-}&=\frac{1}{\sqrt{2}}\begin{pmatrix}\xi\\ -\xi \end{pmatrix}\ \ \ \text{ where }\xi=\begin{pmatrix}0\\ -\sqrt{p^{-}}\end{pmatrix} \,,\\ |r-\rangle&=\frac{1}{\sqrt{2}}\begin{pmatrix}\eta\\ -\eta \end{pmatrix}\ \ \ \text{ where }\eta=\begin{pmatrix}\sqrt{r^{+}}\\ 0\end{pmatrix} \nn \,.\end{aligned}$$ Thus, it follows that $$\begin{aligned} \bra{p-}\gamma^{0}\ket{r-}&=\xi^{\dagger}\eta=0 \,, \\ \bra{p-}\gamma^{i}\ket{r-}&=-\xi^{\dagger}\sigma^{i}\eta \,. 
\nn\end{aligned}$$ This enables us to obtain the following expression $$\begin{aligned} \langle p|k|r] &= \sqrt{p^{-}r^{+}}(k_x+ik_y) \,,\end{aligned}$$ where $(k_x,k_y)$ are the components of the transverse momentum. Thus, we see that a particular simple expression follows for the following term$$\begin{aligned} \text{Re}(\langle p|k|r]e^{-i\phi}) &=\sqrt{p^-r^+}(k_x\cos \phi+k_y\sin\phi) \nn \\ &= \sqrt{s_{pr}}\ |k_{T}| (\hat{k}\cdot\hat{\phi})\end{aligned}$$ where $|k_{T}|$ is the magnitude of the transverse momentum, and $\hat{k}$ and $\hat{\phi}$ are unit vectors in the plane transverse to $\hat{n}$. We thus gain some intuition for the appearance of the factor. Moreover, it becomes apparent that: - If the expression appears linearly, it will vanish upon integration to obtain the cross section, since it is odd in $\hat{\phi}$. - Secondly, it captures the effect of projecting the momentum from other sectors onto transverse components in the $n$ sector. We now write the $(|A|^2)^{(1)}\sim\mathcal{O}(\lambda^{-1})$ term : $$\begin{split} (|A|^2)^{(1)}=\sqrt{x(1-x)}(1-2x) \Bigg\{ \frac{2(s_{p3}+s_{p4})\left[|p_{3T}|(\hat{p_{3}}\cdot\hat{\phi})+|p_{4T}|(\hat{p_{4}}\cdot\hat{\phi})\right]}{ \zeta \sqrt{s_{pr}}s_{34}}\\ -\frac{4\ \left(s_{p4}|p_{3T}|(\hat{p_{3}}\cdot\hat{\phi})-s_{p3}|p_{4T}|(\hat{p_{4}}\cdot\hat{\phi})\right)(s_{p4}s_{r3}-s_{p3}s_{r4})}{\zeta s_{pr}^{3/2}s_{34}^2} \Bigg\} \end{split}$$ which as argued in the previous part vanishes upon integration. Finally the most interesting term is the subsubleading term $(|A|^2)^{(2)}\sim\mathcal{O}(\lambda^0)$ : $$\begin{aligned} (|A|^2)^{(2)} &=1-\frac{(1-2x+2x^2)(s_{p4}s_{r3}+s_{p3}s_{r4})}{ s_{pr}s_{34}} \\ &+\frac{2x(1-x)(s_{p3}s_{r3}+s_{p4}s_{r4})}{s_{pr}s_{34}}+\frac{4 x(1-x)\left[|p_{3T}|(\hat{p_{3}}\cdot\hat{\phi})+|p_{4T}|(\hat{p_{4}}\cdot\hat{\phi})\right]^2}{s_{34}} \nn \\ &+\frac{4x(1-x) \left[s_{p4}|p_{3T}|(\hat{p_{3}}\cdot\hat{\phi})-s_{p3}|p_{4T}|(\hat{p_{4}}\cdot\hat{\phi})\right]\left[s_{r4}|p_{3T}|(\hat{p_{3}}\cdot\hat{\phi})-s_{r3}|p_{4T}|(\hat{p_{4}}\cdot\hat{\phi})\right]}{s_{pr}s_{34}^2} \nn \\ &+\frac{(1-2x)^2(s_{p4}s_{r3}-s_{p3}s_{r4})^2}{s_{pr}^2s_{34}^2} \,. \nn\end{aligned}$$ Since $\hat \phi$ appears quadratically here, this amplitude does not vanish when integrated over the angle $\phi$. Using the parametrization of this section, one can efficiently expand any amplitude expressed in terms of spinors in the two particle collinear limit. As we will show below, this is in fact sufficient to derive the leading logarithms at subleading power for event shape observables at any order in $\alpha_s$. Subleading Power Logarithms in Event Shape Observables {#sec:log} ====================================================== Having understood how to expand spinor amplitudes in the subleading power collinear limit, we would like to apply this to the calculation of subleading power logarithms for multi-jet observables. While our expansion techniques can be used quite generally, as an example of particular interest, we will consider the $N$-jet event shape $N$-jettiness, ${\mathcal{T}}_N$ [@Stewart:2010tn]. 
The $N$-jettiness observable has received significant recent attention since it can be used to formulate a subtraction scheme for performing NNLO calculations with jets in the final state, known as $N$-jettiness subtractions [@Boughezal:2015aha; @Gaunt:2015pea], which have been used to compute $W/Z/H/\gamma+$ jet at NNLO [@Boughezal:2015dva; @Boughezal:2015aha; @Boughezal:2015ded; @Boughezal:2016dtm; @Campbell:2017dqk], as well as inclusive photon production [@Campbell:2016lzl]. The $N$-jettiness observable is defined as [@Stewart:2010tn] $$\begin{aligned} \label{eq:TauN_def} {\mathcal{T}}_N &= \sum_{k \in \text{event}} \min_i \Bigl\{ \frac{2 q_i\cdot p_k}{Q_i} \Bigr\} = \sum_{j} {\mathcal{T}}_{N_j} \,,\qquad {\mathcal{T}}_{N_j} = \sum_{\ell \in \text{coll}_j} \frac{2 q_j\cdot p_\ell}{Q_j} \,,\end{aligned}$$ where in the first equality the sum over $k$ runs over the total number of final state particles and the minimum runs over $i = \{a, b, 1, \ldots, N\}$. In the remaining terms, the sum over $j$ runs over the different collinear sectors $j = \{a, b, 1, \ldots, N\}$ and the sum over $\ell \in \text{coll}_j $ runs over the number of particles in the collinear sector $j$ as determined by the minimization. This observable can therefore be viewed as projecting the radiation in the event onto $N$ axes plus the two beam directions, as shown in . While more general measures are possible, the above choice $d_i(p_k) = (2q_i\cdot p_k)/Q_i$ is convenient for theoretical calculations, because it is linear in the momenta $p_k$ [@Jouttenus:2011wh; @Jouttenus:2013hs]. The $q_i$ are massless reference momenta corresponding to the momenta of the hard partons present at Born level, $$q_i^\mu = E_i n_i^\mu \,,\qquad n_i^\mu = (1, \hat n_i) \,,\qquad {\lvert\hat n_i\rvert} = 1 \,.$$ In particular, the reference momenta for the incoming partons are given by $$\label{eq:def_q} q_{a,b}^\mu = x_{a,b} \frac{E_{\rm cm}}{2}\, n^\mu_{a,b} \,,\qquad n_{a,b}^\mu = (1, \pm \hat z) \,,$$ where $$\begin{aligned} \label{eq:beamref} 2E_a &= x_a E_{\rm cm} = n_b \cdot (q_1 + \cdots + q_N + q_L) = Q e^Y \,, \nn \\ 2E_b &= x_b E_{\rm cm} = n_a \cdot (q_1 + \cdots + q_N + q_L) = Q e^{-Y} \,, \nn \\ Q^2 &= x_a x_b E_{\rm cm}^2 \,, \qquad Y = \frac{1}{2}\ln\frac{x_a}{x_b} \,.\end{aligned}$$ Here, $q_L$ is the total momentum of any additional color-singlet particles in the Born process, and $Q$ and $Y$ now correspond to the total invariant mass and rapidity of the Born system. A more detailed discussion of the construction of the $q_i$ in the context of fixed-order calculations and $N$-jettiness subtractions can be found in Ref. [@Gaunt:2015pea]. We will discuss this in detail below only for $1$-jettiness, which is the case of interest here. For $\tau_N={\mathcal{T}}_N/Q \ll 1$, with $Q$ a typical hard scale, one is forced into the soft and collinear limits, and one can expand the cross section in powers of $\tau_N$ as $$\begin{aligned} \frac{{\mathrm{d}}\sigma}{{\mathrm{d}}\tau_N} &= \frac{{\mathrm{d}}\sigma^{(0)}}{{\mathrm{d}}\tau_N} + \frac{{\mathrm{d}}\sigma^{(2)}}{{\mathrm{d}}\tau_N}+ \frac{{\mathrm{d}}\sigma^{(4)}}{{\mathrm{d}}\tau_N} + \dotsb \,.\end{aligned}$$ The first term in this expression, ${\mathrm{d}}\sigma^{(0)}/{\mathrm{d}}\tau_N$ contains the most singular terms, with the scaling $$\begin{aligned} \frac{{\mathrm{d}}\sigma^{(0)}}{{\mathrm{d}}\tau_N} &\sim \delta(\tau_N)+ \biggl[\frac{ {\mathcal{O}(1)} \ln^j\tau_N }{\tau_N}\biggr]_+ \,,\end{aligned}$$ with various values of $j\geq 0$. 
These are referred to as the leading power terms, and a factorization formula [@Stewart:2009yx] describing these terms has been derived in SCET [@Bauer:2000ew; @Bauer:2000yr; @Bauer:2001ct; @Bauer:2001yt; @Bauer:2002nz]. It takes the schematic form $$\begin{aligned} \label{eq:sigma} \frac{{\mathrm{d}}\sigma^{(0)}}{{\mathrm{d}}\tau_N} &= \int\!{\mathrm{d}}x_a\, {\mathrm{d}}x_b\, {\mathrm{d}}\Phi_{N}(q_a \!+ q_b; q_1, \ldots)\, \\\nn &\quad \times \sum_{\kappa} \tr\,\bigl[ {\widehat{H}}_{\kappa}(\{q_i\}) {\widehat{S}}_\kappa \bigr] \otimes \Bigl[ B_{\kappa_a} B_{\kappa_b} \prod_J J_{\kappa_J} \Bigr] \,.\end{aligned}$$ Here $B$ are beam functions, $J$ are jet functions, and $S$ is the soft function, and $\kappa$ denotes different partonic channels. The kinematic dependence on the jet directions is described by the hard function, $H$, which is the infrared finite part of the squared matrix element for the $N$-jet process. We will compare this kinematic dependence to what we find later for the power corrections. Beyond the terms described by this factorization formula, there are terms which scale as $$\begin{aligned} \label{eq:scaling_lam2} \tau_N \frac{{\mathrm{d}}\sigma^{(2k)}}{{\mathrm{d}}\tau_N} &\sim {\mathcal{O}(\tau_N^k\, \ln^j\!\tau_N )} \,,\end{aligned}$$ with $k\geq 1$, $j\geq 0$. Since they are suppressed by powers of the observable, $\tau_N^k$, we refer to them as power corrections. It has been shown that the calculation of these power suppressed terms can significantly improve the performance of $N$-jettiness subtractions. This has been explicitly illustrated in the case of color singlet production in [@Moult:2016fqy; @Boughezal:2016zws; @Moult:2017jsg; @Boughezal:2018mvf; @Ebert:2018lzn]. However, one would like to extend this to the case of multiple jets in the final state, where they are most needed. The calculation of the power corrections for ${\mathcal{T}}_N$ in the case of multiple jets is quite complicated. For applications, one would like to compute it to NNLO, namely with two additional emissions. Due to the presence of the multiple regions inherent in the $N$-jettiness definition, the multiparticle phase space becomes very complicated. Fortunately, in [@Moult:2016fqy] consistency relations were derived that show that the leading logarithms at subleading power can be computed to any order in $\alpha_s$ by considering virtual corrections to the subleading power two-particle collinear limits. While this was shown in the context of color singlet production, it also holds more generally, as discussed in . This implies that the parametrization of for the two particle collinear phase space is in fact sufficient to obtain the full leading log result at subleading power. This is a remarkable simplification, as it enables the calculation to be performed at any order in $\alpha_s$ as a sum over two particle collinear limits, instead of having to consider complicated multi-particle phase space integrals. In this section we will show how to efficiently extract the leading logarithmic subleading power corrections for a process for which the helicity amplitudes are known by exploiting the consistency relations and applying the methods explained in [Sec. \[sec:toolbox\]]{}. In we review the consistency relations, which will allow us to derive all our results from the two-particle collinear limit. In [Sec. \[sec:phasespace\]]{} we set up the phase space for integrating over the two particle collinear limit, and then in we give an explicit example for $H\to q\bar q gg$. 
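Before going through these steps, it may help to have a concrete implementation of the measurement in [Eq. \[eq:TauN\_def\]]{} in mind. The following is a minimal sketch: the four-momenta, the reference axes and the normalization factors $Q_i$ are all hypothetical choices, with $Q_i = 2E_i$ used purely for illustration.

```python
import numpy as np

def tau_N(particles, axes, Q):
    # N-jettiness: sum over final-state momenta of min_i 2 q_i.p_k / Q_i,
    # with massless reference momenta q_i = E_i n_i and normalizations Q_i.
    def dot(a, b):
        return a[..., 0] * b[..., 0] - np.sum(a[..., 1:] * b[..., 1:], axis=-1)
    d = np.array([[2 * dot(q, p) / Qi for q, Qi in zip(axes, Q)] for p in particles])
    return np.sum(np.min(d, axis=1))

# toy configuration: beams along +/- z and one jet axis (hypothetical values)
na = np.array([1.0, 0.0, 0.0,  1.0])
nb = np.array([1.0, 0.0, 0.0, -1.0])
nJ = np.array([1.0, 0.0, np.sin(0.4), np.cos(0.4)])
axes = [50.0 * na, 50.0 * nb, 40.0 * nJ]          # q_i = E_i n_i
Q    = [100.0, 100.0, 80.0]                       # e.g. Q_i = 2 E_i

# a hard particle near the jet axis plus a soft wide-angle emission
p_hard = 39.0 * np.array([1.0, 0.0, np.sin(0.41), np.cos(0.41)])
p_soft =  0.8 * np.array([1.0, np.sin(1.2), 0.0, np.cos(1.2)])
print(tau_N([p_hard, p_soft], axes, Q))   # small: the radiation is either collinear to an axis or soft
```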
Since the goal of this paper is to illustrate the method, rather than perform a complete calculation for a particular process, we will illustrate it on simple tree level amplitudes. However, since we show that the leading logarithm can be extracted from the two particle collinear phase space, at higher orders one simply would consider the two-particle collinear limit of a more complicated amplitude. We also note that for the complete calculation of the power corrections for the $N$-jettiness observable, one must consider not only power corrections arising from i) the expansion of the matrix element, but also power corrections arising from ii) the expansion of the phase space, ii) the flux factors, and iv) the measurement definition. These sources of power corrections, and techniques for systematically organizing their expansion have been discussed in great detail in [@Ebert:2018lzn]. These later three types of corrections are primarily a bookkeeping exercise, and while they do have the same importance for the final result, they are not associated with the subleading power expansion of the amplitude. In this section, we focus purely on case i), illustrating how the leading logarithms can be extracted from expanding spinor amplitudes in the two-particle collinear limits. The application to a full process of interest will be carried out in a future publication. Consistency Relations {#sec:consistency} --------------------- We begin by reviewing the consistency relations derived in [@Moult:2016fqy], which will reduce the problem of computing the subleading power leading logarithms of multi-jet event shape observables to the calculation of phase space integrals over two-particle collinear limits. The results of [@Moult:2016fqy] were presented in the context of color singlet production, but apply more generally, since they follow from the properties of SCET amplitudes and observables, and do not depend on the particular hard process. We consider the fixed order calculation of the cross section in SCET. In SCET, each particle is either soft, collinear, or hard, and each graph gives a result with a homogeneous scaling in $\tau_N$, depending on the number of soft, collinear and hard particles. Explicitly, we can write the $n$-loop result for the cross section at subleading power, which we denote with the super script $(2,n)$, as $$\begin{aligned} \label{eq:constraint_setup} \frac{{\mathrm{d}}\sigma^{(2,n)}}{{\mathrm{d}}\tau_N} = & \sum_{\kappa}\sum_{i=0}^{2n-1} \frac{c_{\kappa,i}}{\epsilon^i} \left( \frac{\mu^{2n}}{Q^{2n} \tau_N^{m(\kappa)}} \right)^\epsilon \{ f f, f' f, f'' f, f'f' \} \nonumber \\ & + \dots \,.\end{aligned}$$ Here the ellipses include UV renormalization, and collinear PDF renormalization, which are not leading logarithmic effects, and so we will not discuss them further. In the first line, we have included a number of different PDF structures, including derivatives, which can exist in the final result. In [Equation ]{} we suppress all flavor indices on the coefficients $c$, and on the PDFs, and we will continue to do this throughout this section. For example, it is implicit that the PDF structures $f'_a f_b$ and $f_a f'_b$ both occur, etc. The origin of these terms will be discussed in . The consistency relations will hold separately for each different structure. Their arguments have been dropped, since they are not relevant for the current discussion. 
In this expression $\kappa$ and $\gamma$ label the scalings obtained from the contributing particles, i.e., hard, collinear, or soft, and $m(\kappa)\geq 1$ is an integer. To be concrete, at one loop ($n=1$), we have a single particle, which can be either soft, or collinear. We therefore have $$\begin{aligned} \label{eq:classes1} \text{soft:} \qquad &\kappa=s\,, \qquad m(\kappa) =2\,,\nn \\ \text{collinear:} \qquad &\kappa=c\,, \qquad m(\kappa)=1 \,.\end{aligned}$$ At two loops ($n=2$), as relevant for NNLO we have the following possibilities $$\begin{aligned} \label{eq:classes2} \text{hard-collinear:}\qquad &\kappa = hc\,, \qquad m(\kappa) =1\,,\nn \\ \text{hard-soft:} \qquad &\kappa= hs\,, \qquad m(\kappa) =2\,,\nn \\ \text{ collinear-collinear:} \qquad & \kappa=cc\,, \qquad m(\kappa) =2\,,\nn \\ \text{collinear-soft:} \qquad & \kappa= cs\,, \qquad m(\kappa) =3\,,\nn \\ \text{soft-soft:} \qquad & \kappa= ss\,,\qquad m(\kappa) =4\,. \end{aligned}$$ The $c_{\kappa,i}$ in [Equation ]{} are coefficients of the poles in $\epsilon$ arising from the graphs with the different scalings, and differ for the different cases $\{f f, f' f, f'' f, f'f' \}$. The main insight which allows for a dramatic simplification is that the pole terms must cancel, which places a number of relations on the values of the $c_{\kappa,i}$. In particular, at one loop, one finds $$\label{eq:one_loop_constraint} c_{s,1}=-c_{c,1} \,.$$ The leading logarithmic result at NLO for a given channel and PDF structure can then be written as $$\begin{aligned} \label{eq:constraints_final_NLO} \frac{{\mathrm{d}}\sigma^{(2,2)}}{{\mathrm{d}}\tau_N} &= \ln \tau_N ~\left(c^{(ff)}_{c,1} f f + c^{(f'\!f)}_{c,1} f' f + c^{(f''\!f)}_{c,1} f'' f+ c^{(f'\!f')}_{c,1} f'f' \right) \,,\end{aligned}$$ implying that one need only compute either the soft contributions, or the collinear contributions. At two loops, we have $$\begin{aligned} \label{eq:constraints_summary} c_{hc,3} &= \frac{c_{cs,3}}{3} = -c_{ss,3}= - \frac{1}{3} (c_{hs,3}+c_{cc,3}) \,,\end{aligned}$$ as well as a number of additional relations that were given in [@Moult:2016fqy], but that are not relevant for the current discussion. These relations apply in each color channel, and for each combination of PDFs. For a particular contribution, we can then write the leading logarithm purely in terms of the hard-collinear coefficient $$\begin{aligned} \label{eq:constraints_final} \frac{{\mathrm{d}}\sigma^{(2,2)}}{{\mathrm{d}}\tau_N} &= \ln^3 \tau_N~ \left(c^{(ff)}_{hc,3} f f + c^{(f'\!f)}_{hc,3} f' f + c^{(f''\!f)}_{hc,3} f'' f+ c^{(f'\!f')}_{hc,3} f'f' \right) \,.\end{aligned}$$ Again, the different PDF structures in parentheses indicate that this will hold for each structure. This implies that one need only consider a two particle collinear limit with hard virtual loops, and that no multi-particle phase space integrals need to be performed. One can simply perform the two particle phase space integral of the amplitude expanded in the two-particle collinear limit, for which we have given a convenient parametrization in terms of spinors. Collinear Phase Space Integral {#sec:phasespace} ------------------------------ Having shown that we can extract everything from the two particle collinear limits, in this section, we show how we can easily extract the leading logarithm at next to leading power for the matrix element corrections for the $N$-jettiness observable. 
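Before setting up the phase space, the pole cancellation argument of the previous subsection can be made completely explicit in the simplest setting. The following symbolic sketch (with $\mu=Q$, a single partonic channel and PDF structure, and an arbitrary overall normalization) reproduces [Eq. \[eq:one\_loop\_constraint\]]{} and the statement that the NLO leading logarithm is fixed by the collinear coefficient alone.

```python
import sympy as sp

eps, tau = sp.symbols('epsilon tau', positive=True)
c_c, c_s = sp.symbols('c_c c_s')              # collinear (m=1) and soft (m=2) coefficients

# one-loop structure of the subleading power cross section, for mu = Q
sigma = c_c / eps * tau**(-eps) + c_s / eps * tau**(-2 * eps)

expansion = sp.expand(sp.series(sigma, eps, 0, 1).removeO())
pole      = expansion.coeff(eps, -1)          # c_c + c_s: must cancel
finite    = expansion.coeff(eps,  0)

c_s_sol = sp.solve(sp.Eq(pole, 0), c_s)[0]    # c_s = -c_c
print(sp.simplify(finite.subs(c_s, c_s_sol))) # -> c_c*log(tau): the LL is fixed by the collinear piece
```

The same exercise with the five scalings of [Eq. \[eq:classes2\]]{} underlies the two-loop relations quoted above.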
For concreteness we will consider the case of color singlet production in association with $N$-jets in $pp$ collisions; other cases, like the decay of a color singlet into an arbitrary number of jets, can be worked out along the same lines. Following the notation of Ref. [@Gaunt:2015pea], to study color singlet production in association with $N$-jets in $pp$ collisions at Born level we take two incoming beams with momentum $q_a^\mu$ and $q_b^\mu$, $N$-jets with momenta $\{q_i^\mu\}_{i=1}^N$ and some non-hadronic final states (the color singlet) which we take to have total momentum $q^\mu$, and we refer to the complete Born phase space $\Phi_N$ as $${\mathrm{d}}\Phi_{N} = {\mathrm{d}}\Phi_N(q_a + q_b;q_1,\dots,q_N,q)\, {\mathrm{d}}\Phi_L(q) \sum_k s_k\,,$$ where $ \Phi_N(q_a + q_b;q_1,\dots,q_N,q)$ is the standard $N$-body Lorentz invariant phase space, the phase space integral $ {\mathrm{d}}\Phi_L(q)$ describes the kinematics of the non-hadronic final states and $\sum_k s_k$ includes any symmetry, color and/or averaging factors, which differ for each partonic channel. We define the *Born measurement* ${\hat{\cM}_\text{born}}$ to fix all kinematic variables that are not zero at leading order, hence the cross section ${\mathrm{d}}\sigma/({\mathrm{d}}{\hat{\cM}_\text{born}}{\mathrm{d}}{\mathcal{T}}_N)$, which is fully differential in both the Born measurement ${\hat{\cM}_\text{born}}$ and ${\mathcal{T}}_N$, for such a process at LO is by definition $$\frac{{\mathrm{d}}\sigma^{\mathrm{LO}}}{{\mathrm{d}}{\hat{\cM}_\text{born}}\,{\mathrm{d}}{\mathcal{T}}_N} = \int {\mathrm{d}}\Phi_{N}\, |A|^2\, {\hat{\cM}_\text{born}}(\Phi_N)\, \delta\big({\mathcal{T}}_N - {\mathcal{T}}_N[\Phi_N]\big) = \sigma_0({\hat{\cM}_\text{born}})\, \delta({\mathcal{T}}_N)\,.$$ Now let's consider an emission with momentum $k^\mu$ collinear to one of the collinear directions. For definiteness, let's consider an emission collinear to the $N$-th jet. We will later perform a sum over all collinear directions. The phase space for color singlet + $N$-jets and an additional emission, $\Phi_{N+1}$, can be written as a function of the born phase space for color singlet + $N$-jets, $\Phi_N$, and the two particle phase space $\Phi_2$ via $$\begin{aligned} \label{eq:PSdecomp} \int {\mathrm{d}}\Phi_{N+1}(q;p_1,\dots,p_{N},k) &= \int {\mathrm{d}}\Phi_{2}({\tilde{P}};p_{N},k) {\mathrm{d}}\Phi_{N}(q;p_1,\dots,p_{N-1},{\tilde{P}}) (2\pi)^3 {\mathrm{d}}m_0^2 \,,\end{aligned}$$ where $m_0^2$ is the virtuality of ${\tilde{P}}$ (i.e. in ${\mathrm{d}}\Phi_N$ we have $\delta({\tilde{P}}^2 - m_0^2)$) and the two particle phase space[^2] is $${\mathrm{d}}\Phi_{2}({\tilde{P}};p_{N},k) = \frac{{\mathrm{d}}^d p_N}{(2\pi)^{d-1}}\,\delta^+(p_N^2)\, \frac{{\mathrm{d}}^d k}{(2\pi)^{d-1}}\,\delta^+(k^2)\, (2\pi)^4\,\delta^{(4)}\big({\tilde{P}}- p_N -k\big)\,.$$ One can then use the Born measurement ${\hat{\cM}_\text{born}}$ to fix all the integrals in ${\mathrm{d}}\Phi_{N}(q;p_1,\dots,p_{N-1},{\tilde{P}}) $. If we are interested in the differential cross section in ${\mathcal{T}}_N$, then the phase space is constrained by the ${\mathcal{T}}_N$ measurement function $$\label{eq:tau_Nmeas} \delta_{{\mathcal{T}}_N} \equiv \delta\big({\mathcal{T}}_N - {\mathcal{T}}_N[\{k,\Phi_N\}]\big) = \delta\Big({\mathcal{T}}_N - \sum_j {\mathcal{T}}_{N_j} \Big)\,,$$ which follows from the ${\mathcal{T}}_N$ definition of [Eq. ]{}. As explained in [@Stewart:2010tn], at leading power with a single collinear emission we have $${\mathcal{T}}_{N_j}\big|_{\mathrm{LP}} = t_j/Q\,,$$ where $t_j$ is the virtuality of the collinear sector $j$ where the emission lies. In our case all $\{p_j\}_{j\neq N}$ can be taken as purely collinear such that their collinear sector has no virtuality. However, even if we choose our axis such that the ${\tilde{P}}^\mu$ momentum has no perpendicular momentum, ${\tilde{P}}_\perp =0$, the vector ${\tilde{P}}$ still has an invariant mass. With these choices we therefore have $${\mathcal{T}}_{N_{j}} = 0 \quad \text{for}~ j \neq N \,, \qquad {\mathcal{T}}_{N_{N}} = {\tilde{P}}^+ \,.$$ Thus the measurement takes the form $$\delta_{{\mathcal{T}}_N} = \delta\big({\mathcal{T}}_N - {\tilde{P}}^+\big)\,.$$
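To make the last identification explicit: the $N$-th collinear sector consists of $p_N$ and $k$, so its virtuality is $t_N={\tilde{P}}^2=m_0^2$, and with the axis choice ${\tilde{P}}_\perp=0$, writing $Q$ for the large light-cone component of ${\tilde{P}}$ (the normalization adopted in the next step), one has $${\mathcal{T}}_{N_{N}}=\frac{t_N}{Q}=\frac{{\tilde{P}}^+\,{\tilde{P}}^-}{Q}={\tilde{P}}^+\,.$$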
Note that in general $\delta_{{\mathcal{T}}_N}$ can have an expansion in $\lambda$. Since we are interested only in the leading power phase space we will keep always the leading term for $\delta_{{\mathcal{T}}_N}$ and we will refer to it as $\delta^{(0)}_{{\mathcal{T}}_N}$. The $d$-dimensional phase space for $k$ in lightcone coordinates is given by $$\begin{aligned} \label{eq:kPS} \int\frac{{\mathrm{d}}^d k}{(2\pi)^d}\delta^+(k^2) &= \frac{1}{(4\pi)^2}\int \frac{{\mathrm{d}}k^+ {\mathrm{d}}k^-}{(k^+k^-)^\epsilon} \int \frac{{\mathrm{d}}\Omega_{2-2\epsilon}}{(2\pi)^{1-2\epsilon}}\,. \end{aligned}$$ If the integrand is spherically symmetric we can do the solid angle integral using $$\int \frac{{\mathrm{d}}\Omega_{2-2\epsilon}}{(2\pi)^{1-2\epsilon}} = \frac{(4\pi)^{\epsilon}}{\Gamma(1-\epsilon)}\,.$$ However, as we have seen in section [Sec. \[sec:spinor\]]{}, in general the amplitudes with multiple collinear directions can depend on $\phi$ at subleading power, in that case we use $$\begin{aligned} \label{eq:kPSphi} \int\frac{{\mathrm{d}}^d k}{(2\pi)^d}\delta^+(k^2) &= \frac{\varpi^\epsilon}{(4\pi)^2}\int \frac{{\mathrm{d}}k^+ {\mathrm{d}}k^-}{(k^+k^-)^\epsilon} \int_0^{2\pi} \frac{{\mathrm{d}}\phi}{2\pi}\,.\end{aligned}$$ We call $Q$ the energy (or large component) of the jet momentum ${\tilde{P}}$ and $x$ the fraction of it that the emission takes away. In this way we change variable $$\label{eq:changeofvarx} k^- \equiv x\, Q\,, \qquad \int_0^{Q} \frac{{\mathrm{d}}k^-}{(k^-)^{\epsilon}} = Q^{1-\epsilon}\int_0^1 \frac{{\mathrm{d}}x}{x^{\epsilon}}\,,$$ and one can show [^3] that the $N$-jettiness measurement fixes the $k^+$ component via $$\label{eq:deltaT} \delta_{{\mathcal{T}}_N}^{(0)} = \delta\big( k^+ - (1-x)\,{\mathcal{T}}_N\big)\,.$$ Combining [Eqs.  and ]{} we get $$\int \frac{{\mathrm{d}}k^+ {\mathrm{d}}k^-}{(k^+k^-)^\epsilon}\, \delta^{(0)}_{{\mathcal{T}}_N} = Q\,({\mathcal{T}}_N Q)^{-\epsilon} \int_0^1 \frac{{\mathrm{d}}x}{x^\epsilon(1-x)^{\epsilon}}\,.$$ Therefore, the leading power phase space for $N$-jets + one collinear emission inside the $i$-th jet reads $$\begin{aligned} \label{eq:collinearphasespacesingle} \int {\mathrm{d}}\Phi_{N+1}\delta_{{\mathcal{T}}_{N_i}}&= \int {\mathrm{d}}\Phi^{(0)}_N\, \left(\frac{\varpi}{{\mathcal{T}}_{N} Q}\right)^{\epsilon}\int_0^1 \frac{{\mathrm{d}}x}{x^\epsilon(1-x)^{\epsilon}} \int_0^{2\pi} \frac{{\mathrm{d}}\phi}{(2\pi)} \frac{Q}{(4\pi)^{2}} + \cO({\mathcal{T}}_{N}) \nn\\ &\equiv \int {\mathrm{d}}\Phi^{(0)}_N\, {\mathrm{d}}\Phi^{(0)}_{2,i}({\mathcal{T}}_{N}) + \cO({\mathcal{T}}_{N})\,,\end{aligned}$$ where we defined $${\mathrm{d}}\Phi^{(0)}_{2,i}({\mathcal{T}}_{N}) \equiv \left(\frac{\varpi}{{\mathcal{T}}_{N} Q}\right)^{\epsilon}\int_0^1 \frac{{\mathrm{d}}x}{x^\epsilon(1-x)^{\epsilon}} \int_0^{2\pi} \frac{{\mathrm{d}}\phi}{(2\pi)}\, \frac{Q}{(4\pi)^{2}}\,,$$ as the two particle phase space resulting from one emission inside the $i$-th jet constrained by the ${\mathcal{T}}_N$ measurement. Note that in general the ${\mathcal{T}}_N$ measurement [Eq. ]{} gets contributions from the radiation $k^\mu$ being collinear to *any* collinear direction in the event. Since we are considering only one emission on top of the Born configuration at a time we are always able to isolate the contribution to the ${\mathcal{T}}_N$ measurement to one collinear sector, hence the leading power phase space for $N$-jets + one collinear emission inside any jet reads $$\label{eq:collinearphasespacemultiple} \int {\mathrm{d}}\Phi_{N+1}\,\delta\Big({\mathcal{T}}_N - \sum_j {\mathcal{T}}_{N_j}\big[\{k,\Phi_N\}\big]\Big) = \int {\mathrm{d}}\Phi_{N}^{(0)} \sum_{i=1}^{N} {\mathrm{d}}\Phi^{(0)}_{2,i}({\mathcal{T}}_{N})\,.$$ A special case to consider separately is when the emission is collinear to one of the beams which contains the incoming particles. In that case the parton distribution functions enter the collinear phase space. For concreteness let's take $k^\mu\parallel q_a^\mu$. The steps follow closely those already done above so we won't repeat them.
The main difference is that, since[^4] $q_a^\mu = \frac{x_a}{z_a} {E_\mathrm{cm}}\frac{n^\mu}{2}$, we have $$\delta_{{\mathcal{T}}_{N_a}}^{(0)} = \delta\big({\mathcal{T}}_{N} - e^{-Y} k^+\big)\,,\qquad k^- = x_a\, {E_\mathrm{cm}}\,\frac{1-z_a}{z_a}\,,$$ and one makes the choice of using a different change of variable for $k^-$ namely $k^- = x_a {E_\mathrm{cm}}\frac{(1-z_a)}{z_a}$. In this way the PDF related to the beam direction to which $k^\mu$ is collinear enters the collinear integral, and we have $$\int {\mathrm{d}}\Phi^{(0)}_{2,a}({\mathcal{T}}_{N_a})\, f_a(x_a) = \left(\frac{\varpi}{{\mathcal{T}}_N Q}\right)^{\epsilon} \int_{x_a}^1 \frac{{\mathrm{d}}z_a}{z_a}\, f_i\!\left(\frac{x_a}{z_a}\right) \frac{z_a^{\epsilon}}{(1-z_a)^{\epsilon}} \int_0^{2\pi} \frac{{\mathrm{d}}\phi}{(2\pi)}\, \frac{Q}{(4\pi)^{2}}\,,$$ where the $f_a(x_a)$ factor is needed to keep the same normalization as in [Eq. ]{}. We now want to combine [Eq. ]{} with the power correction to the matrix element squared in order to get the subleading component of the fully differential cross section due to the matrix element correction. In order to do so, let us define the matrix element squared expansion as two particles go collinear in the $q_{j}$ as $$|A|^2 = \underbrace{{(|A|^{2})}_{j}^{(0)}}_{\sim\,\lambda^{-2}} + \underbrace{{(|A|^{2})}_{j}^{(2)}}_{\sim\,\lambda^{0}} + \cO(\lambda)\,,$$ and we have assumed that ${(|A|^{2})}_{j}^{(1)}$ vanishes upon integration over $\phi$. We also note that ${(|A|^{2})}^{(i)}$ is only a function of the born variables contained in ${\hat{\cM}_\text{born}}$, ${\mathcal{T}}_N$, and $x,\phi$ $${(|A|^{2})}^{(i)}={(|A|^{2})}^{(i)}\big({\hat{\cM}_\text{born}},{\mathcal{T}}_N,x,\phi\big)\,.$$ In the following, we are going to leave understood the explicit dependence on ${\hat{\cM}_\text{born}}$ in ${(|A|^{2})}^{(i)}$. Using the result for the leading power phase space, the subleading component of the fully differential cross section due to the matrix element correction for an emission inside a jet reads $$\begin{aligned} \label{eq:XSCollinearJet} \frac{{\mathrm{d}}\sigma_{{(|A|^{2})},\text{jet}}^{(2)}}{{\mathrm{d}}{\hat{\cM}_\text{born}}{\mathrm{d}}{\mathcal{T}}_N} &= \sum_{k k^\prime=\{q,g\}}\frac{f_k(x_a)f_{k^\prime}\left(x_b\right)}{2{E_\mathrm{cm}}^4 x_a x_b} \sum_{{j}=1}^N \int {\mathrm{d}}\Phi_{2,{j}}{(|A|^{2})}_{{j}}^{(2)}(x,\phi)\\ &= \sum_{k k^\prime=\{q,g\}}\frac{f_k(x_a)f_{k^\prime}\left(x_b\right)}{2{E_\mathrm{cm}}^4 x_a x_b} \sum_{{j}=1}^N \left(\frac{\mu^2}{{\mathcal{T}}_N Q}\right)^{\epsilon}\int_0^1 \frac{{\mathrm{d}}x}{x^\epsilon(1-x)^{\epsilon}} \int_0^{2\pi} \frac{{\mathrm{d}}\phi}{(2\pi)} Q\left(\frac{\alpha_s}{4\pi}\right) {(|A|^{2})}_{{j}}^{(2)}(x,\phi)\,,\nn\end{aligned}$$ where we extracted for convenience the coupling and its $\overline{\text{MS}}$ scale $\mu$ from the matrix element squared since we will use helicity amplitudes which are typically given with the coupling understood.
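To anticipate how a logarithm of ${\mathcal{T}}_N$ emerges from this formula, consider the typical situation (realized in the explicit examples below) in which the expanded matrix element scales like $1/x$; the $x$ integration is then an ordinary Beta function, $$\left(\frac{\mu^2}{{\mathcal{T}}_N Q}\right)^{\epsilon}\int_0^1 \frac{{\mathrm{d}}x}{x^{1+\epsilon}(1-x)^{\epsilon}} = -\frac{1}{\epsilon}\,\frac{\Gamma(1-\epsilon)^2}{\Gamma(1-2\epsilon)}\left(\frac{\mu^2}{{\mathcal{T}}_N Q}\right)^{\epsilon} = -\frac{1}{\epsilon} + \ln\frac{{\mathcal{T}}_N Q}{\mu^2} + \cO(\epsilon)\,,$$ so the single $1/\epsilon$ pole of the two-particle collinear phase space integral is in one-to-one correspondence with the $\ln{\mathcal{T}}_N$ that is being extracted.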
We conclude by considering the case where the radiation $k^\mu$ is collinear to one of the beams, in this case we have $$\begin{aligned} \label{eq:XSCollinearBeam} \frac{{\mathrm{d}}\sigma_{{(|A|^{2})},\text{beam}}^{(2)}}{{\mathrm{d}}{\hat{\cM}_\text{born}}{\mathrm{d}}{\mathcal{T}}_N} &= \sum_{k k^\prime=\{q,g\}} \frac{f_{k^\prime}\left(x_b\right)}{2{E_\mathrm{cm}}^4 x_a x_b} \left(\frac{\mu^2}{{\mathcal{T}}_N Q}\right)^{\epsilon} \int_{x_a}^1 \frac{{\mathrm{d}}z_a}{z_a } f_k \left(\frac{x_a}{z_a}\right) \frac{z_a^\epsilon}{(1-z_a)^{\epsilon}} \\ &\times \int_0^{2\pi} \frac{{\mathrm{d}}\phi}{(2\pi)} Q\left(\frac{\alpha_s}{4\pi}\right) {(|A|^{2})}^{(2)}_a(z_a,\phi) + (a \leftrightarrow b)\,, \nn \end{aligned}$$ where $x_{a,b}$ are Born variables fixed by ${\mathrm{d}}{\hat{\cM}_\text{born}}$ and ${(|A|^{2})}_{a(b)}^{(2)}$ is the one emission matrix element squared when the radiation is collinear to the incoming $n$(${{\bar n}}$) direction (and we remind the reader that the subscript $(2)$ indicates that this is the subleading power term in the collinear expansion which is suppressed by $\cO(\lambda^2)$ w.r.t. the leading term). Together [Eqs.  and ]{} give the necessary expressions to obtain the leading logarithms. Example of Extracting Subleading Power Logarithms {#sec:log_example} ------------------------------------------------- In this section we give two examples of extracting the logarithms of the event shape. This is meant to serve two purposes. First, it will illustrate how simple it is to extract logarithms of the event shape from the two-particle collinear limit. Second, although we will not compute the complete result for the power corrections for $N$-jettiness for a particular process, the results will illustrate the general structure appearing in such results, which is already quite interesting. ### Full Matrix Element Corrections for $gq \to Hq$ {#sec:example_Tau0} Let us start by analyzing the case of Higgs production in gluon fusion at next to leading power. In this case the power correction to the fully differential cross section has been calculated at LL in Ref. [@Moult:2017jsg] both at NLO and NNLO.[^5] Reproducing the contribution to this result coming from the matrix element corrections will give us the occasion to illustrate the techniques presented in this paper in a known example and to cross check the result. Note that we will be following the notation of [@Ebert:2018lzn] where the separation of the phase space contributions and the matrix element corrections are given explicitly. The master formula for the matrix element corrections to the fully differential cross section of color singlet production in the collinear limit is given by[^6] [@Ebert:2018lzn] $$\begin{aligned} \label{eq:sigma_NLO_NLP_coll_FULL} \frac{{\mathrm{d}}\sigma_{n, {(|A|^{2})}^{(2)}}^{(2)}}{{\mathrm{d}}Q^2 {\mathrm{d}}Y {\mathrm{d}}{\mathcal{T}}} & = \int_{x_a}^1 \frac{{\mathrm{d}}z_a}{z_a} \, \frac{f_a\left(x_a/z_a\right)\, f_b(x_b)}{2 x_a x_b {E_\mathrm{cm}}^4} \frac{ z_a^{\epsilon}}{(1-z_a)^{\epsilon}}\frac{\bigl(Q {\mathcal{T}}\bigr)^{-{\epsilon}} Q(4\pi)^{{\epsilon}}}{\Gamma(1-{\epsilon}) (4\pi)^2 }{(|A|^{2})}^{(2)}(Q, Y, z_a) \,,\end{aligned}$$ where the Born measurement has been chosen to be the color singlet invariant mass $Q$ and rapidity $Y$. Note that [Eq. ]{} correctly matches [Eq. ]{} up to our slightly different conventions here for the inclusion of coupling, $\overline{\text{MS}}$ scale and factors in ${(|A|^{2})}^{(2)}$. We now want to apply the techniques of [Sec. 
\[sec:spinor\]]{} to compute ${(|A|^{2})}^{(2)}(Q, Y, z_a)$ and extract the leading logarithmic term by taking the soft limit of the amplitude and plugging it into [Eq. ]{}. For conciseness we limit ourselves to the $gq \to Hq$ channel.\ The relevant amplitudes are [@Schmidt:1997wr] $$\begin{aligned} A(1^+,2_q^+,3^-_{\bar{q}};4_H) &= -\frac{1}{\sqrt{2}}\frac{[12]^2}{[23]}\,, \nn\\ A(1^-,2_q^+,3^-_{\bar{q}};4_H) &= -\frac{1}{\sqrt{2}}\frac{\langle 13\rangle^2}{\langle 23\rangle}\,.\end{aligned}$$ Now, we implement our expansion, and square to obtain $$\begin{aligned} \label{eq:qgMsq} {(|A|^{2})}^{(0)}_{gq,n}=0\,,\quad {(|A|^{2})}^{(2)}_{gq,n}=2C_F\frac{{(|A|^{2})}^{\text{LO}}}{Q^4}\frac{(1-x)^2\ s_{p2}}{x}+\mathcal{O}(\lambda)\,,\end{aligned}$$ where ${(|A|^{2})}^{\text{LO}}$ is the LO amplitude squared for $gg\to H$ and $x$ is the momentum fraction of the quark as it becomes collinear with the gluon. Note that if any particle is incoming, then in our formalism, we need to replace all components $p^-\rightarrow -p^-$, and take $|p\rangle\rightarrow i\ |p\rangle$. Doing this ensures that we match the condition of being positive $p^0$. Therefore, since in this example we are taking the $p_1^\mu$ and $p_2^\mu$ momenta to be incoming, we need to implement the following changes: $$\begin{aligned} x&=\frac{p_3^-}{-p_1^-+p_3^-}\,,\quad p^-=-p_1^-+p_3^-\,,\end{aligned}$$ and the Mandelstam invariant appearing in [Eq. ]{} takes the form $$s_{p2}=p^-\,p_2^+=-(p_1^- - p_3^-)\,p_2^+ = -Q^2\big(1+\cO(\lambda^2)\big)\,.$$ To compare our result with the notation of Ref. [@Ebert:2018lzn], where the cross section is expressed in terms of the splitting variable $z_a$, such that $k^- = Qe^Y\frac{1-z_a}{z_a}$, we need $$\begin{aligned} x=1-\frac{1}{z}\,.\end{aligned}$$ Using $ x=1-\frac{1}{z}$ and $s_{p2}=-Q^2$, the expansion above reads $${(|A|^{2})}^{(0)}_{gq,n}=0\,,\qquad {(|A|^{2})}^{(2)}_{gq,n}=\frac{2C_F}{Q^{2}}\,\frac{{(|A|^{2})}^{\text{LO}}}{z\,(1-z)}\,,$$ which matches Eq. (5.22) of Ref. [@Ebert:2018lzn] up to the difference in the normalization convention and $\cO(\epsilon)$ terms. Given that also the phase space [Eq. ]{} matches [Eq. ]{}, this is sufficient to reproduce the leading log for this channel at subleading power. ### A simple $H+1$ jet example: $H\to \bar q q \bar Q Q$ {#sec:example_HqqQQ} We now move to the more involved case of multiple collinear directions and consider the simple example of $H\to \bar q q \bar Q Q$.
When a quark $q$ and an antiquark of different flavor $\bar{Q}$ become collinear, the squared subleading power amplitudes are given by [Equation ]{} $$\begin{aligned} |A(1_q^+,2_{\bar{q}}^-;3_Q^+,4_{\bar{Q}}^-;5_H)|_{{{1 \parallel 4}}}^2&=\frac{\left[(1-x)\ s_{p2}+x\ s_{p3}\right]^2}{4s_{p2}s_{p3}\ x(1-x)}\,,\\ |A(1_q^+,2_{\bar{q}}^-;3_Q^-,4_{\bar{Q}}^+;5_H)|_{{{1 \parallel 4}}}^2&=\frac{s_{23}^2}{4 s_{p2}s_{p3}\ x(1-x)}\,.\end{aligned}$$ Integrating over the two particle collinear phase space, we have $$\begin{aligned} \left(\frac{\mu^2}{{\mathcal{T}}_N Q}\right)^{\epsilon}\!\!\int_0^1 \frac{{\mathrm{d}}x}{x^\epsilon(1-x)^{\epsilon}}|A(1_q^+,2_{\bar{q}}^-;3_Q^+,4_{\bar{Q}}^-;5_H)|_{{1 \parallel 4}}^2 &=\left( \frac{s_{p2}}{4s_{p3}} + \frac{s_{p3}}{4s_{p2}} \right)\left[ -\frac{1}{\epsilon}+\ln \frac{Q {\mathcal{T}}_N}{\mu^2} + \text{finite} \right] ,\nn \\ \left(\frac{\mu^2}{{\mathcal{T}}_N Q}\right)^{\epsilon} \!\!\int_0^1 \frac{{\mathrm{d}}x}{x^\epsilon(1-x)^{\epsilon}} |A(1_q^+,2_{\bar{q}}^-;3_Q^-,4_{\bar{Q}}^+;5_H)|_{{1 \parallel 4}}^2 &= \frac{s_{23}^2}{2 s_{p2}s_{p3}} \left[ -\frac{1}{\epsilon}+\ln \frac{Q {\mathcal{T}}_N}{\mu^2} + \text{finite} \right].\end{aligned}$$ Here we see that we are able to easily extract the $1/\epsilon$ divergence, or correspondingly the logarithm. Already from this simple example, we can see an interesting, but expected result, namely that at subleading power the kinematic dependence on the jet directions will no longer be that of the $N$-jet process, as was true in the leading power factorization in [Equation ]{}. This will be interesting to study numerically for a complete process, and is also important phenomenologically as it controls the rapidity dependence of the power corrections. We have therefore shown that we can efficiently extract subleading power logarithms in event shape observables from the subleading power collinear limits. Even in the case of complicated multi-leg amplitudes, the ability to extract the entire logarithm from the two-particle collinear limit means that there is only a single angular integral, which even if it cannot be done analytically, is finite, and therefore can be done by numerically evaluating the spinors. This should be achievable in a fairly automated way, enabling power corrections to be computed for multi-jet event shapes. Power Law Divergences in Subleading Power Matrix Elements {#sec:powlaw} ========================================================= In this section we wish to elaborate on an interesting physical effect observed in the expansion of multi-point amplitudes at subleading power, namely the appearance of power law singularities. As an example, we can consider the $H\to ggq\bar q$ amplitude at subleading power. For simplicity, we can focus on the color stripped amplitude for a particular helicity configuration[@Kauffman:1996ix] $$A(1^+,2^-,3_q^+,4_{\bar{q}}^-;5_H)=\frac{\langle 24\rangle^3}{\langle 12\rangle\langle 14\rangle\langle 34\rangle}-\frac{[13]^3}{[12][23][34]}\,,$$ and we can consider the behavior in the collinear limit when the gluon 1 and the quark 3 become collinear. This collinear limit has no leading power term, and using the expansion in [Eq. ]{} we find that the subleading power term in the expansion is given by $$A(1^+,2^-,3_q^+,4_{\bar{q}}^-;5_H)_{{1 \parallel 3}}=0\times \cO(\lambda^{-1}) + \frac{\langle 24\rangle^3}{x\sqrt{1-x}\ \langle p2\rangle\langle p4\rangle^2}+\mathcal{O}(\lambda)\,,$$ where $x$ parametrizes the energy fraction of the gluon. 
Squaring the amplitude, we find $$|A_{ggq\bar{q}H}|_{{1 \parallel 3}}^2= \frac{s_{24}^3}{x^2(1-x)s_{p2}s_{p4}}+\mathcal{O}(\lambda)\,,$$ which is $\cO(\lambda^2)$ suppressed with respect to the leading power and exhibits a $1/x^2$ divergence as the gluon becomes soft. This is distinct from the behavior of the leading power splitting functions, which go like $1/x$. These divergences are of course regulated by dimensional regularization in the phase space integral. However, their treatment requires the use of distributional identities which are less familiar than those required to treat the more standard $1/x$ divergences. For this reason, here we discuss briefly how these divergences are treated for a SCET$_{\rm I}$ type observable (like N-jettiness), and the manner that they appear for different processes and a wider class of observables. In the case that the two collinear particles are both in the final state, one can simply integrate over all values of $x$ in the standard fashion. The power law divergences is then regulated by dimensional regularization. More interesting, is when the subleading power collinear limit arises from an initial state splitting. In this case one has an integral against the PDFs, and one must expand as a distribution to extract the divergence, as is familiar at leading power. At subleading power we encounter a wider class of distributions, beyond the common $\delta$-function and $+$-functions. The impact of these power law divergences on PDFs has been discussed in detail in Ref. [@Ebert:2018gsn], so here we only provide a brief review that suffices for our discussion here. Consider the integral $$\begin{aligned} \label{eq:toy_dist2} I_m = \int_0^1 {\mathrm{d}}x\,\frac{\tilde g(x)}{(1-x)^{1+m+\epsilon}} \,,\end{aligned}$$ where $m\ge 0$ is an integer. Here we have put the divergence at $x\to 1$, as is standard in the parameterization of initial state splittings, and $\tilde g(x)$ contains, for example, the PDFs, and other functions that are regular as $x\to 1$. For $m=0$, the divergence can be extracted using the familiar distributional identity $$\begin{aligned} \label{eq:plus_dist_1} \frac{1}{(1-x)^{1+\epsilon}} = - \frac{\delta(1-x)}{\epsilon} + \cL_0(1-x) + \cO(\epsilon) \,.\end{aligned}$$ Here $\cL_0(1-x) = [ 1/(1-x) ]_+^1$ is the standard plus distribution, which satisfies $$\begin{aligned} \int_x^1 {\mathrm{d}}x\, \tilde g(x) \cL_0(1-x) = \int_x^1 {\mathrm{d}}x \, \frac{\tilde g(x) - \tilde g(1)}{1-x} + \tilde g(1) \underbrace{\int_x^1 {\mathrm{d}}x \, \cL_0(1-x)}_{=\ln(1-x)} \,,\quad x \in [0,1] \,.\end{aligned}$$ This standard plus distribution is sufficient for the treatment of divergences encountered at leading power. At subleading power, where $m>0$, this must be generalized to multiple plus distributions. 
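Before turning to $m>0$, it may help to see the $m=0$ identity at work numerically. The following minimal sketch (not part of the calculation above) uses an arbitrary smooth test function and a small $\epsilon<0$, so that the left-hand side is an ordinary convergent integral, as it effectively is under the analytic continuation implicit in dimensional regularization:

```python
# Numerical sanity check of 1/(1-x)^(1+eps) = -delta(1-x)/eps + L0(1-x) + O(eps),
# integrated against a smooth test function gt(x) on [x0, 1].
# eps < 0 is chosen so that the left-hand side converges as an ordinary integral.
import numpy as np
from scipy.integrate import quad

eps = -0.01                              # dimensional-regularization-like parameter
x0  = 0.2                                # lower integration limit (the "x" of the text)
gt  = lambda y: np.exp(-y) * (1.0 + y)   # arbitrary smooth "PDF-like" test function

# Left-hand side: integral of gt(y)*(1-y)^(-1-eps); the algebraic endpoint weight
# lets quad handle the integrable singularity at y -> 1 accurately.
lhs, _ = quad(gt, x0, 1.0, weight='alg', wvar=(0.0, -1.0 - eps))

# Right-hand side: -gt(1)/eps plus the standard plus distribution L0(1-x).
plus_piece, _ = quad(lambda y: (gt(y) - gt(1.0)) / (1.0 - y), x0, 1.0)
rhs = -gt(1.0) / eps + plus_piece + gt(1.0) * np.log(1.0 - x0)

print(lhs, rhs)                          # the two agree up to O(eps) corrections
```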
For the particular case of $m=1$, we have $$\begin{aligned} \label{eq:double_plus} \frac{1}{(1-x)^{2+\epsilon}} & = \frac{\delta'(1-x)}{\epsilon} - \delta(1-x) + \cL_0^{++}(1-x) + \cO(\epsilon) \,.\end{aligned}$$ Here we encounter the double plus function, whose action on a function is given by $$\begin{aligned} \int_x^1 {\mathrm{d}}x \, \tilde g(x) \cL_0^{++}(1-x) & = \int_x^1 {\mathrm{d}}x \frac{\tilde g(x) - \tilde g(1) - \tilde g'(1)(x-1)}{1-x} \nn\\*&\quad + \tilde g(1) \underbrace{\int_x^1 {\mathrm{d}}x\, \cL_0^{++}(1-x)}_{-x/(1-x)} +\, \tilde g'(1) \underbrace{\int_x^1 {\mathrm{d}}x\, (x-1) \cL_0^{++}(1-x)}_{-\ln(1-x)} \,.\end{aligned}$$ Most importantly, we find that the divergence is associated with a $\delta'(1-x)$, instead of the standard $\delta(1-x)$ that is familiar at leading power. When integrated against the PDFs, this will give derivatives of the PDFs, an interesting physical effect which occurs at subleading powers. Derivatives of PDFs also appeared in the calculation of subleading power corrections to ${\mathcal{T}}_0$, which were computed in [@Moult:2016fqy; @Boughezal:2016zws; @Moult:2017jsg; @Boughezal:2018mvf; @Ebert:2018lzn]. In this case the derivatives of the PDFs have a very simple origin, arising from residual momentum routed into the PDFs. Namely, one finds power corrections arising from the expansion of the PDFs $$f_i\biggl[\xi\Bigl(1 + \frac{k}{Q}\Bigr)\biggr] = f_i(\xi) + \frac{k}{Q}\, \xi f_i'(\xi) + \dotsb \,,$$ where $k\sim \cO (\lambda^2)$. This is quite different than the case found here, where the $1/x^2$ singularity arises from the structure of the amplitude itself rather than from expanding kinematics.

  ---------------------- ------------------------------------------ -------------------------- ------------------------------------- --------------------
                         color singlet ($\tau$,${\mathcal{T}}_N$)   color singlet ($p_T$)      $N$ jets ($\tau$,${\mathcal{T}}_N$)   $N$ jets ($p_T$)
  $(|A|^2)^{(0)}(x)$     $\frac{1}{x}$                              $1$                        $\frac{1}{x}$                         $1$
  $(|A|^2)^{(2)}(x)$     $\frac{1}{x}$                              $\frac{1}{x}$              **$\frac{1}{x^2}$**                   ?
  $\Phi^{(0)}(x)$        $1$                                        $\frac{1}{x}$              **$1$**                               $\frac{1}{x}$
  $\Phi^{(2)}(x)$        $1$                                        $\frac{1}{x^2}$            ?                                     ?
  ---------------------- ------------------------------------------ -------------------------- ------------------------------------- --------------------

  : Behavior of the most singular terms for matrix element squared contributions $(|A|^2)^{(k)}$ and phase space $\Phi^{(k)}$, at both leading power $k=0$ and subleading power $k=2$, for a single emission when the energy fraction of the emission $x\to 0$. The contribution to the cross section involves products of these terms: ${\mathrm{d}}\sigma^{(i+j)} \sim A^{(i)} \times \Phi^{(j)}$, see [Eq. ]{}. In bold we highlight the terms analyzed in this paper. We take the explicit results for $A^{(2)}$ and $\Phi^{(2)}$ for fully differential color singlet production in ($\tau$,${\mathcal{T}}_N$) from [@Ebert:2018lzn], while those for ($p_T$) are taken from [@Ebert:2018gsn]. Entries with "?" have not yet appeared in the literature.[]{data-label="tab:singularities"}

More generally one can ask the question whether power law divergences are a general feature of subleading power calculations in SCET. To answer this question one can analyze the behavior of the cross section at next to leading power both for SCET$_{\rm I}$ measurements (where examples include jet masses, thrust, beam thrust) and SCET$_{\rm II}$ measurements (where examples include observables with a small transverse momentum $q_T\ll Q$).
The NLP corrections to the cross section can be schematically described at one emission as $$\label{eq:dfsigmadec} {\mathrm{d}}\sigma^{(2)} \sim \int_0^1 {\mathrm{d}}x\, \Big[ A^{(0)}\,\Phi^{(2)} + A^{(2)}\,\Phi^{(0)} \Big]\,,$$ where $A^{(0)}$ is the leading power matrix element squared, $\Phi^{(0)}$ is the leading power phase space and $A^{(i)}$ or $\Phi^{(i)}$ indicate the $i$-th order in the $\lambda$ expansion of $A$ or $\Phi$ respectively. In the table above we summarize the behavior of the most singular terms both for the phase space and the matrix element squared in different contexts, varying the process and the type of observable. In the case of a process with only 2 back-to-back directions (like color-singlet production with no additional jets) and an SCET$_{\rm I}$ measurement, Refs. [@Moult:2016fqy; @Boughezal:2016zws; @Moult:2017jsg; @Boughezal:2018mvf; @Ebert:2018lzn] found no power law divergences both at LL and NLL. However, for the same color singlet process but with an SCET$_{\rm II}$ ($p_T$) measurement, Ref. [@Ebert:2018gsn] found power law divergences at subleading power. This includes observables such as the $p_T$ spectrum in Higgs or Drell-Yan production. This is indicated by the entries in the second column of the table. In this case the $1/x^2$ singularity arises from the subleading power phase space, $\Phi^{(2)}$, or from the product $(|A|^2)^{(2)}\Phi^{(0)}$, which each scale like $1/x$ individually. The example considered here is given in the third column of the table. Here the result is different, since the power law divergence occurs directly in the expansion of the matrix element itself, $(|A|^2)^{(2)}$. This is a feature that, as far as we know, has never been encountered before in the literature. Again these divergences are regulated by the use of dimensional regularization. The appearance of power law singularities directly from the expansion of the amplitude for an observable with an additional jet is quite interesting, and we expect it to be a generic property of the $N$-jet case. For the case of $H\to gg$, the subleading power logarithms exponentiate to all orders with the double logarithms governed by the cusp anomalous dimension [@Moult:2018jjd]. Loosely speaking, this follows from the fact that the expansion in the soft and collinear limits inherits its properties from the leading power expansion. In the more general $N$-jet case, due to the presence of the power law singularities, we expect that this will no longer be the case, and it will be of significant interest to understand the all orders structure of the subleading power logarithms. Conclusions {#sec:conclusions} =========== In this paper we have shown how we can use spinor-helicity amplitudes to efficiently study subleading power collinear limits and extract logarithms of infrared observables in high multiplicity final states. Our approach uses consistency relations derived from effective field theory to show that the leading logarithm in the subleading power expansion of the event shape observable can be derived from the subleading power expansion of the two-particle collinear limit, for which we gave an efficient parametrization in terms of spinor variables. This approach significantly simplifies the analysis, as it avoids the need to consider complicated multi-particle phase space integrals. In our extension to higher point amplitudes, we have noticed some interesting features of the power expansion. In general, for higher point amplitudes, we find that power law divergences are present in subleading power squared amplitudes themselves. These lead to observable effects, in particular, the appearance of derivatives of the PDFs.
Derivatives of the PDFs have appeared in the calculation of power corrections in other contexts, and we discussed and contrasted the different mechanisms in [Sec. \[sec:powlaw\]]{}. We believe that there are a number of immediate phenomenological applications of our techniques. In particular, our techniques can be applied to compute the power corrections for $N$-jettiness subtractions for $H/W/Z+$ jet. These are the processes for which the power corrections are most needed from a phenomenological perspective, but have so far been too cumbersome to compute. Using our consistency relations, the power corrections up to NNLO can be computed using the one-loop $H/W/Z+4$ parton amplitudes, all of which are known analytically in terms of spinors [@Bern:1996ka; @Bern:1997sc; @Berger:2006sh; @Badger:2007si; @Badger:2009vh; @Dixon:2009uk; @Badger:2009hw]. It would also be interesting to understand in more detail the subleading power collinear behavior of general amplitudes. We intend to pursue these directions in future work. We thank Franz Herzog for providing us several simplified squared amplitudes which we used as cross checks of our techniques, and Markus Ebert for discussion of distributional identities related to multiple plus distributions. This work was supported in part by the Office of Nuclear Physics of the U.S. Department of Energy under Contract No. DE-SC0011090, by the Office of High Energy Physics of the U.S. Department of Energy under Contract No. DE-AC02-05CH11231, and by the Simons Foundation Investigator Grant No. 327942. This work was also partially supported by the UROP Endowment Fund, MIT. [^1]: For incoming particles $\sqrt{-k^\pm}=i\sqrt{k^\pm}$, and conjugated spinors also get an extra minus sign, which ensures that spinor identities are valid for both positive and negative outgoing momenta. [^2]: Note the $\delta^{(4)}$ that defines ${\tilde{P}}^\mu = p_N^\mu +k^\mu$. [^3]: From [Eq. ]{} use the ${\mathrm{d}}m_0^2$ integral and the $\delta({\tilde{P}}^2 - m_0^2)$ which is part of ${\mathrm{d}}\Phi_N$, to solve the $\delta(p_N^2)$ after using momentum conservation: m\_0\^2 \_[\^+(p\_N\^2)]{} \_\_[\_[\_N]{}]{} =\_[\_N]{} (k\^+ - \_N(1-x)) . [^4]: Note that this relation is true only at LP, which is enough for the LP phase space calculation we are considering here. If one were to consider the power corrections coming from the phase space, then in general $q_a^\mu = x_a {E_\mathrm{cm}}\biggl(\frac{1}{z_a}+ \underbrace{\Delta_a^{(2)}}_{\sim {\mathcal{T}}_N} \biggr) \frac{n^\mu}{2}$. See Section 3 of Ref. [@Ebert:2018lzn] for a detailed discussion on this. [^5]: The full $\cO(\alpha_s)$ correction has later been computed in Ref. [@Ebert:2018lzn]. Earlier results for the inclusive cross section in the hadronic $\tau$ definition can also be found in Refs. [@Boughezal:2016zws; @Boughezal:2018mvf] at LL and NLL respectively. [^6]: Taking for simplicity the leptonic definition of ${\mathcal{T}}$, $\rho = e^Y$
--- abstract: 'The effects of non-magnetic disorder on the critical temperature $T_c$ of organic weak-linked layered superconductors with singlet in-plane pairing are considered. A randomness in the interlayer Josephson coupling is shown to destroy phase coherence between the layers and $T_c$ suppresses smoothly in a large extent of the disorder strength. Nevertheless the disorder of arbitrarily high strength can not destroy completely the superconducting phase. The obtained quasi-linear decrease of the critical temperature with increasing disorder strength is in good agreement with experimental measurements.' author: - 'Enver Nakhmedov$^{1,2}$, Oktay Alekperov$^{2}$ and Reinhold Oppermann$^{1}$' title: 'Effects of randomness on the critical temperature in quasi-two-dimensional organic superconductors' --- Introduction ============ Organic molecular crystals $\kappa-(BEDT-TTF)_2X$ \[abbreviated as $\kappa-(ET)_2X$\] are in the center of attention due to their unusual normal metallic and superconducting properties [@wsgc91]. The flat $ET$ molecules in $\kappa-(ET)_2X$ organic metals dimerize to form molecular units that stack in planes on a triangular lattice [@iys06]. The anions $X$, which modify from $X=Cu[N(CN)_2]Cl$ through $X=Cu[N(CN)_2]Br$ to $X=Cu(NCS)_2$, separate the planes and accept one electron from each $BEDT-TTF$ dimer. Most of the ET-based superconductors (SCs) are strongly anisotropic quasi-two-dimensional (quasi-2D) conductors because the conductivity is approximately isotropic within the layers of the ET donor molecules but smaller by a factor of $\sim 10^{3}$ in the perpendicular direction. Measurements of the superconducting coherence lengths [@kwok90] within- $\xi_{\|}$ and perpendicular $\xi_{\perp}$ to the superconducting planes in e.g. $\kappa-(ET)_2Cu[N(CN)_2]Br$ yield $\xi_{\|} \approx 37 \AA$ and $\xi_{\perp} \approx 4 \AA$, the latter of which is much smaller than the interlayer distance $\sim 15 \AA$. This fact suggests that superconductivity in the direction perpendicular to the plane may involve Josephson tunneling. Low temperature properties of organic SCs are known to be very sensitive to disorder [@pk04]. Alloying with anions, $x$-ray irradiation, or cooling rate controlled anion reorientation introduces non-magnetic randomness into the system, however leaving unchanged, to a large extent, in-plane molecular structures. Recently, the effects of non-magnetic disorder on superconductivity in organic $\kappa-(ET)_2Cu(SCN)_2$ have been studied experimentally in Refs. [@aabo06; @soyk11]. The non-magnetic disorder was introduced in these experiments via irradiation by either x-rays or protons [@aabo06; @soyk11] and via partial substitution of $BEDT-TTF$ molecules with deuterated $BEDT-TTF$ or $BMDT-TTF$ molecules [@soyk11]. All disorder seems to affect the terminal ethylene group and anion bound structures. The measurements for samples with molecular substitutions show [@soyk11] that the mean free path $l$ is longer than the in-plane coherent length $\xi_{\|}$, indicating that the superconducting planes can be considered to be in clean limit. $T_c$ was found [@aabo06] to fall quasi-linearly with defect density, and the dependence exhibits a sharp change in slope from $0.31$ to $0.15$ around a threshold value of the interlayer residual resistivity $\rho_0^{\ast}\approx 2~\Omega cm$. 
The main feature of the experiments is that the samples exhibit a superconducting ground state even at the highest defect densities, and there is not a SC-normal metal phase transition different from quasi-1D organic SCs [@no10], where the randomness transforms the system into a normal metallic state. In the light of the experimental data, the Abrikosov-Gor’kov’s theory [@ag58] for non-magnetic defects in non-s-wave SCs seems to fail to explain the experimental data. We study in this article the effects of randomness in the Josephson coupling energy on the critical temperature of weak-linked quasi-2D SCs. Therefore, the influence of a possible in-plane molecular disorder on the superconducting properties of the system is ignored. Suppression of superconductivity in the presence of non-magnetic impurities can in general be realized by destroying either the modulus or the phase coherence of the order parameter [@nf98; @ek95]. Although strong fluctuations of the order parameter phase destroy off-diagonal long-range order (ODLRO) in an isolated superconducting plane [@rice65], the point-like topological defects of a “phase field” such as “vortex” and “antivortex” of Kosterlitz and Thouless [@kt73] sets up a quasi-long range order in the system. Classical phase fluctuation regime ================================== The strongly anisotropic organic SCs with in-plane singlet pairings are modeled as a regularly placed superconducting layers with Josephson-coupling between nearest-neighboring layers with a classical free energy functional $$\begin{aligned} &&\hspace{-5mm}F_{st}\{\varphi \} = N_s^{(2)}(T) \sum_j \int d^2r \bigg \{\frac{\hbar ^2}{8 m_{\|}} \bigg[\bigg( \frac{\partial \varphi_j}{\partial x} \bigg)^2 +\nonumber\\ &&\hspace{-5mm}+\bigg( \frac{\partial \varphi_j}{\partial y} \bigg)^2\bigg]+\sum_{g=\pm 1}E_{j,j+g} \left[1 - \cos \big( \varphi_j - \varphi_{j+g} \big)\right] \bigg \}, \label{freeenergy}\end{aligned}$$ where $\varphi_j({\bf r})$ denotes the phase of the order parameter $\Delta_j({\bf r}) = |\Delta_j| \exp (i \varphi_j({\bf r}))$, $N_s^{(2)}(T)$ is the surface density of superconducting electrons; $N_s^{(2)}(T)= N_N^{(2)}(0) \equiv N_N^{(2)} = \frac{p_F^2}{2 \pi \hbar^2}$ at $T \ge T_c^{(2)}$, and $N_s^{(2)}(T) = N_N^{(2)}(0) \tau(T)$ with $\tau (T) = \frac{T_c^{(2)} - T}{T_c^{(2)}}$ at $T \le T_c^{(2)}$. The last term in Eq.(\[freeenergy\]) describes the Josephson coupling with the energy $E_{j, j+g}$. Fluctuations of the order parameter modulus can be neglected for pure SCs far from $T_c^{(2)}$, the mean-field critical temperature calculated for an isolated layer. Therefore, the contributions to $F_{st}\{\varphi \}$ in Eq. (\[freeenergy\]), coming from the modulus of the order parameter $| \Delta_{\bf j}|$, are omitted. We assume the Josephson energy $E_{j, j+g}$ to be a random parameter with Gaussian distribution, centered at the mean value $E_g$, given by $$P \{ E_{j, j+g} \} = (2 \pi W^2)^{-1/2} \exp \big \{ - (E_{j, j+g} - E_{g})^2/(2 W^2) \big \}. \label{gauss}$$ $W^2$ is taken as a measure of disorder strength. 
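Only the first two moments of the distribution (\[gauss\]) enter what follows; the disorder average below relies on nothing beyond the elementary Gaussian identity $$\big\langle e^{\,a E_{j, j+g}}\big\rangle_{dis} = \int dE_{j, j+g}\, P\{E_{j, j+g}\}\, e^{\,a E_{j, j+g}} = \exp\Big(a E_g + \frac{a^2 W^2}{2}\Big)\,,$$ valid for any constant $a$, so that all $W^2$-dependent terms generated by the averaging (and undone by the Hubbard-Stratonovich decoupling used below) originate from the quadratic piece of the exponent.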
Employing the replica trick one can calculate the average value of the order parameter $\cos (\varphi_{\bf j}(z))$ $$\begin{aligned} \langle\langle\cos(\varphi_j({\bf r}))\rangle\rangle_{dis}= -\hspace{.1cm}T \frac{\delta}{\delta \eta_j({\bf r})} \langle \ln \mathcal{Z} \rangle |_{\eta_j({\bf z}) = 0} \nonumber\\ = - T \hspace{.1cm}\frac{\delta}{\delta \eta_j ({\bf r})} \lim_{n \to 0} \frac{ \partial \langle \mathcal{Z}^n \rangle}{\partial n}|_{\eta_j = 0}, \label{correlator}\end{aligned}$$ where $\mathcal{Z} = \int \mathcal{D} \varphi \exp \big( - \frac{1}{T} F_{st} \{ \varphi, \eta \} \big)$ is the partition function with respect to the free energy functional $F_{st} \{ \varphi, \eta \}$ which contains, in addition to Eq. (\[freeenergy\]), the generating field term $\sum_j \eta_j \cos ( \varphi_j({\bf r}))$. The double bracket $\langle \langle \dots \rangle \rangle_{dis}$ is a shorthand notation for the double average over thermodynamic fluctuations and over disorder. Integration out the Gaussian random variables yields $$\langle \langle \cos ( \varphi_j({\bf r}))\rangle \rangle_{dis} = \prod_{j, g}\int\frac{d \zeta_{j, g}}{\sqrt {2 \pi}} e^{ - \frac{\zeta_{j, g}^2}{2}} \frac{\int \mathcal{D} \varphi \cos (\varphi_j) e^{- \mathcal{F}/T}}{\int \mathcal{D} \varphi e^{- \mathcal{F}/T}}, \label{av}$$ with $$\begin{aligned} &&\hspace{-5mm}\mathcal{F} = N_s^{(2)} \sum_j \int d^2r \bigg\{ \frac{\hbar^2}{8 m_{\|}} \left[\left( \frac{\partial \varphi_j}{\partial x}\right)^2 +\left( \frac{\partial \varphi_j}{\partial y}\right)^2\right] +\nonumber\\ &&\hspace{-5mm}+\sum_g (E_g- W\zeta_{j,g}) [1 - \cos(\varphi_j({\bf r}) -\varphi_{j+g}({\bf r}))] \bigg\}, \label{F5}\end{aligned}$$ where $\zeta_{j, g}({\bf r})$ denotes a Hubbard-Stratonovich auxiliary decoupling field. In order to clarify a character of the saddle-point for the variable $\zeta_{j,g}({\bf r})$, one writes the expression (\[av\]) for $\langle\langle \cos [\varphi_j({\bf r})-\varphi_{j+g}({\bf r})]\rangle \rangle_{dis}$ and find the saddle-point $\zeta_{j_0,g_0}$ as $$\zeta_{j_0,g_0}=\frac{W N_s^{(2)}}{T}\frac{\langle \cos[\varphi_{j_0}-\varphi_{j_0+g_0}]\rangle^2- \langle \cos^2[\varphi_{j_0}-\varphi_{j_0+g_0}]\rangle}{\langle \cos[\varphi_{j_0}-\varphi_{j_0+g_0}]\rangle} \label{saddle-point}$$ The expression for the effective free-energy functional at the saddle-point $\zeta_{j_0,g_0}$, given by Eq. (\[saddle-point\]), becomes similar to that for a regular quasi-2D SC with renormalized inter-layer Josephson energy, $E_g \to E_g -\frac{W^2 N_s^{(2)}}{T}\frac{\langle \cos[\varphi_{j_0}-\varphi_{j_0+g_0}]\rangle^2- \langle \cos^2[\varphi_{j_0}-\varphi_{j_0+g_0}]\rangle} {\langle \cos[\varphi_{j_0}-\varphi_{j_0+g_0}]\rangle}$. Equation for $T_c$ in quasi-2D SCs is derived from Eq. (\[av\]) by using the self-consistent mean-field method [@el74], which consists in replacing the cosine term of Eq. (\[F5\]) as $$\sum_{\bf g} E_{\bf g} \cos(\varphi_{j}-\varphi_{j+g}) %\longrightarrow \to \eta E_{\perp} \langle \langle \cos(\varphi) \rangle \rangle _{c}\cos \varphi (z),$$ where $\eta$ is the coordination number, and $E_{\perp} \approx t_{\perp}^2/\epsilon_F$ with $t_{\perp}$ and $\epsilon_F$ being the interlayer tunneling integral and the Fermi energy. 
The phase correlations on the nearest-neighboring layers in this approximation are simplified by describing them as a motion of a phason in the average field of phases with the most probable value, which coincides with the average value for a clean system $\langle \langle \cos (\varphi)\rangle \rangle_c \equiv \langle \langle \cos (\varphi)\rangle \rangle_{dis}$ [@el74]. For a dirty system the most probable value differs strongly from the average value. Indeed, we assume that a distribution function of the order parameter in the presence of randomness is broad and asymmetric. This broadness and asymmetry becomes stronger around the critical temperature due to huge thermal fluctuations. Therefore, knowledge of the arithmetic average is insufficient, and infinitely many moments give a contribution to the distribution function of the order parameter at the tail. We identify $\langle \langle \cos(\varphi) \rangle \rangle _{c}$ with the most probable or typical value of the order parameter. For the disordered SC we choose $\langle \langle \cos \varphi \rangle \rangle_c =\langle \langle \cos \varphi \rangle \rangle_{dis} - \frac{\langle \langle \cos \varphi \rangle^2 \rangle_{dis} - \langle \langle \cos \varphi \rangle \rangle_{dis}^2}{\langle \langle \cos \varphi \rangle \rangle_{dis}}$ which resembles a change made by the saddle-point (\[saddle-point\]) in the free-energy functional. The functional integral over the phases in Eq. (\[av\]) can not be evaluated yet, even after this simplification. Taking advantage of the smallness of $E_{\perp}\langle\langle\cos(\varphi)\rangle\rangle_c$ near $T_c$, however. an expansion of the integrand of Eq. (\[av\]) in this quantity allows us to obtain the following equations for $\langle\langle\cos(\varphi)\rangle\rangle_{dis}$ and $\langle\langle\cos(\varphi)\rangle^2\rangle_{dis}$ $$\langle\langle\cos(\varphi)\rangle\rangle_{dis}=\frac{\eta N_s^{(2)}E_{\perp}}{k_BT} \int d^2 r \langle \cos \varphi (0) \cos \varphi ({\bf r}) \rangle_{0} \langle \cos \varphi \rangle_{c}$$ $$\langle\langle\cos(\varphi)\rangle^2\rangle_{dis}=\left(1+W^2/E_{\perp}^2\right) \langle \langle\cos \varphi \rangle \rangle_{dis}.$$ The final equation for $T_c$, obtained from the above written expressions, reads $$\hspace{-3mm} 1 = \frac{\eta E_{\perp} N_s^{(2)}}{T_c}\biggl(1 - \frac{W^2}{ E^{2}_{\perp}}\biggr) \int \langle \cos(\varphi({\bf r})) \cos(\varphi(0))\rangle_0 d {\bf r}. \label{Tc}$$ The phase-phase correlator in Eq. (\[Tc\]) is calculated in the clean limit of the $2D$ free energy functional, obtained from Eq. (\[freeenergy\]) by setting $E_{\bf j,j+g} = 0$, which yields, [@rice65; @nf98] $$\begin{aligned} &&\langle \cos [\varphi ({\bf r})- \varphi (0)] \rangle_{0} =\nonumber\\ &&=\left\{ \begin{array}{ll} \left(\frac{\xi_{\|}}{r}\right)^{\frac{4k_BT}{\epsilon_F(1-T/T_c^{(2)})}}, & r > \xi_{\|}\\ \exp \left[-\frac{k_BT}{2 \epsilon_F(1-T/T_c^{(2)})}\left(\frac{r}{\xi_{\|}}\right)^2\right], & r<\xi_{\|}\\ \end{array} \right. \label{2Dcorrelator}\end{aligned}$$ where $\xi_{\|} = \frac{\hbar \gamma v_F}{\pi^2 k_BT_c^{(2)}}$ with $\ln \gamma =c=0.577$ is the in-plane coherence length. Real-space integration of the correlator (\[2Dcorrelator\]) in Eq. (\[Tc\]) for the critical temperature imposes the following restriction on the critical temperature $-\frac{4 k_BT_c}{\epsilon_F(1-T_c/T_c^{(2)})}+2<0$ yielding $T_c>T^{\ast}$, where $$1/T^{\ast}=1/T_c^{(2)}+2 k_B/\epsilon_F. \label{T-ast}$$ $T^{\ast}$ may be identified as the Kosterlitz-Thouless transition temperature. 
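The origin of Eq. (\[T-ast\]) is transparent: at large $r$ the integrand of Eq. (\[Tc\]) behaves as $r\,(\xi_{\|}/r)^{p}$ with $p=4k_BT_c/[\epsilon_F(1-T_c/T_c^{(2)})]$, so the two-dimensional integral converges only for $p>2$, and the marginal case $p=2$ defines $T^{\ast}$, $$\frac{4k_BT^{\ast}}{\epsilon_F\left(1-T^{\ast}/T_c^{(2)}\right)}=2 \quad\Longrightarrow\quad \frac{1}{T^{\ast}}=\frac{1}{T_c^{(2)}}+\frac{2k_B}{\epsilon_F}\,.$$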
The order parameter’s phases correlation between nearest-neighbor layers disappears as $T_c$ approaches $T^{\ast}$, and system reveals effectively 2D superconducting behavior at $T=T^{\ast}$. So, the critical temperature in quasi-2D SC varies in the interval of $T^{\ast} < T_c<T_c^{(2)}$. One can estimate $T^{\ast}$ for $\kappa-(ET)_2Cu(SCN)_2$ SC with $T_c^{(2)} \approx 10.5 K$ for which the Fermi velocity and the effective mass of electrons are measured [@soyk11; @aabo06] to be $v_F\approx 4 \times 10^4 m/s$ and $m^{\ast} \approx 3~m_0$, respectively, where $m_0$ is a free electron mass. These data yield $T^{\ast} \approx 8.8K$, which agrees well with maximally dropped critical temperature with disorder in Refs. [@aabo06; @soyk11]. The randomness in the Josephson energy in the presence of the order parameter phase fluctuations destroys the transverse stiffness in the system. Equations (\[Tc\]) and (\[2Dcorrelator\]) yield $$1 = qt^2\left\{4(1-e^{-1/4t})+1/(1- t)\right\}, \label{Tc3}$$ where $t$ is a dimensionless $T_c$-shift, $0 < t < 1$, introduced as $$t = (\epsilon_F/2 k_B) \left(1/T_c - 1/T_{c}^{(2)} \right), \label{t-eq}$$ and $q$ is a dimensionless parameter $$q = \frac{4 \eta \gamma^2}{\pi^4}\left(\frac{t_{\perp}}{k_BT_c^{(2)}} \right)^2\left(1-\frac{W^2}{E_{\perp}^2}\right) \equiv q_0 (1-x), \label{q-eq}$$ with $\ln \gamma = c=0.577$. $q$ decreases from its maximal value $q=q_0=\frac{4 \eta \gamma^2}{\pi^4}\left(\frac{t_{\perp}}{k_BT_c^{(2)}}\right)^2$ to zero as the disorder parameter $x=W^2/E_{\perp}^2$ increases from zero up to the maximal value $x=1$ for strong randomness $W \sim E_{\perp}$. The numeric solution of Eqs. (\[Tc3\]) and (\[t-eq\]) for the dependence of $T_c$ on $x$ is depicted in Fig. \[Tc-classic\]. Equation (\[Tc3\]) is solved in two asymptotic limits. For $0< t <1/4$, which corresponds to a weak disorder limit when $T_c$ varies around $T_c^{(2)}$, the exponential term is neglected, yielding $$\frac{1}{T_c}=\frac{1}{T^{\ast}}-\frac{40 \eta \gamma^2 E_{\perp}}{\pi^4 k_B(T_c^{(2)})^2} \left(1-\frac{W^2}{E_{\perp}^2}\right) \label{Tc-weak}$$ In the limit $1/4<t<1$, which corresponds to relatively strong disorder limit when $T_c$ varies around $T^{\ast}$, the exponential term in Eq. (\[Tc3\]) is expanded yielding $$\frac{1}{T_c}=\frac{1}{T_c^{(2)}}+\frac{2k_B}{\epsilon_F(1+q)}\approx \frac{1}{T^{\ast}}- \frac{8 \eta \gamma^2 E_{\perp}}{\pi^4 k_B(T_c^{(2)})^2} \left(1-\frac{W^2}{E_{\perp}^2}\right)$$ Quantum phase fluctuations regime ================================= The results, obtained above for the classical fluctuations are valid for weak randomness, when $T_c$ and $E_{\perp}$ have relatively high values. Quantum fluctuations have to be taken into account for a strong disorder (small $T_c$ and $E_{\perp}$) limit. The effects of randomness in the presence of quantum phase fluctuations can be studied by starting from a Hamiltonian of weak-linked metallic layers with in-layer attractive electron-electron interactions. Integrating out the electronic degrees of freedom in the partition function, by following the method of Ambegaokar et al. [@aes82], yields the following the expression for the dynamical free energy functional $$F_{qu}\{\varphi\}=\frac{\hbar}{8V} \sum_{i,j}\int \int d{\bf r} d{\bf r'}d\tau K_{i,j} \dot{\varphi}_i({\bf r},\tau)\dot{\varphi}_j({\bf r'},\tau) + F_{st}^{qu}\{\varphi \},$$ where the phases depend now on the imaginary “time” $\tau$ too, the dot on the phase means a “time”-derivative, $K_{i,j}$ is the susceptibility [@aes82; @nf98]. 
$F_{st}^{qu}\{\varphi\}$ is the stationary part of the dynamical free energy functional, which differs from the classical functional (\[freeenergy\]) by additional integration over the imaginary “time” $\tau$. Repeating the procedure of derivation of Eq. (\[Tc\]) for the critical temperature in the case of the classical fluctuations, one arrives at $$\begin{aligned} &&1 = \eta E_{\perp} N_s^{(2)}(T) \biggl(1 - \frac{W^2}{ E^{2}_{\perp}}\biggr)\times\nonumber\\ &&\times \int_0^{1/k_BT}d \tau \int d^2r \langle \cos[\varphi(0,0)] \cos[\varphi({\bf r}, \tau)]\rangle_0. \label{Tc-qu}\end{aligned}$$ The phase-phase quantum correlator for a pure $2D$ SC is calculated to yield $$\begin{aligned} &&\hspace{-5mm}\langle \cos [\varphi({\bf r}, \tau)-\varphi (0,0)]\rangle_0=\nonumber\\ &&\hspace{-5mm}=\exp\left\{-\frac{k_BT}{V} \sum_{\omega_n}\sum_{k>0}\frac{2[1-\cos({\bf k \cdot r}-\omega_n\tau)]} {\frac{\hbar^2N_s^{(2)}k^2}{4m^{\ast}}+\frac{\hbar^2}{4}\omega_n^2K(k)}\right\}\end{aligned}$$ A straightforward calculation results in $$\begin{aligned} &&\hspace{-7mm} \langle \cos [\varphi ({\bf r}, \tau)- \varphi (0,0)] \rangle_0 =\nonumber\\ &&\hspace{-7mm} =\left\{ \begin{array}{ll} \exp\left[-\alpha + \frac{\alpha}{\sqrt{\left(\frac{2T\tau}{\beta}+1\right)^2+ \left(\frac{r}{\xi_{\|}}\right)^2}}\right], & \beta <1,\beta r<\xi_{\|}\\ (\beta r/\xi_{\|})^{-\alpha \beta}e^{-\alpha}, & \beta <1,~\beta r>\xi_{\|}\\ \exp\left[-\alpha\left(\frac{\beta r^2}{8\xi_{\|}^2}+\frac{T\tau}{\beta}\right)\right], & \beta >1,r<\xi_{\|}\\ (r/\xi_{\|})^{-\alpha \beta}, &\beta >1,~r>\xi_{\|} \end{array} \right. \label{2Dcorrelator-qu}\end{aligned}$$ where $\alpha=\frac{2}{\pi \xi_{\|} \hbar}\sqrt{\frac{m^{\ast}}{KN_s^{(2)}}}$ and $\beta=\frac{2k_BT}{\hbar}\xi_{\|}\sqrt{\frac{m^{\ast}K}{N_s^{(2)}}}$ are the dimensionless quantum parameters. Although $\alpha=\frac{4\pi k_BT_c^{(2)}}{\epsilon_F\tau^{1/2}}\alpha_0$ is proportional to the dynamical parameter $\alpha_0=\frac{1}{2\gamma}\left[\frac{\pi}{(2\hbar^2/m^{\ast})K}\right]^{1/2}$, which characterizes a quantum charging effect in the system, the other parameter $\beta=\frac{T}{\pi T_c^{(2)}\alpha_0 \tau^{1/2}}$ depends inversely on $\alpha_0$, nevertheless the product $\alpha \beta$ does not depend on $\alpha_0$. The quantum correlator (\[2Dcorrelator-qu\]) becomes the classic one (\[2Dcorrelator\]) for $\alpha_0 \to 0$ ($\alpha \ll 1$, $\beta \gg 1$, but $\alpha \beta \to const$). Equations (\[Tc-qu\]) and (\[2Dcorrelator-qu\]) yield a dependence of $T_c$ on $E_{\perp}$ and $W$ for two limiting cases $\beta >1$ and $\beta <1$. In both cases the integration of the correlator (\[2Dcorrelator-qu\]) over coordinates imposes the restriction $T^{\ast}<T_c<T_c^{(2)}$ (or $\alpha \beta >2$), where $T^{\ast}$ is defined by expression (\[T-ast\]). For $\beta <1$, which corresponds to strong charging regime, Eq. (\[Tc-qu\]) after integration over ${\bf r}$ and $\tau$ results in non-linear equation for $t$ $$q\left\{\frac{t}{2a}+ a \left(1+\frac{t^3}{1-t}\right)\exp (-2\sqrt{a/t}~) \right\}=1, \label{Tc-quantum}$$ where, $a=\frac{2\pi^2 k_BT_c^{(2)}\alpha_0^2}{\epsilon_F}$ characterizes the dynamic effects too, since $a \propto \alpha_0^2$. The numeric solution of Eq. (\[Tc-quantum\]) is given in Fig. \[SC-Tc-quantum\] for $a=1.2$ and different values of $q_0$. 
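The root search behind such curves is elementary; a minimal numerical sketch is given below. The values $a=1.2$ and $q_0=0.8$ are parameter values quoted in the text, while the Fermi energy (entered in kelvin) and the grid of disorder strengths $x$ are purely illustrative choices.

```python
# Illustrative solver for Eq. (Tc-quantum):
#   q * { t/(2a) + a*(1 + t^3/(1-t)) * exp(-2*sqrt(a/t)) } = 1,
# followed by T_c from Eq. (t-eq): 1/T_c = 1/T_c^(2) + 2*k_B*t/epsilon_F.
import numpy as np
from scipy.optimize import brentq

a      = 1.2     # dynamical parameter used in the text
q0     = 0.8     # bare coupling parameter q_0 (one of the plotted values)
Tc2    = 10.5    # mean-field T_c^(2) of an isolated layer, in K
epsF_K = 150.0   # epsilon_F / k_B in K; illustrative value only

def f(t, q):
    """Left-hand side of Eq. (Tc-quantum) minus 1."""
    return q * (t / (2 * a) + a * (1 + t**3 / (1 - t)) * np.exp(-2 * np.sqrt(a / t))) - 1.0

for x in np.linspace(0.0, 0.95, 6):                 # disorder strength x = W^2/E_perp^2
    q = q0 * (1.0 - x)
    t = brentq(f, 1e-8, 1.0 - 1e-10, args=(q,))     # f<0 as t->0+ and f->+inf as t->1-
    Tc = 1.0 / (1.0 / Tc2 + 2.0 * t / epsF_K)       # Eq. (t-eq), with epsilon_F in K
    print(f"x = {x:.2f}  ->  t = {t:.4f},  T_c = {Tc:.3f} K")
```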
The quantum fluctuations reduce $T_c$ considerably and alter the results of the classical fluctuations regime at low temperatures and small $E_{\perp}$, which corresponds to strong randomness or high resistivity in the $T_c(x)$ dependence. The slope of, e.g. the dashed (green) curve with $q_0=0.8$ changes from $0.54$ in the interval $0<x<0.5$ of Fig. \[Tc-classic\] for the classic fluctuations regime to the value of $0.30$ in the interval $0.5<x<1$ of Fig. \[SC-Tc-quantum\] for the quantum fluctuations regime, the ratio of which ($1.8$) is comparable with that ($\sim 2$) estimated for the experimental curve [@aabo06]. For $\beta > 1$ Eqs. (\[Tc-qu\]) and (\[2Dcorrelator-qu\]) are solved in the limit of $2<\alpha \beta <8$ yielding $$T_c=T^{\ast} (1+q)/\{1+T^{\ast} q/T_c^{(2)}\}.$$ The case of $\beta>1$ and $\alpha \beta > 8$ results in $$1=qt^2\left\{4+ 1/(1-t)\right\},$$ an approximate solution of which is given by Eq. (\[Tc-weak\]). The case of $\beta>1$ or $\alpha <1$ corresponds to the weak quantum fluctuations limit, and, therefore, the results do not depend on the dynamical parameter $\alpha_0$. A detailed comparison of the results with the experiments needs to express $T_c$ on the interlayer residual resistivity $\rho_0= \pi \hbar^4/(2e^2m^{\ast}a_{\perp}t_{\perp}^2\tau_t)$, [@pk04] where $a_{\perp}$ is the interlayer distance. The inelastic scattering time $\hbar/\tau_t=\pi c_{imp}N(o)|U|^2$ is assumed to relate with a measure of the randomness $x \sim W^2$ as $W^2= \pi c_{imp}|U|^2/2$. It is necessary to take into account that the charging effect in the quantum fluctuations regime reduces the value of the interlayer tunneling integral $t_{\perp}$ (or the transverse rigidity) [@nf98]. Therefore, $\rho_0$ in the quantum fluctuations regime is rescaled for a given value of $x$ to a larger interval in comparision with that in the classical fluctuation regime. The dependence of $T_c$ on $\rho_0$ is given in the insert of Fig. \[SC-Tc-quantum\]. Conclusions =========== In this paper we report disorder effects on $T_c$ of quasi-2D SCs with random Josephson coupling. The interplay of non-magnetic disorder with quantum phase fluctuations becomes a central factor in suppression of the superconducting phase in organic quasi-2D SCs. A randomness in the interlayer coupling energy is shown to decrease $T_c$ quasi-linearly, nevertheless the superconducting phase does not completely vanish even at arbitrarily high strength of the disorder. The present theory explains very well the recent experimental measurements given in Refs.[@aabo06; @soyk11]. We neglect in this article effects of in-plane disorder on $T_c$ in organic SCs. Such randomness results in suppression of $T_c$ due to the Anderson localization for non-s-wave pairings, and it seems to destroy the homogeneity of the order parameter modulus leading to the formation of a cluster-like “superconducting island” inside the metallic phase. On the other hand the in-plane disorder may “pin” the Kosterlitz-Thouless topological defects and destroy the quasi-long range order in the system. All these effects deserve further investigation. Acknowledgment ============== This research was supported by the DFG under Grant No. Op28/8-1 and by the government of the Azerbaijan Republic under Grant No. EIF-2010-1(1)-40/01-22. [99]{} J. M. Williams, A. J. Schultz, U. Geiser, K.D. Carlson, A. M. Kini, H. H. Wang, W.-K. Kwok, M.-H. Whangbo, J. E. Schirber, Science [**252**]{}, 1501 (1991). T. Ishiguro, K. Yamaji, and G. 
Saito, [*Organic Superconductors*]{} (2nd Edn., Springer-Verlag, Heidelberg, 2006). W. K. Kwok, U. Welp, K. D. Carlson, G. W. Crabtree, K. G. Vandervoort, H. H. Wang, A. M. Kini, J. M. Williams, D. L. Stupka, L. K. Montgomery, and J. E. Thompson, Phys. Rev. B [**42**]{}, 8686 (1990). B. J. Powell and R. H. McKenzie, Phys. Rev. B [**69**]{}, 024519 (2004). J. G. Analytis, A. Ardavan, S. J. Blundell, R. L. Owen, E. F. Garman, C. Jeynes, and B. J. Powell, Phys. Rev. Lett. [**96**]{}, 177002 (2006). T. Sasaki, H. Oizumi, N. Yoneyama, and N. Kobayashi, J. Phys. Soc. Jpn. [**80**]{}, 104703 (2011). E. Nakhmedov and R. Oppermann, Phys. Rev. B [**81**]{}, 134511 (2010). A. A. Abrikosov and L. P. Gor’kov, Zh.Eksp.Teor.Fiz. [**35**]{}, 1558 (1958) \[Sov.Phys.JETP [**8**]{}, 1090 (1959)\]. E. P. Nakhmedov and Yu. A. Firsov, Physica [**C 295**]{}, 150 (1998). V. J. Emery and S. A. Kivelson, Nature [**374**]{}, 434 (1995); Phys. Rev. Lett. [**74**]{}, 3253 (1995). T. M. Rice, Phys. Rev. A [**140**]{}, 1889 (1965). J. M. Kosterlitz and D. J. Thouless, J. Phys. C [**6**]{}, 1181 (1973); P. Minnhagen, Rev. Mod. Phys. [**59**]{}, 1001 (1987). K. B. Efetov and A. I. Larkin, Zh.Eksp.Teor.Fiz. [**66**]{}, 2290 (1974) \[Sov.Phys.JETP [**39**]{}, 1129 (1974)\]. V. Ambegaokar, U. Eckern, and G. Schön, Phys. Rev. Lett. [**48**]{}, 1745 (1982); Phys. Rev. B [**30**]{}, 6419 (1984).
--- abstract: 'In this notice, we revisit the recent work [@Kang] of Jung Yoog Kang and Tai Sup about special polynomials with exponential distribution in order to state some improvements and get new proofs for results therein.' address: | Mouloud Goubi\ Department of Mathematics\ University of UMMTO RP. 15000\ Tizi-ouzou, Algeria\ Laboratoire d’Algèbre et Théorie des Nombres, USTHB Alger author: - '**Mouloud Goubi**' title: Remarks on some properties of special polynomials with exponential distribution --- Introduction ============ In this notice we revisit the recent work [@Kang] on some properties of special polynomials with exponential distribution of Jung Yoog Kang and Tai Sup Lee published in Commun. Korean Math. Soc. The object of this study is the family of polynomials $\mathfrak{E}_n\left(\lambda:x\right)$ generated by the generating function $$\frac{\lambda}{e^{\lambda t}}e^{xt}=\sum_{n{\geqslant}0}\mathfrak{E}_n\left(\lambda:x\right)\frac{t^n}{n!}$$ and associated numbers $\mathfrak{E}_n\left(\lambda\right)=\mathfrak{E}_n\left(\lambda:0\right)$ generated by $$\frac{\lambda}{e^{\lambda t}}=\sum_{n{\geqslant}0}\mathfrak{E}_n\left(\lambda\right)\frac{t^n}{n!}$$ First we prove that $$\mathfrak{E}_n\left(\lambda:x\right)=\lambda\left(x-\lambda\right)^n$$ and then $$\mathfrak{E}_n\left(\lambda\right)=\left(-1\right)^n\lambda^{n+1}$$ These results go alone to show that some results in this paper are trivial. For example the result in Theorem 2.10 p.387 $$D\mathfrak{E}_n\left(\lambda:x\right)=n\mathfrak{E}_{n-1}\left(\lambda:x\right)$$ and the result in Theorem 3.1 p.387 $$D_y\mathfrak{E}_n\left(\lambda:x+y\right)=D_x\mathfrak{E}_n\left(\lambda:x+y\right)=n\mathfrak{E}_{n-1}\left(\lambda:x+y\right).$$ Furthermore the Theorem 3.2 p.388 $$\int_{0}^{1}\mathfrak{E}_n\left(\lambda:x+y\right)dy=\frac{\mathfrak{E}_{n+1}\left(\lambda:x+1\right)-\mathfrak{E}_{n+1}\left(\lambda:y\right)}{n+1}$$ and then the Corollary 3.3 p.388 $$\int_{0}^{1}\mathfrak{E}_n\left(\lambda:x\right)dx=\frac{\mathfrak{E}_{n+1}\left(\lambda:1\right)-\mathfrak{E}_{n+1}\left(\lambda\right)}{n+1}.$$ Some basic properties ===================== For any positive integer $n$, the polynomial $\mathfrak{E}_n\left(\lambda:x\right)$ is a binomial polynomial with weight $\lambda$, the following theorem states an improvement of the expression (i) Theorem 2.2 [@Kang] p.384 $$\mathfrak{E}_n\left(\lambda:x\right)=\sum_{k=0}^{n}{n\choose k}\mathfrak{E}_n\left(\lambda\right)x^{n-k}$$ \[th1\] $$\label{eqth1} \mathfrak{E}_n\left(\lambda:x\right)=\sum_{k=0}^{n}{n\choose k}\left(-1\right)^k\lambda^{k+1}x^{n-k}.$$ Furthermore $$\mathfrak{E}_n\left(\lambda\right)=\left(-1\right)^{n}\lambda^{n+1}$$ Since $$\frac{\lambda}{e^{\lambda t}}e^{xt}=\lambda e^{\left(x-\lambda\right)t}=\sum_{n{\geqslant}0}\lambda\left(x-\lambda\right)^n\frac{t^n}{n!}$$ then $$\sum_{n{\geqslant}0}\mathfrak{E}_n\left(\lambda:x\right)\frac{t^n}{n!}=\sum_{n{\geqslant}0}\lambda\left(x-\lambda\right)^n\frac{t^n}{n!}.$$ After comparison we deduce that $$\mathfrak{E}_n\left(\lambda:x\right)=\lambda\left(x-\lambda\right)^n=\sum_{k=0}^{n}{n\choose k}\left(-1\right)^k\lambda^{k+1}x^{n-k}.$$ To get the second formula just remark that $$\mathfrak{E}_n\left(\lambda:x\right)=\lambda\left(x-\lambda\right)^n=\left(-1\right)^{n}\lambda^{n+1}+\sum_{k=0}^{n-1}{n\choose k}\left(-1\right)^k\lambda^{k+1}x^{n-k}$$ and then for $x=0$ we conclude that $$\mathfrak{E}_n\left(\lambda\right)=\left(-1\right)^{n}\lambda^{n+1}$$ The identity (ii) Theorem 2.2 [@Kang] p.384 is a consequence of the Theorem 
\[th1\] \[cor1\] $$\label{eq1cor1} \mathfrak{E}_n\left(\lambda:x+y\right)=\sum_{k=0}^{n}{n\choose k}\mathfrak{E}_n\left(\lambda:x\right)y^{n-k}$$ The identity Corollary \[cor1\] follows from the identity Theorem \[th1\] as follows. $$\mathfrak{E}_n\left(\lambda:x+y\right)=\sum_{k=0}^{n}{n\choose k}\left(-1\right)^k\lambda^{k+1}\left(x+y\right)^{n-k}$$ $$\mathfrak{E}_n\left(\lambda:x+y\right)=\sum_{i=0}^{n}\sum_{k=0}^{i}{n\choose k}{n-k\choose i-k}\left(-1\right)^k\lambda^{k+1}x^{i-k}y^{n-i}$$ but $${n\choose k}{n-k\choose i-k}=\frac{n!i!}{k!\left(i-k\right)!\left(n-i\right)!i!}={n\choose i}{i\choose k}$$ then $$\label{eq2cor1} \mathfrak{E}_n\left(\lambda:x+y\right)=\sum_{i=0}^{n}\sum_{k=0}^{i}{n\choose i}{i\choose k}\left(-1\right)^k\lambda^{k+1}x^{i-k}y^{n-i}$$ and the result Corollary \[cor1\] follows. We attract attention that the identiy is an improvement of the Theorem 3.4 [@Kang] p.389. Only in means of the identity Theorem \[th1\] a sample proof of the identity in Theorem 2.4 [@Kang] p.385 $$x^n=\sum_{k=0}^{n}{n\choose k}\lambda^{n-k-1}\mathfrak{E}_k\left(\lambda: x\right)$$ is just to write $$\sum_{k=0}^{n}{n\choose k}\lambda^{n-k-1}\mathfrak{E}_k\left(\lambda: x\right)=\sum_{k=0}^{n}{n\choose k}\lambda^{n-k}\left(x-\lambda\right)^k=\left(x-\lambda+\lambda\right)^n=x^n.$$ Another proof of the identities (i) and (ii) in Theorem 2.3 [@Kang] is explained in the following theorem. \[th2\] $$\label{eq1th2} \mathfrak{E}_n\left(\lambda: x\right)=\left(-1\right)^{n+1}\mathfrak{E}_n\left(-\lambda: -x\right)$$ $$\label{eq2th2} \mathfrak{E}_n\left(\lambda: x\right)=2\mathfrak{E}_n\left(\frac{\lambda}{2}: -\frac{\lambda}{2}+x\right)$$ Since we have $$\mathfrak{E}_n\left(-\lambda: -x\right)=-\lambda\left(-x+\lambda\right)^n=-\left(-1\right)^n\lambda\left(x-\lambda\right)^n=\left(-1\right)^{n+1}\mathfrak{E}_n\left(\lambda: x\right)$$ then $$\mathfrak{E}_n\left(\lambda: x\right)=\left(-1\right)^{n+1}\mathfrak{E}_n\left(-\lambda: -x\right).$$ and $$\mathfrak{E}_n\left(\frac{\lambda}{2}: -\frac{\lambda}{2}+x\right)=\frac{\lambda}{2}\left(-\frac{\lambda}{2}+x-\frac{\lambda}{2}\right)^n=\frac{\lambda}{2}\left(x-\lambda\right)^n=\frac{1}{2}\mathfrak{E}_n\left(\lambda: x\right)$$ and then $$\mathfrak{E}_n\left(\lambda: x\right)=2\mathfrak{E}_n\left(\frac{\lambda}{2}: -\frac{\lambda}{2}+x\right).$$ A sample proof of the identity in Theorem 2.5 [@Kang] p.285 $$\begin{aligned} \sum_{k=0}^{n}{n\choose k}\left(\lambda-x\right)^k\mathfrak{E}_{n-k}\left(\lambda: x\right)= \left\{ \begin{array}{ccc} \lambda\ &\quad \textrm{ if }\ n=0, \\ 0\ &\quad \textrm{ otherwise} \end{array} \right.\end{aligned}$$ is given as follows. 
It is trivial to see that for $n=0$ the sum is $\lambda$ and if $n{\geqslant}1$ we have $$\sum_{k=0}^{n}{n\choose k}\left(\lambda-x\right)^k\mathfrak{E}_{n-k}\left(\lambda: x\right)=\lambda\left(x-\lambda\right)^n\sum_{k=0}^{n}{n\choose k}\left(-1\right)^k=0$$ because $$\sum_{k=0}^{n}{n\choose k}\left(-1\right)^k=\left(1-1\right)^n=0.$$ For any cupel $\left(a,b\right)$ of numbers, the formulae in Theorem 2.8 [@Kang] p.386 $$\begin{aligned} \sum_{k=0}^{n}{n\choose k}\left(\frac{a}{b}\right)^{n-2k}\mathfrak{E}_{n-k}\left(\frac{b\lambda}{a}: \frac{bx}{a}\right)\mathfrak{E}_{k}\left(\frac{a\lambda}{b}: \frac{ay}{b}\right)\\ \nonumber=\sum_{k=0}^{n}{n\choose k}\left(\frac{b}{a}\right)^{n-2k}\mathfrak{E}_{n-k}\left(\frac{a\lambda}{b}: \frac{ax}{b}\right)\mathfrak{E}_{k}\left(\frac{b\lambda}{a}: \frac{by}{a}\right)\end{aligned}$$ results from the identities $$\left(\frac{a}{b}\right)^{n-2k}\mathfrak{E}_{n-k}\left(\frac{b\lambda}{a}: \frac{bx}{a}\right)\mathfrak{E}_{k}\left(\frac{a\lambda}{b}: \frac{ay}{b}\right)=\left(x-\lambda\right)^{n-k}\left(y-\lambda\right)^{k}$$ and $$\left(\frac{b}{a}\right)^{n-2k}\mathfrak{E}_{n-k}\left(\frac{a\lambda}{b}: \frac{ax}{b}\right)\mathfrak{E}_{k}\left(\frac{b\lambda}{a}: \frac{by}{a}\right)=\left(x-\lambda\right)^{k}\left(y-\lambda\right)^{n-k}$$ and the fact that $$\sum_{k=0}^{n}{n\choose k}\left(x-\lambda\right)^{n-k}\left(y-\lambda\right)^{k}=\sum_{k=0}^{n}{n\choose k}\left(x-\lambda\right)^{k}\left(y-\lambda\right)^{n-k}=\left(x+y-2\lambda\right)^n.$$ In the case $x=y$ a new identity without the sum is obtained in the following corollary. \[cor2\] $$\label{eqcor2} \left(\frac{a}{b}\right)^{2n-4k}\mathfrak{E}_{n-k}\left(\frac{b\lambda}{a}: \frac{bx}{a}\right)\mathfrak{E}_{k}\left(\frac{a\lambda}{b}: \frac{ax}{b}\right)=\mathfrak{E}_{n-k}\left(\frac{a\lambda}{b}: \frac{ax}{b}\right)\mathfrak{E}_{k}\left(\frac{b\lambda}{a}: \frac{bx}{a}\right)$$ We have $$\mathfrak{E}_{n-k}\left(\frac{b\lambda}{a}: \frac{bx}{a}\right)\mathfrak{E}_{k}\left(\frac{a\lambda}{b}: \frac{ax}{b}\right)=\left(\frac{b}{a}\right)^{n-2k}\left(x-\lambda\right)^n$$ then $$\left(\frac{a}{b}\right)^{2n-4k}\mathfrak{E}_{n-k}\left(\frac{b\lambda}{a}: \frac{bx}{a}\right)\mathfrak{E}_{k}\left(\frac{a\lambda}{b}: \frac{ax}{b}\right)=\left(\frac{a}{b}\right)^{n-2k}\left(x-\lambda\right)^n$$ with $$\left(\frac{a}{b}\right)^{n-2k}\left(x-\lambda\right)^n=\mathfrak{E}_{n-k}\left(\frac{a\lambda}{b}: \frac{ax}{b}\right)\mathfrak{E}_{k}\left(\frac{b\lambda}{a}: \frac{bx}{a}\right)$$ and the result follows. Finally for the cupel $\left(1,b\right)$, the identity Corollary \[cor2\] becomes \[cor3\] $$\label{eqcor3} \left(\frac{1}{b}\right)^{2n-4k}\mathfrak{E}_{n-k}\left(b\lambda: bx\right)\mathfrak{E}_{k}\left(\frac{\lambda}{b}: \frac{x}{b}\right)=\mathfrak{E}_{n-k}\left(\frac{\lambda}{b}: \frac{x}{b}\right)\mathfrak{E}_{k}\left(b\lambda: bx\right)$$ [99]{} J. Y. Kang and T. S. Lee [*Some properties of special polynomials with exponential distribution*]{}, Commun. Korean Math. Soc. [**34**]{} (2019), No. 2, 383–390.
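As a closing remark (ours, purely illustrative and not part of [@Kang]), the closed forms of Theorem \[th1\] are easy to check symbolically. The short SymPy sketch below expands the generating function $\lambda e^{-\lambda t}e^{xt}$ and compares the coefficient of $t^n/n!$ with $\lambda\left(x-\lambda\right)^n$ and, at $x=0$, with $\left(-1\right)^n\lambda^{n+1}$.

```python
import sympy as sp

t, x, lam = sp.symbols('t x lambda')
N = 6  # check the first few coefficients

# generating function: lambda * e^{-lambda t} * e^{x t} = lambda * e^{(x-lambda) t}
G = lam * sp.exp(-lam * t) * sp.exp(x * t)
poly = sp.series(G, t, 0, N).removeO()

for n in range(N):
    # E_n(lambda:x) is n! times the coefficient of t^n in the series
    E_n = sp.factorial(n) * poly.coeff(t, n)
    assert sp.simplify(E_n - lam * (x - lam) ** n) == 0
    # the numbers: E_n(lambda) = E_n(lambda:0) = (-1)^n lambda^{n+1}
    assert sp.simplify(E_n.subs(x, 0) - (-1) ** n * lam ** (n + 1)) == 0
print("closed forms verified for n = 0..%d" % (N - 1))
```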
{ "pile_set_name": "ArXiv" }
--- abstract: 'As emotion plays a growing role in robotic research it is crucial to develop methods to analyze and compare among the wide range of approaches. To this end we present a survey of 1427 IEEE and ACM publications that include robotics and emotion. This includes broad categorizations of trends in emotion input analysis, robot emotional expression, studies of emotional interaction and models for internal processing. We then focus on 232 papers that present internal processing of emotion, such as using a human’s emotion for better interaction or turning environmental stimuli into an emotional drive for robotic path planning. We conducted constant comparison analysis of the 232 papers and arrived at three broad categorization metrics - emotional intelligence, emotional model and implementation - each including two or three subcategories. The subcategories address the algorithm used, emotional mapping, history, the emotional model, emotional categories, the role of emotion, the purpose of emotion and the platform. Our results show a diverse field of study, largely divided by the role of emotion in the system, either for improved interaction, or improved robotic performance. We also present multiple future opportunities for research and describe intrinsic challenges common in all publications.' author: - 'Richard Savery and Gil Weinberg $^{1}$ [^1] [^2]' bibliography: - 'name.bib' title: | **A Survey of Robotics and Emotion:\ Classifications and Models of Emotional Interaction** --- Introduction ============ Research in robotics and emotions has seen dramatic increases over the last thirty years (see Figure \[fig:broad\]). This research has taken many forms, such as analyzing a person’s emotional expression [@7783498], or presenting believable robotic emotional output [@fukuda2004facial]. Publications have also focused on the role of emotion in human-robot interaction, and models for how robots should internally process and respond to emotion. This wide range of approaches and methodologies is not easily classified, analyzed or compared between projects. Currently when developing new research using robotics and emotion there is no standard practice or framework to easily place new work, forcing roboticists to continually make new choices drawing from psychology literature. In this paper we conduct an extensive survey and meta-analysis on publications that combine robotics and emotion through any means. We begin by presenting broad categorizations of the types of inputs and outputs used by these systems. Our focus is then placed on systems that use emotion as part of their internal processing. This internal processing can be deeply varied, addressing aspects such as how robots can process and utilize information about humans’ emotions, or how they can turn external stimuli into an emotional response that can then improve system performance. Our meta-analysis is based on collecting all publications from the IEEE XPlore digital library and the ACM Full-Text Collection that discuss emotion and robotics. While IEEE and ACM do not include all publications on robotics and emotion, combined they contain 4 out of 5 of the robotics conferences and journals with the highest h-index. We believe that in combination, IEEE and ACM contain a representative and broad enough range of publications to develop an analysis of robotic research as a whole. Our search resulted in 1427 publications of which the abstracts were analyzed and classified. 
A constant comparison analysis was conducted on 232 papers that included emotional models. After establishing related works and our motivation this paper transitions to the Method section, which describes our literature review process and the categories that emerged through the analysis. This is followed by our results which presents an objective analysis of the data collected. We conclude with a discussion section presenting our insights and broad ideas for future work from the survey. Due to the quantity of publications analyzed, our reference section does not contain them all. Instead we only cite works that are specifically mentioned throughout the paper. A full list of analyzed publications is available online. [^3] ![image](images/broadcats.png){width="16cm"} Background and Motivation ========================= Emotions are a widely studied phenomena with many classification methods. The most prominent discrete categorization is that proposed by Ekman [@ekman1999basic] and includes fear, anger, disgust, sadness, happiness and surprise. Another way to consider emotions is with a continuous scale, most common is the Circumplex model; a two dimension model using valence and arousal [@posner2005circumplex]. Mood is often considered a longer term form of emotion, taking place over longer spans than emotion [@watson1994emotions], while affect is human experience of feeling emotions. For the purpose of this analysis we consider the term emotion in the broadest sense, and include analysis of papers whether they focus on emotion, affect or mood. Similarly, for the term robot we include all publications that describe a plan or potential to implement in robotics in the future, or are published in robotic conferences. Research into robotics and emotion can be divided into two main categories, emotion for social interaction, and emotion for improved performance and “survivability” [@arkin2009ethical]. For interaction, emotion can be used to improve agent likeability and believability [@ogata2000emotional]. Emotion in interaction has also been used to improve communication and allow for intuitive dialogue between human and robot [@breazeal2003emotion]. The second main purpose for implementing emotion in robotics is for improved performance or survivabilty. This builds on the belief that emotion is key to animals’ ability to survive and navigate in the world and can likewise be applied to robotics [@arkin2003ethological]. There have been multiple surveys and meta-analysis on robots and human-robot interaction, although to our knowledge none focusing on emotion. In 2008 an extensive survey was conducted on human-robot interaction [@goodrich2008human] with only limited mention of emotion. Many other robot surveys have focused on specific aspects of robotics, such as robotic grasp [@shimoga1996robot], social robotics [@leite2013social], or empathy [@paiva2017empathy]. The closest publication addressing a survey on robotics and emotions is “A Survey of Socially Interactive Robots” [@fong2003survey] however, was written in 2002 and only contains a brief overview of emotion and robotics. Considering the rapid growth and interest in emotion and robotics, we believe that a meta-analysis of emotion and robotics with an emphasis on emotional interaction and modelling is now due. Method ====== Our review process was divided into three main steps, as done by Frich et al. [@frich2018twenty]. The first step involved finding all relevant articles and collecting publications. 
This was followed by dividing these papers into broad categories with a preliminary analysis. From there we conducted a thorough analysis on the remaining articles (see Figure \[fig:method\]). ![Flow Chart of Survey Method[]{data-label="fig:method"}](images/Method.png){width="8cm"} Step 1: Collecting Publications ------------------------------- We originally collected publications by retrieving all papers that contained the keywords Robot and Emotion from the IEEE Digital Library and the ACM Digital Collection. This resulted in a collection of 330 publications. From a random sampling of abstracts that contained the word emotion, without the keyword emotion we quickly realized that relying on keywords would not provide an extensive survey. We then expanded to collecting all publications that include the words robot and either emotion (or a variation such as emotional) or affect or mood. This resulted in a collection of 1,427 publications referencing robotics and emotion ranging from 1986 to February 2020. Step 2: Preliminary Sorting --------------------------- From the 1,427 publications all abstracts were manually read by the first author and in cases where the abstract was not clear, relevant sections of the paper itself were checked. We then sorted the articles into four categories. These categories were input focus, output focus, emotional modelling, and perception. We also created a separate list of articles that were not relevant for the survey. Figure \[fig:method\] shows the quantity of each article from each collection, including duplicates between IEEE and ACM. Our primary category focused on models of emotion and how robots can interact emotionally. These included a wide variety of systems discussed in detail in Section \[method:step3\] and comprise the papers used in later sections. Input only publications focused on a method of input to a robotic system such as facial recognition [@mohseni2014facial] or speech recognition[@zhu2019emotion]. Output only papers focused on robots conveying some emotion and often evaluated the output, such as audio [@savery2019establishing] or robotic gait [@destephe2013conveying]. If the system included an emotional input and output it was placed in our primary category of emotional modelling. The category human perception included publications that discussed and evaluated existing robots perception to a range of audiences. These were occasionally “wizard of oz” setups with no clear path to implementation [@mok2014empathy], or studies on interaction design that do not present new technologies [@lee2013aesthetic], or general surveys of audience emotional attitudes to robots [@karim2016older]. The list of publications that were deemed as irrelevant included a peripheral use of the word emotion, or measuring a users emotion when interacting with a robot for evaluation without the robot processing the emotion [@westlund2016transparency]. There were also occasional duplicates in the data-set and some extended-abstracts lacking sufficient detail to be categorized. After placing each publication in categories we considered reducing the publications for analysis by citation count or citation average by year. We chose not to use this approach as we aimed to include as broad a range of approaches as possible, including experimental approaches and the full spectrum of divergent emotional modelling techniques. With this in mind we decided against removing potential approaches and concepts based solely on a paper’s lack of visibility in citation counts. 
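As an illustration of Steps 1 and 2, a minimal sketch of the keyword filter and of tallying the manually assigned broad categories is given below. The record field names, regular expressions, and category labels are our own assumptions chosen to match the procedure described above; they are not the actual scripts or library interfaces used for this survey.

```python
import re

# Step 1 (illustrative): keep records whose title or abstract contain a robot
# term together with emotion, affect, or mood.
ROBOT = re.compile(r"\brobot", re.IGNORECASE)
EMOTION = re.compile(r"\b(emotion\w*|affect\w*|mood\w*)\b", re.IGNORECASE)

def matches_survey_criteria(record):
    """record: dict with 'title' and 'abstract' strings (assumed field names)."""
    text = " ".join([record.get("title", ""), record.get("abstract", "")])
    return bool(ROBOT.search(text)) and bool(EMOTION.search(text))

# Step 2 was done manually by reading abstracts; here we only tally the
# categories assigned by the reader.
CATEGORIES = ("input", "output", "model", "perception", "irrelevant")

def tally_categories(records):
    counts = {c: 0 for c in CATEGORIES}
    for rec in filter(matches_survey_criteria, records):
        counts[rec.get("category", "irrelevant")] += 1
    return counts
```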
Step 3: Emotional Coding {#method:step3} ------------------------ ![image](images/Categories.png){width="15cm"} Our final review method is based on the broad principles described by Onwuegbuzie and Frels [@onwuegbuzie2016seven]. We used constant comparison analysis to build categories and subcategories from the analyzed papers by coding each paper, then organizing codes and continual refinement of categorization. Through this process we developed three primary categories - emotional intelligence, emotional model and implementation (see Fig. \[fig:categories\]). Emotional intelligence refers to how the system processes emotion, focusing on how input is translated through an algorithm to an output, and whether or not it contains a knowledge of past events or history. Emotional model analyzes what types of emotion are used and how many are used, while the final category, implementation categorizes the role and purpose of emotion and the platform that is used. ### Emotional Intelligence - Algorithm Our first categorization classifies the algorithm used to drive the system. There was a large range of approaches which did not easily break into categories, however multiple overarching trends did emerge. We found several reoccurring algorithms such as Fuzzy Models, Markov Models, Neural Networks, Probability Tables, Reinforcement Learning and Self-Organizing Maps. Each of these implementations varied greatly in complexity. For example Markov models could range from simple first order implementations [@qing2011artificial] to complex hidden Markov models [@li2018emotional]. In addition to these subcategories we found three other broader categories - biology inspired systems, computational models basic, and computational models complex. Biology inspired systems draw directly from comparisons to human or animal systems such as imitating a homeostasis approach [@malfaz2011biologically] or a neurocognitive affective system [@park2006neurocognitive]. Computational basic included simple implementations, which used direct mappings, such as when something goes wrong being sad, or excited when asked for help [@antona2019my] or systems with clear tables of responses for different states [@diaz2018intelligent]. Computational complex featured custom systems that did not fit in the other categories and had more detailed models of emotion, including complicated mappings between all inputs [@canamero2001show]. Importantly we imply no superiority between basic and complex, as this only represents one component of the system and a more complex emotional model did not necessarily lead to better results. ### Emotional Intelligence - History The category history referred to whether or not the model remembered or altered emotions based on past information. This category was binary, with the system either having history or not. A basic form of history involved a system that moves in a certain direction based on the input, such as moving in incremental steps between their past emotion and a human’s current emotion [@shih2017implement]. In publications that featured longer history the term mood was often used, with combined emotion and mood models occurring very frequently [@gockley2006modeling; @masuyama2014affective]. ### Emotional Intelligence - Mapping The category mapping described what input and output was used by the system. We developed two categories for input types and two for output types. These categories could work in any combination, such as both inputs to one output. 
The input categories were external stimuli and external emotion while the output categories were internal process and emotion expression. External stimuli include all stimuli that do not contain emotional information such as the distance from a wall or other perceived features [@tang2012robot]. External stimuli also include goals and tasks of a system, such as a robot’s list of objectives [@izumi2009behavior]. External emotion primarily contains recognition of a human’s emotions, such as through voice input [@lim2011converting], but can also contain content that has been preassigned an emotion externally before use with the robot, such as emotionally tagged images [@diaz2018intelligent]. Emotional expression as an output occurs anytime the robot expresses emotion, such as through facial expressions [@hara1996real]. Internal process is when the robot uses an emotion internally to change or alter its decisions in a way that does not lead to an emotional expression. This form of output is common for environmental navigation and path planning [@lee2009mobile]. ### Emotional Model - Model and Categories Our aim was to present patterns of emotional models occurrences in publications related to robotics. With this goal in mind, emotional models were added and classified based on their presence in the publications analyzed, and not on their existence in emotional literature. Our final categorization included standard emotional models, Cirumplex (Valence/Arousal), Ekman’s six categorizations, Plutchik’s Wheel of Emotions [@plutchik2001nature] and PAD (3 dimensional model) [@mehrabian1980basic]. In addition to these categories we included a broad category for custom definitions. Custom models ranged from subjective variations of Ekman’s categorizations, to original approaches, such as happy, hungry and tired [@ho1997model] that do not reference any other literature. These custom choices were often tailored to fit the robotic task at hand such as frustration [@murphy2002emotion] or combinations such as tired, tension and happiness [@li2008cooperative]. For each categorization we also included the amount of labelled emotions that were used. This number varied widely and often didn’t allude to the variety of emotions for each model. For example, binary classifications were commonly not only happy and sad, or positive and negative, but could instead be courage and fear [@dominguez2006emotional]. When custom classifications used a single emotion this could be any emotion, such as guilt in a military application [@arkin2009ethical] or regret for optimal task queuing [@jiang2019respect]. This classification specifically categorized the emotion used in the implemented model. Many papers referenced the Plutchik, Ekman, or the Circumplex model but made significant custom changes to the model. For clarity we always referenced the minimum amount of emotions used in a system. For example, if a system was able to detect 8 states from a face, but after processing displayed 4 different emotions we would count this as 4. This was however rare, publications almost always maintained the same model and emotion types throughout a system. ### Implementation - Role, Purpose and Platform The implementation category contained three subcategories - the role, the purpose and the robotic platform used. The role of emotion was either core or component, with core representing publications where the emotion was the central part of the implementation, and component when it is a part of a broader system. 
The classification labelled purpose analyzes how emotion acts within the system as a whole. Through the survey process, we arrived at two labels, interaction and performance. These two categories match those proposed by Arkin, who describes the purpose of emotions in robotics as interaction and ’survivability’, which allows the robot to better interact with the world [@arkin2003moving]. The robotic platform considered the robot used for implementation, ranging from common HRI robots such as NAO [@nanty2013fuzzy], to custom designs [@lee2008evolutionary] or digital interfaces [@nasir2018markov]. ![Algorithm Use Count[]{data-label="fig:alogrithmusecoun"}](images/algorithms.png){width="8cm"} ![Sankey Diagram of Input to Output[]{data-label="fig:sankey"}](images/sankeymatic_800x550.png){width="8cm"} Results ======= ------------------- -------- -------- -------- -------- -------- -------- History 6.67% 27.59% 32.79% 22.22% 34.62% 27.16% Emotion Core 68.75% 31.03% 72.13% 70.83% 69.23% 65.95% Emotion Component 31.25% 68.97% 27.87% 29.17% 30.77% 34.05% Interaction 62.50% 65.52% 80.33% 76.39% 80.77% 75.86% Performance 37.50% 34.48% 19.67% 23.61% 19.23% 24.14% Digital 37.50% 44.83% 39.34% 31.94% 50.00% 40.09% Robot 62.50% 55.17% 60.66% 68.06% 50.00% 59.91% ------------------- -------- -------- -------- -------- -------- -------- Broad Categories ---------------- An analysis of the results from the categorization of abstracts shows a continual growth in the research of robotics and emotion across all categories. Figure \[fig:broad\] shows these trends for each category as well as the specific growth in input categorization. While we categorized input, we did not create clear categories for the output. We found that output was commonly multi-modal and often focused on the robot as a whole, without always a clear emphasis. Our categorization for input includes movement as separate category, designed for cases where the emphasis is on a particular aspect of continuous movement such as gait. The clear spike in the the graphs for input corresponds to the growth in facial emotion recognition. This increase in robotics research and face recognition mirrors the recent leap in deep learning and face recognition [@masi2018deep], beginning from the state of the art paper DeepFace in 2014 [@taigman2014deepface]. Emotional Intelligence - Algorithm ---------------------------------- The primary algorithm used within our classification system was computational complex. Figure \[fig:alogrithmusecoun\] shows the spread of algorithm use. We did not find any unifying trends amongst algorithm use. Neural networks are the main outlier with 11 of their 12 uses happening after 2012, with continual growth through this time period. Emotional Intelligence - History -------------------------------- The total use of history in a system was 27.16% with limited trends over time. From 1990-2000 only one paper from the sixteen we analyzed included history, however all following time periods fit in the range of 27% to 35% as shown in Table \[tab:bigtable\]. Emotional Intelligence - Mapping -------------------------------- In the categorization of input to output, all possible combinations were represented in at least 3 publications in the dataset, shown in Figure \[fig:sankey\] and Table \[tab:inputcat\]. 
Table \[tab:inputcat\] shows for performance or interaction which types of input were used and the frequency; in the first row it shows performance systems with emotional stimuli mapped to internal process occurs five times in the dataset. Emotion to Process & Expression, and Emotion & Stimuli to process had the lowest use, in 3 and 7 papers respectively. In contrast Emotion to Expression was by far the most common mapping being used 74 times, while Stimuli to Process was the second most common with 49 uses. Stimuli Emotion Expression Process Freq ------------- --------- --------- ------------ --------- ------ Performance No Yes No Yes 5 Performance Yes No No Yes 40 Performance Yes No Yes No 2 Performance Yes No Yes Yes 3 Performance Yes Yes No Yes 4 Performance Yes Yes Yes Yes 2 Interaction No Yes No Yes 8 Interaction No Yes Yes No 74 Interaction No Yes Yes Yes 3 Interaction Yes No No Yes 9 Interaction Yes No Yes No 32 Interaction Yes No Yes Yes 8 Interaction Yes Yes No Yes 3 Interaction Yes Yes Yes No 24 Interaction Yes Yes Yes Yes 15 : Input and Output Mapping By System Purpose[]{data-label="tab:inputcat"} Emotional Model - Model and Categories -------------------------------------- Custom emotional models (n = 154) were used in 67% of the papers analyzed. The discrete Ekman emotions (n = 45) were the second most common occurring in 19% of the publications. Circumplex (n = 20) , PAD (n =8 ), and Plutchik (n = 5) each occurred in less than 10% of the puplications. Figure \[fig:emotionsused\] shows the number of emotion categories used by all publications with custom emotion models; a paper that uses happy and sad would show two emotions used. The mean for all categories was 4.10 with a median of 4. Publications with a purpose of interaction had a mean of 4.66 and median 4, while performance publications had a mean of 2.97 and median of 2. ![Quantity of Emotions Used in Custom Emotion Models[]{data-label="fig:emotionsused"}](images/customemotions.png){width="8cm"} Implementation - Role, Purpose and Platform ------------------------------------------- The role of emotion as either the core research or as a component of research has varied only slightly over time. Outside of a jump between 2001-2005 the range has remained close to the current total of 66% as the core, and 34% as a component. The purpose of emotion - either for interaction or performance - has seen a gradual shift towards an emphasis on use in interaction. From 2016-2019, 81% of emotion related papers used emotion for interactions. Across all publications interaction was the focus of 76% of papers. Table \[tab:bigtable\] shows the variation between robot and digital implementations. There were no significant trends with physical robots being used slightly more. The most commonly used robot was SoftBank Robotics Nao with 14 uses. The other robots used were a mix of commercial and research robots like Pepper [@kashii2017ex] and many custom designs. The mean usage of each robot platform was 1.44 times with a median of a single use. As expected when a custom designed robot was used multiple times it was exclusively by the same researchers. Discussion ========== From the development of our method and results we found multiple trends that require further analysis and discussion. In the following section we discuss both clear findings from the publications as well as our own beliefs on possible future directions and areas that require more refinement. 
These points are divided into three sections, paradigm features, future opportunities and intrinsic challenges. Paradigm features are clear trends that are invoked across all publications analyzed. Future opportunities and intrinsic challenges describe our subjective views for future work and areas we believe have room to develop within current literature. Paradigm Features ----------------- ### Distinct Approaches for Interaction and Performance While not unexpected, the results clearly present two methodologies associated with the role of emotion in a robotic system for interaction or performance. For interaction and performance the emotional models and number of categories used varies greatly. Likewise the mapping system used consistently falls into different categories. While this distinction was expected it is clear that robotics and emotion studies should be compared with a framework to their related purpose. ### Diverse Approaches Shaped by Current Trends Both the algorithms used and the emotional models used showed a wide diversity of approaches. The computational basic and complex categories, which featured custom models not clearly categorized, represented over 60% of the approaches. While there is wide variation the literature does follow broader computer science and engineering trends. This is most clear in the significant increase in facial analysis for emotion. Likewise, 11 out of the 12 neural networks used in emotional models were used since 2012, and it is reasonable to expect more work in this area, shadowing neural networks overall growth across many domains. Future Opportunities -------------------- ### Limited Long Term Interaction and History Our definition of a model containing history allowed for the lowest inclusion, such as a single previous step or a first order Markov model. Even within this context only a total of 27% of papers included history within their model. In the majority of publications history was rarely considered in more than one or two previous steps. Beyond adding short-term emotion to more systems there is clearly a wider-scope for systems that have longer term emotions, carry across the entirety of interactions and even emotional models that carry day-to-day developments within the robot. ### Signalling is the Current Paradigm for Interaction Our analysis of papers reconfirmed ideas presented in other literature of signalling being the dominant paradigm in human-robot interaction with emotions. Emotional signaling relies on the idea that emotions are expressed and reveal the inner state of the robot. Signaling has many limitations, such as an assumption that an outwardly shown state is always an internal emotion. It also implies that displaying an emotion always carries the same meaning to the other participants [@jung2017affective]. Signalling further implies that all parts of the emotion are displayed, when often a human may only display part of the emotion they are feeling [@bucci2019real]. In our review we found signaling was overlooked unanimously by each publication. ### The Social Aspect of Emotions In psychology research, emotions are considered inherently social [@van2016social], with emotions shaped by our interactions with others. This occurs through direct influence from others, as well as regulation based on social expectations [@van2009emotions; @parkinson1996emotions]. In most papers, emotion was seen as at most a dyadic occurrence and often as a solitary experience held only by the robot. 
The omission of social aspects prevents many psychology based implementations and risks missing a crucial aspect of human emotional models. Some promising results for analyzing interactions between teams of humans and robots has been shown [@correia2018group], but social based emotions as a whole have very limited representation in the publications analyzed. Intrinsic Challenges -------------------- ### Anthropomorphism While research into robotics and emotion will inherently cross into anthropomorphism, we believe extra consideration should be given to the language used in describing robotic emotion systems. It was common in papers for a robot to be described as ’feeling’ when it had a single input mapped to an emotion. Implications that a robot actually feels sad when a simple negative event happens overly reduces and simplifies much of the research done in artificial emotions. ### Custom Emotional Models There are many theories and models of emotion published in psychology research, with a wide variety of reviewed and generally accepted models. While custom models of emotion are at times certainly appropriate, we believe many papers slightly varied established models such as Ekman’s without providing a clear rationale behind the variation. This variation without proper justification discourages proper evaluation between systems. A related survey in affective computing also found that the majority of papers used custom emotional models, demonstrating this is not only a feature of robotic systems [@sreeja2017emotion]. ### Project Isolation A significant challenge of analyzing papers on robotics and emotion is the relative isolation of each system. As previously stated, even fundamental aspects such as the emotional model used varied greatly between each project. In addition to challenges in comparison between publications, it is often not possible to compare each system to anything outside of the proposed control from within the study. In our review we found only one paper that compared their emotional model with a baseline system [@godfrey2009towards], every other publication compared only one emotional model usually to a control group which had no emotion added. While some papers compare a digital implementation with a physical implementation, it was very rare for a paper to compare an emotional model on more than one robot. We found a single paper[@park2009] that compared the same model on different physical implementations. While we acknowledge testing implementations on different robots adds significant scope, we believe there is room for much more work in this area, especially considering the nature of emotion. This includes ideas such as emotional models having different applications and implications on humanoid and non-humanoid robots. Currently we can only infer from different publications, but lack controlled comparisons between models. This issue is further compounded with consideration that the average use of each robot in the publications was 1.44 times for an emotional study, indicating there is an absence of long-term studies of emotion on the vast majority of platforms. Conclusion ========== In this paper we have presented a meta-analysis of publications focused on robotics and emotion. From this work we have identified multiple trends and developed a variety of discussion points for future work in robotics and emotion. From these trends we have described features of the paradigm of emotion in robotics. 
With the current growth in research, we believe there are still key areas that present significant challenges, primarily the isolation of projects and the variation in the models used. We also believe there are many areas that remain nearly unexplored, including the social role of emotions, signalling, and the addition of history to robotic emotion. [^1]: $^{1}$Georgia Tech Center for Music Technology [^2]: [email protected], [email protected] [^3]: https://github.com/richardsavery/robot-emotions-survey
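As an illustrative addendum (not part of the surveyed publications or of our analysis tooling), the record type below sketches one way the coding scheme of Figure \[fig:categories\] could be represented for comparison across studies. The label vocabularies are examples drawn from the Method section and are assumptions rather than an authoritative taxonomy.

```python
from dataclasses import dataclass
from typing import List, Optional

@dataclass
class CodedPaper:
    # Emotional intelligence
    algorithm: str        # e.g. "fuzzy", "markov", "neural_network",
                          # "computational_basic", "computational_complex"
    history: bool         # does the model retain past emotional information?
    inputs: List[str]     # subset of {"external_stimuli", "external_emotion"}
    outputs: List[str]    # subset of {"emotion_expression", "internal_process"}
    # Emotional model
    model: str            # "ekman", "circumplex", "pad", "plutchik", "custom"
    n_emotions: int       # minimum number of labelled emotions actually used
    # Implementation
    role: str             # "core" or "component"
    purpose: str          # "interaction" or "performance"
    platform: Optional[str] = None  # e.g. "NAO", a custom robot, or None if digital-only

def history_rate(papers: List[CodedPaper]) -> float:
    """Fraction of coded papers whose model includes history."""
    return sum(p.history for p in papers) / max(len(papers), 1)
```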
{ "pile_set_name": "ArXiv" }
--- abstract: 'A version of the so-called “convexification” numerical method for a coefficient inverse scattering problem for the 3D Helmholtz equation is developed analytically and tested numerically. Backscattering data are used, which result from a single direction of the propagation of the incident plane wave on an interval of frequencies. The method converges globally. The idea is to construct a weighted Tikhonov-like functional. The key element of this functional is the presence of the so-called Carleman Weight Function (CWF). This is the function which is involved in the Carleman estimate for the Laplace operator. This functional is strictly convex on an arbitrary ball in a Hilbert space for an appropriate choice of the parameters of the CWF. Thus, both the absence of local minima and the convergence of minimizers to the exact solution are guaranteed. Numerical tests demonstrate a good performance of the resulting algorithm. Unlike the previous, so-called “tail functions” globally convergent method, we neither impose the smallness assumption on the interval of wavenumbers nor iterate with respect to the so-called tail functions.' author: - 'Michael V. Klibanov [^1] [^2], Aleksandr E. Kolesov [^3]' title: 'Convexification of a 3-D coefficient inverse scattering problem[^4]' --- **Keywords**: coefficient inverse scattering problem, Carleman weight function, globally convergent numerical method **2010 Mathematics Subject Classification:** 35R30. Introduction {#sec:1} ============ In this work, we develop a version of the so-called “convexification” numerical method for a coefficient inverse scattering problem (CISP) for the 3D Helmholtz equation with backscattering data resulting from a single measurement event, which is generated by a single direction of the propagation of the incident plane wave on an interval of frequencies. We present both the theory and numerical results. Our method converges globally. This is a generalization to the 3D case of our (with coauthors) previous 1D version of the convexification [@KlibanovKolesov17]. Three main advantages of the convexification method over the previously developed, so-called “tail functions” globally convergent method for a similar CISP [@BeilinaKlibanov08; @BeilinaKlibanov12; @KlibanovLiem16; @KlibanovKolesov17exp; @KlibanovLiem17buried; @KlibanovLiem17exp] are: (1) to solve our problem, we construct a globally strictly convex cost functional with the Carleman Weight Function (CWF) in it, (2) we do not impose in our convergence analysis the smallness assumption on the interval of wavenumbers, and (3) we do not iterate with respect to the so-called tail functions. It is well known that any CISP is both highly nonlinear and ill-posed. These two factors cause substantial difficulties in numerical solutions of these problems. A *globally convergent method* (GCM) for a CISP is a numerical method which has a rigorous guarantee of reaching a sufficiently small neighborhood of the exact solution of that CISP without any advance knowledge of this neighborhood. In addition, the size of this neighborhood should depend only on the approximation errors and the level of noise in the data. Over the years, the first author and coauthors have proposed a variety of globally convergent methods for CISPs with single measurement data, see, e.g.
[@BeilinaKlibanov08; @BeilinaKlibanov12; @KlibanovLiem16; @KlibanovKolesov17exp; @KlibanovLiem17exp; @BeilinaKlibanov15; @KlibanovIoussoupova95; @Klibanov97a; @Klibanov97b; @KlibanovTimonov04; @KlibanovKamburg16; @KlibanovLoc16; @KlibanovThanh15], and references cited therein. These methods can be classified into two types. Methods of the first type, which we call the *tail functions* methods, are certain iterative processes. On each iterative step one solves the Dirichlet boundary value problem for a linear elliptic Partial Differential Equation (PDE). This PDE depends on the iteration number. The solution of that problem enables one to update the unknown coefficient. Using this update, one updates the so-called tail function, which is a complement of a certain truncated integral, where the integration is carried out with respect to the wavenumber. The stopping criterion for the iterative process is developed computationally. The tail functions method was successfully tested on experimental backscattering data [@KlibanovLiem16; @KlibanovKolesov17exp; @KlibanovLiem17buried; @KlibanovLiem17exp; @KlibanovLoc16]. Globally convergent numerical methods of the second type are called the *convexification* methods. They are based on the minimization of a weighted Tikhonov-like functional with the CWF in it. The CWF is the function which is involved in the Carleman estimate for the corresponding PDE operator. The CWF can be chosen in such a way that the above functional becomes strictly convex on a ball of an arbitrary radius in a certain Hilbert space (see some details in this section below). Note that the majority of known numerical methods for solving nonlinear ill-posed problems minimize conventional least squares cost functionals [@Chavent09; @Goncharsky13; @Goncharsky17], which are usually non-convex and have multiple local minima and ravines; see, e.g. [@Scales92] for a good numerical example of multiple local minima. Hence, a gradient-like method for such a functional converges to the exact solution only if the starting point of iterations is located in a sufficiently small neighborhood of this solution. Some other effective approaches to numerical methods for nonlinear ill-posed problems can be found in [@Lakhal2; @Lakhal3]. Various versions of the convexification methods have been proposed since the first work [@KlibanovIoussoupova95], see [@Klibanov97a; @Klibanov97b; @KlibanovTimonov04]. However, these versions have some theoretical gaps, which have limited their numerical studies so far. In the recent works [@BeilinaKlibanov15; @KlibanovKamburg16; @KlibanovKoshev16] the attention to the convexification method was revived. Theoretical gaps were eliminated in [@BakushinskiiKlibanov17], and thorough numerical studies for one-dimensional problems were performed in [@KlibanovKolesov17; @KlibanovThanh15]. Besides, in [@Klibanov15] the convexification method was developed for ill-posed problems for quasilinear PDEs, and corresponding numerical studies for the 1D case were conducted in [@KlibanovKoshev16; @BakushinskiiKlibanov17]. The idea of any version of the convexification has direct roots in the method of [@KlibanovBukhgeim81], which is based on Carleman estimates. The method of [@KlibanovBukhgeim81] was originally designed only for proofs of uniqueness theorems for CIPs; also see, e.g. the book [@KlibanovTimonov04] and the recent survey [@Ksurvey].
Recently an interesting version of the convexification was published in [@Baud] for a CISP for the hyperbolic equation $u_{tt}=\Delta u+a\left( x\right) u$ with the unknown coefficient $a\left( x\right) $ in the case when one of initial conditions does not non-vanish. The method of [@Baud] is also based on the idea of [KlibanovBukhgeim81]{} and has some roots in [BeilinaKlibanov15,KlibanovKamburg16]{}. By the convexification, one constructs a weighted Tikhonov-like functional $J_{\lambda }$ on a closed ball $\overline{B\left( R\right) }$ of an arbitrary radius $R>0$ and with the center at $\left\{ 0\right\} $ in an appropriate Hilbert space. Here $\lambda >0$ is a parameter. The key theorem claims that one can choose a number $\lambda \left( R\right) >0$ such that for all $\lambda \geq \lambda \left( R\right) $ the functional $J_{\lambda }$ is strictly convex on $\overline{B\left( R\right) }.$ Furthermore, the existence of the unique minimizer of $J_{\lambda }$ on $\overline{B\left( R\right) }$ as well as convergence of minimizers to the exact solution when the level of noise in the data tends to zero are proven. In addition, it is proven that the gradient projection method reaches a sufficiently small neighborhood of the exact coefficient when starting from an arbitrary point of $B\left( R\right) $. Since $R>0$ is an arbitrary number, then this is a *globally convergent* numerical method. Due to a broad variety of applications, Inverse Scattering Problems (ISPs) are quite popular in the community of experts in inverse problems. There are plenty of works dedicated to this topic. Since this paper is not a survey, we refer to only few of them, e.g. [Goncharsky13,Goncharsky17,Lakhal2,Lakhal3,Am1,Am2,Am3,Bao,Buhan,Chow1, Chow2,Ito,Jin,Kab1,Kab,Kab3,Lakhal1,Liu1,Liu2,Liu3]{} and references cited thereein. We note that the authors of [@Chow1] have considered a modified tail functions method. As stated above, we are interested in a CISP for the Helmholtz equation with the data generated by a single measurement event. As to the CISPs with multiple measurements, we refer to a global reconstruction procedure, which was developed and numerically implemented in [@Kab1], also see [@Kab; @Kab3] for further developments and numerical studies. Actually, this is an effective extension of the classical 1D Gelfand-Krein-Levitan method on the 2D case. In section 2 we formulate our forward and inverse problems. In section 3 we construct the weighted Tikhonov-like functional with the CWF in it. In section 4 we formulate our theorems.  We prove them in section 5. In section 6 we present numerical results. Problem Statement {#sec:2} ================= The Helmholtz equation {#sec:2.1} ---------------------- Just as in the majority of the above cited previous works of the first author with coauthors about GCM, we focus in this paper applications to the detection and identification of targets, which mimic antipersonnel land mines (especially plastic mines, i.e. dielectrics) and improvised explosive devices (IEDs) using measurements of a single component of the electric wave field. In this case the medium is assumed to be non magnetic, non absorbing, and the dielectric constant in it should be represented by a function, which is mostly a constant with some small sharp inclusions inside (however, we do not assume in our theory such a structure of the dielectric constant). These inclusions model antipersonnel land mines and IEDs. Suppose that the incident electric field has only one non zero component. 
It was established numerically in [@Liem] that the propagation of that component through such a medium is well governed by the Helmholtz equation rather than by the full Maxwell’s system. Besides, in all above cited works of the first author with coauthors about experimental data those targets were accurately imaged by the above mentioned tail functions GCM using experimentally measured single component of the electric field and modeling the propagation of that component by the Helmholtz equation. In addition, we are unaware about a GCM for a CISP with single measurement data for the Maxwell’s system. Thus, we use the Helmholtz equation below. The need of the detection and identification of, e.g. land mines, might, in particular, occur on a battlefield. Due to the security considerations, the amount of collected data should be small in this case, and these should be the backcattering data. Thus, we use only a single direction of the propagation of the incident plane wave of the electric field and assume measurements of only the backscattering part of the corresponding component of that field. Forward and inverse problems {#sec:2.2} ---------------------------- Let $\mathbf{x}=(x,y,z)\in \mathbb{R}^{3}$. Let $b,d,\xi >0$ be three numbers. It is convenient for our numerical studies (section 6) to define from the beginning the domain of interest $\Omega $ and the backscattering part $\Gamma $ of its boundary as$$\Omega =\left\{ \left( x,y,z\right) :\left\vert x\right\vert ,\left\vert y\right\vert <b,z\in \left( -\xi ,d\right) \right\} ,\text{ }\Gamma =\left\{ \left( x,y,z\right) :\left\vert x\right\vert ,\left\vert y\right\vert <b,z=-\xi \right\} . \label{eq:2.1}$$Let the function $c(\mathbf{x})$ be the spatially distributed dielectric constant and $k$ be the wavenumber. We consider the following forward problem for the Helmholtz equation: $$\Delta u+k^{2}\,c(\mathbf{x})\,u=0,\quad \mathbf{x}\in \mathbb{R}^{3}, \label{eq:helmholtz}$$$$u\left( \mathbf{x},k\right) =u_{s}\left( \mathbf{x},k\right) +u_{i}\left( \mathbb{x},k\right) , \label{eq:utotal}$$where $u(\mathbf{x},k)$ is the total wave, $u_{s}(\mathbf{x},k)$ is the scattered wave, and $u_{i}(\mathbf{x},k)$ is the incident plane wave propagating along the positive direction of the $z-$axis, $$u_{i}(\mathbf{x},k)=e^{ikz}. \label{eq:uinc}$$The scattered wave $u_{s}(\mathbf{x},k)$ satisfies the Sommerfeld radiation condition: $$\lim_{r\rightarrow \infty }r\left( \frac{\partial u_{s}}{\partial r}-iku_{s}\right) =0,\quad r=\left\vert \mathbf{x}\right\vert . \label{eq:sommerfled}$$Also, the function $c(\mathbf{x})$ satisfies the following conditions: $$c(\mathbf{x})=1+\beta \left( \mathbf{x}\right) ,\text{ }\beta \left( \mathbf{x}\right) \geq 0,\,\mathbf{x}\in \mathbb{R}^{3},\quad \mbox{and}c(\mathbf{x})=1,\,\mathbf{x}\notin \overline{\Omega }. \label{eq:coef}$$The assumption of (\[eq:coef\]) $c(\mathbf{x})=1$ in $\mathbb{R}^{3}\setminus \Omega $ means that we have vacuum outside of the domain $\Omega .$ Finally, we assume that $c(\mathbf{x})\in C^{15}(\mathbb{R}^{3})$. This smoothness condition was imposed to derive the asymptotic behavior of the solution of the Helmholtz equation (\[eq:helmholtz\]) at $k\rightarrow \infty $ [@KlibanovRomanov16]. We also note that extra smoothness conditions are usually not of a significant concern when a CIP is considered, see, e.g. theorem \[thm:4.1\] in [@Rom]. 
In particular, this smoothness condition implies that the function $u\left( \mathbf{x},k\right) \in C^{16+\gamma }\left( \overline{G}\right) ,\forall \gamma \in \left( 0,1\right) ,\forall k>0,$ where $C^{16+\gamma }\left( \overline{G}\right) $ is the Hölder space and $G\subset \mathbb{R}^{3}$ is an arbitrary bounded domain [@Gilbarg]. Also, it follows from lemma 3.3 of [KlibanovLiem16]{} that the derivative $\partial _{k}u\left( \mathbf{x},k\right) $ exists for all $\mathbf{x}\in \mathbb{R}^{3},k>0$ and satisfies the same smoothness condition as the function $u\left( \mathbf{x},k\right) .$ **Coefficient Inverse Scattering Problem (CISP).** *Let the domain $\Omega $ and the backscattering part $\Gamma \subset \partial \Omega $ of its boundary be as in (\[eq:2.1\]). Let the wavenumber $k\in \lbrack \underline{k},\overline{k}],$ where* $[\underline{k},\overline{k}]$*$\subset \left( 0,\infty \right) $ is an interval of wavenumbers. Determine the function $c(\mathbf{x}),\,\mathbf{x}\in \Omega $, assuming that the following function $g(\mathbf{x},k)$ is given*: $$u(\mathbf{x},k)=g_{0}(\mathbf{x},k),\quad \mathbf{x}\in \Gamma ,\,k\in \lbrack \underline{k},\overline{k}]. \label{eq:cisp}$$ In addition to the data (\[eq:cisp\]) we can obtain the boundary conditions for the derivative of the function $u(\mathbf{x},k)$ in the $z-$direction using the data propagation procedure (section 6.2), $$u_{z}(\mathbf{x},k)=g_{1}(\mathbf{x},k),\quad \mathbf{x}\in \Gamma ,\,k\in \lbrack \underline{k},\overline{k}]. \label{eq:gz0}$$ In addition, we complement Dirichlet (\[eq:cisp\]) and Neumann ([eq:gz0]{}) boundary conditions on $\Gamma $ with the heuristic Dirichlet boundary condition at the rest of the boundary $\partial \Omega $ as:$$u(\mathbf{x},k)=e^{ikz},\mathbf{x}\in \partial \Omega \diagdown \Gamma ,\,k\in \lbrack \underline{k},\overline{k}]. \label{2.2}$$This boundary condition coincides with the one for the uniform medium with $c\left( \mathbf{x}\right) \equiv 1.$ To justify (\[2.2\]), we recall that, using the tail functions method, it was demonstrated in sections 7.6 and 7.7 of [@KlibanovLiem16] that (\[2.2\]) does not affect much the reconstruction accuracy as compared with the correct Dirichlet boundary condition. Besides, (\[2.2\]) has always been used in works [KlibanovLiem16, KlibanovKolesov17exp, KlibanovLiem17buried, KlibanovLiem17exp]{} with experimental data, where accurate results were obtained by the tail functions GCM. The uniqueness of the solution of this CISP is an open and long standing problem. In fact, uniqueness of a similar coefficient inverse problem can be currently proven only in the case if the right hand side of equation ([eq:helmholtz]{}) is a function which is not vanishing in $\overline{\Omega }.$ This can be done by the method of [KlibanovBukhgeim81,KlibanovTimonov04,Ksurvey]{}. Hence, we assume below the uniqueness of our CISP. Travel time {#sec:2.3} ----------- The Riemannian metric generated by the function $c(\mathbf{x})$ is: $$d\tau (\mathbf{x})=\sqrt{c(\mathbf{x})}|d\mathbf{x}|,\quad |d\mathbf{x}|=\sqrt{(dx)^{2}+(dy)^{2}+(dz)^{2}}.$$Fix the number $a>0.$ Consider the plane $P_{a}=\{(x,y,-a):x,y\in \mathbb{R}\}.$ We assume that $\Omega \subset \left\{ z>-a\right\} $ and impose everywhere below the following condition on the function $c(\mathbf{x})$: **Regularity Assumption**. 
*For any point* $x\in \mathbb{R}^{3}$* there exists a unique geodesic line* $\Gamma (x,a)$*, with respect to the metric* $d\tau $*, connecting* $x$* with the plane* $P_{a}$* and perpendicular to* $P_{a}$*.* A sufficient condition of the regularity of geodesic lines is [@Rom3]: $$\sum_{i,j=1}^{3}\frac{\partial ^{2}c\left( \mathbf{x}\right) }{\partial x_{i}\partial x_{j}}\xi _{i}\xi _{j}\geq 0,\forall \mathbf{x}\in \overline{\Omega },\forall \mathbf{\xi }\in \mathbb{R}^{3}.$$ We introduce the travel time $\tau (\mathbf{x})$ from the plane $P_{a}$ to the point $\mathbf{x}$ as [@KlibanovRomanov16] $$\tau (\mathbf{x})=\int_{\Gamma (\mathbf{x},a)}\sqrt{c\left( \mathbf{\xi }\right) }d\sigma .$$ The Weighted Tikhonov Functionals {#sec:3} ================================= The asymptotic behavior {#sec:3.1} ----------------------- It was proven in [@KlibanovRomanov16] that the following asymptotic behavior of the function $u(\mathbf{x},k)$ is valid: $$u(\mathbf{x},k)=A(\mathbf{x})e^{ik\tau (\mathbf{x})}\left[ 1+s\left( \mathbf{x},k\right) \right] ,\text{ }\mathbf{x}\in \overline{\Omega },k\rightarrow \infty , \label{eq:uasymptotics}$$where the function $s\left( \mathbf{x},k\right) $ is such that $$s\left( \mathbf{x,}k\right) =O\left( \frac{1}{k}\right) ,\partial _{k}s\left( \mathbf{x,}k\right) =O\left( \frac{1}{k}\right) ,\text{ }\mathbf{x}\in \overline{\Omega },k\rightarrow \infty . \label{3.1}$$Here the function $A(\mathbf{x})>0$ and $\tau (\mathbf{x})$ is the length of the geodesic line in the Riemannian metric generated by the function $c(\mathbf{x})$. Denote $$w(\mathbf{x},k)=\frac{u(\mathbf{x},k)}{u_{i}(\mathbf{x},k)}. \label{eq:w}$$ Using (\[eq:uasymptotics\]), (\[3.1\]) and (\[eq:w\]), we obtain for $\mathbf{x}\in \overline{\Omega },k\rightarrow \infty $ that $$w(\mathbf{x},k)=A(\mathbf{x})e^{ik(\tau (\mathbf{x})-z)}\left[ 1+s\left( \mathbf{x},k\right) \right] . \label{eq:wasymptotics}$$Using (\[eq:uasymptotics\]) and (\[eq:wasymptotics\]), we uniquely define the function $\log w(\mathbf{x},k)$ for $\mathbf{x}\in \Omega $, $k\in \lbrack \underline{k},\overline{k}]$ for sufficiently large values of $\underline{k}$ as $$\log w(\mathbf{x},k)=\ln A(\mathbf{x})+ik(\tau (\mathbf{x})-z)+\mathop{\displaystyle \sum }_{n=1}^{\infty }\frac{\left( -1\right) ^{n-1}}{n}\left( s(\mathbf{x} ,k)\right) ^{n}. \label{3.2}$$Obviously for so defined function $\log w(\mathbf{x},k)$ we have that $\exp \left[ \log w(\mathbf{x},k)\right] $ equals to the right hand side of ([eq:wasymptotics]{}). Thus, we assume below that the number $\underline{k}$ is sufficiently large. The integro-differential equation {#sec:3.2} --------------------------------- It follows from (\[eq:helmholtz\]), (\[eq:uinc\]), (\[eq:coef\]) and (\[eq:w\]) that the function $w\left( \mathbf{x},k\right) $ satisfies the following equation in the domain $\Omega $$$\Delta w+k^{2}\beta w+2ikw_{z}=0. \label{eq:intdiffw}$$ For $\mathbf{x}\in \Omega ,\,k\in \lbrack \underline{k},\overline{k}]$ we define the function $v(\mathbf{x},k),$ $$v(\mathbf{x},k)=\frac{\log w(\mathbf{x},k)}{k^{2}}. \label{eq:v}$$Then$$\Delta v+k^{2}\left( \nabla v\right) ^{2}+2ikv_{z}+\beta (\mathbf{x})=0. \label{eq:intdiffv}$$Let $q(\mathbf{x},k)$ be the derivative of the function $v$ with respect to $k,$ $$q(\mathbf{x},k)=\partial _{k}v(\mathbf{x},k). \label{eq:q}$$Then $$v(\mathbf{x},k)=-\int_{k}^{\overline{k}}q\left( \mathbf{x},\kappa \right) d\kappa +V(\mathbf{x}). 
\label{eq:vq}$$We call $V(\mathbf{x})$ the tail function: $$V(\mathbf{x})=v\left( \mathbf{x},\overline{k}\right) . \label{3.30}$$To eliminate the function $\beta (\mathbf{x})$ from equation (\[eq:intdiffv\]), we differentiate (\[eq:intdiffv\]) with respect to $k,$ $$\Delta q+2k\nabla v\cdot \left( k\nabla q+\nabla v\right) +2i\left( kq_{z}+v_{z}\right) =0. \label{eq:intdiffq}$$Substituting (\[eq:vq\]) into (\[eq:intdiffq\]) leads to the following integro-differential equation: $$\begin{gathered} L(q)=\Delta q+2k\left( \nabla V-\int_{k}^{\overline{k}}\nabla q(\mathbf{x},\kappa )d\kappa \right) \cdot \left( k\nabla q+\nabla V-\int_{k}^{\overline{k}}\nabla q\left( \mathbf{x},\kappa \right) d\kappa \right) \\ +2i\left( kq_{z}+V_{z}-\int_{k}^{\overline{k}}q_{z}\left( \mathbf{x},\kappa \right) d\kappa \right) =0. \end{gathered} \label{eq:intdiff}$$ Finally, we complement this equation with the overdetermined boundary conditions: $$\begin{gathered} q(\mathbf{x},k)=\phi _{0}(\mathbf{x},k),\quad q_{z}(\mathbf{x},k) =\phi _{1}(\mathbf{x},k),\quad \mathbf{x}\in \Gamma ,\,k\in \lbrack \underline{k},\overline{k}], \\ q(\mathbf{x},k) =0,\quad \mathbf{x}\in \partial \Omega \setminus \Gamma ,\,k\in \lbrack \underline{k},\overline{k}], \end{gathered} \label{eq:intdiffbcs}$$where the functions $\phi _{0}$ and $\phi _{1}$ are calculated from the functions $g_{0}$ and $g_{1}$ in (\[eq:cisp\]), (\[eq:gz0\]). The third boundary condition (\[eq:intdiffbcs\]) follows from (\[eq:uinc\]), (\[2.2\]), (\[eq:w\]), (\[eq:v\]) and (\[eq:q\]). Note that in (\[eq:intdiff\]) both functions $q(\mathbf{x},k)$ and $V(\mathbf{x})$ are unknown. Hence, we approximate the function $V(\mathbf{x})$ first. Next, we solve the problem (\[eq:intdiff\]), (\[eq:intdiffbcs\]) for the function $q(\mathbf{x},k)$. **Remark 3.1.** Suppose that certain approximations for the functions $q(\mathbf{x},k)$ and $V(\mathbf{x})$ are found. Then an approximation for the unknown coefficient $c\left( \mathbf{x}\right) $ can be found via backwards calculations: first, approximate the function $v\left( \mathbf{x},k\right) $ via (\[eq:vq\]) and then approximate the function $\beta \left( \mathbf{x}\right) $ using equation (\[eq:intdiffv\]) for a certain value of $k\in \left[ \underline{k},\overline{k}\right] $. In our computations we use $k=\underline{k}$ for that value of $k$. Next, one should use (\[eq:coef\]). Therefore, we focus below on approximating the functions $q(\mathbf{x},k)$ and $V(\mathbf{x}).$

Approximation of the tail function {#sec:3.3}
----------------------------------

The method of this paper for approximating the tail function is different from the method used before in [@KlibanovKolesov17]. Also, unlike the tail functions method, we do not update tails here. It follows from (\[3.2\]) and (\[3.30\]) that there exists a function $p(\mathbf{x})$ such that $$v\left( \mathbf{x},k\right) =\frac{p\left( \mathbf{x}\right) }{k}+O\left( \frac{1}{k^{2}}\right) ,\quad q\left( \mathbf{x},k\right) =-\frac{p\left( \mathbf{x}\right) }{k^{2}}+O\left( \frac{1}{k^{3}}\right) ,\quad k\rightarrow \infty ,\,\mathbf{x}\in \Omega . \label{eq:vqasympt}$$Since the number $\overline{k}$ is sufficiently large, we drop the terms $O\left( 1/\overline{k}^{2}\right) $ and $O\left( 1/\overline{k}^{3}\right) $ in (\[eq:vqasympt\]). Next, we approximately set $$v\left( \mathbf{x},k\right) =\frac{p\left( \mathbf{x}\right) }{k},\quad q\left( \mathbf{x},k\right) =-\frac{p\left( \mathbf{x}\right) }{k^{2}},\quad k\geq \overline{k},\,\mathbf{x}\in \Omega .
\label{eq:vqtail}$$Substituting (\[eq:vqtail\]) in (\[eq:intdiff\]) and letting $k=\overline{k}$, we obtain $$\Delta V(\mathbf{x})=0,\quad \mathbf{x}\in \Omega . \label{eq:tail}$$This equation is supplemented by the following boundary conditions: $$V(\mathbf{x})=\psi _{0}(\mathbf{x}),\quad V_{z}(\mathbf{x})=\psi _{1}(\mathbf{x}),\quad \mathbf{x}\in \Gamma ,\quad V(\mathbf{x})=0,\quad \mathbf{x}\in \partial \Omega \setminus \Gamma , \label{eq:tailbcs}$$where the functions $\psi _{0}$ and $\psi _{1}$ can be computed using (\[eq:cisp\]) and (\[eq:gz0\]). Boundary conditions (\[eq:tailbcs\]) are overdetermined. Due to the approximate nature of (\[eq:vqtail\]), we have observed that the obvious approach of finding the function $V(\mathbf{x})$, i.e. dropping the second boundary condition in (\[eq:tailbcs\]) and solving the resulting Dirichlet boundary value problem for the Laplace equation (\[eq:tail\]), does not provide satisfactory results. The same observation was made in [@KlibanovKolesov17] for the 1D case. Thus, we use a different approach to approximate the function $V\left( \mathbf{x}\right) $. Let the number $s>0$ be such that $s>\xi .$ Let $\lambda ,\nu >0$ be two parameters which we will choose later. We introduce the CWF as$$\varphi _{\lambda }\left( z\right) =\exp \left[ 2\lambda \left( z+s\right) ^{-\nu }\right] , \label{3.3}$$see Theorem \[thm:4.1\] in section 4.1. Below we fix a number $\nu $ and allow $\lambda $ to change. We find an approximate solution of the problem (\[eq:tail\]), (\[eq:tailbcs\]) by minimizing the following cost functional with the CWF in it:$$I_{\mu ,\alpha }\left( V\right) =\exp \left( -2\mu \left( s+d\right) ^{-\nu }\right) \int_{\Omega }\left\vert \Delta V\right\vert ^{2}\varphi _{\mu }\left( z\right) d\mathbf{x}+\alpha \Vert V\Vert _{H^{3}(\Omega )}^{2}. \label{eq:Jtail}$$We minimize the functional $I_{\mu ,\alpha }\left( V\right) $ on the set $S$, $$V\in S=\{V\in H^{3}(\Omega ):\,V(\mathbf{x})=\psi _{0}\left( \mathbf{x}\right) ,\,V_{z}(\mathbf{x})=\psi _{1}\left( \mathbf{x}\right) ,\mathbf{x}\in \Gamma ,V(\mathbf{x})=0,\,\mathbf{x}\in \partial \Omega \setminus \Gamma \}. \label{eq:setW}$$ In (\[eq:Jtail\]), $\alpha >0$ is the regularization parameter. The multiplier $\exp \left( -2\mu \left( s+d\right) ^{-\nu }\right) $ is introduced to balance the two terms in the right hand side of (\[eq:Jtail\]). **Remark 3.2**. Since the Laplace operator is linear, one can also find an approximate solution of the problem (\[eq:tail\]), (\[eq:tailbcs\]) by the regular quasi-reversibility method via setting $\mu =0$ in (\[eq:Jtail\]) [@Klibanov2015]. However, we have noticed that a better computational accuracy is provided in the presence of the CWF. This observation coincides with the one of [@BakushinskiiKlibanov17], where it was noticed numerically that the presence of the CWF in an analog of the functional (\[eq:Jtail\]) for the 1D heat equation provides a better solution accuracy for the quasi-reversibility method. We now follow the classical Tikhonov regularization concept [@T]. By this concept, we should assume that there exists an exact solution $V_{\ast }\left( \mathbf{x}\right) $ of the problem (\[eq:tail\]), (\[eq:tailbcs\]) with the noiseless data $\psi _{0\ast }(\mathbf{x}),\psi _{1\ast }(\mathbf{x}).$ Below, the subscript "$\ast $" refers only to the exact solution. In fact, however, the data $\psi _{0}(\mathbf{x}),\psi _{1}(\mathbf{x})$ contain noise.
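Before quantifying this noise, we note that the functional (\[eq:Jtail\]) is quadratic in $V$, so that, after discretization, its minimization on the set (\[eq:setW\]) reduces to a single linear least squares solve. The following is a minimal one-dimensional sketch of this quasi-reversibility idea; the grid, the boundary data values and the replacement of the $H^{3}(\Omega )$ penalty by a plain $\ell ^{2}$ penalty are illustrative assumptions and do not reproduce the 3D setting of the paper.

```python
import numpy as np

# 1D sketch: minimize  exp(-2*mu*(s+d)**(-nu)) * int |V''(z)|^2 * phi_mu(z) dz
#                      + alpha * ||V||^2
# subject to V(-xi) = 0, V(d) = psi0, V'(d) = psi1  (overdetermined data).
xi, d, s, nu, mu, alpha = 0.5, 1.0, 0.6, 2.0, 1.5, 1e-4
psi0, psi1 = 0.3, -0.1                                # hypothetical data on Gamma
N = 201
z = np.linspace(-xi, d, N)
h = z[1] - z[0]
phi = np.exp(2.0 * mu * (z + s) ** (-nu))             # CWF (3.3)
scale = np.exp(-2.0 * mu * (s + d) ** (-nu))          # balancing multiplier

# second-difference operator (interior rows), acting on the full vector V
D2 = np.zeros((N - 2, N))
for i in range(1, N - 1):
    D2[i - 1, i - 1:i + 2] = np.array([1.0, -2.0, 1.0]) / h ** 2

# boundary conditions: V[0] = 0, V[-1] = psi0, one-sided V'(d): V[-2] = psi0 - h*psi1
fixed = {0: 0.0, N - 1: psi0, N - 2: psi0 - h * psi1}
free = [i for i in range(N) if i not in fixed]

w = np.sqrt(scale * phi[1:-1] * h)[:, None]           # CWF and quadrature weights
A_free = w * D2[:, free]
b = -(w * D2[:, list(fixed)]) @ np.array([fixed[i] for i in fixed])

# quadratic functional => one linear least squares solve for the free values
A_reg = np.vstack([A_free, np.sqrt(alpha) * np.eye(len(free))])
b_reg = np.concatenate([b, np.zeros(len(free))])
V_free, *_ = np.linalg.lstsq(A_reg, b_reg, rcond=None)

V = np.zeros(N)
V[free] = V_free
for i, val in fixed.items():
    V[i] = val
print("approximate tail V at a few grid points:", np.round(V[::50], 4))
```

In the actual computations, the analogous quadratic minimization is carried out for the finite difference functional (\[6.50\]) of section 6.1.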
Let $\delta \in \left( 0,1\right) $ be the level of noise in the data $\psi _{0}(\mathbf{x}),\psi _{1}(\mathbf{x})$. Again, following the same concept, we should assume that the number $\delta \in \left( 0,1\right) $ is sufficiently small. Assume that there exist functions $Q\left( \mathbf{x}\right) ,Q_{\ast }\left( \mathbf{x}\right) \in H^{3}\left( \Omega \right) $ such that (see (\[eq:setW\])) $$Q\left( \mathbf{x}\right) =\psi _{0}(\mathbf{x}),\quad \partial _{z}Q(\mathbf{x})=\psi _{1}(\mathbf{x}),\quad \mathbf{x}\in \Gamma ;\quad Q(\mathbf{x})=0,\quad \mathbf{x}\in \partial \Omega \setminus \Gamma , \label{3.4}$$$$Q_{\ast }\left( \mathbf{x}\right) =\psi _{0\ast }(\mathbf{x}),\quad \partial _{z}Q_{\ast }(\mathbf{x})=\psi _{1\ast }(\mathbf{x}),\quad \mathbf{x}\in \Gamma ;\quad Q_{\ast }(\mathbf{x})=0,\quad \mathbf{x}\in \partial \Omega \setminus \Gamma , \label{3.5}$$$$\left\Vert Q-Q_{\ast }\right\Vert _{H^{3}\left( \Omega \right) }<\delta . \label{3.6}$$Introduce the number $t_{\nu },$$$t_{\nu }=\left( s-\xi \right) ^{-\nu }-\left( s+d\right) ^{-\nu }>0. \label{3.70}$$ Let $$W\left( \mathbf{x}\right) =V\left( \mathbf{x}\right) -Q\left( \mathbf{x}\right) . \label{1}$$ Then by (\[eq:Jtail\]) and (\[eq:setW\]) the functional $I_{\mu ,\alpha } $ becomes $$\widetilde{I}_{\mu ,\alpha }\left( W\right) =\exp \left( -2\mu \left( s+d\right) ^{-\nu }\right) \int_{\Omega }\left\vert \Delta W+\Delta Q\right\vert ^{2}\varphi _{\mu }\left( z\right) d\mathbf{x}+\alpha \Vert W+Q\Vert _{H^{3}(\Omega )}^{2},\text{ }W\in H_{0}^{3}\left( \Omega \right) . \label{5.1}$$Theorem \[thm:4.2\] of section 4 claims that for each $\alpha >0$ there exists a unique minimizer $W_{\mu ,\nu ,\alpha }\in H^{3}\left( \Omega \right) $ of the functional (\[5.1\]), which is called the "regularized solution". Using (\[1\]), denote $V_{\mu ,\nu ,\alpha }=W_{\mu ,\nu ,\alpha }+Q.$ It is stated in Theorem \[thm:4.2\] that one can choose a sufficiently large number $\nu _{0}=\nu _{0}\left( \Omega ,s\right) $ depending only on $\Omega $ and $s$ such that, for any fixed value of the parameter $\nu \geq \nu _{0}$ and under the choices $$\alpha =\alpha \left( \delta \right) =\delta ,\quad \mu =\ln \left( \delta ^{-1/\left( 2t_{\nu }\right) }\right) , \label{3.71}$$the regularized solutions converge to the exact solution as $\delta \rightarrow 0.$ More precisely, there exists a constant $C=C\left( \Omega \right) >0$ such that $$\left\Vert V_{\mu \left( \delta \right) ,\nu ,\alpha \left( \delta \right) }-V_{\ast }\right\Vert _{H^{2}\left( \Omega \right) }\leq C\left( 1+\Vert V_{\ast }\Vert _{H^{3}(\Omega )}\right) \sqrt{\delta }\sqrt{\ln \left( \delta ^{-1/\left( 2t_{\nu }\right) }\right) }. \label{3.7}$$Here and below $C=C\left( \Omega \right) >0$ denotes different positive constants depending only on the domain $\Omega .$

Associated spaces {#sec:3.4}
-----------------

Below, for any complex number $z\in \mathbb{C}$ we denote by $\overline{z}$ its complex conjugate. It is convenient for us to consider any complex valued function $U=\mathop{\rm Re}U+i\mathop{\rm Im}U=U_{1}+iU_{2}$ as the 2D vector function $U=\left( U_{1},U_{2}\right) .$ Thus, any Banach space we use below for complex valued functions is actually a space of such 2D real valued vector functions. Norms in these spaces of 2D vector functions are defined in the standard way, as are scalar products in the case of Hilbert spaces. For brevity, we do not differentiate below between complex valued functions and the corresponding 2D vector functions.
However, it is always clear from the context what is what. We define the Hilbert space $H_{m}$ of complex valued functions $f\left( \mathbf{x},k\right) $ as$$H_{m}=\left\{ f\left( \mathbf{x},k\right) :\left\Vert f\right\Vert _{H_{m}}=\left[ \int_{\underline{k}}^{\overline{k}}\left\Vert f\left( \mathbf{x},k\right) \right\Vert _{H^{m}\left( \Omega \right) }^{2}dk\right] ^{1/2}<\infty \right\} ,\text{ }m=1,2,3,4. \label{3.700}$$Denote by $\left[ \cdot ,\cdot \right] $ the scalar product in the space $H_{3}.$ The subspace $H_{m}^{0}$ of the space $H_{m}$ is defined as$$H_{m}^{0}=\left\{ f\in H_{m}:f\left( \mathbf{x},k\right) \mid _{\partial \Omega }=0,f_{z}\left( \mathbf{x},k\right) \mid _{\Gamma }=0,\forall k\in \left[ \underline{k},\overline{k}\right] \right\} .$$Also, in the case of functions independent of $k$,$$H_{0}^{m}\left( \Omega \right) =\left\{ f\left( \mathbf{x}\right) \in H^{m}\left( \Omega \right) :f\left( \mathbf{x}\right) \mid _{\partial \Omega }=0,f_{z}\left( \mathbf{x}\right) \mid _{\Gamma }=0\right\} .$$Similarly, for $r=0,1,2$ we define$$C_{r}=\left\{ f\left( \mathbf{x},k\right) :\left\Vert f\right\Vert _{C_{r}}=\max_{k\in \left[ \underline{k},\overline{k}\right] }\left\Vert f\left( \mathbf{x},k\right) \right\Vert _{C^{r}\left( \overline{\Omega }\right) }<\infty \right\} ,$$where $C^{0}\left( \overline{\Omega }\right) =C\left( \overline{\Omega }\right) .$ The embedding theorem implies that $$H_{3+r}\subset C_{1+r},\quad \left\Vert f\right\Vert _{C_{1+r}}\leq C\left\Vert f\right\Vert _{H_{3+r}},\text{ }\forall f\in H_{3+r},r=0,1\text{,} \label{3.73}$$$$\left\Vert \widetilde{f}\right\Vert _{C^{1}\left( \overline{\Omega }\right) }\leq C\left\Vert \widetilde{f}\right\Vert _{H^{3}\left( \Omega \right) },\text{ }\forall \widetilde{f}\in H^{3}\left( \Omega \right) . \label{3.74}$$

The weighted Tikhonov-like functional {#sec:3.5}
-------------------------------------

Suppose that there exists a function $F\left( \mathbf{x},k\right) \in H_{4}$ such that (see (\[eq:intdiffbcs\])): $$F\left( \mathbf{x},k\right) \mid _{\Gamma }=\phi _{0}\left( \mathbf{x},k\right) ,\text{ }F_{z}\left( \mathbf{x},k\right) \mid _{\Gamma }=\phi _{1}\left( \mathbf{x},k\right) ,\text{ }F\left( \mathbf{x},k\right) \mid _{\partial \Omega \diagdown \Gamma }=0. \label{3.8}$$Also, assume that there exists an exact solution $c_{\ast }\left( \mathbf{x}\right) $ of our CISP satisfying the above conditions imposed on the coefficient $c\left( \mathbf{x}\right) $ and generating the noiseless boundary data $\phi _{0\ast }$ and $\phi _{1\ast }$ in (\[eq:intdiffbcs\]). Let the function $F_{\ast }\left( \mathbf{x},k\right) \in H_{4}$ satisfy the boundary conditions (\[3.8\]) in which the functions $\phi _{0}$ and $\phi _{1}$ are replaced with the functions $\phi _{0\ast }$ and $\phi _{1\ast }$ respectively. We assume that$$\left\Vert F-F_{\ast }\right\Vert _{H_{4}}<\delta . \label{3.80}$$ Let $q_{\ast }\in H_{3}$ be the function $q$ generated by the exact coefficient $c_{\ast }\left( \mathbf{x}\right) .$ Introduce the functions $p,p_{\ast }\in H_{3}^{0}$ as $$p\left( \mathbf{x},k\right) =q\left( \mathbf{x},k\right) -F\left( \mathbf{x},k\right) ,\text{ }p_{\ast }\left( \mathbf{x},k\right) =q_{\ast }\left( \mathbf{x},k\right) -F_{\ast }\left( \mathbf{x},k\right) . \label{3.9}$$It follows from the discussion in section 2.2 about the smoothness, as well as from (\[eq:v\]), (\[eq:q\]) and (\[3.9\]), that the functions $p,p_{\ast }\in H_{3}^{0}.$ Let $R>0$ be an arbitrary number.
Consider the ball $B\left( R\right) \subset H_{3}^{0}$ of radius $R$,$$B\left( R\right) =\left\{ f\in H_{3}^{0}:\left\Vert f\right\Vert _{H_{3}}<R\right\} . \label{3.10}$$ Based on the integro-differential equation (\[eq:intdiff\]), the boundary conditions (\[eq:intdiffbcs\]) for it, (\[3.8\]) and (\[3.9\]), we construct our weighted Tikhonov-like functional with the CWF (\[3.3\]) in it as $$J_{\lambda ,\rho }\left( p\right) =\exp \left( -2\lambda \left( s+d\right) ^{-\nu }\right) \int_{\underline{k}}^{\overline{k}}\int_{\Omega }\left\vert L\left( p+F\right) \left( \mathbf{x},\kappa \right) \right\vert ^{2}\varphi _{\lambda }^{2}\left( z\right) d\mathbf{x}d\kappa +\rho \left\Vert p\right\Vert _{H_{3}}^{2}, \label{eq:J}$$where $\rho >0$ is the regularization parameter. Similarly to (\[eq:Jtail\]), the multiplier $\exp \left( -2\lambda \left( s+d\right) ^{-\nu }\right) $ is introduced to balance the two terms in the right hand side of (\[eq:J\]). The minimizer $V_{\mu \left( \delta \right) ,\nu ,\alpha \left( \delta \right) }$ of the functional (\[eq:Jtail\]) is chosen in $J_{\lambda ,\rho }\left( p\right) $ as the tail function. We consider the following minimization problem: **Minimization Problem**. *Minimize the functional $J_{\lambda ,\rho }(p)$ on the set $\overline{B\left( R\right) }$.*

Theorems {#sec:4}
========

In this section we formulate theorems about the numerical procedures considered in section 3. We start with the Carleman estimate with the CWF (\[3.3\]). \[thm:4.1\] Let $\Omega \subset \mathbb{R}^{3}$ be the above domain (\[eq:2.1\]). Temporarily denote $\mathbf{x}=\left( x,y,z\right) =\left( x_{1},x_{2},x_{3}\right)$. There exist numbers $C=C\left( \Omega \right) >0$, $\nu _{0}=\nu _{0}\left( \Omega ,s,d\right) \geq 1$ and $\lambda _{0}=\lambda _{0}\left( \Omega ,s,d\right) \geq 1$, depending only on the listed parameters, such that for any real valued function $u\in H_{0}^{2}\left( \Omega \right) $ the following Carleman estimate holds with the CWF $\varphi _{\lambda }\left( z\right) $ in (\[3.3\]), for any fixed number $\nu \geq \nu _{0}$ and for all $\lambda \geq \lambda _{0}$: $$\int_{\Omega }\left( \Delta u\right) ^{2}\varphi _{\lambda }\left( z\right) d\mathbf{x}\geq \frac{C}{\lambda }\sum_{i,j=1}^{3}\int_{\Omega }\left( u_{x_{i}x_{j}}\right) ^{2}\varphi _{\lambda }\left( z\right) d\mathbf{x}+C\lambda \int_{\Omega }\left( \nabla u\right) ^{2}\varphi _{\lambda }\left( z\right) d\mathbf{x}+C\lambda ^{3}\int_{\Omega }u^{2}\varphi _{\lambda }\left( z\right) d\mathbf{x}. \label{4.1}$$ **Remark 4.1**. A close analog of Theorem \[thm:4.1\] is formulated as lemma 4.1 of [@KlibanovThanh15] and is proven in the proof of lemma 6.5.1 of [@BeilinaKlibanov12]. Hence, we omit the proof of Theorem \[thm:4.1\]. The next theorem is about the problem (\[eq:Jtail\]), (\[eq:setW\]). \[thm:4.2\] Assume that there exists a function $Q\in H^{3}\left( \Omega \right) $ satisfying conditions (\[3.4\]), (\[3.6\]). Then for each set of parameters $\mu ,\nu ,\alpha >0$ there exists a unique minimizer $W_{\mu ,\nu ,\alpha }\in H^{3}\left( \Omega\right) $ of the functional (\[5.1\]). Let $V_{\mu ,\nu ,\alpha}=W_{\mu ,\nu ,\alpha }+Q$ (see (\[1\])). Suppose that there exists an exact solution $V_{\ast }\in H^{3}\left( \Omega \right) $ of the problem (\[eq:tail\]), (\[eq:tailbcs\]) with the noiseless boundary data $\psi _{0\ast }(\mathbf{x}),\psi _{1\ast }(\mathbf{x})$.
Also, assume that there exists a function $Q_{\ast }\in H^{3}\left( \Omega \right) $ satisfying conditions (\[3.5\]) and such that $$\left\Vert Q_{\ast }\right\Vert _{H^{3}\left( \Omega \right) }\leq C\left\Vert V_{\ast }\right\Vert _{H^{3}\left( \Omega \right) }. \label{4.200}$$Let inequality (\[3.6\]) hold, where $\delta \in \left( 0,1\right) $ is the level of noise in the data. Let $\nu _{0}\left( \Omega,s\right) $ and $\lambda _{0}\left( \Omega ,s\right) $ be the numbers of Theorem \[thm:4.1\]. Fix a number $\nu \geq \nu _{0}\left( \Omega,s\right) $ and let the parameter $\nu $ be independent of $\delta$. Choose a number $\delta _{0}\in \left( 0,e^{-2\lambda_{0}t_{\nu }}\right)$, where $\lambda _{0}$ is defined in Theorem \[thm:4.1\] and the number $t_{\nu }$ is defined in (\[3.70\]). For any $\delta \in \left( 0,\delta _{0}\right) $ let the choice (\[3.71\]) hold. Then the convergence estimate (\[3.7\]) of the functions $V_{\mu \left( \delta \right) ,\nu ,\alpha \left( \delta \right) }$ to the exact solution $V_{\ast }$ holds as $\delta \rightarrow 0$. In addition, the function $V_{\mu \left( \delta \right) ,\nu ,\alpha \left( \delta \right) }\in C^{1}\left( \overline{\Omega }\right) $ and $$C\left\Vert V_{\mu \left( \delta \right) ,\nu ,\alpha \left( \delta \right) }\right\Vert _{C^{1}\left( \overline{\Omega }\right) }\leq \left\Vert V_{\mu \left( \delta \right) ,\nu ,\alpha \left( \delta \right) }\right\Vert _{H^{3}\left( \Omega \right) }\leq C\left( 1+\left\Vert V_{\ast }\right\Vert _{H^{3}\left( \Omega \right) }\right) . \label{4.100}$$ Theorem \[thm:4.3\] is the central analytical result of this paper. \[thm:4.3\] Assume that the conditions of Theorem \[thm:4.2\] hold. Set in (\[eq:intdiff\]) $V=V_{\mu \left(\delta \right) ,\nu ,\alpha \left( \delta \right) },$ where the function $V_{\mu \left( \delta \right) ,\nu ,\alpha \left( \delta \right) }$ is the one of Theorem \[thm:4.2\]. The functional $J_{\lambda ,\rho }\left( p\right) $ has the Fréchet derivative $J_{\lambda ,\rho }^{\prime }\left( p\right) \in H_{3}$ at any point $p\in H_{3}^{0}.$ Assume that there exist functions $F,F_{\ast }\left( \mathbf{x},k\right) \in H_{4}$ satisfying conditions (\[3.8\]), (\[3.80\]), where $\delta \in \left( 0,1\right) .$ Let $\lambda _{0}=\lambda _{0}\left( \Omega \right) $ be the number defined in Theorem \[thm:4.1\].
Then there exists a number $\lambda _{1}=\lambda _{1}\left( \Omega ,R,\left\Vert F_{\ast }\right\Vert _{H_{4}},\left\Vert V_{\ast }\right\Vert _{H^{3}\left( \Omega \right) },\underline{k},\overline{k}\right) \geq \lambda _{0}\left( \Omega \right) $ and a number $C_{1}=C_{1}\left( \Omega ,R,\left\Vert F_{\ast }\right\Vert _{H_{4}},\left\Vert V_{\ast }\right\Vert _{H^{3}\left( \Omega \right) },\underline{k},\overline{k}\right) >0,$ both depending only on the listed parameters, such that for any $\lambda \geq \lambda _{1}$ the functional $J_{\lambda ,\rho }\left( p\right) $ is strictly convex on $\overline{B\left( R\right) }.$ In other words, the following estimates are valid for all $p_{1},p_{2}\in \overline{B\left( R\right) }:$ $$J_{\lambda ,\rho }\left( p_{2}\right) -J_{\lambda ,\rho }\left( p_{1}\right) -J_{\lambda ,\rho }^{\prime }\left( p_{1}\right) \left( p_{2}-p_{1}\right) \geq \frac{C_{1}}{\lambda }\left\Vert p_{2}-p_{1}\right\Vert _{H_{2}}^{2}+\rho \left\Vert p_{2}-p_{1}\right\Vert _{H_{3}}^{2}, \label{4.2}$$$$J_{\lambda ,\rho }\left( p_{2}\right) -J_{\lambda ,\rho }\left( p_{1}\right) -J_{\lambda ,\rho }^{\prime }\left( p_{1}\right) \left( p_{2}-p_{1}\right) \geq C_{1}\left\Vert p_{2}-p_{1}\right\Vert _{H_{1}}^{2}+\rho \left\Vert p_{2}-p_{1}\right\Vert _{H_{3}}^{2}. \label{4.3}$$ **Remark 4.2**. The first term in the right hand side of (\[4.3\]) does not decay with the increase of $\lambda ,$ unlike (\[4.2\]). Hence, the "convexity property" of the functional $J_{\lambda ,\rho }$ is, in this sense, stronger in terms of the $H_{1}-$norm in (\[4.3\]) than in terms of the $H_{2}-$norm in (\[4.2\]). On the other hand, the norm of that term is weaker than the one in (\[4.2\]). Also, to establish convergence of the reconstructed coefficients $c_{n}\left( \mathbf{x}\right) ,$ we need the $H_{2}-$norm: see (\[4.11\]) and Remark 3.1. \[thm:4.4\] Suppose that the conditions of Theorems \[thm:4.2\] and \[thm:4.3\] regarding the tail function $V=V_{\mu \left( \delta \right) ,\nu ,\alpha \left( \delta \right) }$ and the function $F$ hold. Then the Fréchet derivative $J_{\lambda ,\rho }^{\prime }$ of the functional $J_{\lambda ,\rho }$ satisfies the Lipschitz continuity condition in any ball $B\left( R^{\prime }\right) $ as in (\[3.10\]) with any $R^{\prime }>0.$ In other words, the following inequality holds with a constant $M=M\left( \Omega ,R^{\prime },\left\Vert F_{\ast }\right\Vert _{H_{4}},\left\Vert V_{\ast }\right\Vert _{H^{3}\left( \Omega \right) },\lambda ,\nu ,\rho ,\underline{k},\overline{k}\right) >0$ depending only on the listed parameters: $$\left\Vert J_{\lambda ,\rho }^{\prime }\left( p_{1}\right) -J_{\lambda ,\rho }^{\prime }\left( p_{2}\right) \right\Vert _{H_{3}}\leq M\left\Vert p_{1}-p_{2}\right\Vert _{H_{3}},\text{ }\forall p_{1},p_{2}\in B\left( R^{\prime }\right) .$$ Let $P_{\overline{B}}:H_{3}^{0}\rightarrow \overline{B\left( R\right) }$ be the projection operator of the Hilbert space $H_{3}^{0}$ onto $\overline{B\left( R\right) }.$ Let $p_{0}\in B\left( R\right) $ be an arbitrary point of the ball $B\left( R\right) $. Consider the following sequence: $$p_{n}=P_{\overline{B}}\left( p_{n-1}-\omega J_{\lambda ,\rho }^{\prime }\left( p_{n-1}\right) \right) ,\text{ }n=1,2,..., \label{4.9}$$where $\omega \in \left( 0,1\right) $ is a certain number. \[thm:4.5\] Assume that the conditions of Theorems \[thm:4.2\] and \[thm:4.3\] hold. Let $\lambda \geq \lambda _{1},$ where $\lambda _{1}$ is the number of Theorem \[thm:4.3\].
Then there exists unique minimizer $p_{\min ,\lambda }\in \overline{B\left( R\right) }$ of the functional $J_{\lambda ,\rho }\left( p\right) $ on the set $\overline{B\left( R\right) }$ and $$J_{\lambda ,\rho }^{\prime }\left( p_{\min ,\lambda }\right) \left( y-p_{\min ,\lambda }\right) \geq 0,\text{ \ }\forall y\in H_{3}^{0}. \label{4.6}$$Also, there exists a sufficiently small number $\omega _{0}=\omega _{0}\left( \Omega ,R,\left\Vert F\right\Vert _{H_{4}},\left\Vert V_{\ast }\right\Vert _{H^{3}\left( \Omega \right) },\underline{k},\overline{k},\lambda ,\delta \right) \in \left( 0,1\right) $ depending only on listed parameters such that for any $\omega \in \left( 0,\omega _{0}\right) $ the sequence (\[4.9\]) converges to the minimizer $p_{\min ,\lambda }\in \overline{B\left( R\right) }$ of the functional $J_{\lambda ,\rho }\left( p\right) $ on the set $\overline{B\left( R\right) }$, $$\left\Vert p_{\min ,\lambda }-p_{n}\right\Vert _{H_{3}}\leq r^{n}\left\Vert p_{\min ,\lambda }-p_{0}\right\Vert _{H_{3}},\text{ }n=1,2,.., \label{4.90}$$ where the number $r=r\left( \omega ,\Omega ,R,\left\Vert F\right\Vert _{H_{4}},\left\Vert V_{\ast }\right\Vert _{H^{3}\left( \Omega \right) },\underline{k},\overline{k},\lambda ,\delta \right) \in \left( 0,1\right) $ depends only on listed parameters. By (\[4.90\]) we estimate the convergence rate of the sequence (\[4.9\]) to the minimizer. The next question is about the convergence of this sequence to the exact solution $p_{\ast }$ assuming that it exists. \[thm:4.6\] Assume that conditions of Theorems \[thm:4.2\] and \[thm:4.3\] hold. Let $\lambda _{1}$ be the number of Theorem \[thm:4.3\]. Choose a number $\delta _{1}\in \left( 0,e^{-2\lambda _{1}t_{\nu }}\right) .$ For $\delta \in \left( 0,\delta _{1}\right) ,$ set $\rho =\rho \left( \delta \right) =\sqrt{\delta },\lambda =\lambda \left( \delta \right) =\ln \left( \delta ^{-1/\left( 2t_{\nu }\right) }\right) .$ Furthermore, assume that the exact solution $p_{\ast }$ exists and $p_{\ast }\in B\left( R\right) $. Then there exists a number $C_{2}=C_{2}\left( \Omega ,R,\left\Vert F\right\Vert _{H_{4}},\left\Vert V_{\ast }\right\Vert _{H^{3}\left( \Omega \right) },\underline{k},\overline{k}\right) >0$ depending only on listed parameters such that $$\left\Vert p_{\ast }-p_{\min ,\lambda \left( \delta \right) }\right\Vert _{H_{2}}\leq C_{2}\delta ^{1/4}\left[ \ln \left( \delta ^{-1/\left( 2t_{\nu }\right) }\right) \right] ^{3/4}, \label{4.7}$$$$\left\Vert c_{\ast }-c_{\min ,\lambda \left( \delta \right) }\right\Vert _{L_{2}\left( \Omega \right) }\leq C_{2}\delta ^{1/4}\left[ \ln \left( \delta ^{-1/\left( 2t_{\nu }\right) }\right) \right] ^{3/4}, \label{4.8}$$where the function $c_{\min ,\lambda \left( \delta \right) }\left( \mathbf{x}\right) $ is reconstructed from the function $p_{\min ,\lambda \left( \delta \right) }$ using (\[3.9\]) and Remark 3.1. 
In addition, the following convergence estimates hold $$\left\Vert p_{\ast }-p_{n}\right\Vert _{H_{2}}\leq C_{2}\delta ^{1/4}\left[ \ln \left( \delta ^{-1/\left( 2t_{\nu }\right) }\right) \right] ^{3/4}+r^{n}\left\Vert p_{\min ,\lambda \left( \delta \right) }-p_{0}\right\Vert _{H_{3}},\text{ }n=1,2,..., \label{4.10}$$$$\left\Vert c_{\ast }-c_{n}\right\Vert _{L_{2}\left( \Omega \right) }\leq C_{2}\delta ^{1/4}\left[ \ln \left( \delta ^{-1/\left( 2t_{\nu }\right) }\right) \right] ^{3/4}+C_{2}r^{n}\left\Vert p_{\min ,\lambda \left( \delta \right) }-p_{0}\right\Vert _{H_{3}},\text{ }n=1,2,..., \label{4.11}$$where $r$ is the number in (\[4.90\]) and the function $c_{n}\left( \mathbf{x}\right) $ is reconstructed from the function $p_{n}\left( \mathbf{x},k\right) $ using (\[3.9\]) and Remark 3.1. **Remark 4.3**. Since $R>0$ is an arbitrary number and $p_{0}$ is an arbitrary point of the ball $B\left( R\right) $, Theorems \[thm:4.5\] and \[thm:4.6\] ensure the global convergence of the gradient projection method in our case; see the second paragraph of section 1. We note that if a functional is non-convex, then the convergence of a gradient-like method for its minimization might be guaranteed only if the starting point of the iterations is located in a sufficiently small neighborhood of its minimizer.

Proofs {#sec:5}
======

In this section we prove the theorems formulated in section 4, except Theorem \[thm:4.1\] (see Remark 4.1).

Proof of Theorem \[thm:4.2\] {#sec:5.1}
----------------------------

By (\[1\]) and (\[5.1\]) the vector function $W_{\min }=\left( W_{1,\min },W_{2,\min }\right) \in H_{0}^{3}\left( \Omega \right) $ is a minimizer of the functional $\widetilde{I}_{\mu ,\alpha }\left( W\right) $ if and only if $$\begin{gathered} \exp \left( -2\mu \left( s+d\right) ^{-\nu }\right) \int_{\Omega }\left( \Delta W_{1,\min }\Delta h_{1}+\Delta W_{2,\min }\Delta h_{2}\right) \varphi _{\mu }\left( z\right) d\mathbf{x}+\alpha \left( \left( W_{\min },h\right) \right) = \\ - \exp \left( -2\mu \left( s+d\right) ^{-\nu }\right) \int_{\Omega }\left( \Delta Q_{1}\Delta h_{1}+\Delta Q_{2}\Delta h_{2}\right) \varphi _{\mu }\left( z\right) d\mathbf{x}-\alpha \left( \left( Q,h\right) \right) ,\,\forall h=\left( h_{1},h_{2}\right) \in H_{0}^{3}\left( \Omega \right) , \end{gathered} \label{5.2}$$where $\left( \left( ,\right) \right) $ is the scalar product in $H^{3}\left( \Omega \right) .$ For any vector function $P=\left( P_{1},P_{2}\right) \in H_{0}^{3}\left( \Omega \right) $ consider the expression in the left hand side of (\[5.2\]) in which the vector $\left( W_{1,\min },W_{2,\min }\right) $ is replaced with $\left( P_{1},P_{2}\right) .$ Then this expression defines a new scalar product $\left\{ P,h\right\} $ in $H^{3}\left( \Omega \right) ,$ and the corresponding norm $\sqrt{\left\{ P,P\right\} }$ is equivalent to the norm in $H^{3}\left( \Omega \right) .$ Next, $$\begin{gathered} \left\vert -\exp \left( -2\mu \left( s+d\right) ^{-\nu }\right) \int_{\Omega }\left( \Delta Q_{1}\Delta h_{1}+\Delta Q_{2}\Delta h_{2}\right) \varphi _{\mu }\left( z\right) d\mathbf{x}-\alpha \left( \left( Q,h\right) \right) \right\vert \leq D\left\Vert Q\right\Vert _{H^{3}\left( \Omega \right) }\left\Vert h\right\Vert _{H^{3}\left( \Omega \right) }\\ \leq D_{1}\sqrt{\left\{ Q,Q\right\} }\sqrt{\left\{ h,h\right\} },\text{ }\forall h=\left( h_{1},h_{2}\right) \in H_{0}^{3}\left( \Omega \right) \end{gathered}$$with certain constants $D,D_{1}$ independent of $Q$ and $h$ but dependent on the parameters $\mu ,\nu .$ Hence, the Riesz theorem
implies that there exists unique vector function $\widehat{Q}=\widehat{Q}\left( Q\right) \in H_{0}^{3}\left( \Omega \right) $ such that $$-\exp \left( -2\mu \left( s+d\right) ^{-\nu }\right) \int_{\Omega }\left( \Delta Q_{1}\Delta h_{1}+\Delta Q_{2}\Delta h_{2}\right) \varphi _{\mu }\left( z\right) d\mathbf{x}-\alpha \left( \left( Q,h\right) \right) =\left\{ \widehat{Q},h\right\} ,\text{ }\forall h=\left( h_{1},h_{2}\right) \in H_{0}^{3}\left( \Omega \right) .$$Hence, by (\[5.2\]) $\left\{ W_{\min },h\right\} =\left\{ \widehat{Q},h\right\} ,\forall h\in H_{0}^{3}\left( \Omega \right) .$ Hence, $W_{\min }=\widehat{Q}.$ Thus, existence and uniqueness of the minimizer of the functional $\widetilde{I}_{\mu ,\alpha }\left( W\right) $ are established, and the same for $I_{\mu ,\alpha }\left( V\right) $. We now prove convergence estimate (\[3.7\]). Let $W_{\ast }=V_{\ast }-Q_{\ast }\in H_{0}^{3}\left( \Omega \right) .$ Denote $\widetilde{W}=W_{\min }-W_{\ast },$ $\widetilde{Q}=Q-Q_{\ast }.$ Since$$\exp \left( -2\mu \left( s+d\right) ^{-\nu }\right) \int_{\Omega }\left( \Delta W_{\ast ,1}\Delta h_{1}+\Delta W_{\ast ,2}\Delta h_{2}\right) \varphi _{\mu }\left( z\right) d\mathbf{x}+\alpha \left[ W_{\ast },h\right] \label{5.3}$$$$=-\exp \left( -2\mu \left( s+d\right) ^{-\nu }\right) \int_{\Omega }\left( \Delta Q_{1}^{\ast }\Delta h_{1}+\Delta Q_{2}^{\ast }\Delta h_{2}\right) \varphi _{\mu }\left( z\right) d\mathbf{x+}\alpha \left[ W_{\ast },h\right] ,\text{ }\forall h\in H_{0}^{3}\left( \Omega \right) ,$$then subtracting (\[5.3\]) from (\[5.2\]) and setting $h=\widetilde{W},$ we obtain$$\begin{gathered} \exp \left( -2\mu \left( s+d\right) ^{-\nu }\right) \int_{\Omega }\left( \Delta \widetilde{W}\right) ^{2}\varphi _{\mu }\left( z\right) d\mathbf{x+}\alpha \left\Vert \widetilde{W}\right\Vert _{H^{3}\left( \Omega \right) }^{2}\\ =-\exp \left( -2\mu \left( s+d\right) ^{-\nu }\right) \int_{\Omega }\left( \Delta \widetilde{Q}_{1}\Delta \widetilde{W}_{1}+\Delta \widetilde{Q}_{2}\Delta \widetilde{W}_{2}\right) \varphi _{\mu }\left( z\right) d\mathbf{x-}\alpha \left( \left( W_{\ast }+Q,\widetilde{W}\right) \right) . \end{gathered}$$Using the Cauchy-Schwarz inequality, taking into account (\[3.6\]) and recalling that $\alpha =\delta $, we obtain$$\begin{gathered} \exp \left( -2\mu \left( s+d\right) ^{-\nu }\right) \int_{\Omega }\left( \Delta \widetilde{W}\right) ^{2}\varphi _{\mu }\left( z\right) d\mathbf{x}+\delta \left\Vert \widetilde{W}\right\Vert _{H^{3}\left( \Omega \right) }^{2} \\ \leq C\delta \left( 1+\left\Vert V_{\ast }\right\Vert _{H^{3}\left( \Omega \right) }^{2}\right) +C\exp \left( 2\mu t_{\nu }\right) \delta ^{2}. \end{gathered} \label{5.4}$$Since $\mu =\ln \left( \delta ^{-1/\left( 2t_{\nu }\right) }\right) $ and $\delta \in \left( 0,1\right) ,$ then $\exp \left( 2\mu t_{\nu }\right) \delta ^{2}=\delta $ and $$C\delta \left( 1+\left\Vert V_{\ast }\right\Vert _{H^{3}\left( \Omega \right) }^{2}\right) +C\exp \left( 2\mu t_{\nu }\right) \delta ^{2}\leq C\delta \left( 1+\left\Vert V_{\ast }\right\Vert _{H^{3}\left( \Omega \right) }^{2}\right) ,$$then (\[5.4\]) implies that$$\left\Vert \widetilde{W}\right\Vert _{H^{3}\left( \Omega \right) }\leq C\left( 1+\left\Vert V_{\ast }\right\Vert _{H^{3}\left( \Omega \right) }\right) , \label{5.5}$$$$\exp \left( -2\mu \left( s+d\right) ^{-\nu }\right) \int_{\Omega }\left( \Delta \widetilde{W}\right) ^{2}\varphi _{\mu }\left( z\right) d\mathbf{x\leq }C\delta \left( 1+\left\Vert V_{\ast }\right\Vert _{H^{3}\left( \Omega \right) }^{2}\right) . 
\label{5.50}$$Since $$\exp \left( -2\mu \left( s+d\right) ^{-\nu }\right) \varphi _{\mu }\left( z\right) \geq \exp \left( -2\mu \left( s+d\right) ^{-\nu }\right) \exp \left( 2\mu \left( s+d\right) ^{-\nu }\right) =1,$$Theorem \[thm:4.1\] implies that$$\begin{gathered} \exp \left( -2\mu \left( s+d\right) ^{-\nu }\right) \int_{\Omega }\left( \Delta \widetilde{W}\right) ^{2}\varphi _{\mu }\left( z\right) d\mathbf{x}\\ \geq \frac{C}{\mu }\left( \sum_{i,j=1}^{3}\int_{\Omega }\widetilde{W}_{x_{i}x_{j}}^{2}d\mathbf{x}+\mu ^{2}\int_{\Omega }\left( \left( \nabla \widetilde{W}\right) ^{2}+\widetilde{W}^{2}\right) d\mathbf{x}\right) \geq \frac{C}{\mu }\left\Vert \widetilde{W}\right\Vert _{H^{2}\left( \Omega \right) }^{2}. \end{gathered} \label{5.51}$$The right estimate in (\[4.100\]) follows from (\[5.5\]), (\[3.6\]) and (\[4.200\]). The left estimate in (\[4.100\]) follows from (\[3.74\]). Comparing (\[5.50\]) with (\[5.51\]) and recalling (\[1\]) and (\[4.200\]), we obtain (\[3.7\]). $\square $

Proof of Theorem \[thm:4.3\] {#sec:5.2}
----------------------------

Recall that we treat any complex valued function $U=\mathop{\rm Re}U+i\mathop{\rm Im}U=U_{1}+iU_{2}$ in two ways: (1) in its original complex valued form and (2) in an equivalent form as a 2D vector function $\left( U_{1},U_{2}\right) $ (section 3.4). It is always clear from the context what is what. Let $p_{1},p_{2}\in \overline{B\left( R\right) }$ be two arbitrary functions. Denote $h=p_{2}-p_{1}.$ Then $h=\left( h_{1},h_{2}\right) \in H_{0}^{3}\left( \Omega \right) .$ In this proof $C_{1}=C_{1}\left( \Omega ,R,\left\Vert F_{\ast }\right\Vert _{H_{4}},\left\Vert V_{\ast }\right\Vert _{H^{3}\left( \Omega \right) },\underline{k},\overline{k}\right) >0$ denotes different positive constants. Also, in this proof we denote for brevity $V\left( \mathbf{x}\right) =V_{\mu \left( \delta \right) ,\nu ,\alpha \left( \delta \right) }\left( \mathbf{x}\right) .$ We note that due to (\[3.73\]), (\[3.74\]), (\[3.80\]), (\[3.10\]) and (\[4.100\])$$\left\Vert \nabla V\right\Vert _{C\left( \overline{\Omega }\right) },\left\Vert F\right\Vert _{C^{2}\left( \overline{\Omega }\right) }\leq C_{1}, \label{5.52}$$$$\left\Vert \nabla h\right\Vert _{C\left( \overline{\Omega }\right) }\leq C_{1}. \label{5.53}$$ It follows from (\[eq:J\]) that we need to consider the expression $$A=\left\vert L\left( p_{1}+h+F\right) \right\vert ^{2}-\left\vert L\left( p_{1}+F\right) \right\vert ^{2}, \label{5.54}$$where the nonlinear operator $L$ is given in (\[eq:intdiff\]). First, we will single out the part of $A$ which is linear with respect to $h$. This will lead us to the Fréchet derivative $J_{\lambda ,\rho }^{\prime }.$ Next, we will single out $\left\vert \Delta h\right\vert ^{2}.$ This will enable us to apply the Carleman estimate of Theorem \[thm:4.1\]. We have: $$\left\vert z_{1}\right\vert ^{2}-\left\vert z_{2}\right\vert ^{2}=\left( z_{1}-z_{2}\right) \overline{z}_{1}+\left( \overline{z}_{1}-\overline{z}_{2}\right) z_{2},\text{ }\forall z_{1},z_{2}\in \mathbb{C}. \label{5.6}$$Let$$z_{1}=L\left( p_{1}+h+F\right) ,\text{ }z_{2}=L\left( p_{1}+F\right) . \label{5.7}$$Then by (\[5.54\]), (\[5.6\]) and (\[5.7\])$$\begin{aligned} A_{1} &=&\left( z_{1}-z_{2}\right) \overline{z}_{1},\text{ }A_{2}=\left( \overline{z}_{1}-\overline{z}_{2}\right) z_{2},\text{ } \label{5.8} \\ A &=&A_{1}+A_{2}.
\label{5.81}\end{aligned}$$Taking into account (\[eq:intdiff\]), (\[eq:J\]) and (\[5.7\]), we obtain$$\begin{gathered} z_{1}-z_{2}=\Delta h-2k^{2}\nabla h \left( \nabla V-\int_{k}^{\overline{k}}\left( \nabla p_{1}+\nabla F\right) d\kappa \right) \\ +2k\left( \int_{k}^{\overline{k}}\nabla hd\kappa \right) \left( 2\nabla V-2\int_{k}^{\overline{k}}\left( \nabla p_{1}+\nabla F\right) d\kappa +k\left( \nabla p_{1}+\nabla F\right) \right) +2i\left( h_{z}-\int_{k}^{\overline{k}}h_{z}d\kappa \right) . \end{gathered} \label{5.9}$$Next,$$\overline{z}_{1}=\left( \Delta \overline{h}+\Delta \overline{p_{1}}+\Delta \overline{F}\right)$$$$-2k\left( \nabla \overline{V}-\int_{k}^{\overline{k}}\left( \nabla \overline{p_{1}}+\nabla \overline{h}+\nabla \overline{F}\right) d\kappa \right) \cdot \left( k\left( \nabla \overline{p_{1}}+\nabla \overline{h}+\nabla \overline{F}\right) +\nabla \overline{V}-\int_{k}^{\overline{k}}\left( \nabla \overline{p_{1}}+\nabla \overline{h}+\nabla \overline{F}\right) d\kappa \right)$$$$-2i\left( k\left( \overline{p_{1z}}+\overline{h}_{z}+\overline{F_{z}}\right) +\overline{V_{z}}-\int_{k}^{\overline{k}}\left( \overline{p_{1z}}+\overline{h}_{z}+\overline{F_{z}}\right) d\kappa \right) .$$Hence, by (\[5.8\])$$A_{1}=\left( z_{1}-z_{2}\right) \overline{z}_{1}=\left\vert \Delta h\right\vert ^{2}+B_{1}^{\left( linear\right) }\left( h,\mathbf{x},k\right) +B_{1}\left( h,\mathbf{x},k\right) , \label{5.10}$$where the expression $B_{1}^{\left( linear\right) }\left( h,k\right) $ is linear with respect to $h=\left( h_{1},h_{2}\right) ,$$$\begin{gathered} B_{1}^{\left( linear\right) }\left( h,\mathbf{x},k\right) =\Delta hG_{1}+\left( \nabla h\nabla G_{2}\right) \cdot G_{3}+\left( \nabla \overline{h}\nabla G_{4}\right) \cdot G_{5} \\ +G_{7}\cdot \left( \int_{k}^{\overline{k}}\nabla hd\kappa \right) \nabla G_{6}+G_{9}\cdot \left( \int_{k}^{\overline{k}}\nabla \overline{h}d\kappa \right) \nabla G_{8}+G_{10}\left( h_{z}-\int_{k}^{\overline{k}}h_{z}d\kappa \right) +G_{11}\left( \overline{h}_{z}-\int_{k}^{\overline{k}}\overline{h}_{z}d\kappa \right) , \end{gathered} \label{5.11}$$where explicit expressions for functions $G_{j}\left( \mathbf{x},k\right) ,j=1,...,11$ can be written in an obvious way. Furthermore, it follows from these expressions as well as from (\[5.52\]) that $G_{1},G_{2},G_{4},G_{6}\in C_{1}$ and $G_{3},G_{5},G_{7},G_{9},$ $G_{10},G_{11}\in C_{0}$. And also$$\left\{ \begin{array}{c} \left\Vert G_{1}\right\Vert _{C_{1}},\left\Vert G_{2}\right\Vert _{C_{1}},\left\Vert G_{4}\right\Vert _{C_{1}},\left\Vert G_{6}\right\Vert _{C_{1}}\leq C_{1}, \\ \left\Vert G_{3}\right\Vert _{C_{0}},\left\Vert G_{5}\right\Vert _{C_{0}},\left\Vert G_{7}\right\Vert _{C_{0}},\left\Vert G_{9}\right\Vert _{C_{0}},\left\Vert G_{10}\right\Vert _{C_{0}},\left\Vert G_{11}\right\Vert _{C_{0}}\leq C_{1}.\end{array}\right. \label{5.12}$$The term $B_{1}\left( h,k\right) $ in (\[5.10\]) is nonlinear with respect to $h$. Applying the Cauchy-Schwarz inequality and also using (\[5.52\]) and (\[5.53\]), we obtain$$\left\vert B_{1}\left( h,\mathbf{x},k\right) \right\vert \geq \frac{1}{4}\left\vert \Delta h\right\vert ^{2}-C_{1}\left\vert \nabla h\right\vert ^{2}-C_{1}\int_{k}^{\overline{k}}\left\vert \nabla h\right\vert ^{2}d\kappa . 
\label{5.14}$$ Similarly to (\[5.10\])-(\[5.14\]), we obtain $$A_{2}=\left( \overline{z}_{1}-\overline{z}_{2}\right) z_{2}=B_{2}^{\left( linear\right) }\left( h,\mathbf{x},k\right) +B_{2}\left( h,\mathbf{x},k\right) , \label{5.15}$$where the term $B_{2}^{\left( linear\right) }\left( h,\mathbf{x},k\right) $ is linear with respect to $h$ and its form is similar to the one of $B_{1}^{\left( linear\right) }\left( h,\mathbf{x},k\right) $ in (\[5.11\]), although with different functions $G_{j},$ which still satisfy direct analogs of estimates (\[5.12\]). As to the term $B_{2}\left( h,\mathbf{x},k\right) ,$ it is nonlinear with respect to $h$ and, as in (\[5.14\]), $$\left\vert B_{2}\left( h,\mathbf{x},k\right) \right\vert \geq \frac{1}{4}\left\vert \Delta h\right\vert ^{2}-C_{1}\left\vert \nabla h\right\vert ^{2}-C_{1}\int_{k}^{\overline{k}}\left\vert \nabla h\right\vert ^{2}d\kappa . \label{5.16}$$Denote $B\left( h,\mathbf{x},k\right) =B_{1}\left( h,\mathbf{x},k\right) +B_{2}\left( h,\mathbf{x},k\right) .$ In addition to (\[5.14\]) and (\[5.16\]), the following upper estimate is valid: $$\left\vert B\left( h,\mathbf{x},k\right) \right\vert \leq C_{1}\left( \left\vert \Delta h\right\vert ^{2}+\left\vert \nabla h\right\vert ^{2}+\int_{k}^{\overline{k}}\left\vert \nabla h\right\vert ^{2}d\kappa \right) . \label{5.17}$$ Thus, it follows from (\[eq:intdiff\]), (\[eq:J\]), (\[5.7\])-(\[5.81\]), (\[5.10\])-(\[5.16\]) that $$J_{\lambda ,\rho }\left( p_{1}+h\right) -J_{\lambda ,\rho }\left( p_{1}\right) =$$$$\exp \left( -2\lambda \left( s+d\right) ^{-\nu }\right) \int_{\underline{k}}^{\overline{k}}\int_{\Omega }\left[ S_{1}\Delta h+S_{2}\cdot \nabla h\right] \varphi _{\lambda }\left( z\right) d\mathbf{x}d\kappa +2\rho \left[ h,p_{1}\right] \label{5.18}$$$$+\exp \left( -2\lambda \left( s+d\right) ^{-\nu }\right) \int_{\underline{k}}^{\overline{k}}\int_{\Omega }B\left( h,\mathbf{x},\kappa \right) \varphi _{\lambda }\left( z\right) d\mathbf{x}d\kappa .$$The second line of (\[5.18\]),$$Lin\left( h\right) =\exp \left( -2\lambda \left( s+d\right) ^{-\nu }\right) \int_{\underline{k}}^{\overline{k}}\int_{\Omega }\left[ S_{1}\Delta h+S_{2}\cdot \nabla h\right] \varphi _{\lambda }\left( z\right) d\mathbf{x}d\kappa +2\rho \left[ h,p_{1}\right] , \label{5.19}$$is linear with respect to $h$, where the vector functions $S_{1}\left( \mathbf{x},k\right) ,S_{2}\left( \mathbf{x},k\right) $ are such that $$\left\vert S_{1}\left( \mathbf{x},k\right) \right\vert ,\left\vert S_{2}\left( \mathbf{x},k\right) \right\vert \leq C_{1}\text{ in }\overline{\Omega }\times \left[ \underline{k},\overline{k}\right] . \label{5.20}$$As to the third line of (\[5.18\]), it can be estimated from below as$$\begin{gathered} \exp \left( -2\lambda \left( s+d\right) ^{-\nu }\right) \int_{\underline{k}}^{\overline{k}}\int_{\Omega }B\left( h,\mathbf{x},\kappa \right) \varphi _{\lambda }\left( z\right) d\mathbf{x}d\kappa \\ \geq \exp \left( -2\lambda \left( s+d\right) ^{-\nu }\right) \left[ \frac{1}{2}\int_{\underline{k}}^{\overline{k}}\int_{\Omega }\left\vert \Delta h\right\vert ^{2}\varphi _{\lambda }\left( z\right) d\mathbf{x}d\kappa -C_{1}\int_{\underline{k}}^{\overline{k}}\int_{\Omega }\left\vert \nabla h\right\vert ^{2}\varphi _{\lambda }\left( z\right) d\mathbf{x}d\kappa \right] +\rho \left\Vert h\right\Vert _{H_{3}}^{2}.
\end{gathered} \label{5.21}$$In addition, (\[5.17\]) implies that$$\begin{gathered} \exp \left( -2\lambda \left( s+d\right) ^{-\nu }\right) \left\vert \int_{\underline{k}}^{\overline{k}}\int_{\Omega }B\left( h,\mathbf{x},\kappa \right) \varphi _{\lambda }\left( z\right) d\mathbf{x}d\kappa \right\vert \\ \leq C_{1}\exp \left( -2\lambda \left( s+d\right) ^{-\nu }\right) \int_{\underline{k}}^{\overline{k}}\int_{\Omega }\left( \left\vert \Delta h\right\vert ^{2}+\left\vert \nabla h\right\vert ^{2}\right) \varphi _{\lambda }\left( z\right) d\mathbf{x}d\kappa +\rho \left\Vert h\right\Vert _{H_{3}}^{2}. \end{gathered} \label{5.22}$$ First, consider the functional $Lin\left( h\right) $ in (\[5.19\]). It follows from (\[3.70\]), (\[5.19\]) and (\[5.20\]) that$$\left\vert Lin\left( h\right) \right\vert \leq C_{1}\exp \left( 2\lambda t_{\nu }\right) \left\Vert h\right\Vert _{H_{3}}.$$Hence, $Lin\left( h\right) :H_{3}\rightarrow \mathbb{R}$ is a bounded linear functional. Hence, by the Riesz theorem, for each pair $\lambda ,\nu >0$ there exists a 2D vector function $Z_{\lambda ,\nu }\in H_{3}$, independent of $h$, such that $$Lin\left( h\right) =\left[ Z_{\lambda ,\nu },h\right] ,\text{ }\forall h\in H_{3}. \label{5.23}$$In addition, (\[5.17\]), (\[5.18\]) and (\[5.23\]) imply that$$\left\vert J_{\lambda ,\rho }\left( p_{1}+h\right) -J_{\lambda ,\rho }\left( p_{1}\right) -\left[ Z_{\lambda ,\nu },h\right] \right\vert \leq C_{1}\exp \left( 2\lambda t_{\nu }\right) \left\Vert h\right\Vert _{H_{3}}^{2}. \label{5.24}$$Thus, applying (\[5.18\])-(\[5.24\]), we conclude that $Z_{\lambda ,\nu } $ is the Fréchet derivative of the functional $J_{\lambda ,\rho }$ at the point $p_{1}$, i.e. $Z_{\lambda ,\nu }=J_{\lambda ,\rho }^{\prime }\left( p_{1}\right) $. Thus, (\[5.18\]) and (\[5.21\]) imply that $$\label{5.25} \begin{gathered} J_{\lambda ,\rho }\left( p_{1}+h\right) -J_{\lambda ,\rho }\left( p_{1}\right) -J_{\lambda ,\rho }^{\prime }\left( p_{1}\right) \left( h\right) \\ \geq \exp \left( -2\lambda \left( s+d\right) ^{-\nu }\right) \left[ \frac{1}{2}\int_{\underline{k}}^{\overline{k}}\int_{\Omega }\left\vert \Delta h\right\vert ^{2}\varphi _{\lambda }\left( z\right) d\mathbf{x}d\kappa -C_{1}\int_{\underline{k}}^{\overline{k}}\int_{\Omega }\left\vert \nabla h\right\vert ^{2}\varphi _{\lambda }\left( z\right) d\mathbf{x}d\kappa \right] +\rho \left\Vert h\right\Vert _{H_{3}}^{2}.
\end{gathered}$$Assuming that $\lambda \geq \lambda _{0},$ we now apply the Carleman estimate of Theorem \[thm:4.1\], $$\frac{1}{2}\int_{\underline{k}}^{\overline{k}}\int_{\Omega }\left\vert \Delta h\right\vert ^{2}\varphi _{\lambda }\left( z\right) d\mathbf{x}d\kappa -C_{1}\int_{\underline{k}}^{\overline{k}}\int_{\Omega }\left\vert \nabla h\right\vert ^{2}\varphi _{\lambda }\left( z\right) d\mathbf{x}d\kappa +\rho \left\Vert h\right\Vert _{H_{3}}^{2}$$$$\geq \frac{C}{\lambda }\sum_{i,j=1}^{3}\int_{\underline{k}}^{\overline{k}}\int_{\Omega }\left\vert h_{x_{i}x_{j}}\right\vert ^{2}\varphi _{\lambda }\left( z\right) d\mathbf{x}d\kappa +C\lambda \int_{\underline{k}}^{\overline{k}}\int_{\Omega }\left\vert \nabla h\right\vert ^{2}\varphi _{\lambda }\left( z\right) d\mathbf{x}d\kappa -C_{1}\int_{\underline{k}}^{\overline{k}}\int_{\Omega }\left\vert \nabla h\right\vert ^{2}\varphi _{\lambda }\left( z\right) d\mathbf{x}d\kappa +\rho \left\Vert h\right\Vert _{H_{3}}^{2}.$$Choosing $\lambda _{1}$ sufficiently large and taking $\lambda \geq \lambda _{1}$, we obtain$$\label{5.26} \begin{gathered} \frac{1}{2}\int_{\underline{k}}^{\overline{k}}\int_{\Omega }\left\vert \Delta h\right\vert ^{2}\varphi _{\lambda }\left( z\right) d\mathbf{x}d\kappa -C_{1}\int_{\underline{k}}^{\overline{k}}\int_{\Omega }\left\vert \nabla h\right\vert ^{2}\varphi _{\lambda }\left( z\right) d\mathbf{x}d\kappa +\rho \left\Vert h\right\Vert _{H_{3}}^{2}\\ \geq \frac{C}{\lambda }\sum_{i,j=1}^{3}\int_{\underline{k}}^{\overline{k}}\int_{\Omega }\left\vert h_{x_{i}x_{j}}\right\vert ^{2}\varphi _{\lambda }\left( z\right) d\mathbf{x}d\kappa +C_{1}\lambda \int_{\underline{k}}^{\overline{k}}\int_{\Omega }\left\vert \nabla h\right\vert ^{2}\varphi _{\lambda }\left( z\right) d\mathbf{x}d\kappa +\rho \left\Vert h\right\Vert _{H_{3}}^{2}. \end{gathered}$$Finally, noting that $\varphi _{\lambda }\left( z\right) \geq \exp \left( 2\lambda \left( s+d\right) ^{-\nu }\right) $ in $\Omega $ and using (\[5.25\]) and (\[5.26\]), we obtain$$J_{\lambda ,\rho }\left( p_{1}+h\right) -J_{\lambda ,\rho }\left( p_{1}\right) -J_{\lambda ,\rho }^{\prime }\left( p_{1}\right) \left( h\right) \geq \frac{C_{1}}{\lambda }\left\Vert h\right\Vert _{H_{2}}^{2}+\rho \left\Vert h\right\Vert _{H_{3}}^{2},$$$$J_{\lambda ,\rho }\left( p_{1}+h\right) -J_{\lambda ,\rho }\left( p_{1}\right) -J_{\lambda ,\rho }^{\prime }\left( p_{1}\right) \left( h\right) \geq C_{1}\left\Vert h\right\Vert _{H_{1}}^{2}+\rho \left\Vert h\right\Vert _{H_{3}}^{2}.$$$\square $

Proof of Theorem \[thm:4.4\] {#sec:5.3}
----------------------------

This proof is completely similar to the proof of theorem 3.1 of [@BakushinskiiKlibanov17] and is, therefore, omitted.

Proof of Theorem \[thm:4.5\] {#sec:5.4}
----------------------------

The existence and uniqueness of the minimizer $p_{\min ,\lambda }\in \overline{B\left( R\right) }$, inequality (\[4.6\]) as well as the convergence estimate (\[4.90\]) follow immediately from the combination of Theorems \[thm:4.3\] and \[thm:4.4\] with lemma 2.1 and theorem 2.1 of [@BakushinskiiKlibanov17]. $\ \square $

Proof of Theorem \[thm:4.6\] {#sec:5.5}
----------------------------

Temporarily denote $L\left( p+F\right) =L\left( p+F,V_{\mu \left( \delta \right) ,\nu ,\alpha \left( \delta \right) }\right) ,$ $J_{\lambda ,\rho }\left( p\right) :=$ $J_{\lambda ,\rho }\left( p,F,V_{\mu \left( \delta \right) ,\nu ,\alpha \left( \delta \right) }\right) ,$ thus indicating the dependence on the tail function $V_{\mu \left( \delta \right) ,\nu ,\alpha \left( \delta \right) }$.
Consider the functional $J_{\lambda ,\rho }\left( p,F,V_{\mu \left( \delta \right) ,\nu ,\alpha \left( \delta \right) }\right) $ for $p=p_{\ast },$$$\label{5.27} \begin{gathered} J_{\lambda ,\rho }\left( p_{\ast },F,V_{\mu \left( \delta \right) ,\nu ,\alpha \left( \delta \right) }\right) =\exp \left( -2\lambda \left( s+d\right) ^{-\nu }\right) \int_{\underline{k}}^{\overline{k}}\int_{\Omega }\left\vert L\left( p_{\ast }+F,V\right) \left( \mathbf{x},\kappa \right) \right\vert ^{2}\varphi _{\lambda }^{2}\left( z\right) d\mathbf{x}d\kappa \\ +\rho \left\Vert p_{\ast }\right\Vert _{H_{3}}^{2}. \end{gathered}$$Since $p_{\ast }\in B\left( R\right) $ and $L\left( p_{\ast }+F_{\ast },V_{\ast }\right) \left( \mathbf{x},\kappa \right) =0,$ then (\[5.27\]) implies that $$J_{\lambda ,\rho }\left( p_{\ast },F_{\ast },V_{\ast }\right) =\rho \left\Vert p_{\ast }\right\Vert _{H_{3}}^{2}\leq \rho R^{2}=\sqrt{\delta }R^{2}. \label{5.28}$$It follows from (\[eq:intdiff\]), (\[3.7\]), (\[3.80\]), (\[4.100\]), (\[5.27\]) and (\[5.28\]) that $$J_{\lambda ,\rho }\left( p_{\ast },F,V_{\mu \left( \delta \right) ,\nu ,\alpha \left( \delta \right) }\right) \leq C_{2}\sqrt{\delta }\sqrt{\ln \left( \delta ^{-1/\left( 2t_{\nu }\right) }\right) }. \label{5.29}$$Next, using (\[4.2\]) and recalling that $\lambda =\ln \left( \delta ^{-1/\left( 2t_{\nu }\right) }\right) $, we obtain $$\begin{gathered} J_{\lambda ,\rho }\left( p_{\ast },F,V_{\mu \left( \delta \right) ,\nu ,\alpha \left( \delta \right) }\right) -J_{\lambda ,\rho }\left( p_{\min ,\lambda \left( \delta \right) },F,V_{\mu \left( \delta \right) ,\nu ,\alpha \left( \delta \right) }\right) -J_{\lambda ,\rho }^{\prime }\left( p_{\min ,\lambda \left( \delta \right) },F,V_{\mu \left( \delta \right) ,\nu ,\alpha \left( \delta \right) }\right) \left( p_{\ast }-p_{\min ,\lambda }\right) \\ \geq \frac{C_{2}}{\ln \left( \delta ^{-1/\left( 2t_{\nu }\right) }\right) }\left\Vert p_{\ast }-p_{\min ,\lambda }\right\Vert _{H_{2}}^{2}. \end{gathered}$$Next, since $-J_{\lambda ,\rho }\left( p_{\min ,\lambda \left( \delta \right) },F,V_{\mu \left( \delta \right) ,\nu ,\alpha \left( \delta \right) }\right) \leq 0$ and also by (\[4.6\]) $-J_{\lambda ,\rho }^{\prime }\left( p_{\min ,\lambda },F,V_{\mu \left( \delta \right) ,\nu ,\alpha \left( \delta \right) }\right) \left( p_{\ast }-p_{\min ,\lambda \left( \delta \right) }\right) \leq 0,$ we obtain, using (\[5.29\]),$$\left\Vert p_{\ast }-p_{\min ,\lambda \left( \delta \right) }\right\Vert _{H_{2}}^{2}\leq C_{2}\sqrt{\delta }\left[ \ln \left( \delta ^{-1/\left( 2t_{\nu }\right) }\right) \right] ^{3/2},$$which implies (\[4.7\]). Estimate (\[4.8\]) follows immediately from (\[4.7\]), (\[3.9\]) and Remark 3.1. We now prove (\[4.10\]) and (\[4.11\]). Using (\[4.90\]), (\[4.7\]) and the triangle inequality, we obtain for $n=1,2,...$$$\begin{gathered} \left\Vert p_{\ast }-p_{n}\right\Vert _{H_{2}}\leq \left\Vert p_{\ast }-p_{\min ,\lambda \left( \delta \right) }\right\Vert _{H_{2}}+\left\Vert p_{\min ,\lambda \left( \delta \right) }-p_{n}\right\Vert _{H_{2}}\leq C_{2}\delta ^{1/4}\left[ \ln \left( \delta ^{-1/\left( 2t_{\nu }\right) }\right) \right] ^{3/4}+\left\Vert p_{\min ,\lambda \left( \delta \right) }-p_{n}\right\Vert _{H_{3}}\\ \leq C_{2}\delta ^{1/4}\left[ \ln \left( \delta ^{-1/\left( 2t_{\nu }\right) }\right) \right] ^{3/4}+r^{n}\left\Vert p_{\min ,\lambda }-p_{0}\right\Vert _{H_{3}}, \end{gathered}$$which proves (\[4.10\]). 
Next, using (\[4.8\]) and (\[4.90\]), we obtain $$\begin{gathered} \left\Vert c_{\ast }-c_{n}\right\Vert _{L_{2}\left( \Omega \right) }\leq \left\Vert c_{\ast }-c_{\min ,\lambda \left( \delta \right) }\right\Vert _{L_{2}\left( \Omega \right) }+\left\Vert c_{\min ,\lambda \left( \delta \right) }-c_{n}\right\Vert _{L_{2}\left( \Omega \right) }\\ \leq C_{2}\delta ^{1/4}\left[ \ln \left( \delta ^{-1/\left( 2t_{\nu }\right) }\right) \right] ^{3/4}+C_{2}\left\Vert p_{\min ,\lambda \left( \delta \right) }-p_{n}\right\Vert _{H_{2}}\leq C_{2}\delta ^{1/4}\left[ \ln \left( \delta ^{-1/\left( 2t_{\nu }\right) }\right) \right] ^{3/4}+C_{2}\left\Vert p_{\min ,\lambda \left( \delta \right) }-p_{n}\right\Vert _{H_{3}}\\ \leq C_{2}\delta ^{1/4}\left[ \ln \left( \delta ^{-1/\left( 2t_{\nu }\right) }\right) \right] ^{3/4}+C_{2}r^{n}\left\Vert p_{\min ,\lambda \left( \delta \right) }-p_{0}\right\Vert _{H_{3}}. \end{gathered}$$The latter proves (\[4.11\]). $\square $

Numerical Study {#sec:6}
===============

In this section, we describe some details of the numerical implementation of the proposed globally convergent method and demonstrate results of reconstructions for computationally simulated data. Recall that, as stated in section 2.1, our applied goal in the numerical studies is to calculate locations and dielectric constants of targets which mimic antipersonnel land mines and IEDs. We model these targets as small sharp inclusions located in a uniform background, which is air with the dielectric constant $c\left( air\right) =1.$ Sometimes IEDs can indeed be located in air. In addition, in the previous works [@KlibanovLiem17buried; @KlibanovThanh15b] of the first author with coauthors, the problem of imaging of targets mimicking land mines and IEDs in the case when those targets are buried in a sandbox was considered. Microwave experimental data are used in these publications. The tail functions numerical method was used in these works. It was demonstrated in [@KlibanovLiem17buried; @KlibanovThanh15b] that, after applying certain data preprocessing procedures, one can treat those targets as ones located in air. Recall that $c\left( air\right) =1$ is a good approximation for the value of the dielectric constant of air. Thus, in this paper, we conduct numerical experiments for the case when small inclusions of our interest are located in air. We test several values of the dielectric constant and several sizes of those inclusions. However, we do not assume in computations the knowledge of the background in the domain of interest $\Omega $ in (\[eq:2.1\]), except for the knowledge that $c\left( \mathbf{x}\right) =1$ outside of $\Omega ,$ see (\[eq:coef\]).

The Carleman Weight Function of numerical studies {#sec:6.1}
-------------------------------------------------

The CWF $\varphi _{\lambda }\left( z\right) =\exp \left( 2\lambda \left( z+s\right) ^{-\nu }\right) ,$ which was introduced in (\[3.3\]), changes too rapidly due to the presence of the parameter $\nu >0.$ We have established in our computational experiments that such a rapid change does not allow us to obtain good numerical results; see also page 1581 of [@Baud] for a similar conclusion. Hence, we use in our numerical studies a simpler CWF $\psi _{\lambda }\left( z\right) ,$$$\psi _{\lambda }\left( z\right) =e^{-2\lambda z}. \label{6.1}$$We cannot prove an analog of Theorem \[thm:4.1\] for this CWF.
Nevertheless, the following Carleman estimate is valid in the 1D case [@KlibanovKolesov17]:$$\int_{-\xi }^{d}\left( w^{\prime \prime }\right) ^{2}\psi _{\lambda }\left( z\right) dz\geq C_{3}\left[ \int_{-\xi }^{d}\left( w^{\prime \prime }\right) ^{2}\psi _{\lambda }\left( z\right) dz+\lambda \int_{-\xi }^{d}\left( w^{\prime }\right) ^{2}\psi _{\lambda }\left( z\right) dz+\lambda ^{3}\int_{-\xi }^{d}w^{2}\psi _{\lambda }\left( z\right) dz\right] , \label{6.2}$$for all $\lambda >1$ and for any real valued function $w\in H^{2}\left( -\xi ,d\right) $ such that $w\left( -\xi \right) =w^{\prime }\left( -\xi \right) =0.$ Here and below the number $C_{3}=C_{3}\left( \xi ,d\right) >0$ depends only on the numbers $\xi $ and $d$. To briefly justify the CWF (\[6.1\]) from the analytical standpoint, consider now the case when the Laplace operator is written in partial finite differences with respect to the variables $x,y\in \left[ -b,b\right] $ (see (\[eq:2.1\])) with the uniform grid step size $h>0$ with respect to each variable $x$ and $y,$ $$\Delta ^{h}=\frac{\partial ^{2}}{\partial z^{2}}+\Delta _{x,y}^{h}. \label{6.20}$$Here $\Delta _{x,y}^{h}$ is the Laplace operator with respect to $x,y$, which is written in finite differences. Suppose that we have $M_{h}$ interior grid points in each direction $x$ and $y$. The domain $\Omega $ in (\[eq:2.1\]) becomes $$\Omega _{h}=\left\{ \left( x_{j},y_{s},z\right) :\left\vert x_{j}\right\vert ,\left\vert y_{s}\right\vert <b,z\in \left( -\xi ,d\right) \right\} ;\text{ }j,s=1,...,M_{h},$$where $\left( x_{j},y_{s}\right) $ are grid points. Then the finite difference analog of the integral of $\left( \Delta u\right) ^{2}\psi _{\lambda }\left( z\right) $ over the domain $\Omega $ is$$Z_{h}\left( u,\lambda \right) =\sum_{j,s=1}^{M_{h}}h^{2}\int_{-\xi }^{d}\left[ \left( u_{zz}+u_{xx}^{h}+u_{yy}^{h}\right) \left( x_{j},y_{s},z\right) \right] ^{2}\psi _{\lambda }\left( z\right) dz, \label{6.3}$$where $u\left( x_{j},y_{s},z\right) $ is the discrete real valued function defined in $\Omega _{h}$ and such that $u_{zz}\left( x_{j},y_{s},z\right) \in L_{2}\left( -\xi ,d\right) $ for all $\left( x_{j},y_{s}\right) .$ In addition, $u\left( x_{j},y_{s},-\xi \right) =\partial _{z}u\left( x_{j},y_{s},-\xi \right) =0.$ Also, in (\[6.3\]) $u_{xx}^{h}$ and $u_{yy}^{h}$ are the corresponding finite difference derivatives of the function $u\left( x_{j},y_{s},z\right) $ at the point $\left( x_{j},y_{s},z\right) $. "Interior" grid points are those located in $\overline{\Omega }\diagdown \partial \Omega .$ As to the grid points located at $\partial \Omega ,$ they are counted in the well known way in the finite difference derivatives in (\[6.3\]). Obviously,$$Z_{h}\left( u,\lambda \right) \geq \frac{1}{2}\sum_{j,s=1}^{M_{h}}h^{2}\int_{-\xi }^{d}\left[ u_{zz}\left( x_{j},y_{s},z\right) \right] ^{2}\psi _{\lambda }\left( z\right) dz-\widehat{C}\sum_{j,s=1}^{M_{h}}\int_{-\xi }^{d}\left[ u\left( x_{j},y_{s},z\right) \right] ^{2}\psi _{\lambda }\left( z\right) dz.
\label{6.4}$$Here and below in this section the number $\widehat{C}=\widehat{C}\left( 1/h\right) >0$ depends only on $1/h.$ Hence, the following analog of the Carleman estimate (\[6.2\]) for the case of the operator (\[6.20\]) follows immediately from (\[6.4\]): $$\label{6.5} \begin{gathered} Z_{h}\left( u,\lambda \right) \geq C_{3}\sum_{j,s=1}^{M_{h}}h^{2}\int_{-\xi }^{d}\left[ u_{zz}\left( x_{j},y_{s},z\right) \right] ^{2}\psi _{\lambda }\left( z\right) dz\\ +\widehat{C}\left[ \lambda \sum_{j,s=1}^{M_{h}}\int_{-\xi }^{d}\left[ u_{z}\left( x_{j},y_{s},z\right) \right] ^{2}\psi _{\lambda }\left( z\right) dz+\lambda ^{3}\sum_{j,s=1}^{M_{h}}\int_{-\xi }^{d}\left[ u\left( x_{j},y_{s},z\right) \right] ^{2}\psi _{\lambda }\left( z\right) dz\right] ,\forall \lambda \geq \widetilde{\lambda }\left( h\right) >1, \end{gathered}$$where $\widetilde{\lambda }\left( h\right) $ increases with the decrease of $h$. Suppose now that operators $\Delta $ and $\nabla $ in (\[eq:intdiff\]), (\[eq:Jtail\]), (\[eq:J\]) are rewritten in partial finite differences with respect to $x,y.$ As to the spaces $H^{3}(\Omega )$ and $H_{3+r},$ they were introduced to ensure that functions $p\in C_{1},V\in C^{1}\left( \overline{\Omega }\right) ,F\in C_{2},$ see (\[3.73\]), (\[3.74\]). Note that by the embedding theorem $H^{n}\left( -\xi ,d\right) \subset C^{n-1}\left[ -\xi ,d\right] ,n\geq 1$. Thus, we replace the space $H^{m}(\Omega )$ with $m=1,2,3$ in (\[3.700\]) with the following finite difference analog of it for complex valued functions $f$:$$H^{n,h}(\Omega _{h})=\left\{ f\left( x_{j},y_{s},z\right) :\left\Vert f\right\Vert _{H^{n,h}(\Omega _{h})}^{2}=\sum_{j,s=1}^{M_{h}}\sum_{r=0}^{n}h^{2}\int_{-\xi }^{d}\left\vert \partial _{z}^{r}f\left( x_{j},y_{s},z\right) \right\vert ^{2}dz\right\} ,n=1,2,$$and similarly for the replacement of $H_{m}$ with $H_{n,h}.$ So, we replace the regularization terms $\alpha \Vert V\Vert _{H^{3}(\Omega )}^{2}$ and $\rho \left\Vert p\right\Vert _{H_{3}}^{2}$ in (\[eq:Jtail\]) and ([eq:J]{}) with $\alpha \Vert V_{h}\Vert _{H^{2,h}(\Omega )}^{2}$ and $\rho \left\Vert p_{h}\right\Vert _{H_{2,h}}^{2}$ respectively. Also, we replace in (\[3.80\]) $H_{4}$ with $H_{3,h}$ and in (\[3.10\]) we replace $H_{3}$ with $H_{2,h}.$ The functionals $J_{\lambda ,\rho }\left( p+F\right) $ and $\widetilde{I}_{\mu ,\alpha }\left( W\right) $ in (\[eq:J\]) and (\[5.1\]) are replaced with their finite difference analogs, $$\widetilde{I}_{\mu ,\alpha }^{h}\left( W_{h}\right) =\exp \left( 2\mu d\right) \int_{\Omega }\left\vert \Delta ^{h}W_{h}+\Delta ^{h}Q_{h}\right\vert ^{2}\psi _{\mu }\left( z\right) d\mathbf{x}+\alpha \Vert W_{h}+Q_{h}\Vert _{H^{2,h}(\Omega _{h})}^{2},\text{ } \label{6.50}$$$$J_{\lambda ,\rho }^{h}\left( p_{h}\right) =\exp \left( 2\lambda d\right) \int_{\underline{k}}^{\overline{k}}\int_{\Omega }\left\vert L^{h}\left( p_{h}+F_{h}\right) \left( \mathbf{x},\kappa \right) \right\vert ^{2}\varphi _{\lambda }^{2}\left( z\right) d\mathbf{x}d\kappa +\rho \left\Vert p_{h}\right\Vert _{H_{2,h}}^{2}, \label{6.51}$$where $V_{h},W_{h},p_{h},Q_{h}$ and $F_{h}$ are finite difference analogs of functions $V,W,p,Q$ and $F$ respectively and $L^{h}$ is the finite difference analog of the operator $L$ in which operators $\Delta $ and $\nabla $ are replaced with their above mentioned finite difference analogs. Then the Carleman estimate (\[6.5\]) implies that the straightforward analogs of Theorems \[thm:4.2\]-\[thm:4.6\] are valid for functionals (\[6.50\]) and ([6.51]{}). 
The only restriction is that the grid step size $h$ should be bounded from below, $$h\geq h_{0}=const.>0. \label{6.6}$$In other words, numerical experiments should not be conducted for the case when $h$ tends to zero, as is sometimes done for forward problems for PDEs. It is our computational experience that condition (\[6.6\]) is sufficient for computations. So, we do not change $h$ in our numerical studies below. **Remarks 6.1:** 1. For brevity, we do not reformulate here those analogs of Theorems \[thm:4.2\]-\[thm:4.6\]. Also, both for brevity and convenience we describe our procedures below for the case of the continuous spatial variable $\mathbf{x}$. Still, we actually work in our computations with functionals (\[6.50\]) and (\[6.51\]). 2. The reason why we have presented the above theory for the case of the CWF (\[3.3\]) is that it is both consistent and valid for the 3D case. We believe that this theory is interesting in its own right from the analytical standpoint. On the other hand, in the case of the CWF (\[6.1\]) and the assumption about partial finite differences, the corresponding theory (unlike computations!) is similar to the one which we (with coauthors) have developed in the 1D case of [@KlibanovKolesov17]. Data simulation and propagation {#sec:6.2} ------------------------------- To computationally simulate the boundary data $g_{0}(\mathbf{x},k)$ in (\[eq:cisp\]), we solve the Lippmann-Schwinger integral equation $$u(\mathbf{x},k)=e^{ikz}+k^{2}\int_{\Omega }\Phi (\mathbf{x},\mathbf{y},k)(c(\mathbf{y})-1)u(\mathbf{y},k)d\mathbf{y}, \label{eq:LippmannSchwinger}$$where $\Phi (\mathbf{x},\mathbf{y},k)$ is the fundamental solution of the Helmholtz equation with $c(\mathbf{x})\equiv 1$: $$\Phi (\mathbf{x},\mathbf{y},k)=\frac{e^{ik|\mathbf{x}-\mathbf{y}|}}{4\pi |\mathbf{x}-\mathbf{y}|},\quad \mathbf{x}\neq \mathbf{y}.$$The spectral method of [@Vainikko00], which is based on the periodization technique and the fast Fourier transform, is used to solve (\[eq:LippmannSchwinger\]); see, e.g., [@LechleiterLiem14] for the numerical implementation of this method in MATLAB. We work with dimensionless variables. Typically, the linear sizes of antipersonnel land mines are between 5 and 10 centimeters (cm); see, e.g., [@Landmine]. Hence, just as in papers with experimental data of our research group [@KlibanovLiem17buried; @KlibanovLiem17exp], we introduce the dimensionless variables $\mathbf{x}^{\prime }=\mathbf{x}/(10\,\text{cm})$. Our mine-like targets are ball-shaped. Hence, their radii $r=0.3$ and $0.5,$ for example, correspond to diameters of those balls of 6 cm and 10 cm respectively. This change of variables leads to the dimensionless frequency $k$, which is also called the "wavenumber". Hereafter, for convenience and brevity, we will keep the same notations for dimensionless spatial variables $\mathbf{x}$ as before. Note that the dimensionless wavenumber $k=16.2,$ which we work with below (see (\[6.7\])), corresponds to the frequency of $f=7.7$ GHz. Since microwave experimental data were collected by our research group in [@KlibanovLiem17buried; @KlibanovLiem17exp] for the range of frequencies from 1 GHz to 10 GHz, $f=7.7$ GHz is a realistic value of the frequency.
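As a rough illustration of this data simulation step, the following Python sketch evaluates the total field $u(\mathbf{x},k)$ of (\[eq:LippmannSchwinger\]) at a set of observation points using the first Born approximation, i.e. the unknown field under the integral is replaced by the incident plane wave. This is not the spectral solver of [@Vainikko00] used above, and the function and parameter names are hypothetical.

```python
import numpy as np

def born_scattered_field(points_meas, c_vals, quad_pts, dV, k):
    """First Born approximation of the total field:
    u(x, k) ~ exp(ikz) + k^2 * sum_y Phi(x, y, k) (c(y) - 1) exp(ik*y_z) dV,
    i.e. the field under the Lippmann-Schwinger integral is replaced by the
    incident plane wave exp(ikz).

    points_meas : (M, 3) array of observation points x
    c_vals      : (N,) array of c(y) at the quadrature points
    quad_pts    : (N, 3) array of quadrature points y covering Omega
    dV          : volume element of the quadrature grid
    """
    u = np.exp(1j * k * points_meas[:, 2])                  # incident wave exp(ikz)
    weights = (c_vals - 1.0) * np.exp(1j * k * quad_pts[:, 2]) * dV
    for w, y in zip(weights, quad_pts):
        if w == 0.0:
            continue                                         # background voxel
        r = np.linalg.norm(points_meas - y, axis=1)
        phi = np.exp(1j * k * r) / (4.0 * np.pi * r)         # fundamental solution
        u = u + k**2 * phi * w
    return u
```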
  Inclusion number   $\max \left( c\right) $ in the inclusion   Radius $r$
  ------------------ ------------------------------------------ ------------
  1                  3                                          0.3
  2                  3                                          0.5
  3                  5                                          0.3
  4.1                7                                          0.3
  4.2                3                                          0.5

  : Mine-like and IED-like inclusions tested in our numerical studies. A single inclusion in cases 1-3. Two inclusions simultaneously in case 4: the left inclusion is 4.1, the right inclusion is 4.2.[]{data-label="tab1"}

All inclusions that we have numerically tested are listed in Table \[tab1\], where $r$ denotes the radius of the corresponding ball-shaped inclusion. To have smooth target/background interfaces, the dielectric constants of inclusions were smoothed out for a better stability of the numerical method of solving the Lippmann-Schwinger equation (\[eq:LippmannSchwinger\]). But the maximal values of dielectric constants remain unchanged in this smoothing, and these values are reached in the centers of those balls. In our study, the center of each ball representing a single inclusion is at the point $\mathbf{x}=\left( x,y,z\right) =\left( 0,0,0\right) $ and centers of two inclusions in case number 4 of Table \[tab1\] are placed at points $\left( x,y,z\right) =(-0.75,0,0)$ (left inclusion) and $\left( x,y,z\right) =(0.75,0,0)$ (right inclusion). However, when running the inversion procedure, we do not assume knowledge of either those centers or the shapes of those inclusions. In the setup for our computational experiments, we want to be close to the experimental setup of [@KlibanovLiem17buried; @KlibanovLiem17exp]. Actually, in [@KlibanovLiem17buried; @KlibanovLiem17exp] the data are collected not at the part $\Gamma $ (\[eq:2.1\]) of the boundary of the domain $\Omega $ as in (\[eq:cisp\]). Instead, they are collected on a square $P_{meas}$, which is a part of the so-called measurement plane $$P_{m}=\{z=-A\}, \label{60}$$ where $A=const.>\xi .$ We solve the Lippmann-Schwinger equation (\[eq:LippmannSchwinger\]) to obtain computationally simulated data $f(\mathbf{x},k)$ for $\mathbf{x}\in P_{meas}.$ We refer to $f(\mathbf{x},k)$ as the "measured data". The measurement plane $P_{m}$ is located far from $\Gamma $. This causes several complications. First, we would need to solve our CISP in a large computational domain, which could be a time-consuming process. Second, looking at the measured data, it is not clear how to distinguish inclusions; see Fig. \[fig:f\_noiseless\]. Hence, we need to propagate the measured data $f(\mathbf{x},k)$ generated by the Lippmann-Schwinger solver from the square $P_{meas}$ to the so-called propagation plane $P_{p}=\{z=A^{\prime }\}$, $A^{\prime }\leq -\xi ,$ which is closer to our inclusions. In fact, we propagate to the plane which includes the rectangle $\Gamma .$ As a result we get the so-called propagated data (Fig. \[fig:g\_noiseless\]), which are more focused on the target of our interest than the original data. So, we can now clearly see the location of our inclusion in the $x,y$ coordinates. The resulting function $u\left( \mathbf{x},k\right) $ is our given boundary data $g_{0}(\mathbf{x},k) $ in (\[eq:cisp\]) for our CISP. The derivative $u_{z}\left( \mathbf{x},k\right) $ for $\mathbf{x}\in \Gamma ,$ i.e. the function $g_{1}(\mathbf{x},k)$ in (\[eq:gz0\]), is calculated by propagating $f(\mathbf{x},k)$ into a plane $\left\{ z=-\xi -\varepsilon \right\} $ for a small number $\varepsilon >0$. Next, the finite difference is used to approximate $g_{1}(\mathbf{x},k)$. For brevity, we do not describe the data propagation procedure here. Instead, we refer to [@KlibanovKolesov17exp; @KlibanovLiem17exp; @KlibanovLiem17buried] for detailed descriptions. In fact, this procedure is quite popular in Optics under the name of the *angular spectrum representation method* [@Nov].
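Since the propagation procedure itself is only referenced here, the following is a minimal Python/NumPy sketch of one generic angular spectrum propagation step. The grid assumptions, the sign convention in the exponent, and the filtering of evanescent modes are illustrative choices, not details taken from [@KlibanovKolesov17exp; @KlibanovLiem17exp; @KlibanovLiem17buried].

```python
import numpy as np

def angular_spectrum_propagate(u, k, h, dz):
    """Propagate a complex field u, sampled on an N x N grid with step h on
    the plane z = z0, to the parallel plane z = z0 + dz.

    Generic angular spectrum step: FFT the field, multiply by the plane-wave
    propagator exp(i*kz*dz), and transform back.  The sign convention must
    match the direction in which the measured (scattered) field travels, and
    evanescent components are discarded here for stability.
    """
    n = u.shape[0]
    freqs = 2.0 * np.pi * np.fft.fftfreq(n, d=h)        # angular spatial frequencies
    kx, ky = np.meshgrid(freqs, freqs, indexing="ij")
    kz = np.sqrt(k**2 - kx**2 - ky**2 + 0j)              # imaginary for evanescent modes
    propagator = np.exp(1j * kz * dz)
    propagator[kx**2 + ky**2 > k**2] = 0.0               # keep propagating modes only
    return np.fft.ifft2(np.fft.fft2(u) * propagator)
```

In the notation above, one would apply this map to $f(\mathbf{x},k)$ with $dz$ equal to the signed distance between the measurement plane and the chosen propagation plane.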
We also remark that both in the data propagation procedure and in our convexification method we need to calculate some derivatives of noisy data: the $\partial _{k,z}^{2}-$derivative of the propagated data and the $\partial _{k}-$derivative in the convexification. In all cases this is done using finite differences. We have not observed any instabilities, probably because the step sizes of our finite differences were not too small. The same was true in all previously cited publications of this research group. To propagate the function $f(\mathbf{x},k)$ close to an inclusion, we first need to figure out where this inclusion is located, i.e. we need to estimate the number $\xi $ in (\[eq:2.1\]). Fortunately, the data propagation procedure allows us to do this. For example, consider two inclusions with the same size $r=0.3$, but with different dielectric constants $c=3$ and $c=5$. Centers of both are located at the point $\left( 0,0,0\right) .$ We solve the Lippmann-Schwinger equation for each of these two cases to generate the data at the measurement plane $P_{m}$ with $A=8,$ see (\[60\]). Next, we propagate the data to several propagated planes $P_{p,a}=\left\{ z=a\right\} ,$ where $a\in \left( -8,2\right] $. Here, we use $k=16.2$ (see (\[6.7\])). The dependence of the maximal absolute value of the propagated data $M\left( a\right) =\max_{P_{p,a}}\left\vert u\left( x,y,a,16.2\right) \right\vert $ on the number $a$ for these inclusions is depicted in Fig. \[fig:g\_max\]. We see that the function $M\left( a\right) $ attains its maximal value near the point $a_{0}=-0.5$ for both cases. This point is located reasonably close to the actual position of the front faces (at $z=-0.15)$ of the corresponding inclusions. The function $M\left( a\right) $ has attained its maximal value at points $a$ close to $a_{0}$ for all other inclusions we have tested. Therefore, we propagate the measured data for all inclusions to the propagated plane $P_{p,-0.5}=\left\{ z=-0.5\right\} $ and we set $-\xi =-0.5$ in (\[eq:2.1\]). ![The dependence of the maximum absolute value of the propagated data $g(\mathbf{x}, k)$ on the location of the propagation plane $a$ for inclusions with $c=3.0$ (solid line) and $c = 5.0$ (dashed line).[]{data-label="fig:g_max"}](Fig2){width="70.00000%"} We have found in our computations that the optimal interval of wavenumbers is: $$k\in \left[ 15.2,16.2\right] . \label{6.7}$$We divide this interval into ten (10) subintervals with the step size $\Delta k=0.1.$ For each $k=15.2,15.3,\ldots ,16.1,16.2$ and for each inclusion under consideration we solve the Lippmann-Schwinger equation (\[eq:LippmannSchwinger\]) to generate the function $f(\mathbf{x},k).$ Next, by propagating these data, we obtain the functions $g_{0}(\mathbf{x},k)$ and $g_{1}(\mathbf{x},k)$ in (\[eq:cisp\]) and (\[eq:gz0\]) respectively. Using (\[eq:uinc\]), (\[eq:w\]), (\[eq:v\]) and (\[eq:q\]), consider the function $q(\mathbf{x},k)$ on the propagated plane $P_{p}$, i.e. at the boundary $\Gamma $. In fact, this function is denoted as $\phi _{0}\left( \mathbf{x},k\right) $ in (\[eq:intdiffbcs\]) and it is one of the two boundary conditions (the second one is $\phi _{1}\left( \mathbf{x},k\right) $ in (\[eq:intdiffbcs\])) which generate the function $F_{h}$ in the functional $J_{\lambda ,\rho }^{h}\left( p_{h}\right) $ in (\[6.51\]). Fig. \[fig:q\_noiseless\] displays the function $\phi _{0}(\mathbf{x},k)$ for inclusion number 1 in Table \[tab1\] for $k=16.2$.
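Before describing the computational domain, the localization step above (choosing the propagation plane from the peak of $M(a)$) can be sketched as follows. The routine reuses the angular spectrum sketch given earlier, and the candidate range and step are illustrative assumptions rather than the exact values used in our computations.

```python
import numpy as np

def locate_front_face(f_meas, k, h, z_meas=-8.0,
                      candidates=np.arange(-7.9, 2.0 + 1e-9, 0.1)):
    """Estimate the location of the propagation plane as the plane z = a at
    which M(a) = max |u(x, y, a, k)| is largest.

    f_meas is the data on the measurement plane {z = z_meas}; the propagation
    step reuses angular_spectrum_propagate from the sketch above.
    """
    best_a, best_m = None, -np.inf
    for a in candidates:
        u_a = angular_spectrum_propagate(f_meas, k, h, dz=a - z_meas)
        m = float(np.max(np.abs(u_a)))          # M(a)
        if m > best_m:
            best_a, best_m = a, m
    return best_a
```

For the inclusions considered here, the returned value is expected to be near $a_{0}=-0.5$, which is then used as the plane containing $\Gamma$.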
Computational domain {#sec:6.3} -------------------- To model the experimental setup of [KlibanovLiem17buried,KlibanovLiem17exp]{} we use the following measurement plane: $$P_{m}=\left\{ z=-8\right\} ,P_{meas}=\{\mathbf{x}:(x,y)\in (-3,3)\times (-3,3),z=-8\},$$where $P_{meas}\subset P_{m}$ is the square on which measurements are conducted and $z=-8$ corresponds to the 80 cm. The latter is the approximate distance from the center of any inclusion to the plane $\left\{ z=0\right\} $ where detectors are located in [@KlibanovLiem17buried; @KlibanovLiem17exp]. Solving equation (\[eq:LippmannSchwinger\]), we generate the function $f(\mathbf{x},k),k\in \left[ 15.2,16.2\right] .$ Next, we propagate this function to $\Gamma ,$ $$\Gamma =\{\mathbf{x}:(x,y)\in (-3,3)\times (-3,3),z=-0.5\}\subset P_{p}.$$Here, $z=-0.5$ was found in section 6.2. Finally, we define our computational domain as $$\Omega =\{\mathbf{x}:(x,y,z)\in (-3,3)\times (-3,3)\times (-0.5,4.5)\}. \label{eq:omega}$$ Adding noise {#sec:6.4} ------------ We add a random noise to the simulated data $f(\mathbf{x},k)$ as follows: $$f_{noisy}(\mathbf{x},k)=f(\mathbf{x},k)+\delta \Vert f(\mathbf{x},k)\Vert _{L^{2}(\Gamma )}\frac{\sigma (\mathbf{x},k)}{\Vert \sigma (\mathbf{x},k)\Vert _{L^{2}(\Gamma )}}.$$Here, $\delta $ is the noise level. Next, $\sigma (\mathbf{x},k)=\sigma _{1}(\mathbf{x},k)+i\sigma _{2}(\mathbf{x},k)$, where $\sigma _{1}(\mathbf{x},k)$ and $\sigma _{2}(\mathbf{x},k)$ are random numbers uniformly distributed on the interval $(-1,1)$. We use below $\delta =0.15$, i.e. $15\%$ of the additive noise. Fig. \[fig:noisy\_data\] displays the absolute value of simulated data with noise $f_{noise}(\mathbf{x},k)$, the corresponding propagated data $g_{0,noisy}(\mathbf{x},k)$ and the function $\phi _{0,noisy}(\mathbf{x},k)$ for the same inclusion and the wavenumber $k=16.2$ as in Fig. [fig:noiseless\_data]{}. We see that the data propagation procedure has a smoothing effect on our noisy measured data, since $g_{0}(\mathbf{x},k)$ in Fig. \[fig:g\_noiseless\] and $g_{0,noisy}(\mathbf{x},k)$ in Fig. [fig:g\_noisy]{} are almost identical. The algorithm {#sec:6.5} ------------- Based on the above theory, we use the following algorithm for determining the function $c(\mathbf{x})$ from simulated data with noise $f(\mathbf{x},k)$ (here, the subscript $noisy$" is left out for convenience, also see item 1 in Remarks 6.1): 1. Using the data propagation procedure, calculate the boundary data $g_{0}(\mathbf{x},k)$ and $g_{1}(\mathbf{x},k)$. 2. Calculate the subsequent boundary conditions $\phi _{0}(\mathbf{x},k)$, $\phi _{1}(\mathbf{x},k)$, $\psi _{0}(\mathbf{x})$, and $\psi _{1}(\mathbf{x})$. 3. Compute the auxiliary functions $Q_{h}(\mathbf{x})$ and $F_{h}(\mathbf{x},k)$. 4. Compute the minimizer $W_{\min ,h}(\mathbf{x})$ of the functional $\widetilde{I}_{\mu ,\alpha }^{h}\left( W_{h}\right) $ in (\[6.50\]). 5. Using the computed function $V_{h}(\mathbf{x})=W_{\min ,h}(\mathbf{x})+Q_{h}(\mathbf{x})$, minimize the functional $J_{\lambda ,\rho }^{h}(p_{h})$ in (\[6.51\]). Let the function $p_{h,\min }(\mathbf{x},k)$ be its minimizer. Calculate the function $q_{h}(\mathbf{x},k)=p_{h,\min }(\mathbf{x},k)+F_{h}(\mathbf{x},k)$. 6. Compute the function $v_{h}(\mathbf{x},k)$ for $k=\underline{k}$ as follows: $$v_{h}(\mathbf{x},\underline{k})=-\int_{\underline{k}}^{\overline{k}}q_{h}(\mathbf{x},\kappa )d\kappa +V_{h}(\mathbf{x}).$$ 7. 
Calculate the approximation for the unknown coefficient $c(\mathbf{x})$ using the following formulae, see (\[eq:coef\]), (\[eq:intdiffv\]) $$\beta (\mathbf{x})=-\Delta ^{h}v_{h}(\mathbf{x},\underline{k})-\underline{k}^{2}\nabla v_{h}(\mathbf{x},\underline{k})\cdot \nabla v(\mathbf{x},\underline{k})+2i\underline{k}v_{z}(\mathbf{x},\underline{k}),$$$$c\left( \mathbf{x}\right) =\left\{ \begin{array}{c} \mathop{\rm Re}\beta \left( \mathbf{x}\right) +1,\text{ if }\mathop{\rm Re}\beta \left( \mathbf{x}\right) \geq 0\text{ and }\mathbf{x}\in \Omega , \\ 1,\text{ otherwise.}\end{array}\right.$$ Numerical implementation {#sec:6.6} ------------------------ We now present some details of the numerical implementation. When minimizing functionals $\widetilde{I}_{\mu ,\alpha }^{h}\left( W_{h}\right) $ and $J_{\lambda ,\rho }^{h}\left( p_{h}\right) $ in (\[6.50\]) and (\[6.51\]), we use finite differences not only with respect to $x,y$ but with respect to $z$ as well. Thus, $z-$derivatives in these functionals are also written in finite differences. For brevity we use the same notations $\widetilde{I}_{\mu ,\alpha }^{h}\left( W_{h}\right) $ and $J_{\lambda ,\rho }^{h}\left( p_{h}\right) $ for these functionals. This is the fully discrete case, unlike the semi-discrete case of (\[6.50\]), (\[6.51\]). The theory for the fully discrete cases of nonlinear ill-posed problems for PDEs is not yet developed well. It seems that such a theory is much more complicated than the one for the semi-discrete case. There are only a few results for the fully discrete case, and all are for linear ill-posed problems for PDEs, as opposed to our nonlinear case, see, e.g. [@Burman; @KS]. Since it is not yet clear to us how to extend above theorems for the fully discrete case, we are not concerned with such extensions here. We minimize resulting functionals with respect to the values of corresponding functions at grid points. In the computational domain ([eq:omega]{}), we use the uniform grid with $N_{x}=N_{y}=N_{z}=51$ points with the corresponding step sizes $h_{x},h_{y},h_{z}$, where $h_{x}=h_{y}=h.$ The grid point labeled $(j,s,l)$ corresponds to $\mathbf{x}=(x,y,z)=(x_{j},y_{s},z_{l})$. In addition, the interval $k=[\underline{k},\overline{k}]$ of wavenumbers is divided into $N_{k}=11$ points $k_{n}$ with the step size $h_{k}$. Hence, we use the following discrete functions $W_{h}(\mathbf{x})=W(x_{j},y_{s},z_{l})=W_{j,s,l}$ and $p_{h}(\mathbf{x},k)=p_{h}(x_{j},y_{s},z_{l},k_{n})=p_{j,s,l,n}$ at grid points. To minimize the functionals $\widetilde{I}_{\mu ,\alpha }^{h}(W_{j,s,l})$ and $J_{\lambda ,\rho }^{h}(p_{j,s,l,n}),$ we use the conjugate gradient method (CG) instead of the gradient projection method, which is suggested by our theory. Indeed, similarly with [@KlibanovKolesov17], we have observed that the results obtained by both these methods are practically the same. On the other hand, CG is easier to implement numerically than the gradient projection method. Note that we do not employ the standard line search algorithm for determining the step size of the CG. Instead, we start with the step size $10^{-4}$, which is reduced two times if the value of the corresponding functional on the current iteration exceeds its value on the previous iteration otherwise it remains the same. The minimization algorithm is stopped when the step size is less then $10^{-10}$. We use zero as the starting point of the CG for both functions $W_{j,s,l}$ and $p_{j,s,l,n}$. 
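The step-size rule just described can be summarized by the following schematic loop. Here `J` and `grad_J` stand for a discretized weighted functional and its analytically computed gradient, which are assumed to be supplied by the caller; plain gradient descent is used instead of the conjugate gradient method for brevity, and an iterate which increases the functional is rejected and retried with the halved step, which is one reasonable reading of the rule stated above.

```python
import numpy as np

def minimize_with_step_halving(J, grad_J, x0,
                               step0=1e-4, step_min=1e-10, max_iter=100000):
    """Schematic descent loop: start with the step 1e-4, halve it whenever
    the functional would increase, and stop once the step drops below 1e-10.

    Complex unknowns can be handled by stacking real and imaginary parts
    into one real vector before calling this routine.
    """
    x = np.asarray(x0, dtype=float).copy()
    step = step0
    j_prev = J(x)
    for _ in range(max_iter):
        if step < step_min:
            break                                  # stopping criterion
        x_new = x - step * grad_J(x)
        j_new = J(x_new)
        if j_new > j_prev:
            step *= 0.5                            # functional increased: halve the step
        else:
            x, j_prev = x_new, j_new               # accept; the step size is kept
    return x
```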
Gradients of both functionals $\widetilde{I}_{\mu ,\alpha }^{h}(W_{j,s,l})$ and $J_{\lambda ,\rho }^{h}(p_{j,s,l,n})$ are calculated analytically on each step, and we do not provide details of this for brevity. Rather, we refer to formulae (7.7) and (7.8) of [@KlibanovKuzhuget10], where gradients of similar functionals are calculated analytically using the Kronecker delta function. Also, due to the difficulty with the numerical implementation of the $H^{2,h}(\Omega _{h})-$norm, we use the simpler $L_{2}$ norm in (\[6.50\]). As to (\[6.51\]), we have established numerically that the minimization of the functional $J_{\lambda ,\rho }^{h}\left( p_{h}\right) $ works better if the regularization term is absent. Hence, we set $\rho =0$ in (\[6.51\]). Reconstruction results {#sec:6.7} ---------------------- In this section we present the results of our reconstructions for the inclusions listed in Table \[tab1\] using the above algorithm. These results are obtained using the Carleman Weight Function (\[6.1\]) with $\mu =8$ in (\[6.50\]) and $\lambda =8$ in (\[6.51\]). We have found that these are optimal values of the parameters $\mu $ and $\lambda .$ Table [tab3]{} lists each inclusion with the maximal value $c_{exact}$ of the exact coefficient $c_{exact}=\max_{inclusion}c\left( \mathbf{x}\right) $, radius $r $, the maximal value of the computed coefficient $c_{comp}=\max_{inclusion}c(\mathbf{x})$, the relative computational error $$\varepsilon =\frac{|c_{comp}-c_{exact}|}{c_{exact}}\cdot 100\%,$$and location, i.e. the $z$ coordinate of the point where the value of $c_{comp}$ is achieved. Note that while we have added $15\%$ noise in our simulated data, the relative computational errors of reconstructed coefficients do not exceed 9% in all cases, which is 1.67 times less than the level of noise in the data. Moreover, the locations of points where the values of $c_{comp}$ are achieved, are reconstructed with a good accuracy as well. Indeed, we need our reconstructed inclusions to be somewhere between $-r$ and $r$, where either $r=0.3$ or $r=0.5$. Fig. \[fig:c\_3\] displays the exact and computed images for the inclusion number 1 in Table \[tab1\]. Images are obtained in Paraview. Until now we have considered only the case of a single inclusion. The case of two inclusions, which is listed as number 4 in Table \[tab1\], is very similar. The absolute value of simulated data with noise $f_{noise}(\mathbf{x},k)$, the propagated data $g_{0,noisy}(\mathbf{x},k)$, and the function $\phi _{0,noisy}(\mathbf{x},k)$ for two inclusions and the wavenumber $k=16.2$ are displayed on Fig. \[fig:2inc\_data\]. Looking at the original data of Fig. \[fig:f\_2inc\], we cannot clearly distinguish these two inclusions. However, Figures \[fig:g\_2inc\] and [fig:q\_2inc]{} show that these two inclusions can be clearly separated after the data propagation procedure. Furthermore, these figures also indicate that the left inclusion has a larger dielectric constant and a smaller size than the right one, which is true. The reconstruction results of Fig. [fig:c\_2inc]{} reflect this fact too. Here, the locations of both inclusions are computed accurately and the larger inclusion appears larger in the reconstructed image \[fig:c\_comp2inc\]. The values of $c_{comp}$ in both inclusions are also computed with a good accuracy, see Table \[tab3\]. This result is obtained using the same parameters as in the case with a single inclusion. Inclusion number Exact coef. $c_{exact}$ Radius $r$ Computed coef. 
$c_{comp}$, error Location ------------------ ------------------------- ------------ ---------------------------------- ---------- 1 3 0.3 3.17, 5.7% 0.01 2 3 0.5 2.88, 4.0% 0.01 3 5 0.3 5.15, 3.0% -0.09 4.1 7 0.3 6.36, 9.0% 0.01 4.2 3 0.5 2.99, 0.3% 0.01 : Reconstruction results[]{data-label="tab3"} [10]{} url \#1[`#1`]{}urlprefixhref \#1\#2[\#2]{} \#1[\#1]{} M. V. Klibanov, A. E. Kolesov, L. Nguyen, A. Sullivan, [Globally strictly convex cost functional for a 1-D inverse medium scattering problem with experimental data]{}, SIAM J. on Applied Mathematics 77 (5) (2017) 1733–1755. L. Beilina, M. V. Klibanov, [A globally convergent numerical method for a coefficient inverse problem]{}, SIAM Journal on Scientific Computing 31 (1) (2008) 478–509. L. Beilina, M. V. Klibanov, [Approximate Global Convergence and Adaptivity for Coefficient Inverse Problems]{}, Springer, 2012. M. V. Klibanov, D.-L. Nguyen, L. H. Nguyen, H. Liu, [A globally convergent numerical method for a 3D coefficient inverse problem with a single measurement of multi-frequency data]{}, accepted for publication in Inverse Problems and Imaging, also available in arXiv: 1612.0414. A. E. Kolesov, M. V. Klibanov, L. H. Nguyen, D.-L. Nguyen, N. T. Thanh, [Single measurement experimental data for an inverse medium problem inverted by a multi-frequency globally convergent numerical method]{}, Applied Numerical Mathematics 120 (2017) 176–196. D.-L. Nguyen, M. V. Klibanov, L. H. Nguyen, M. A. Fiddy, [Imaging of buried objects from multi-frequency experimental data using a globally convergent inversion method]{}, J. Inverse and Ill-Posed Problems, accepted for publication (2017), available online, DOI: 10.1515/jiip-2017- 0047. D.-L. Nguyen, M. V. Klibanov, L. H. Nguyen, A. E. Kolesov, M. A. Fiddy, H. Liu, [Numerical solution of a coefficient inverse problem with multi-frequency experimental raw data by a globally convergent algorithm]{}, Journal of Computational Physics 345 (2017) 17–32. L. Beilina, M. V. Klibanov, [Globally strongly convex cost functional for a coefficient inverse problem]{}, Nonlinear Analysis: Real World Applications 22 (2015) 272–288. M. V. Klibanov, O. V. Ioussoupova, [Uniform strict convexity of a cost functional for three-dimensional inverse scattering problem]{}, SIAM Journal on Mathematical Analysis 26 (1) (1995) 147–179. M. V. Klibanov, [Global convexity in a three-dimensional inverse acoustic Problem]{}, SIAM Journal on Mathematical Analysis 28 (6) (1997) 1371–1388. M. V. Klibanov, [Global convexity in diffusion tomography]{}, Nonlinear World 4 (1997) 247–265. M. V. Klibanov, A. Timonov, [Carleman Estimates for Coefficient Inverse Problems and Numerical Applications]{}, de Gruyter, Utrecht, 2004. M. V. Klibanov, V. G. Kamburg, [Globally strictly convex cost functional for an inverse parabolic problem]{}, Mathematical Methods in the Applied Sciences 39 (4) (2016) 930–940. M. V. Klibanov, L. H. Nguyen, A. Sullivan, L. Nguyen, [A globally convergent numerical method for a 1-d inverse medium problem with experimental data]{}, Inverse Problems and Imaging 10 (4) (2016) 1057–1085. M. V. Klibanov, N. T. Th[à]{}nh, [Recovering dielectric constants of explosives via a globally strictly convex cost functional]{}, SIAM Journal on Applied Mathematics 75 (2) (2015) 518–537. G. Chavent, [Nonlinear Least Squares for Inverse Problems - Theoretical Foundations and Step-by-Step Guide for Applications]{}, Springer, 2009. A. Goncharsky, S. 
Romanov, [Supercomputer technologies in inverse problems of ultrasound tomography]{}, Inverse Problems 29 (2013) 075004. A. V. Goncharsky, S. Y. Romanov, [Iterative methods for solving coefficient inverse problems of wave tomography in models with attenuation]{}, Inverse Problems 33 (2) (2017) 025003. J. A. Scales, M. L. Smith, T. L. Fischer, [Global optimization methods for multimodal inverse problems]{}, Journal of Computational Physics 103 (2) (1992) 258–268. A. Lakhal, [KAIRUAIN-algorithm applied on electromagnetic imaging]{}, Inverse Problems 29 (2010) 095001. A. Lakhal, [A direct method for nonlinear ill-posed problems]{}, Inverse Problems, accepted for publication, available online at http://iopscience.iop.org/article/10.1088/1361-6420/aa91e0/pdf. M. V. Klibanov, N. A. Koshev, J. Li, A. G. Yagola, [Numerical solution of an ill-posed Cauchy problem for a quasilinear parabolic equation using a Carleman weight function]{}, Journal of Inverse and Ill-posed Problems 24 (2016) 761–776. A. B. Bakushinskii, M. V. Klibanov, N. A. Koshev, [Carleman weight functions for a globally convergent numerical method for ill-posed Cauchy problems for some quasilinear PDEs]{}, Nonlinear Analysis: Real World Applications 34 (2017) 201–224. M. V. Klibanov, [Carleman weight functions for solving ill-posed Cauchy problems for quasilinear PDEs]{}, Inverse Problems 31 (12) (2015) 125007. A. Bukhgeim, M. Klibanov, [Uniqueness in the large of a class of multidimensional inverse problems]{}, Soviet Math. Doklady 17 (1981) 244–247. M. V. Klibanov, [Carleman estimates for global uniqueness, stability and numerical methods for coefficient inverse problems]{}, Journal of Inverse and Ill-Posed Problems 21 (4) (2013) 477–560. L. Baudouin, M. d. Buhan, S. Ervedoza, [Convergent algorithm based on Carleman estimates for the recovert of a potential in the wave equation]{}, SIAM J. on Numerical Analysis 55 (2017) 1578–1613. H. Ammari, J. Garnier, W. Jing, H. Kang, M. Lim, K. Solna, H. Wang, [Mathematical and statistical methods for multistatic imaging]{}, Lecture Notes in Mathematics 2098 (2013) 125–157. H. Ammari, Y. Chow, J. Zou, [The concept of heterogeneous scattering and its applications in inverse medium scattering]{}, SIAM J. Mathematical Analysis 46 (2014) 2905–2935. H. Ammari, Y. Chow, J. Zou, [Phased and phaseless domain reconstruction in inverse scattering problem via scattering coefficients]{}, SIAM J. Applied Mathematics 76 (2016) 1000–1030. G. Bao, P. Li, J. Lin, F. Triki, [Inverse scattering problems with multi-frequencies]{}, Inverse Problems 31 (2015) 093001. M. de Buhan, M. Kray, [A new approach to solve the inverse scattering problem for waves: combining the TRAC and the Adaptive Inversion methods]{}, Inverse Problems 29 (2013) 085009. Y. T. Chow, J. Zou, [A numerical method for reconstructing the coefficient in a wave equation]{}, Numerical Methods in Partial Differential Equations 31 (2015) 289–307. Y. T. Chow, K. Ito, K. Liu, J. Zou, [Direct sampling method in diffuse optical tomography]{}, SIAM J. Scientific Computing 37 (2015) A1658–A1684. K. Ito, B. Jin, J. Zou, [A direct sampling method for inverse electromagnetic medium scattering]{}, Inverse Problems 29 (9) (2013) 095018. B. Jin, Z. Zhou, [A finite element method with singularity reconstruction for fractional boundary value problems]{}, ESAIM: Mathematical Modelling and Numerical Analysis 49 (2015) 1261–1283. S. Kabanikhin, A. Satybaev, M. Shishlenin, [Direct Methods of Solving Multidimensional Inverse Hyperbolic Problem]{}, VSP, 2004. S. 
Kabanikhin, K. Sabelfeld, N. Novikov, M. Shishlenin, [Numerical solution of the multidimensional Gelfand-Levitan equation]{}, J. Inverse and Ill-Posed Problems 23 (2015) 439–450. S. Kabanikhin, N. Novikov, I. Osedelets, M. Shishlenin, [Fast Toeplitz linear system inversion for solving two-dimensional acoustic inverse problem]{}, J. Inverse and Ill-Posed Problems 23 (2015) 687–700. A. Lakhal, [A decoupling-based imaging method for inverse medium scattering for Maxwell’s equations]{}, Inverse Problems 26 (2010) 015007. J. Li, H. Liu, Q. Wang, [Enhanced multilevel linear sampling methods for inverse scattering problems]{}, J. Comput. Phys. 257 (2014) 554–571. J. Li, P. Li, H. Liu, X. Liu, [Recovering multiscale buried anomalies in a two-layered medium]{}, Inverse Problems 31 (2015) 105006. H. Liu, Y. Wang, C. Yang, [Mathematical design of a novel gesture-based instruction/input device using wave detection]{}, SIAM J. Imaging Sci. 9 (2016) 822–841. M. V. Klibanov, D.-L. Nguyen, L. H. Nguyen, [A coefficient inverse problem with a single measurement of phaseless scattering data]{}, arXiv:1710.04804. M. V. Klibanov, V. Romanov, [Two reconstruction procedures for a 3-D phaseless inverse scattering problem for the generalized Helmholtz equation]{}, Inverse Problems 32 (2016) 0150058. V. Romanov, [ Inverse Problems of Mathematical Physics]{}, VNU Science Press, 1987. D. Gilbarg, N. Trudinger, [ Elliptic Partial Differential Equations of Second Order]{}, Springer, 1984. V. Romanov, [Inverse problems for differential equations with memory]{}, Eurasian J. of Mathematical and Computer Applications 2 (4) (2014) 51–80. M. V. Klibanov, [Carleman estimates for the regularization of ill-posed Cauchy problems]{}, Applied Numerical Mathematics 94 (2015) 46–74. A. Tikhonov, A. Goncharsky, V. Stepanov, A. Yagola, [Numerical Methods for the Solution of Ill-Posed Problems]{}, Kluwer, London, 1995. N. T. Th[à]{}nh, L. Beilina, M. V. Klibanov, M. A. Fiddy, [Imaging of buried objects from experimental backscattering time-dependent measurements using a globally convergent inverse algorithm]{}, SIAM Journal on Imaging Sciences 8 (1) (2015) 757–786. G. Vainikko, [Fast solvers of the Lippmann-Schwinger equation]{}, in: D. Newark (Ed.), Direct and Inverse Problems of Mathematical Physics, Int. Soc. Anal. Appl. Comput. 5, Kluwer, Dordrecht, 2000, p. 423. A. Lechleiter, D.-L. Nguyen, [A trigonometric Galerkin method for volume integral equations arising in TM grating scattering]{}, Advanced Computational Mathematics 40 (2014) 1–25. <https://en.wikipedia.org/wiki/M14_mine>. L. Novotny, B. Hecht, [Principles of Nano-Optics]{}, 2nd Edition, Cambridge University Press, Cambridge, 2012. E. Burman, J. Ish-Horowicz, L. Oksanen, [Fully discrete finite element data assimilation method for the heat equation]{}, arXiv:1707.06908. M. Klibanov, F. Santosa, [A computational quasi-reversibility method for Cauchy problems for Laplace’s equation]{}, SIAM J. Applied Mathematics 51 (1991) 1653–1675. A. V. Kuzhuget, M. Klibanov, [Global convergence for a 1-D inverse problem with application to imaging of land mines]{}, Applicable Analysis 89 (2010) 125–157. 
[^1]: The corresponding author [^2]: Department of Mathematics & Statistics, University of North Carolina at Charlotte, Charlotte, NC 28223, USA ([email protected], [email protected]) [^3]: Institute of Mathematics and Information Science, North-Eastern Federal University, Yakutsk, Russia ([email protected]) [^4]: Supported by US Army Research Laboratory and US Army Research Office grant W911NF-15-1-0233 and by the Office of Naval Research grant N00014-15-1-2330. In addition, the work of Kolesov A.E. was partially supported by Mega-grant of the Russian Federation Government (N14.Y26.31.0013) and RFBR (project N17-01-00689A)
{ "pile_set_name": "ArXiv" }
--- abstract: 'Let $D$ be an integral domain with quotient field $K$. For any set $\XX$, the ring $\Int(D^\XX)$ of [*integer-valued polynomials on $D^\XX$*]{} is the set of all polynomials $f \in K[\XX]$ such that $f(D^\XX) \subseteq D$. Using the $t$-closure operation on fractional ideals, we find for any set $\XX$ a $D$-algebra presentation of $\Int(D^\XX)$ by generators and relations for a large class of domains $D$, including any unique factorization domain $D$, and more generally any Krull domain $D$ such that $\Int(D)$ has a [*regular basis*]{}, that is, a $D$-module basis consisting of exactly one polynomial of each degree. As a corollary we find for all such domains $D$ an intrinsic characterization of the $D$-algebras that are isomorphic to a quotient of $\Int(D^\XX)$ for some set $\XX$. We also generalize the well-known result that, for a Krull domain $D$, the ring $\Int(D)$ has a regular basis if and only if the Pólya-Ostrowski group of $D$ (that is, the subgroup of the class group of $D$ generated by the images of the factorial ideals of $D$) is trivial, if and only if the product of the height one prime ideals of finite norm $q$ is principal for every $q$.' address: | Department of Mathematics\ California State University, Channel Islands\ Camarillo, California 93012 author: - Jesse Elliott title: 'Presentations and module bases of integer-valued polynomial rings' --- Introduction ============ Let $D$ be an integral domain with quotient field $K$. The ring of [*integer-valued polynomials on $D$*]{} is the subring $$\Int(D) = \{f \in K[X] : f(D) \subseteq D\}$$ of the polynomial ring $K[X]$. More generally, if $\XX$ is a set, then the ring of [*integer-valued polynomials on $D^\XX$*]{} is the subring $$\Int(D^\XX) = \{f \in K[\XX] : f(D^\XX) \subseteq D\}$$ of $K[\XX]$ [@cah]. The study of integer-valued polynomial rings—on number rings—began with Pólya and Ostrowski circa 1919 [@cah p. xiv]. They showed that, for any number ring $D$, the $D$-module $\Int(D)$ has a [*regular basis*]{}, that is, a $D$-module basis consisting of exactly one polynomial of each degree, if and only if the product $\Pi_q$ of the prime ideals of $D$ of norm $q$ is a principal ideal for every $q$. In fact this equivalence holds for any Dedekind domain $D$. More generally, if $D$ is a Krull domain, then $\Int(D)$ has a regular basis if and only if the product $\Pi_q$ of the height one prime ideals of norm $q$ is principal for every $q$ [@cha Corollary 2.5]. In particular, if $D$ is a unique factorization domain, then $\Int(D)$ has a regular basis. Moreover, for any Krull domain $D$, there is a subgroup $\PO(D)$ of the class group $\Cl(D)$ of $D$, generated by the images of the so-called [*factorial ideals*]{} $n!_D$ of $D$, that in some sense measures the extent to which $\Int(D)$ fails to have a regular basis; specifically, $\Int(D)$ has a regular basis if and only if the group $\PO(D)$ is trivial [@cha Corollary 2.5]. One of our main results, Theorem \[equivthm\] (in Section \[sec:4\]), generalizes these results on Krull domains to a much larger class of integral domains, including the domains of Krull type (equivalently the Prüfer $v$-multiplication domains (PVMDs) of finite $t$-character), hence the TV PVMDs. (The latter classes of domains are defined in Sections \[sec:3\] and \[sec:4\].)
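As a concrete anchor for these notions, recall the classical example $D = \ZZ$, stated here only as an illustration of the definitions above: the binomial coefficient polynomials form a regular basis of $\Int(\ZZ)$, and every $\Pi_p$ is principal, namely $$\Int(\ZZ) = \bigoplus_{n \geq 0} \ZZ {X \choose n}, \qquad {X \choose n} = \frac{X(X-1)\cdots (X-n+1)}{n!}, \qquad \Pi_p = p\ZZ \ \mbox{ for every prime } p.$$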
Since Pólya’s and Ostrowski’s seminal work, much attention has been given to finding $D$-module bases of integer-valued polynomial rings. For any Dedekind domain $D$ for which $\Int(D)$ has a regular basis, [@cah Proposition II.3.14] provides an algorithm to construct any finite number of elements of such a basis. (Theorem \[regbasisalg\] generalizes this algorithm.) Moreover, [@cha Corollary 3.11] provides a characterization of all cyclic number fields $K$ such that $\Int(\mathcal{O}_K)$ has a regular basis, and [@cah Corollary II.4.5] and [@arm Propositions 3.4, 3.6, and 3.19] explicitly construct all such $K$ of degree $2$, $3$, $4$, and $6$ over $\QQ$. The number field $K = \QQ(\sqrt{-5})$ is an example where $\PO(\mathcal{O}_K)$ has order $2$; so is $K = \QQ(\sqrt{-29})$, where one also has $\PO(\mathcal{O}_K) \subsetneq \Cl(\mathcal{O}_K)$ [@cah Exercise II.31]. Unfortunately we do not know a characterization of the number fields $K$ such that $\PO(\mathcal{O}_K)$ has order $2$. Although $\Int(D)$ does not have a regular basis for many (and probably most) number rings $D$, the $D$-module $\Int(D)$ is free for any Dedekind domain $D$ [@cah Remark II.3.7(iii)]. Surprisingly, however, there are no confirmed examples in the literature (of which we are aware) of an integral domain $D$ such that $\Int(D)$ is not free as a $D$-module. In an earlier paper [@ell2], we showed that $\Int(D)$ is locally free if $D$ is a Krull domain, or more generally if $D$ is a TV PVMD [@ell2 Theorem 1.2]. We also conjectured that $\Int(D)$ is not flat as a $D$-module for $D = \FF_2[[T^2, T^3]]$ and for $D = \FF_2 +T\FF_4[[T]]$. This conjecture is still open. In this paper we also consider a related problem, that of finding a $D$-algebra presentation of $\Int(D)$ by generators and relations. This problem is motivated by results in the existing literature on integer-valued polynomial rings as follows. First, a presentation for $\Int(\ZZ^\XX)$ for any set $\XX$ is given in [@jess2], where the presentation is used to provide several equivalent conditions for a ring to be [*binomial*]{} in the sense of [@wil]. In [@des] it is shown that, for any finite extension $K$ of the field $\QQ_p$ of $p$-adic rational numbers, one can construct from any Lubin-Tate formal group law $F \in \mathcal{O}_K[[X,Y]]$ a minimal set $\{f_{i}: i \geq 0\}$ of generators of $\Int(\mathcal{O}_K)$ as an $\mathcal{O}_K$-algebra. (For example, if $K = \QQ_p$ and $F = X+Y+XY$, then $f_i = {X \choose p^i}$ for all $n$.) However, we do not know a complete set of relations for the generators $f_i$. A more well-known result, [@cah Proposition II.3.14], implies that, for any Dedekind domain $D$ such that $\Int(D)$ has a regular basis, $\Int(D)$ is generated as a $D$-algebra by the [*$q^n$th $q$-Fermat polynomial*]{} $F_q^{\circ n} = F_q \circ F_q \circ \cdots \circ F_q$ for all positive integers $n$ and all prime powers $q$, where $F_q = \frac{X^q-X}{\pi_q}$ and $\pi_q$ is any generator of $\Pi_q$. The question of the relations among these generators has not been raised. To this end we show in Theorem \[presentation3\] that the obvious relations $(F_q^{\circ n})^q - F_q^{\circ n} = \pi_q F_q^{\circ (n+1)}$ are a complete set of relations for the generators $F_q^{\circ n}$ of $\Int(D)$. 
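For instance, in the simplest local case $D = \ZZ_{(p)}$, where $q = p$ and one may take $\pi_p = p$, these relations are immediate from the definition of composition: $$F_p = \frac{X^p - X}{p}, \qquad (F_p^{\circ n})^p - F_p^{\circ n} = p \cdot \frac{(F_p^{\circ n})^p - F_p^{\circ n}}{p} = p\, F_p\!\left(F_p^{\circ n}\right) = p\, F_p^{\circ (n+1)}.$$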
More generally, Theorem \[presentation3\], for a large class $\mathcal{C}$ of domains $D$, including the Krull domains $D$ for which $\Int(D)$ has a regular basis, provides for any set $\XX$ a complete set of generators and relations for the $D$-algebra $\Int(D^\XX)$, using an apporpriate generalization of the $q$-Fermat polynomials. A nontrivial application of this algebra presentation of integer-valued polynomial rings is as follows. It is known that a ring $A$ is isomorphic to a quotient of $\Int(\ZZ^\XX)$ for some set $\XX$ if and only if the endomorphism $a \longmapsto a^p$ of $A/pA$ is the identity for every prime number $p$ [@jess2 Theorem 4.1]. Theorem \[polybthm\] generalizes this by showing that, for any integral domain $D$ and any (commutative) $D$-algebra $A$, if $D$ is in the class $\mathcal{C}$ mentioned above or if $A$ is $D$-torsion-free, then the following conditions are equivalent. 1. $A$ is isomorphic to a quotient of $\Int(D^\XX)$ for some set $\XX$. 2. For every $a \in A$ there is a $D$-algebra homomorphism $\Int(D) \longrightarrow A$ sending $X \in \Int(D)$ to $a$. 3. The endomorphism $a \longmapsto a^{N(\ppp)}$ of $A/\ppp A$ is the identity for every $t$-maximal prime ideal $\ppp$ of $D$ of finite norm $N(\ppp) = |D/\ppp|$. (The $t$-maximal ideals of an integral domain are defined in Section \[sec:3\].) Our proof of the equivalence of conditions (2) and (3) above for domains in the class $\mathcal{C}$ uses the presentation of $\Int(D)$ mentioned above in an essential way; for this reason we suspect that the equivalence does not hold for all Dedekind domains $D$ and all $D$-algebras $A$, although we do not know a counterexample. One of the main tools we use in this paper is that of a star operation, or $'$-operation, introduced by Krull in [@kru Section 6.43], on fractional ideals. Specifically, we use the well-known star operations of divisorial closure, $t$-closure, and $w$-closure. These are immensely useful for generalizing known results on Dedekind domains and Noetherian domains to larger classes of domains. All of the definitions and facts we need are summarized in Section \[sec:3\]; proofs can be found in [@gil], which is a classic reference on multiplicative ideal theory. The main results in this paper—those results labeled “Theorem”—are Theorems \[factprop\], \[trivcor\], \[regbasisalg\], \[equivthm\], \[presentation3\], \[polyathm\], \[polybthm\], and \[numthm\]. In Section \[sec:2\] we provide a $D$-algebra presentation for $\Int(D)$ when $D$ is a finite dimensional local domain with principal maximal ideal. In Sections \[sec:3\] and \[sec:4\] we generalize the results in [@cah Sections II.1 and II.3], which focus on Dedekind domains, to much larger classes of domains. There we define and prove new results on the characteristic ideals and factorial ideals of a domain, as well as its Pólya-Ostrowski group, which we define when the factorial ideals are $t$-invertible. In Section \[sec:5\] we find a $D$-algebra presentation of $D$ when $D$ is in a large class $\mathcal{C}$ of integral domains, including all Krull domains such that $\Int(D)$ has a regular basis. Finally, in Sections \[sec:6\] through \[sec:8\] we apply our previous results to the study of $D$-algebras that are isomorphic to $\Int(D^\XX)$ for some set $\XX$, and also to $D$-algebras $A$ such that for every $a \in A$ there is a $D$-algebra homomorphism $\Int(D) \longrightarrow A$ sending $X \in \Int(D)$ to $a$. All rings and algebras in this paper are commutative with identity. 
For any ring $R$, and for any $f \in R[X]$ and any nonnegative integer $n$, we let $f^{\circ n}$ denote the $n$-fold composition of $f$ with itself, where $f^{\circ 0} = X$. The local case {#sec:2} ============== In this section we find a $D$-algebra presentation for $\Int(D)$ when $D$ is a finite dimensional local domain with principal maximal ideal. \[nonzerodiv\] If $B \supseteq A$ is an integral extension of rings, and if $a \in A$ is a nonzerodivisor, then $a$ is a nonzerodivisor in $B$. This is clear. \[presentation\] Let $D$ be a local domain with principal maximal ideal $\pi D$ and finite residue field of order $q$. Let $F_q = \frac{X^q - X}{\pi} \in \Int(D)$. The unique $D$-algebra homomorphism $$\varphi: {\begin{array}{rrr} D[X_0, X_1, X_2, \ldots] & \longrightarrow & \Int(D) \\ X_{k} & \longmapsto & F_{q}^{\circ k} \end{array}}$$ is surjective, and if $D$ has finite Krull dimension then $\ker \varphi$ is equal to the ideal $I$ generated by $X_{k}^q - X_{k} - \pi X_{k+1}$ for all $k \in \ZZ_{\geq 0}$. The homomorphism $\varphi$ is surjective by [@cah Remark II.2.14] and the proof of [@cah Proposition II.3.14]. Suppose that $D$ has finite Krull dimension. For any positive integer $n$, let $$A_n = D[X_{0}, X_{1}, X_{2}, \ldots, X_{n}]$$ and $$B_n = D[X, F_q, F_q^{\circ 2}, \ldots, F_q^{\circ n}].$$ One has $\varphi(A_n) = B_n$, so $\varphi$ restricts to a surjective homomorphism $\varphi_n: A_n \longrightarrow B_n$. Moreover, one has $\ker \varphi = \bigcup_n \ker \varphi_n$. It therefore suffices to show that $\ker \varphi_n = J_n$, where $J_n$ is the ideal in $A$ generated by $X_{k}^q - X_{k} - \pi X_{k+1}$ for $0 \leq k \leq n-1$. Clearly $J_n$ is contained in $\ker \varphi_n$, so we have a surjective ring homomoprhism $$\psi_n: A_n^\prime \longrightarrow B_n,$$ where $A_n^\prime = A_n/J_n$. We must show that $\ker \psi_n = 0$. Now, $A_n^\prime$ is integral over $D[X_n]$, and likewise $B_n$ is integral over $D[F_q^{\circ n}] \cong D[X]$. Therefore by [@ati Exercise 11.6] both rings have Krull dimension $\dim D[X] \leq 1+ 2 \dim D < \infty$. Thus, since $B_n$ is an integral domain, the kernel of $\psi_n$ must be a mimimal prime ideal in $A_n^\prime$. But by Lemma \[nonzerodiv\], $\pi$ is a nonzerodivisor in $A_n^\prime$, so the map $$A_n^\prime \longrightarrow A_n^\prime[\pi^{-1}] = D[\pi^{-1}][X_0]$$ is an inclusion of rings. Therefore $A_n^\prime$ is a domain, so the kernel of $\psi_n$, being a minimal prime in $A^\prime_n$, is zero. We do not know if the hypothesis in Proposition \[presentation\] of the finite dimensionality of $D$ is necessary. If $D$ is the ring of integers $\mathcal{O}_K$ for some finite extension of the field $\QQ_p$ of $p$-adic rational numbers, then for any Lubin-Tate formal group law $F \in D[[X,Y]]$ over $D$ and for any $a \in D$, there is a unique formal power series $$[a]_F(T) = \sum_{n = 1} c_n(a) T^n \in D[[T]]$$ such that $ [a]_F(F(X,Y)) = F([a]_F(X),[a]_F(Y))$ in $D[[X,Y]]$ and $c_1(a) = a$; moreover, for each $n$ one has $c_n(a) = f_n(a)$ for all $a \in D$ for a unique $f_n \in \Int(D)$, and $\deg f_n \leq n$ [@des]. By [@des Theorem 3.1] one has $$\Int(D) = D[f_1, f_2, f_3, \ldots],$$ and in fact $\{f_{q^i}: i \geq 0\}$ is a minimal set of generators of $\Int(D)$ as a $D$-algebra, where $q$ is the cardinality of the residue field of $D$. For example, if $K = \QQ_p$ and $F = X+Y+XY$, then $f_n = {X \choose n}$ for all $n$, and in that case a complete set of relations among the $f_n$ is known. 
However, for general $K$ and $F$ we do not know a complete set of relations for the $f_n$. Such a complete set of relations would provide an alternative $D$-algebra presentation for the ring $\Int(D)$ in this case. Regular bases and characteristic and factorial ideals {#sec:3} ===================================================== Let $D$ be an integral domain with quotient field $K$. A [*regular basis*]{} of $\Int(D)$ is a $D$-module basis of $\Int(D)$ consisting of exactly one polynomial of each degree. By [@cha Corollary 2.5], if $D$ is a Krull domain, then $\Int(D)$ has a regular basis if and only if the product $\Pi_q$ of all height one prime ideals $\ppp$ of $D$ with $|D/\ppp| = q$ is a principal ideal for every prime power $q$. In particular, $\Int(D)$ has a regular basis if $D$ is a unique factorization domain, since in that case every height one prime ideal of $D$ is principal. In this section and the next we find a more general characterization for a much larger class of domains, including, for example, all TV PVMDs, using the $t$-closure operation on fractional ideals, described below. A [*fractional ideal*]{} of $D$ is a $D$-submodule $I$ of $K$ such that $I^{-1} = (D :_K I)$ is nonzero, or equivalently such that $aI \subseteq D$ for some nonzero $a \in D$. A [*star operation*]{} on $D$ is a closure operation $*$ on the partially ordered set $\mathcal{F}(D)$ of nonzero fractional ideals of $D$ such that $D^* = D$ and $I^* J^* \subseteq (IJ)^*$ for all $I,J \in \mathcal{F}(D)$. Equivalently, a star operation on $D$ is a self-map $*$ of $\mathcal{F}(D)$ satisfying the following conditions for all $I,J \in \mathcal{F}(D)$ and all nonzero $a \in D$. 1. $I \subseteq I^* = (I^*)^*$, and $I \subseteq J \Longleftrightarrow I^* \subseteq J^*$. 2. $D^* = D$. 3. $(aI)^* = aI^*$. For any star operation $*$ one also has the following. 1. $(I^* J^*)^* = (IJ)^*$. 2. $(I^*+J^*)^* = (I+J)^*$. 3. $(I^* \cap J^*)^* = I^* \cap J^*$. We will make use of the following important star operations. First, the [*$d$-closure*]{} star operation $d$ is the identity operation. The [*divisorial closure*]{} star operation $v$, also known as the [*$v$-operation*]{}, acts by $v: I \longrightarrow I_v = (I^{-1})^{-1}$. The [*$t$-closure*]{} star operation $t$ acts by $$t: I \longrightarrow I_t = \bigcup \{J_v: J \subseteq I \mbox{ and } J \mbox{ is finitely generated}\} .$$ Finally, the [*$w$-closure*]{} star operation $w$ acts by $$w: I \longrightarrow I_w = \bigcup \{(I :_K J): J \subseteq D \textup{ and } J_t = D\}.$$ One has $d \leq w \leq t \leq v$, where one writes $*_1 \leq *_2$ if $I^{*_1} \subseteq I^{*_2}$ for all $I \in \mathcal{F}(D)$. Under this partial ordering of star operations, $d$ is the smallest star operation on $D$; $v$ is the largest star operation on $D$; $t$ is the largest finite type star operation on $D$, where $*$ is of [*finite type*]{} if $I^* = \bigcup \{J^*: J \subseteq I \mbox{ and } J \mbox{ is finitely generated}\}$ for all $I$; and $w$ is the largest stable finite type star operation on $D$, where $*$ is [*stable*]{} if $(I \cap J)^* = I^* \cap J^*$ for all $I,J$. If $v = d$ on $\mathcal{F}(D)$, or equivalently, if $d$ is the only star operation on $D$, then $D$ is said to be [*divisorial*]{}. For example, a Dedekind domain is equivalently a divisorial Krull domain. If $t = v$ on $\mathcal{F}(D)$, or equivalently, if $v$ is of finite type, then $D$ is said to be a [*TV domain*]{}. Any Noetherian or Krull domain is a TV domain. Let $*$ be a star operation on $D$. 
A fractional ideal $I$ of $D$ is said to be a [*$*$-ideal*]{} if $I^* = I$, and an [*integral $*$-ideal*]{} if $I$ is a $*$-ideal contained in $D$. For example, every invertible fractional ideal of $D$ is a $*$-ideal. An ideal of $D$ that is maximal among the proper integral $*$-ideals of $D$ is said to be [*$*$-maximal*]{}. Every $*$-maximal ideal of $D$ is prime. Moreover, if $*$ is of finite type, then every proper integral $*$-ideal of $D$ is contained in some $*$-maximal ideal of $D$. We let $*\Max(D)$ denote the set of all $*$-maximal ideals of $D$, which is nonempty if $*$ is of finite type. The operation $(I,J) \longmapsto (IJ)^*$ on $\mathcal{F}(D)$ is called [*$*$-multiplication*]{}. The set of all $*$-ideals of $D$ is a partially ordered monoid under $*$-multiplication. Its group of units is the group of $*$-invertible $*$-ideals, where a fractional ideal $I$ is [*$*$-invertible*]{} if $(IJ)^* = D$ for some fractional ideal $J$, in which case $(II^{-1})^* = D$ and $I^{-1} = J^*$ is the inverse of $I$ under $*$-multiplication. The [*$*$-class group*]{} $\Cl^*(D)$ of $D$ is the group of $*$-invertible $*$-ideals of $D$ under $*$-multiplication modulo the subgroup of principal fractional ideals of $D$. Since any invertible ideal of $D$ is a $*$-invertible $*$-ideal, one has $\operatorname{Pic}(D) \subseteq \Cl^*(D)$, where $\operatorname{Pic}(D) = \Cl_d(D)$ is the Picard group of $D$. Here we will be particularly interested in the $t$-class group (or $w$-class group) $\Cl_t(D) = \Cl_w(D)$, which in general carries more information than the classical object $\operatorname{Pic}(D)$. Note that a $*$-invertible $*$-ideal is a $v$-invertible $v$-ideal and $\Cl^*(D)$ is a subgroup of $\Cl_v(D)$; and if $*$ is of finite type, then a $*$-invertible $*$-ideal is a $t$-invertible $t$-ideal and $\Cl^*(D)$ is a subgroup of $\Cl_t(D)$. A domain $D$ is a Krull domain if and only if every nonzero fractional ideal of $D$ is $t$-invertible. A domain $D$ is a unique factorization domain if and only if the $t$-closure of every nonzero fractional ideal of $D$ is principal, if and only if $D$ is a Krull domain with trivial $t$-class group. For any Krull domain $D$, one has $v = t = w$ on $\mathcal{F}(D)$; a fractional ideal of $D$ is a $t$-ideal if and only if it is divisorial; an ideal of $D$ is $t$-maximal if and only if it is a prime ideal of height one; and the $t$-class group $\Cl_t(D)$ is equal to the usual class group $\Cl(D)$ of $D$. For any fractional ideal $I$ of a domain $D$ one has $t\Max(D) = w\Max(D)$ and $I_w = \bigcap_{\ppp \in t\Max(D)} I D_\ppp$, and in particular $D = \bigcap_{\ppp \in t\Max(D)} D_\ppp$. In fact one has $A = \bigcap_{\ppp \in t\Max(D)} A_\ppp$ for any flat extension $A$ of $D$. Any weak Bourbaki associated prime of the $D$-module $K/D$, and more generally any Nortcott attached prime of $K/D$, is a prime $t$-ideal; and a prime ideal $\ppp$ of $D$ is a Northcott attached prime of $K/D$ if and only if $\ppp D_\ppp$ is a $t$-maximal ideal of $D_\ppp$ [@ell2 Lemma 2.2]. Moreover, if $D$ is a TV domain (or a Noetherian or Krull domain), then every $t$-maximal ideal of $D$ is a weak Bourbaki associated prime of $K/D$. A nonzero fractional ideal $I$ of $D$ is $t$-invertible if and only if $I$ is $w$-invertible, if and only if $I_t = J_t$ for some finitely generated fractional ideal $J$ of $D$ and $I_t D_\ppp$ is principal for every $t$-maximal ideal $\ppp$ of $D$. 
If $D$ is a [*Mori domain*]{}, that is, if $D$ satisfies the ascending chain condition on integral $v$-ideals, then a nonzero fractional ideal $I$ of $D$ is $t$-invertible if and only if $I_v D_\ppp$ is principal for every weak Bourbaki associated prime $\ppp$ of $D$. Any Noetherian or Krull domain is Mori, and any Mori domain is a TV domain. A domain $D$ is said to be a [*PVMD*]{} if every finitely generated ideal of $D$ is $t$-invertible, which holds if and only if $D$ is integrally closed and $t = w$ on $\mathcal{F}(D)$, if and only if $D_\ppp$ is a valuation domain for every $t$-maximal ideal $\ppp$ of $D$. A Krull domain is equivalently a Mori PVMD. The reader is referred to [@gil] for proofs of the facts listed above. Let $D$ be an integral domain with quotient field $K$ and $n$ be a nonnegative integer. The [*norm*]{} $N(\aaa)$ of an ideal $\aaa$ of $D$ is defined to be the cardinality $|D/\aaa|$ of the quotient ring $D/\aaa$. Let $\Pi_n(D)$ denote the ideal $$\Pi_n(D) = \bigcap_{{\ppp \in t\Max(D)} \atop {N(\ppp) = n}} \ppp$$ of $D$; that is, let $\Pi_n(D)$ denote the intersection of all $t$-maximal ideals $\ppp$ of $D$ of norm $n$ (which is equal to $D$ if $n$ is not a power of a prime). Following [@cah Chapter II] we let $\Int_n(D)$ denote the $D$-submodule $$\Int_n(D) = \{f \in \Int(D): \deg f \leq n\}$$ of $\Int(D)$, and we let $\II_n(D)$ denote the $D$-submodule $$\II_n(D) = \left\{f_n: f = \sum_{i = 0}^n f_i X^i \in \Int_n(D)\right\}$$ of $K$ consisting of the coefficient of $X^n$ for each polynomial $f \in \Int(D)$ of degree at most $n$. By [@cah Proposition I.3.1] $\II_n(D)$ is a fractional ideal of $D$; it is called the [*$n$th characteristic (fractional) ideal of $D$*]{}. Following [@cha2 Definition 1.2] we let $$n!_D = \II_n(D)^{-1},$$ which is an integral $v$-ideal (hence $t$-ideal) of $D$ called the [*$n$th factorial ideal of $D$*]{}. We will write $\Pi_n = \Pi_n(D)$ and $\II_n = \II_n(D)$ when the domain $D$ is understood. By [@ell3 Proposition 7.3], the ideal $(D[X]:_D \Int_n(D))$ of $D$ is a (nonzero) $v$-ideal, and therefore $\Int_n(D)$ lies between two free $D$-modules of rank $n+1$. Note that $n!_D$ contains $(D[X]:_D \Int_n(D))$; moreover, equality holds, and also $\II_n = n!_D^{-1}$, if $D$ is a Krull domain or $\II_n$ is invertible for all $n$, by [@cha Remark 2.6] and [@cah Proposition II.1.7]. We will show, more generally, that $n!_D = (D[X]:_D \Int_n(D))$ and $\II_n = n!_D^{-1}$ if $\II_n$ is $t$-invertible for all $n$, which holds, for example, if $D$ is a TV PVMD. (See Theorem \[trivcor\] and Corollary \[pvmdcor\].) The following well-known result explains the significance of the characteristic ideals. \[charideal\] Let $D$ be an integral domain. Then $\Int(D)$ has a regular basis if and only if the $n$th characteristic ideal $\II_n(D)$ of $D$ is principal for every $n$. In fact, a set $\{f_0, f_1, f_2, \ldots\}$ of elements of $\Int(D)$ with $\deg f_n = n$ for all $n$ is a regular basis of $\Int(D)$ if and only if $\II_n(D) = a_n D$ for all $n$, where $a_n$ is the leading coefficient of $f_n$. Next we examine properties of the characteristic ideals $\II_n(D)$ and some consequences of the assumption that they are $t$-invertible. \[tinv\] If a nonzero fractional ideal $I$ of an integral domain $D$ is $t$-invertible, then $D_\ppp = II^{-1} D_\ppp$ and $(ID_\ppp)^{-1} = I^{-1} D_\ppp$ for every $t$-maximal ideal $\ppp$ of $D$. It is well-known that $I$ is $t$-invertible if and only if $I$ is $w$-invertible. 
Suppose that $I$ is $w$-invertible, so $$D = (II^{-1})_w = \bigcap_{\ppp \in t\Max(D)} II^{-1}D_\ppp.$$ It follows that $$D_\ppp = II^{-1}D_\ppp = (ID_\ppp)( I^{-1}D_\ppp),$$ and therefore $(ID_\ppp)^{-1} = I^{-1} D_\ppp$, for all $\ppp \in t\Max(D)$. For any multiplicative subset $S$ of a domain $D$, one has $S^{-1}\Int(D) \subseteq \Int(S^{-1}D)$, by [@cah Proposition I.2.2]; however, by [@cah Example VI.4.15] the reverse inclusion need not hold, even if $D$ is locally a discrete valuation ring (DVR). Following [@ell2], we say that a domain $D$ is [*polynomially L-regular*]{} if $S^{-1}\Int(D) = \Int(S^{-1}D)$ for every multiplicative subset $S$ of $D$. For example, by [@ell2 Proposition 2.4] any TV domain (hence any Mori domain, hence any Noetherian domain) is polynomially L-regular. Equivalent conditions for a domain to be polynomially L-regular are given in [@ell2 Proposition 2.3]. \[factprop\] Let $D$ be an integral domain. 1. $D = \II_0(D) = \II_1(D) \subseteq \II_2(D) \subseteq \II_3(D) \subseteq \cdots$ and $\II_k(D) \II_l(D) \subseteq \II_{k+l}(D)$ for all $k,l$. 2. $D = 0!_D = 1!_D \supseteq 2!_D \supseteq 3!_D \supseteq \cdots$ and $(k+l)!_D \subseteq (k!_D^{-1} l!_D^{-1})^{-1}$ for all $k,l$. 3. For any multiplicative subset $S$ of $D$, one has $S^{-1}\Int(D) = \Int(S^{-1}D)$ if and only if $S^{-1}\II_n(D) = \II_n(S^{-1}D)$ for every nonnegative integer $n$. 4. If $\ppp$ is a prime ideal of $D$ that is not $t$-maximal of finite norm, then $\Int(D)_\ppp = \Int(D_\ppp) = D_\ppp[X]$ and $\II_n(D)_\ppp = \II_n(D_\ppp) = D_\ppp$ for every nonnegative integer $n$. 5. $\II_n(D)$ is a $w$-ideal and $n!_D$ is a $v$-ideal for every nonnegative integer $n$. 6. If $\II_n(D)$ is $t$-invertible, where $n$ is a nonnegative integer, then $\II_n(D)$ is a $v$-ideal and $\II_n(D) = n!_D^{-1}$. 7. Suppose that $\II_k(D)$ is $t$-invertible for all $k \leq n$. 1. For any $f \in \Int_n(D)$, all of the coefficients of $f$ lie in $\II_n(D)$. 2. $n!_D = (D[X]:_D \Int_n(D))$. 3. $(k+l)!_D \subseteq (k!_D l!_D)_t$ for all $k,l \leq n$. 8. If $D$ is polynomially L-regular, then one has the following. 1. $S^{-1}\II_n(D) = \II_n(S^{-1}D)$ for every $n$ and every multiplicative subset $S$ of $D$. 2. If $\II_n(D)$ is $t$-invertible, where $n$ is a nonnegative integer, then $n!_D D_\ppp = n!_{D_\ppp}$ for every $t$-maximal ideal $\ppp$ of $D$, and $\II_n(D_\ppp)$ is principal for every prime ideal $\ppp$ of $D$. 3. If $\II_n(D)$ is $t$-invertible for all $n$, then $\Int(D_\ppp)$ has a regular basis for every prime ideal $\ppp$ of $D$ and the $D$-module $\Int(D)$ is locally free. Statements (1) and (2) are clear. Suppose that $S^{-1}\II_n(D) = \II_n(S^{-1}D)$ for every nonnegative integer $n$. Let $f \in \Int(S^{-1}D)$, and let $n = \deg f$. Since $f_n \in \II_n(S^{-1}D) = S^{-1}\II_n(D)$, one has $f_n = g_n/u$, where $g = \sum_{i = 0}^n g_iX^i \in \Int_n(D)$ and $u \in S$. Then $f - g/u \in \Int_{n-1}(S^{-1}D)$, so by induction on $n$ we may assume $f-g/u \in S^{-1}\Int_{n-1}(D)$. Therefore $f \in S^{-1}\Int_n(D)$. Thus we have $S^{-1}\Int(D) = \Int(S^{-1}D)$, assuming that $S^{-1}\II_n(D) = \II_n(S^{-1}D)$ for every nonnegative integer $n$. Since the converse is clear, this proves (3). Next, let $\ppp$ be a prime ideal of $D$ that is not $t$-maximal of finite norm. Then $\Int(D)_\ppp = \Int(D_\ppp) = D_\ppp[X]$ by [@ell2 Lemma 2.2], whence $\II_n(D)_\ppp = \II_n(D_\ppp) = D_\ppp$ for all $n$ by statement (3). This proves (4). Now let $\mathcal{S} = \operatorname{Spec}(D) \backslash t\Max(D)$.
One has $$\begin{aligned} \II_n(D)_w & = & \bigcap_{\ppp \in t\Max(D)} \II_n(D)_\ppp \\ & \subseteq & \bigcap_{\qqq \in \mathcal{S}} \bigcap_{\ppp \in t\Max(D)} (\II_n(D)_\qqq)_\ppp \\ & = & \bigcap_{\qqq \in \mathcal{S}} \bigcap_{\ppp \in t\Max(D)} (D_\qqq)_\ppp \\ & = & \bigcap_{\qqq \in \mathcal{S}} D_\qqq. \end{aligned}$$ Therefore $$\II_n(D)_w = \left( \bigcap_{\ppp \in t\Max(D)} \II_n(D)_\ppp \right) \cap \left(\bigcap_{\qqq \in \mathcal{S}} D_\qqq \right) = \bigcap_{\ppp \in \operatorname{Spec}(D)} \II_n(D)_\ppp = \II_n(D).$$ Moreover, one has $(n!_D)_v = (\II_n(D)^{-1})_v = \II_n(D)^{-1} = n!_D$. This proves (5). Next, suppose that $\II_n(D)$ is $t$-invertible, where $n$ is a nonnegative integer. Then $\II_n(D)$ is a $w$-invertible $w$-ideal, hence a $v$-ideal. Therefore $\II_n(D) = \II_n(D)_v = n!_D^{-1}$. This proves (6). We prove (7)(a) by induction. Suppose that $\II_k = \II_k(D)$ is $t$-invertible, hence $w$-invertible, for all $k \leq n$. If $f \in \Int(D)$ is constant, then $f \in D = \II_0$. Let $f = \sum_{i = 0}^n f_i X^i \in \Int_n(D)$. Consider the fractional ideal $I = \II_{n-1}\II_n^{-1}$ of $D$. Since $\II_{n-1} \subseteq \II_n$, one has $I \subseteq D$. Let $a \in I$. One has $a f_n \in I \II_n \subseteq \II_{n-1}$, so there exists $g = \sum_{i = 0}^{n-1} g_i X^i \in \Int_{n-1}(D)$ with $a f_n = g_{n-1} \in \II_{n-1}$. The polynomial $h = a f - Xg$ lies in $\Int(D)$ and has degree at most $n-1$. By the induction hypothesis, one has $g_{i-1} \in \II_{n-1}$ and $h_i = a f_i -g_{i-1} \in \II_{n-1}$, and therefore $a f_i \in \II_{n-1}$, for $0 \leq i \leq n-1$. Therefore $I f_i \subseteq \II_{n-1}$ for all $i \leq n$. Since $I$ is $w$-invertible (being the product of two $w$-invertible fractional ideals), we have $$f_i D = (f_i II^{-1})_w \subseteq (\II_{n-1}(\II_{n-1}\II_n^{-1})^{-1})_w = (\II_n)_w = \II_n,$$ and therefore $f_i \in \II_n$, for all $i \leq n$. Thus all of the coefficients of $f$ lie in $\II_n$. This proves (7)(a). It follows that, for any $r \in n!_D$, one has $r \II_n \subseteq D$, whence $r \Int_n(D) \subseteq D[X]$ and so $r \in (D[X]:_D \Int_n(D))$. Therefore $n!_D = (D[X]:_D \Int_n(D))$. This proves (7)(b). To prove (7)(c), note that by (2) one has $(k+l)!_D \subseteq (k!_D^{-1}l!_D^{-1})^{-1} = (k!_D l!_D)_t.$ Suppose now that $D$ is polynomially L-regular. Statement (8)(a) follows from statement (3). Suppose that $\II_n(D)$ is $t$-invertible, and let $\ppp$ be a prime ideal of $D$. If $\ppp$ is $t$-maximal, then $(\II_n(D))_t D_\ppp = \II_n(D_\ppp)$ is principal, and by Lemma \[tinv\] one also has $$n!_{D_\ppp} = \II_n(D_\ppp)^{-1} = (\II_n(D)D_\ppp)^{-1} = \II_n(D)^{-1}D_\ppp = n!_D D_\ppp.$$ If $\ppp$ is not $t$-maximal, then $\II_n(D_\ppp) = D_\ppp$ is again principal. This proves (8)(b). Finally, (8)(c) follows from Proposition \[charideal\] and statements (8)(a) and (8)(b). The [*Pólya-Ostrowski group of $D$*]{} is defined for any Dedekind domain $D$ in [@cah Section II.3] and more generally for any Krull domain $D$ in [@cha]. We generalize that definition as follows. Let $D$ be an integral domain such that $\II_n(D)$ is $t$-invertible for all nonnegative integers $n$. The [*Pólya-Ostrowski group $\PO(D)$ of $D$*]{} is the subgroup of the $t$-class group $\Cl_t(D)$ generated by (the image in $\Cl_t(D)$ of) the $t$-invertible $t$-ideals $\II_n(D)$ for all $n$. With this definition, Theorem \[factprop\] yields the following. \[trivcor\] Let $D$ be an integral domain such that $\II_n(D)$ is $t$-invertible for all $n$.
Then for any $f \in \Int_n(D)$, all of the coefficients of $f$ lie in $\II_n(D)$; one has $n!_D = (D[X]:_D \Int_n(D))$ and $\II_n(D) = n!_D^{-1}$; the Pólya-Ostrowski group $\PO(D)$ is generated by the factorial ideals $n!_D$ for all $n$; and the following conditions are equivalent. 1. $\Int(D)$ has a regular basis. 2. $\II_n(D)$ is principal for every nonnegative integer $n$. 3. $n!_D$ is principal for every nonnegative integer $n$. 4. $\PO(D)$ is trivial. Moreover, if $D$ is polynomially L-regular, then $\Int(D_\ppp)$ has a regular basis for every prime ideal $\ppp$ of $D$ and the $D$-module $\Int(D)$ is locally free. Since every nonzero fractional ideal of a Krull domain is $t$-invertible, the above corollary generalizes the same result already known for Krull domains. In Theorem \[equivthm\] of the next section we will show that $\II_n(D)$ is $t$-invertible for all $n$, and therefore the Pólya-Ostrowski group $\PO(D)$ is defined, for a much larger class of domains $D$, including, for example, all TV PVMDs. The Pólya-Ostrowski group {#sec:4} ========================= In this section we use the results of the previous section to generalize the results of [@cah Section II.3] on Dedekind domains to a much larger class of domains, including, for example, all TV PVMDs. For the remainder of this paper we will be interested in the following conditions on an integral domain $D$. 1. $D$ is polynomially L-regular. 2. For any nonnegative integer $n$ there exist only finitely many $t$-maximal ideals $\ppp$ of $D$ with $N(\ppp) \leq n$. 3. $\Pi_q$ is principal for every prime power $q$. 4. Every $t$-maximal ideal of $D$ of finite norm has finite height. 5. Every $t$-maximal ideal of $D$ of finite norm is $t$-invertible. Note that $(\mathcal{C}2)$ implies that $$\Pi_n = \prod_{{\ppp \in t\Max(D)} \atop {N(\ppp) = n}} \ppp,$$ and therefore $\Pi_n$ is the unique ideal of $D$ such that, for any prime ideal $\ppp$ of $D$, one has $\Pi_n D_\ppp = \ppp D_\ppp$ if $\ppp$ is $t$-maximal of norm $n$ and $\Pi_n D_\ppp = D_\ppp$ otherwise. Following [@cah Chapter II], for any nonnegative integer $n$ and any integer $k> 1$ we let $$w_k(n) = \sum_{i = 1}^\infty \left\lfloor \frac{n}{k^i} \right\rfloor.$$ Alternatively, by [@cah Exercise II.8 and Lemma II.2.4] one has $$w_k(n) = \frac{n-s}{k-1} = \sum_{i = 1}^n v_k(i),$$ where $s$ is the sum of the digits of the $k$-adic expansion of $n$ and $v_k(i)$ for any positive integer $i$ is the largest nonnegative integer $t$ such that $k^t$ divides $i$. \[prinlemma\] Let $D$ be a local domain with principal maximal ideal $\ppp$ of finite norm $q$. Then one has $\Pi_q = \ppp$ and $\Pi_n = D$ if $n \neq q$, and $\II_n = \left(\ppp^{w_q(n)}\right)^{-1} = \left(\ppp^{-1}\right)^{w_q(n)}$ for all $n$. In particular, $\Int(D)$ has a regular basis. This follows from [@cah Remark II.2.14] and the proof of [@cah Corollary II.2.9]. The following result generalizes the above lemma to nonlocal domains. \[regbasisalg\] Let $D$ be an integral domain satisfying conditions $(\mathcal{C}1)$, $(\mathcal{C}2)$, and $(\mathcal{C}3)$. For every $n$ let $\pi_n \in D$ be a generator of $\Pi_n$. Then $\sigma_n = \displaystyle \prod_{1 < k \leq n} \pi_k^{-w_k(n)}$ is a generator of $\II_n$ for all $n$, and therefore $\Int(D)$ has a regular basis. For all $n$ and all $k > 1$, let $F_n = \frac{X^n-X}{\pi_n} \in \Int(D)$, and let $F_{k,n} = \prod_{i = 0}^r (F_k^{\circ i})^{n_i} \in \Int(D)$, where $n = n_0+ n_1k + \cdots + n_r k^r$ is the $k$-adic expansion of $n$.
Then $F_{k,n}$ has degree $n$ and leading coefficient $\pi_k^{-w_k(n)}$ for all $n,k$. For every integer $n > 1$ there exist $a_{2,n}, a_{3,n}, \ldots, a_{n,n} \in D$ such that $$\sigma_n = \sum_{1< k \leq n} a_{k,n} \pi_k^{-w_k(n)}.$$ Let $G_0 = 1$, $G_1 = X$, and $\displaystyle G_n = \sum_{1 < k \leq n} a_{k,n} F_{k,n}$ for all $n > 1$. Then $G_n \in \Int(D)$ has degree $n$ and leading coefficient $\sigma_n$ for every $n$, and therefore $\{G_0, G_1, G_2, \ldots\}$ is a regular basis of $\Int(D)$. Let $$I = \prod_{1 < k \leq n} \pi_k^{-w_k(n)}D,$$ which is a fractional ideal of $D$. Let $\ppp$ be a prime ideal of $D$. Suppose first that $\ppp$ is $t$-maximal of finite norm $q$. Then $\ppp D_\ppp = \pi_q D_\ppp$ is principal, and therefore by Theorem \[factprop\](3) and Lemma \[prinlemma\] we have $$\II_n(D)_\ppp = \II_n(D_\ppp) = \left(\ppp^{w_q(n)} D_\ppp\right)^{-1} = \pi_q^{-w_q(n)}D_\ppp = I D_\ppp.$$ (If $N(\ppp) > n$ then $\II_n(D)_\ppp = D_\ppp = ID_\ppp$.) If, on the other hand, $\ppp$ is not $t$-maximal of finite norm, then $\II_n(D)_\ppp = \II_n(D_\ppp) = D_\ppp = ID_\ppp$ by Theorem \[factprop\](4). Thus $\II_n(D) _\ppp = I D_\ppp$ for every prime ideal $\ppp$ of $D$, whence $\II_n(D) = I$. Finally, the remainder of the proposition follows exactly as in the proof of [@cah Propositions II.3.13 and II.3.14]. \[equivthm\] Let $D$ be an integral domain satisfying conditions $(\mathcal{C}1)$, $(\mathcal{C}2)$, and $(\mathcal{C}5)$. Then for every nonnegative integer $n$ the fractional ideals $\II_n$, $n!_D$, and $\Pi_n$ are $t$-invertible $t$-ideals, and one has $$\II_n = n!_D^{-1} = \prod_{{\ppp \in t\Max(D)} \atop {N(\ppp) \leq n}} \left(\ppp^{w_{N(\ppp)}(n)}\right)^{-1}$$ and $$n!_D = \prod_{{\ppp \in t\Max(D)} \atop {N(\ppp) \leq n}} \left( \ppp^{w_{N(\ppp)}(n)}\right)_t = \prod_{1 < q \leq n} \left(\Pi_q^{w_q(n)}\right)_t.$$ Moreover, the group $\PO(D)$ is generated by any of the following sets: $\{q!_D: q \textup{ is a prime power}\}$; $\{\II_q: q \textup{ is a prime power}\}$; and $\{\Pi_q: q \textup{ is a prime power}\}$. In particular, the following conditions are equivalent. 1. $\Int(D)$ has a regular basis. 2. $\PO(D)$ is trivial. 3. $\II_n$ is principal for every nonnegative integer $n$. 4. $\II_q$ is principal for every prime power $q$. 5. $n!_D$ is principal for every nonnegative integer $n$. 6. $q!_D$ is principal for every prime power $q$. 7. $\Pi_n$ is principal for every nonnegative integer $n$. 8. $\Pi_q$ is principal for every prime power $q$. Let $$I = \prod_{{\ppp \in t\Max(D)} \atop {N(\ppp) \leq n}} \left(\ppp^{w_{N(\ppp)}(n)}\right)^{-1},$$ which is a well-defined fractional ideal of $D$. Let $\ppp$ be a prime ideal of $D$. If $\ppp$ is $t$-maximal of finite norm, hence $t$-invertible, then $\ppp D_\ppp$ is principal, and therefore by Theorem \[factprop\](3) and Lemmas \[prinlemma\] and \[tinv\] we have $$\II_n(D)_\ppp = \II_n(D_\ppp) = \left(\ppp^{w_{N(\ppp)}(n)} D_\ppp\right)^{-1} = \left(\ppp^{w_{N(\ppp)}(n)}\right)^{-1} D_\ppp = I D_\ppp.$$ If, on the other hand, $\ppp$ is not $t$-maximal of finite norm, then $\II_n(D)_\ppp = \II_n(D_\ppp) = D_\ppp = ID_\ppp$ by Theorem \[factprop\](4). Thus $\II_n(D) _\ppp = I D_\ppp$ for every prime ideal $\ppp$ of $D$, whence $\II_n(D) = I$. Now $\Pi_n$ is a finite intersection, and product, of $t$-invertible $t$-maximal ideals and is therefore a $t$-invertible $t$-ideal. If $J, J'$ are $t$-invertible fractional ideals, then so are $J^{-1}$ and $JJ'$, and one has $(J^{-1}J'^{-1})^{-1} = (JJ')_t$. 
The given product for $\II_n(D)$, then, implies that $\II_n(D)$ is $t$-invertible, and therefore by Theorem \[trivcor\] one has $\II_n(D) = n!_D^{-1}$ and $\II_n(D)$ is a $v$-ideal, hence a $t$-ideal. Moreover, $n!_D = \II_n(D)^{-1}$ is also a $t$-invertible $t$-ideal, and one has $$n!_D = \left(\prod_{{\ppp \in t\Max(D)} \atop {N(\ppp) \leq n}} \left(\ppp^{w_{N(\ppp)}(n)}\right)^{-1} \right)^{-1} = \prod_{{\ppp \in t\Max(D)} \atop {N(\ppp) \leq n}} \left(\ppp^{w_{N(\ppp)}(n)}\right)_t$$ and therefore $$n!_D = \prod_{1 < q \leq n} \left(\Pi_q^{w_q(n)}\right)_t.$$ It follows that $\PO(D)$ is contained in the subgroup $G_1$ of $\Cl_t(D)$ generated by the image of $\Pi_q$ for all prime powers $q$. Moreover, since $w_q(q) = 1$, one has $$q!_D = \Pi_q \prod_{1 < q' < q} \left(\Pi_{q'}^{w_{q'}(q)}\right)_t,$$ so by induction on $q$ the image of $\Pi_q$ in $\Cl_t(D)$ for all $q$ is in the subgroup $G_2$ of $\PO(D)$ generated by $q!_D$ for all $q$, and therefore $G_1 \subseteq G_2$. Therefore $\PO(D) \subseteq G_1 \subseteq G_2 \subseteq \PO(D)$, so equality holds throughout. Moreover, $\PO(D) = G_2$ is also generated by $\II_q(D) = q!_D^{-1}$ for all $q$. The equivalence of statements (1) through (8), then, follows from these equalities of groups and from Theorem \[trivcor\]. \[polyafield\] Let $K$ be a finite Galois extension of $\QQ$. Then for any prime power $q = p^r$, where $p$ is a prime, one has the following. 1. $\Pi_q = \sqrt{p \mathcal{O}_K}$ if $p$ is ramified in $K$ and $r$ is equal to the inertial degree $f_p$. 2. $\Pi_q = \mathcal{O}_K$ if $p$ is ramified in $K$ and $r \neq f_p$. 3. $\Pi_q = p\mathcal{O}_K$ is principal if $p$ is unramified in $K$. In particular, $\PO(\mathcal{O}_K)$ is generated by $\sqrt{p\mathcal{O}_K}$ for the set of primes $p$ dividing the discriminant $\Delta_{K/\QQ}$, and $\Int(\mathcal{O}_K)$ has a regular basis if and only if $\sqrt{p\mathcal{O}_K}$ is principal for all such $p$. Motivated by Theorem \[equivthm\], we find sufficient conditions for $(\mathcal{C}1)$, $(\mathcal{C}2)$, and $(\mathcal{C}5)$ to hold. A domain $D$ is said to be [*of finite character*]{} if every nonzero element of $D$ is contained in only finitely many maximal ideals of $D$. Similarly, $D$ is said to be [*of finite $t$-character*]{} if every nonzero element of $D$ is contained in only finitely many $t$-maximal ideals of $D$. Note, for example, that every TV domain is of finite $t$-character, and a PVMD of finite $t$-character is equivalently a domain of Krull type. \[normlemma\] Any domain $D$ of finite character or of finite $t$-character satisfies conditions $(\mathcal{C}1)$ and $(\mathcal{C}2)$. By [@ell2 Proposition 2.4] any domain of finite character or finite $t$-character is polynomially L-regular. To verify condition $(\mathcal{C}2)$ we may suppose that $D$ is infinite. Let $q$ be a power of a prime. Then there exists $a \in D$ with $a^q - a \neq 0$. For every maximal ideal $\ppp$ of norm $q$ one has $a^q - a \in \ppp$, so since $D$ is of finite character or of finite $t$-character there are only finitely many such $\ppp$ that are $t$-maximal. The lemma follows. \[equivcor\] Let $D$ be an integral domain such that every $t$-maximal ideal of $D$ is $t$-invertible. Then $D$ is of finite $t$-character if and only if every $t$-ideal $I$ of $D$ such that $I D_\ppp$ is principal for every $t$-maximal ideal $\ppp$ of $D$ is $t$-invertible. For any such domain $D$, all of the hypotheses (conditions $(\mathcal{C}1)$, $(\mathcal{C}2)$, and $(\mathcal{C}5)$) of Theorem \[equivthm\] hold.
This follows from Proposition \[normlemma\], [@ell2 Proposition 2.4], and [@zaf Corollary 2]. Let $D$ be an integral domain such that a $t$-ideal $I$ of $D$ is principal provided that $I$ is $t$-maximal or $ID_\ppp$ is principal for every $t$-maximal ideal $\ppp$ of $D$. Then conditions $(\mathcal{C}1)$, $(\mathcal{C}2)$, $(\mathcal{C}3)$, and $(\mathcal{C}5)$ hold; in particular, statements (1) through (8) of Theorem \[equivthm\] hold, and $\Int(D)$ has a regular basis. By Proposition \[equivcor\], $D$ is a domain of finite $t$-character such that every $t$-maximal ideal of $D$ is $t$-invertible. Moreover, every $t$-invertible $t$-ideal of $D$ is principal, hence $\Cl_t(D)$ is trivial, so $\PO(D)$ is trivial. The result therefore follows from Proposition \[equivcor\]. An [*H domain*]{} is a domain in which every $v$-invertible ideal is $t$-invertible. Every TV domain is an H domain of finite $t$-character; we do not know if the converse holds, even for PVMDs. By the following corollary, Theorem \[equivthm\] and Proposition \[equivcor\] apply in particular to any H PVMD of finite $t$-character, hence to any TV PVMD. \[pvmdcor\] Let $D$ be a PVMD. Then $D$ is an H domain if and only if every $t$-maximal ideal of $D$ is $t$-invertible. Also, $D$ is of finite $t$-character if and only if every $t$-ideal $I$ of $D$ such that $I D_\ppp$ is principal for every $t$-maximal ideal $\ppp$ of $D$ is $t$-invertible. If $D$ is an H PVMD of finite $t$-character (or if $D$ is a TV PVMD), then all of the hypotheses (conditions $(\mathcal{C}1)$, $(\mathcal{C}2)$, and $(\mathcal{C}5)$) of Theorem \[equivthm\] hold. This follows from [@ell2 Proposition 1.5] and [@zaf Proposition 5]. The global case {#sec:5} =============== In this section we extend Proposition \[presentation\] to nonlocal domains. For the remainder of this paper we will use the following notation. Let $\mathcal{C}$ denote the class of integral domains $D$ satisfying the following four conditions. 1. $D$ is polynomially L-regular. 2. For any nonnegative integer $n$ there exist only finitely many $t$-maximal ideals $\ppp$ of $D$ with $N(\ppp) \leq n$. 3. $\Pi_q$ is principal for every prime power $q$. 4. Every $t$-maximal ideal of $D$ of finite norm has finite height. <!-- --> 1. By Theorem \[regbasisalg\] $\Int(D)$ has a regular basis for any domain $D$ in the class $\mathcal{C}$. 2. Any finite dimensional local domain with principal maximal ideal is in the class $\mathcal{C}$. More generally, a local domain is in the class $\mathcal{C}$ if and only if its maximal ideal has infinite norm or else is principal and has finite height and norm. 3. A Krull domain $D$ is in the class $\mathcal{C}$ if and only if $\Int(D)$ has a regular basis. 4. Any H PVMD of finite $t$-character (or any TV PVMD) such that $\Int(D)$ has a regular basis satisfies conditions $(\mathcal{C}1)$, $(\mathcal{C}2)$, and $(\mathcal{C}3)$. 5. Any domain $D$ of finite character or of finite $t$-character satisfies conditions $(\mathcal{C}1)$ and $(\mathcal{C}2)$. For any domain $D$ in the class $\mathcal{C}$, the $D$-algebra $\Int(D)$ has a presentation by generators and relations as in the following theorem. \[polyaprop\] Let $D$ be an integral domain in the class $\mathcal{C}$. For each $q$ let $\pi_q$ be a generator of $\Pi_q$ and let $F_q = \frac{X^q - X}{\pi_q} \in \Int(D)$. 
Then the unique $D$-algebra homomorphism $$D[\{X, X_{q,k} : q \textup{ is a prime power and } k \in \ZZ_{\geq 0}\}] \longrightarrow \Int(D)$$ $$X \longmapsto X$$ $$X_{q,k} \longmapsto F_q^{\circ k}$$ is surjective, and its kernel is equal to the ideal $J$ generated by $X_{q,0} - X$ and $X_{q,k}^q - X_{q,k} - \pi_q X_{q,{k+1}}$ for all $q,k$. Let $\varphi$ denote the given $D$-algebra homomorphism, and let $$A = D[\{X, X_{q,k} : q \textup{ is a prime power and } k \in \ZZ_{\geq 0}\}]/J.$$ The homomorphism $\varphi$ induces a $D$-algebra homomorphism $$\psi: A \longrightarrow \Int(D).$$ We show that $\psi$ is an isomorphism. Let $\ppp$ be a prime ideal of $D$. If $\ppp$ is not $t$-maximal of finite norm, then $\pi_q$ is a unit in $D_\ppp$ for all $q$ since $\Pi_{q} \nsubseteq \ppp D_\ppp$, and $A_\ppp \cong D_\ppp[X]$. Thus, by [@ell2 Lemma 2.2], the localization $\psi_{\ppp}$ of $\psi$ at $\ppp$ is the isomorphism $$A_\ppp \longrightarrow \Int(D)_\ppp = \Int(D_\ppp) = D_\ppp[X].$$ Suppose, on the other hand, that $\ppp$ is $t$-maximal of finite norm, say $|D/\ppp| = q$. Then $\pi_q D_\ppp = \ppp D_\ppp$, and $\pi_{q^\prime}$ is a unit in $D_\ppp$ for all prime powers $q^\prime \neq q$. It follows, then, that $$A_\ppp \cong D_\ppp[X_0, X_1, X_2, \ldots]/I,$$ where $I$ is defined as in Proposition \[presentation\]. Moreover, $\psi_\ppp$ is the same as the homomorphism $$D_\ppp[X_0, X_1, X_2, \ldots]/I \longrightarrow \Int(D)_\ppp = \Int(D_\ppp)$$ $$X_k \longmapsto F_q^{\circ k},$$ of Proposition \[presentation\]. By that proposition, then, $\psi_\ppp$ is an isomorphism. Therefore $\psi_\ppp$ is an isomorphism for all prime ideals $\ppp$ of $D$, so $\psi$ is an isomorphism. Now, let $D$ be a domain and $\XX$ a set. There exists a unique $D$-algebra homomorphism $$\theta_\XX: \bigotimes_{X \in \XX} \Int(D) \longrightarrow \Int(D^\XX)$$ sending $X \in \Int(D)$ to $X \in \Int(D^\XX)$ for all $X \in \XX$, where the tensor product is over $D$. By [@jess Proposition 6.8(a) and 6.10(d)], the map $\theta_\XX$ is an isomorphism if $\Int(D)$ is free as a $D$-module or if $D$ is polynomially L-regular and $\Int(D)$ is locally free as a $D$-module. The isomorphisms $\theta_\XX$ allow us to extend Propositions \[presentation\] and \[polyaprop\] to multivariate integer-valued polynomials, as follows. \[presentation2\] Let $D$ be a local integral domain with principal maximal ideal $\pi D$ and finite residue field of order $q$. Let $F_q = \frac{X^q - X}{\pi} \in \Int(D)$. Let $\XX = \{X_i\}_{i \in I}$ be a set of variables. The unique $D$-algebra homomorphism $$\varphi: {\begin{array}{rrr} D[\{X_{i,k} : i \in I, k \in \ZZ_{\geq 0}\}] & \longrightarrow & \Int(D^\XX) \\ X_{i,k} &\longmapsto & F_q^{\circ k}(X_i) \end{array}}$$ is surjective, and if $D$ has finite Krull dimension then $\ker \varphi$ is equal to the ideal generated by $X_{i,k}^q - X_{i,k} - \pi X_{i,k+1}$ for all $i,k$. \[presentation3\] Let $D$ be an integral domain in the class $\mathcal{C}$. For each $q$ let $\pi_q$ be a generator of $\Pi_q$ and let $F_q = \frac{X^q - X}{\pi_q} \in \Int(D)$. Let $\XX = \{X_i\}_{i \in I}$ be a set of variables. The unique $D$-algebra homomorphism $$D[\{X_i, X_{i,q,k} : i \in I, q \textup{ is a prime power, and } k \in \ZZ_{\geq 0}\}] \longrightarrow \Int(D^\XX)$$ $$X_i \longmapsto X_i$$ $$X_{i,q,k} \longmapsto F_q^{\circ k}(X_i)$$ is surjective, and its kernel is equal to the ideal generated by $X_{i,q,0} - X_i$ and $X_{i,q,k}^q - X_{i,q,k} - \pi_q X_{i,q,{k+1}}$ for all $i,q,k$.
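A quick sanity check on the generators appearing in these presentations can be run in the simplest case $D = \ZZ_{(p)}$, where the only $t$-maximal ideal of finite norm is $p\ZZ_{(p)}$, so that $\pi_q = p$ for $q = p$ and the relevant generators are $F_p$ and its composites $F_p^{\circ k}$. The script below (an illustration only; the helper names are ours) verifies, using exact rational arithmetic, that these composites take integer values at a range of integers, as Fermat's little theorem predicts:

```python
from fractions import Fraction

def F(q, x):
    # F_q(x) = (x^q - x)/q; for D = Z_(p) we take q = p and pi_q = p
    x = Fraction(x)
    return (x**q - x) / q

def F_iter(q, k, x):
    # k-fold composite F_q applied k times to x
    for _ in range(k):
        x = F(q, x)
    return x

p = 5
for n in range(-20, 21):
    for k in range(1, 4):
        value = F_iter(p, k, n)
        # integer values at integers, hence values in Z_(p) at these sample points
        assert value.denominator == 1, (n, k, value)
print("F_p and its composites are integer-valued at the sampled points")
```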
Quotients of integer-valued polynomial rings {#sec:6} ============================================ A ring $A$ is said to be [*binomial*]{} if $A$ is $\ZZ$-torsion-free and is closed under the operation $x \longmapsto \frac{x(x-1)(x-2)\cdots(x-n+1)}{n!}$ on $A \otimes_\ZZ \QQ$ for every positive integer $n$. For example, any $\QQ$-algebra is binomial; any localization or completion of $\ZZ$ is binomial; and the domain $\Int(D^\XX)$ is binomial for any binomial domain $D$ and any set $\XX$. Binomial rings were introduced by Philip Hall in his groundbreaking work [@hal] on nilpotent groups. Hall proved the existence of an action of any binomial ring on a class of nilpotent groups, generalizing exponentiation of elements of an abelian group by the integers and analogous to exponentiation of elements of a uniquely divisible group by the rational numbers. In [@ell7], it is shown that a binomial ring is equivalently (1) a $\lambda$-ring on which all Adams operations are the identity; (2) a $\ZZ$-torsion-free ring $A$ such that the Frobenius endomorphism of $A/pA$ is the identity for every prime number $p$; (3) a $\ZZ$-torsion-free ring isomorphic to a quotient of a (possibly infinite) tensor power of $\Int(\ZZ)$; and (4) a $\ZZ$-torsion-free ring isomorphic to a quotient of $\Int(\ZZ^\XX)$ for some set $\XX$. In this section and the next we generalize the equivalences (2) through (4) above to domains more general than $\ZZ$. Let $D$ be an integral domain and $A$ a $D$-algebra. Following [@ell], we say that $A$ is [*weakly polynomially complete*]{}, or [*WPC*]{}, if for every $a \in A$ there is a $D$-algebra homomorphism $\Int(D) \longrightarrow A$ sending $X$ to $a$. Any quotient of a WPC $D$-algebra is WPC. If $A$ is a domain extension of $D$, then $A$ is WPC if and only if $\Int(D) \subseteq \Int(A)$. A $\ZZ$-torsion-free ring $A$ is a WPC $\ZZ$-algebra if and only if $A$ is a binomial; and for any number field $K$, the localization $S^{-1}\mathcal{O}_K$ of $\mathcal{O}_K$ at the multiplicative subset $S$ of $\ZZ$ generated by the set of prime numbers $p$ that do not split completely in $\mathcal{O}_K$ is the smallest WPC extension of $\ZZ$ containing $\mathcal{O}_K$ [@jess2 Example 7.3(3)]. Characterizations of the divisorial (or flat) weakly polynomially complete extensions of any Krull domain are given in [@ell2 Theorem 1.2]. We remark that, if $D$ is a principal ideal domain with finite residue fields, then $\Int(D)$ left-represents a right adjoint for the inclusion functor from $D$-torsion-free WPC $D$-algebras to $D$-algebras [@ell Theorem 1.6]. The problem we consider is to characterize the WPC $D$-algebras if $D$ is a Krull domain (or Dedekind domain). Let us say that $A$ is [*almost polynomially complete*]{}, or [*APC*]{}, if for every set $\XX$ and for any $(a_X)_{X \in \XX} \in A^\XX$ there exists a $D$-algebra homomorphism $\Int(D^\XX) \longrightarrow A$ sending $X$ to $a_X$ for all $X \in \XX$. In other words, $A$ is APC if and only if $A$ is isomorphic as a $D$-algebra to a quotient of $\Int(D^\XX)$ for some set $\XX$. By [@jess Propositions 7.4 and 7.7], if $A$ is a domain extension of $D$, then $A$ is APC if and only if $A$ is an [*almost polynomially complete extension*]{} of $D$ in the sense of [@jess Section 7], that is, $\Int(D^n) \subseteq \Int(A^n)$ for all positive integers $n$. Any quotient of an APC $D$-algebra is APC. Clearly any APC $D$-algebra is WPC; we suspect that the converse does not hold but do not know a counterexample. 
However, by [@ell2 Theorem 3.11] and the universal property of tensor products, we have the following. Let $D$ be an integral domain and $\XX$ a set. There exists a unique $D$-algebra homomorphism $$\theta_\XX: \bigotimes_{X \in \XX} \Int(D) \longrightarrow \Int(D^\XX)$$ sending $X \in \Int(D)$ to $X \in \Int(D^\XX)$ for all $X \in \XX$, where the tensor product is over $D$. If $\theta_\XX$ is an isomorphism for all finite sets $\XX$, which holds, for example, if $\Int(D)$ is free as a $D$-module or if $D$ is polynomially L-regular and $\Int(D)$ is locally free as a $D$-module, then $\theta_\XX$ is an isomorphism for all sets $\XX$, and a $D$-algebra $A$ is WPC if and only if it is APC. As in [@jess Section 1], we say that a domain $A$ containing $D$ is a [*polynomially complete extension of $D$*]{} if $D$ is a polynomially dense subset of $A$. By [@jess Proposition 7.2], every polynomially complete extension of $D$ is APC, and if $D$ is infinite then a $D$-algebra $A$ is APC if and only if $A$ is isomorphic as a $D$-algebra to a quotient of some polynomially complete extension of $D$. On the other hand, if $D$ is finite, or more generally if $\Int(D) = D[X]$, then by [@jess Lemma 7.1] every $D$-algebra is APC. The WPC and polynomially complete extensions of a domain are studied, for example, in [@cah1 Sections 5 and 6], [@cah Section IV.3], [@ell2], [@ger], and [@jess]. The following result as a special case characterizes the WPC algebras over a discrete valuation domain. \[dvd\] Let $D$ be a finite dimensional local domain with principal maximal ideal $\pi D$ and finite residue field of order $q$. Let $A$ be a $D$-algebra. Then $A$ is WPC if and only if $a^q \equiv a \ (\mod \pi A)$ for all $a \in A$. Suppose first that $A$ is WPC. Let $a \in A$, and let $\varphi: \Int(D) \longrightarrow A$ be a $D$-algebra homomorphism sending $X$ to $a$. Since the polynomial $f = \frac{X^q - X}{\pi}$ lies in $\Int(D)$, one has $a^q - a = \varphi(X^q - X) = \pi \varphi(f) \in \pi A$, so $a^q \equiv a \ (\mod \pi A)$. Conversely, suppose that $a^q \equiv a \ (\mod \pi A)$ for all $a \in A$. Let $a \in A$. Define an infinite sequence $a_0, a_1, a_2, \ldots$ recursively as follows. Let $a_0 = a$, and let $a_{k+1}$ be any element of $A$ such that $a_{k}^q - a_{k} = \pi a_{k+1}$. Consider the unique $D$-algebra homomorphism $$\varphi: {\begin{array}{rrr} D[X_0, X_1, X_2, \ldots] & \longrightarrow & A \\ X_{k} & \longmapsto & a_k. \end{array}}$$ The polynomials $X_{k}^q - X_{k} - \pi X_{k+1}$ lie in $\ker \varphi$ for all $k$. By Proposition \[presentation\], therefore, $\varphi$ induces a $D$-algebra homomorphism from $\Int(D)$ into $A$ sending $X$ to $a$. Thus $A$ is WPC. Next, let $\mathcal{D}$ be the class of domains $D$ such that $\Pi_q = \pi_q D$ is principal for all $q$ and $\Int(D)$ has a $D$-algebra presentation as in Proposition \[polyaprop\]. Then $\mathcal{C} \subseteq \mathcal{D}$ by Proposition \[polyaprop\]. We do not know if the domains $D$ satisfying conditions $(\mathcal{C}1)$, $(\mathcal{C}2)$, and $(\mathcal{C}3)$ are in the class $\mathcal{D}$. \[polyathm\] For any integral domain $D$ in the class $\mathcal{D}$ (or in the class $\mathcal{C}$), the following conditions are equivalent. 1. $A$ is a WPC $D$-algebra. 2. $a^{N(\ppp)} \equiv a \ (\mod \ppp A)$ for all $a \in A$ and for every $t$-maximal ideal $\ppp$ of $D$ of finite norm $N(\ppp)$. 3. $a^q \equiv a \ (\mod \Pi_q A)$ for all $a \in A$ and for every prime power $q$. 
Moreover, if $D$ is in the class $\mathcal{C}$, then the above conditions are equivalent to the following. 1. $A$ is isomorphic as a $D$-algebra to a quotient of $\Int(D^\XX)$ for some set $\XX$. The proof is a straightforward extension of the proof of Proposition \[dvd\]. Local versus global behavior {#sec:7} ============================ In this section we investigate the local/global behavior of WPC algebras, and we characterize the “locally WPC” algebras over any Krull domain. \[lreg\] Let $D$ be an integral domain with quotient field $K$ and $A$ a $D$-torsion-free $D$-algebra. 1. $A$ is a WPC $D$-algebra if and only if for every $f \in \Int(D)$ one has $f(a) \in A \subseteq A \otimes_D K$ for every $a \in A$. 2. If $A$ is a WPC $D$-algebra and $D$ is polynomially L-regular, then $S^{-1}A$ is a WPC $S^{-1}D$-algebra for every multiplicative subset $S$ of $D$. 3. Suppose that $\mathcal{P}$ is a set of prime ideals of $D$ such that $A = \bigcap_{\ppp \in \mathcal{P}} A_\ppp$. If $A_\ppp$ is a WPC $D_\ppp$-algebra for all $\ppp \in \mathcal{P}$, then $A$ is a WPC $D$-algebra. This is clear. \[extext\] Let $D$ be an integral domain, $D'$ an extension $D$, and $A$ a $D'$-algebra. 1. If $A$ is a WPC $D$-algebra, $D'$ is flat over $D$, and $D$ is polynomially L-regular, then $A$ is a WPC $D'$-algebra. 2. If $A$ is a WPC $D'$-algebra and $D'$ is a WPC $D$-algebra, then $A$ is a WPC $D$-algebra. To prove (1) let $a \in A$. By hypothesis there is a $D$-algebra homomorphism $\Int(D) \longrightarrow A$ sending $X$ to $a$. By [@ell2 Proposition 2.3], then, there is a $D'$-algebra homomorphism $$\Int(D') = D' \Int(D) \cong D' \otimes_D \Int(D) \longrightarrow D' \otimes_D A \longrightarrow A$$ sending $X \in \Int(D')$ to $a$. Therefore $A$ is a WPC $D'$-algebra. To prove (2), again let $a \in A$. By hypothesis there is a $D$-algebra homomorphism $$\Int(D) \subseteq \Int(D') \longrightarrow A$$ sending $X \in \Int(D)$ to $a$. Therefore $A$ is a WPC $D$-algebra. Because any localization of a domain $D$ at a multiplicative subset is a flat WPC $D$-algebra, we have the following corollary. \[extextcor\] Let $D$ be a polynomially L-regular integral domain and $A$ a $D$-algebra. Let $S$ be a multiplicative subset of $D$. Then $S^{-1}A$ is a WPC $D$-algebra if and only if $S^{-1}A$ is a WPC $S^{-1}D$-algebra. Let us say that a $D$-algebra $A$ is [*locally WPC*]{} if $A_\ppp$ is a WPC $D_\ppp$-algebra for every prime ideal $\ppp$ of $D$. If $D$ is polynomially L-regular, then by Corollary \[extextcor\] this holds if and only if $A_\ppp$ is a WPC $D$-algebra for every prime ideal $\ppp$ of $D$. \[localprop\] The following conditions are equivalent for any integral domain $D$ and any $D$-algebra $A$. 1. $A$ is locally WPC. 2. $A_\ppp$ is a WPC $D_\ppp$-algebra for every maximal ideal $\ppp$ of $D$. 3. $A_\ppp$ is a WPC $D_\ppp$-algebra for every $t$-maximal ideal $\ppp$ of $D$ of finite norm. If $\ppp$ is a prime ideal of $D$ that is not $t$-maximal of finite norm, then by [@ell2 Lemma 2.2] one has $\Int(D_\ppp) = D_\ppp[X]$, and therefore $A_\ppp$ is a WPC $D_\ppp$-algebra. Therefore (3) implies (1) and so the three conditions are equivalent. Any domain $D$ in the class $\mathcal{C}$ satisfies the hypothesis of the following theorem. \[polybthm\] Let $D$ be an integral domain and $A$ a $D$-algebra. Suppose that $\ppp D_\ppp$ is principal and has finite height for every $t$-maximal ideal $\ppp$ of $D$ of finite norm. 
Then $A$ is locally WPC if and only if, for every $t$-maximal ideal $\ppp$ of $D$ of finite norm $q = N(\ppp)$, any of the following equivalent conditions holds. 1. $a^{N(\ppp)} \equiv a \ (\mod \, \ppp A)$ for all $a \in A$. 2. The endomorphism $a \longmapsto a^q$ of $A/\ppp A$ is the identity. 3. $A/\ppp A$ is locally isomorphic to $D/\ppp$ as a $D$-algebra. 4. $A/\ppp A$ is reduced, and every residue field of $A/\ppp A$ is isomorphic to $D/\ppp$ as a $D$-algebra. 5. $A/\ppp A$ is isomorphic to a subring of $\FF_q^Y$ for some set $Y$. 6. For every maximal ideal $\MM$ of $A$ lying over $\ppp$, one has $\ppp A_\MM = \MM A_\MM$ and $A/\MM \cong D/\ppp$ as $D$-algebras. 7. For every prime ideal $\PPP$ of $A$ lying over $\ppp$, one has $\ppp A_\PPP = \PPP A_\PPP$ and $A/\PPP \cong D/\ppp$ as $D$-algebras. Moreover, if $A/\ppp A$ is semi-local or Noetherian, then each of the above conditions is equivalent to the following. 1. $\ppp A = \MM_1 \MM_2 \cdots \MM_r$ for distinct maximal ideals $\MM_i$ of $A$ such that $A/\MM_i \cong D/\ppp$. By [@jess Propositions 4.1 and 4.2], we need only show that $A$ is locally WPC if and only if condition (1) holds for all $t$-maximal ideals $\ppp$ of $D$ of finite norm. Suppose first that $A$ is locally WPC, so $A_\ppp$ is a WPC $D_\ppp$-algebra for every prime ideal $\ppp$ of $D$. Let $\ppp$ be a $t$-maximal ideal of $D$ of finite norm $q = N(\ppp)$, and let $a \in A$. Since $q = |D_\ppp/\ppp D_\ppp|$, by Proposition \[dvd\] one has $a^q \equiv a \ (\mod \, \ppp A_\ppp)$. Therefore $a^q - a \in \ppp A_\ppp$, so there exists $u_\ppp \in D\backslash \ppp$ so that $u_\ppp (a^q -a) \in \ppp A$. Let $v_\ppp \in D \backslash \ppp$ with $v_\ppp u_\ppp \equiv 1 \ (\mod \, \ppp)$. Then $$a^q -a \equiv v_\ppp u_\ppp(a^q - a) \equiv 0 \ (\mod \, \ppp A),$$ whence $a^q \equiv a \ (\mod \, \ppp A)$. Suppose, conversely, that $a^{N(\ppp)} \equiv a \ (\mod \, \ppp A)$ for all $a \in A$ and for every $t$-maximal ideal $\ppp$ of $D$ of finite norm. Then for all $a \in A$ and $u \in D \backslash \ppp$ one has$$a^{N(\ppp)}u \equiv au^{N(\ppp)} \ (\mod \, \ppp A_\ppp)$$ and therefore $$(a/u)^{N(\ppp)} \equiv a^{N(\ppp)}/u^{N(\ppp)} \equiv a/u \ (\mod \, \ppp A_\ppp).$$ Therefore $x^q \equiv x \ (\mod \, \ppp A_\ppp)$ for all $x \in A_\ppp$, where $q = |D_\ppp/\ppp D_\ppp|$, so $A_\ppp$ is a WPC $D_\ppp$-algebra by Proposition \[dvd\]. Thus $A$ is locally WPC by Lemma \[localprop\]. \[polybcor\] Let $D$ be an integral domain and $A$ a $D$-algebra. If $D$ is in the class $\mathcal{C}$, or if $A$ is $D$-torsion-free, then $A$ is WPC if and only if $A$ is locally WPC. \[polybcor1\] Let $D$ be a Krull domain and $A$ a $D$-algebra. Then $A$ is locally WPC if and only if $a^{N(\ppp)} \equiv a \ (\mod \, \ppp A)$ for every height one prime ideal $\ppp$ of $D$ of finite norm. Corollaries \[polybcor\] and \[polybcor1\] motivate the following problem. \[dedekindconjecture\] Let $D$ be a Dedekind domain and $A$ a $D$-algebra. Is it true that $A$ is WPC if and only if $A$ is locally WPC? Equivalently, is it true that $A$ is WPC if and only if for every height one prime ideal $\ppp$ of $D$ of finite norm $N(\ppp)$ one has $a^{N(\ppp)} \equiv a \ (\mod \ppp A)$ for all $a \in A$? If so, then does the equivalence hold more generally if $D$ is a Krull domain? If not, then which, if either, implication is true? Corollary \[polybcor\] shows that the answer to the above problem is affirmative under the added hypothesis that $\Int(D)$ has a regular basis or $A$ is $D$-torsion-free. 
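To see the criterion at work in the classical case $D = \ZZ$, where the $t$-maximal ideals of finite norm are the ideals $p\ZZ$ with $N(p\ZZ) = p$, take $A = \ZZ[i]$; since $A$ is $\ZZ$-torsion-free, by Corollaries \[polybcor\] and \[polybcor1\] the algebra $A$ is WPC if and only if $a^p \equiv a \ (\mod \, pA)$ for every prime $p$ and all $a \in A$. The check below (illustrative code; the helper functions are ours) confirms that the congruence holds at the split prime $p = 5$, where $A/5A \cong \FF_5 \times \FF_5$, but fails at the inert prime $p = 3$, where $A/3A \cong \FF_9$, so $\ZZ[i]$ is not a WPC $\ZZ$-algebra:

```python
# Gaussian integers a + b*i are represented as pairs (a, b), reduced mod p.

def gmul(a, b, p):
    (x1, y1), (x2, y2) = a, b
    return ((x1*x2 - y1*y2) % p, (x1*y2 + y1*x2) % p)

def gpow(a, n, p):
    r = (1, 0)
    for _ in range(n):
        r = gmul(r, a, p)
    return r

def fermat_criterion_holds(p):
    # a^p == a (mod p*Z[i]) for every residue a?
    return all(gpow((x, y), p, p) == (x, y) for x in range(p) for y in range(p))

print(fermat_criterion_holds(5))   # True:  5 splits completely in Z[i]
print(fermat_criterion_holds(3))   # False: 3 is inert, e.g. i^3 = -i is not congruent to i mod 3
```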
Weak polynomial completions {#sec:8} =========================== Let $D$ be an integral domain with quotient field $K$. For any domain extension $A$ of $D$, there is a smallest WPC extension of $D$ containing $A$ and contained in $A \otimes_D K$, called the [*weak polynomial completion of $A$*]{} and denoted $w_D(A)$ [@jess Section 8]. An example of weak polynomial completion is as follows. Let $A$ be a domain extension of $D$ and $\alpha \in A$. Following [@cah1 Section 5] we let $D_\alpha = \{f(\alpha): f \in \Int(D)\}$ denote the [*ring of values of $\Int(D)$ at $\alpha$*]{}, which is a $D$-subalgebra of $A$ containing $D[\alpha] = \{f(\alpha): f \in D[X]\}$. An easy argument shows that $D_\alpha = w_D(D[\alpha])$. By [@ell2 Example 7.3], if $D = \ZZ[T]$, then $\Int(D) = D[X]$ and therefore any extension $A$ of $D$ is WPC, but if $A = \ZZ[T/2]$ then $A$ is not a polynomially complete extension of $D$. In particular, $A = D_{T/2}$ is not a polynomially complete extension of $D$. This provides a negative answer to [@cah1 Question 5.8]. \[numthm\] Let $D$ be a Dedekind domain with quotient field $K$, and let $L$ be a finite Galois extension of $K$ with ring of integers $D'$. Let $S$ denote the complement of the union of the prime ideals of $D$ that split completely in the Dedekind domain $D'$, and let $D'' = w_D(D')$. 1. If $\ppp$ is a prime ideal of $D$ with $\ppp D'' \neq D''$, then $\ppp$ splits completely in $D'$. 2. $D'' \supseteq S^{-1}D'$. 3. $D'' = S^{-1}D'$ if the class group of $D$ is torsion. First we prove (1). Suppose that $\ppp D'' \neq D''$, so $\ppp D'' \subseteq \ppp''$ for some maximal ideal $\ppp''$ of $D''$. Let $\ppp' = D' \cap \ppp''$. Note that $D''$, being an overring of the Dedekind domain $D'$, is a Dedekind domain and is flat over $D'$, and one has $D''_{\ppp''} = D'_{\ppp'}$ and therefore $\ppp'' D''_{\ppp''} = \ppp' D'_{\ppp'}$. Therefore, by Theorem \[polybthm\] one has $$\ppp D'_{\ppp'} = \ppp D''_{\ppp''} = \ppp'' D''_{\ppp''} = \ppp' D'_{\ppp'}$$ and $D''/\ppp'' \cong D'/\ppp' \cong D/\ppp$ as $D$-algebras. Therefore $e_{\ppp' | \ppp} = f_{\ppp' | \ppp} = 1$, so since $L$ is Galois over $K$ it follows that $\ppp$ splits completely in $D'$. Now, let $x \in S$, so $x$ is not in any prime ideal of $D$ that splits completely in $D'$. Writing $xD = \ppp_1^{k_1} \cdots \ppp_r^{k_r}$ with each $\ppp_i \subseteq D$ prime, we see from (1) that $\ppp_i D'' = D''$ for each $i$, and therefore $xD'' = D''$, that is, $x$ is a unit in $D''$. Therefore $S^{-1}D' \subseteq D''$. This proves (2). Finally, suppose that the class group of $D$ is torsion. To show that $D'' = S^{-1}D'$, by (2) it suffices to show that $S^{-1}D'$ is a WPC extension of $D$. Let $\ppp$ be a prime ideal of $D$ with finite residue field, and let $\ppp'$ be any prime ideal of $S^{-1}D'$ lying over $\ppp$. Then $\ppp'$ doesn't intersect $S$, so $\ppp = D \cap \ppp'$ is contained in the complement of $S$, which is equal to the union of the prime ideals of $D$ that split completely in $D'$. Since $\ppp^k$ is principal for some $k$, say, $\ppp^k = (a)$, it follows that $a \in D \backslash S$, so $\ppp^k \subseteq \qqq$ for some prime $\qqq$ that splits completely in $D'$. Therefore $\ppp = \qqq$ splits completely in $D'$. Thus $\ppp D' = \qqq_1 \qqq_2 \cdots \qqq_r$, where the $\qqq_i$ are distinct prime ideals of $D'$ for which $D'/\qqq_i \cong D/\ppp$. Reindexing the $\qqq_i$, we may assume there is a nonnegative integer $s$ such that $\qqq_i$ meets $S$ if and only if $i > s$.
It follows, then, that $$\ppp S^{-1}D' = (\qqq_1 S^{-1}D')(\qqq_2 S^{-1}D') \cdots (\qqq_s S^{-1}D'),$$ where the $\qqq_i S^{-1}D'$ are distinct prime ideals of $S^{-1}D'$ for which $$S^{-1}D'/\qqq_i S^{-1}D' \cong S^{-1}(D'/\qqq_i) \cong S^{-1} (D/\ppp) \cong D/\ppp.$$ Therefore $S^{-1}D'$ is a locally WPC extension of $D$ by Theorem \[polybthm\], hence a WPC extension of $D$ since $S^{-1}D'$ is $D$-torsion-free. Let $D$ be a Dedekind domain with torsion class group and with quotient field $K$, and let $L$ be a finite Galois extension of $K$ with ring of integers $D'$. Let $S$ denote the complement of the union of the prime ideals of $D$ that split completely in the Dedekind domain $D'$, and let $D''$ be an overring of $D'$. Then $D''$ is a WPC extension of $D$ containing $D'$ if and only if $D'' \supseteq S^{-1}D'$. M. F. Atiyah and I. G. MacDonald, [*Introduction to Commutative Algebra*]{}, Addison-Wesley Publishing Company, New York, 1969. P.-J. Cahen, Polynomial closure, J. Number Theory 61 (1996) 226–247. P.-J. Cahen and J.-L. Chabert, [*Integer-Valued Polynomials*]{}, Mathematical Surveys and Monographs, Volume 48, American Mathematical Society, 1997. J.-L. Chabert, Factorial groups and Pólya groups in Galoisian extension of $\QQ$, in: [*Commutative Ring Theory and Applications: Proceedings of the Fourth International Conference*]{}, Eds. Fontana, Kabbaj, and Wiegand, Lecture Notes in Pure and Applied Mathematics, Volume 231, Marcel Dekker, Inc., New York, 2003. J.-L. Chabert, Generalized factorial ideals, Arabian J. Sc. and Eng. 26 (2001) 51–68. E. de Shalit and E. Iceland, Integer-valued polynomials and Lubin-Tate formal groups, J. Number Theory 129 (3) (2009) 632–639. J. Elliott, Biring and plethory structures on integer-valued polynomial rings, submitted. J. Elliott, Integer-valued polynomial rings, $t$-closure, and associated primes, http://arxiv.org/abs/1105.0142, to appear in Comm. Algebra. J. Elliott, Some new approaches to integer-valued polynomial rings, in: [*Commutative Algebra and its Applications: Proceedings of the Fifth International Fez Conference on Commutative Algebra and Applications*]{}, Eds. Fontana, Kabbaj, Olberding, and Swanson, de Gruyter, New York, 2009. J. Elliott, Binomial rings, integer-valued polynomials, and $\lambda$-rings, J. Pure Appl. Alg. 207 (2006) 165–185. J. Elliott, Universal properties of integer-valued polynomial rings, J. Algebra 318 (2007) 68–92. R. Gilmer, [*Multiplicative Ideal Theory*]{}, Marcel Dekker, Inc., New York, 1972. G. Gerboud, Substituabilité d’un anneau de Dedekind, C. R. Acad. Sci. Paris Sér. A 317 (1993) 29–32. P. Hall, [*The Edmonton Notes on Nilpotent Groups*]{}, Queen Mary College Mathematics Notes, Mathematics Department, Queen Mary College, London, 1969. W. Krull, [*Idealtheorie*]{}, Springer-Verlag, Berlin, 1935. A. Leriche, [*Groupes, Corps et Extensions de Pólya: une Question de Capitulation*]{}, Ph.D. Thesis, Université de Picardie Jules Verne, 2010. C. Wilkerson, $\lambda$-rings, binomial domains, and vector bundles over $CP(\infty)$, Comm. Algebra 10 (1982) 311–328. M. Zafrullah, $t$-invertibility and Bazzoni-like statements, J. Pure Appl. Algebra 214 (5) (2010) 654–657.
--- abstract: 'Cylindrical-like coordinates for constant-curvature 3-spaces are introduced and discussed. This helps to clarify the geometrical properties, the coordinate ranges and the meaning of free parameters in the static vacuum solution of Linet and Tian. In particular, when the cosmological constant is positive, the spacetimes have toroidal symmetry. One of the two curvature singularities can be removed by matching the Linet–Tian vacuum solution across a toroidal surface to a corresponding region of the dust-filled Einstein static universe. Some other properties and limiting cases of these space-times are also described, together with their generalisation to higher dimensions.' address: - | $^1$ Institute of Theoretical Physics, Charles University in Prague,\ V Holešovičkách 2, 180 00 Prague 8, Czech Republic - | $^2$ Department of Mathematical Sciences, Loughborough University,\ Loughborough, LE11 3TU, UK author: - Jiří Podolský$^1$ and Jerry Griffiths$^2$ title: Cylindrically and toroidally symmetric solutions with a cosmological constant --- Introduction ============ The class of static cylindrically symmetric vacuum solutions, which was found by Levi-Civita in 1919 \[1\], can be written in the form $$\d s^2 =-\rho^{4\sigma/\Sigma}\d t^2 + \rho^{-4\sigma(1-2\sigma)/\Sigma}\d z^2 +C^{2}\rho^{2(1-2\sigma)/\Sigma} \d\phi^2 +\d\rho^2, \label{LeviCivita}$$ where ${\Sigma=1-2\sigma+4\sigma^2}$. The parameter ${\sigma\in(0,{1\over4})}$ may be interpreted as the mass per unit length of the source located along the axis ${\rho=0}$, while $C$ is the conicity parameter (see \[2\] for more details). When ${\sigma=0}$, Minkowski space in cylindrical coordinates is recovered. In 1986, a generalisation of (\[LeviCivita\]) to include a non-zero cosmological constant ${\Lambda}$ was obtained by Linet \[3\] and Tian \[4\] (see also \[5\]) in the form $$\d s^2=Q^{2/3}\Big( -P^{-2(1-8\sigma+4\sigma^2)/3\Sigma}\,\d t^2 +P^{-2(1+4\sigma-8\sigma^2)/3\Sigma}\,\d z^2 +C^{2}P^{4(1-2\sigma-2\sigma^2)/3\Sigma}\,\d\phi^2 \Big) +\d\rho^2, \label{LinetTianmetric}$$ where $\rho$ is a proper radial distance from the axis and $$Q(\rho)={1\over\sqrt{3\Lambda}}\sin\Big(\sqrt{3\Lambda}\,\rho\Big), \qquad P(\rho)={2\over\sqrt{3\Lambda}}\tan\bigg({\sqrt{3\Lambda}\over2}\,\rho\bigg). \label{PQdefs}$$ Both ${\Lambda>0}$ and ${\Lambda<0}$ are admitted (in the latter case the trigonometric functions are replaced by hyperbolic ones and $\Lambda$ by ${|\Lambda|}$). The metric (\[LinetTianmetric\]), (\[PQdefs\]) locally approaches the Levi-Civita solution (\[LeviCivita\]) either as ${\Lambda\to0}$ or near the axis as ${\rho\to0}$ because ${Q\approx\rho }$ and ${P\approx\rho }$ in these limits. Constant-curvature 3-spaces in cylindrical coordinates ====================================================== In order to understand the global geometrical properties of the Linet–Tian solution, it seems to be important first to rewrite the [*maximally symmetric*]{} 3-spaces in cylindrical-like coordinates. Such constant-curvature spaces are usually written in the spherical form $$\d s^2=R^2\,\Big(\,\frac{\d r^2}{1-k\,r^2}+r^2(\d \theta^2+\sin^2\theta\,\d \phi^2)\Big), \label{constcurvspher}$$ where ${k=0, +1, -1}$ corresponds to the geometries of ${E^3,\, S^3,\, H^3}$, respectively (as is well-known from the FLRW cosmology). 
Performing the transformation $$\hat\rho=r\,\sin \theta, \qquad \hat z,\, \tan\hat z,\, \tanh\hat z=\frac{r\,\cos\theta}{\sqrt{1-k\,r^2}}, \label{transf}$$ the metric (\[constcurvspher\]) becomes $$\d s^2=R^2\Big( (1-k\,\hat\rho^2)^{-1}\d\hat\rho^2 +(1-k\,\hat\rho^2)\,\d\hat z^2 + \hat\rho^2\,\d\phi^2\Big). \label{constcurvcyl}$$ In the case when ${k=0}$, this is the flat space $E^3$ in cylindrical coordinates. Since ${\hat z\in(-\infty,+\infty)}$,  ${\phi\in[0,2\pi)}$, the surfaces ${\hat\rho=\,}$const. are obviously [*cylinders*]{} with topology $R^1\times S^1$. In the case when ${k=+1}$, the metric of the three-sphere $S^3$ in cylindrical-like coordinates is $$\d s^2=R^2\left( \frac{\d\hat\rho^2}{1-\hat\rho^2} +(1-\hat\rho^2)\,\d\psi^2 + \hat\rho^2\,\d\phi^2\right), \label{3spherecyl}$$ where ${\psi=\hat z-{\pi\over2}}$. Since the 3-space is bounded in this case, both $\psi$ and $\phi$ are [*periodic*]{} coordinates with ${\psi,\phi\in[0,2\pi)}$, ${\hat\rho\in[0,1]}$, and the surfaces ${\hat\rho=}$ const. are [*tori*]{} with topology ${S^1\times S^1}$. There are thus [*two*]{} nonintersecting circular axes at ${\,\hat\rho=0\,}$ and ${\,\hat\rho=1}$ around the closed space $S^3$. This can be explicitly seen from the parametrisation of $S^3$ as the 3-surface ${{x_1}^2+{x_2}^2+{x_3}^2+{x_4}^2={R}^2}$ in a flat space ${\d s^2=\d{x_1}^2+\d{x_2}^2+\d{x_3}^2+\d{x_4}^2}$. The metric of (\[3spherecyl\]) is obtained by $$\begin{aligned} && \hspace*{-13mm} x_1=R\,\sqrt{1-\hat\rho^2}\,\cos\psi, \hspace*{4mm} x_3=R\,\hat\rho\,\cos\phi, \hspace*{3mm} \hbox{i.e.}, \hspace*{3mm} x_1^2+x_2^2=R^2(1-\hat\rho^2), \hspace*{4mm} x_3^2+x_4^2=R^2\hat\rho^2, \nonumber\\ && \hspace*{-13mm} x_2=R\,\sqrt{1-\hat\rho^2}\,\sin\psi, \hspace*{4.5mm} x_4=R\,\hat\rho\,\sin\phi, \hspace*{20.8mm} \frac{x_2}{x_1}=\tan\psi, \hspace*{20.8mm} \frac{x_4}{x_3}=\tan\phi, \label{paramet}\end{aligned}$$ see figure \[f1\]. ![\[f1\] Two sections through the three-sphere $S^3$, namely ${\phi=0}$ (left) and ${\psi=0}$ (right). Here $\psi$ and $\phi$ are “complementary” angular coordinates in different directions. Thin tori ${\hat\rho=\,}$const. around the axes ${\hat\rho=0}$ and ${\hat\rho=1}$ do not intersect. ](torus3) Interpreting the Linet–Tian solution with $\Lambda>0$ ===================================================== In view of the above geometry, it seems necessary to relabel $z$ in (\[LinetTianmetric\]) as the angular coordinate $\psi$ and to introduce the related conicity parameter $B$. The Linet–Tian vacuum metric becomes $$\d s^2=Q^{2/3}\Big( -P^{-2(1-8\sigma+4\sigma^2)/3\Sigma}\,\d t^2 +B^{2}P^{-2(1+4\sigma-8\sigma^2)/3\Sigma}\,\d\psi^2 +C^{2}P^{4(1-2\sigma-2\sigma^2)/3\Sigma}\,\d\phi^2 \Big) +\d\rho^2, \label{LinetTianmetric2}$$ where ${Q(\rho), P(\rho)}$ are given by (\[PQdefs\]) and ${\phi,\psi\in[0,2\pi)}$. Apart from the cosmological constant $\Lambda$, the metric has three parameters: $B$, $C$ and $\sigma$. There are curvature singularities along the “axes” ${\rho=0}$ and ${\rho=\pi/\sqrt{3\Lambda}}$, with the deficit angles ${2\pi(1-C)}$ and ${2\pi(1-B)}$ in the weak-field limit. Either of these two singularities can be removed and specific values of $B$ and $C$ can be established, for example, by [*matching*]{} this vacuum solution across a surfaces on which $\rho$ is constant (${\rho=\rho_1}$) to a corresponding [*toroidal region of the Einstein static universe*]{}. 
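This toroidal form of the round metric can also be recovered by a short symbolic computation directly from the parametrisation (\[paramet\]); the sketch below (using the sympy package, with `rho` standing for ${\hat\rho}$) computes the metric induced on the 3-surface and reproduces (\[3spherecyl\]):

```python
import sympy as sp

R, rho, psi, phi = sp.symbols('R rho psi phi', positive=True)
u = [rho, psi, phi]

# embedding of S^3 of radius R, as in the parametrisation above
X = [R*sp.sqrt(1 - rho**2)*sp.cos(psi),
     R*sp.sqrt(1 - rho**2)*sp.sin(psi),
     R*rho*sp.cos(phi),
     R*rho*sp.sin(phi)]

# induced metric g_{jk} = sum_i (dX_i/du_j)(dX_i/du_k)
g = sp.Matrix(3, 3, lambda j, k: sum(sp.diff(Xi, u[j])*sp.diff(Xi, u[k]) for Xi in X))
print(g.applyfunc(sp.simplify))
# expected: diag( R**2/(1 - rho**2), R**2*(1 - rho**2), R**2*rho**2 ),
# i.e. the toroidal form of the round metric on S^3
```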
This well-known homogeneous and isotropic dust-filled universe has the spatial geometry of $S^3$ with radius ${R=1/\sqrt\Lambda}$ and thus, using the toroidal coordinates of (\[3spherecyl\]), can be written in the form $$\d s^2= -A_1^2\d t^2 + \frac{B_1^{2}}{\Lambda} \cos^2\Big(\sqrt\Lambda\,(\rho-\rho_{0})\Big)\d \psi^2 + \frac{C_1^{2}}{\Lambda} \sin^2\Big(\sqrt\Lambda\,(\rho-\rho_{0})\Big)\d \phi^2 +\d\rho^2. \label{Einstein}$$ For convenience, we applied a simple transformation ${\hat\rho\equiv\sin\Big(\sqrt\Lambda\,(\rho-\rho_{0})\Big)}$ and introduced the free constants $A_1, B_1, C_1$ and $\rho_0$ (see \[5\]). Usual matching conditions, namely that the metrics (\[LinetTianmetric2\]) and (\[Einstein\]) and their first derivatives are continuous across the surface ${\rho=\rho_1}$, can indeed be consistently satisfied when $$\begin{aligned} && \hspace*{-16mm}\cos\Big(\sqrt{3\Lambda}\,\rho_1\Big)={1-8\sigma+4\sigma^2\over1-2\sigma+4\sigma^2},\qquad\quad \tan^2\Big(\sqrt\Lambda\,(\rho_1-\rho_0)\Big) = {4\sigma(1-\sigma)\over1-4\sigma}, \label{match12}\\ A_1 \!\!\!\!&=&\!\!\!\! Q(\rho_1)^{1/3}\>P(\rho_1)^{-(1-8\sigma+4\sigma^2)/3\Sigma}, \label{match3}\\ B_1 \!\!\!\!&=&\!\!\!\! B\,\sqrt{\Lambda}\, Q(\rho_1)^{1/3}\>P(\rho_1)^{-(1+4\sigma-8\sigma^2)/3\Sigma} /\cos\Big(\sqrt\Lambda(\rho_1-\rho_{0})\Big), \label{match4} \\ C_1 \!\!\!\!&=&\!\!\!\! C\,\sqrt{\Lambda}\, Q(\rho_1)^{1/3}\>P(\rho_1)^{2(1-2\sigma-2\sigma^2)/3\Sigma} /\sin\Big(\sqrt\Lambda(\rho_1-\rho_{0})\Big), \label{match5} \end{aligned}$$ which uniquely determine ${\rho_1, \rho_0}$ in terms of ${\sigma\in[0,\frac{1}{4}]}$ and relate ${A_1, B_1, C_1}$ to ${B,C}$. ![\[f2\] Toroidal region of the Einstein static universe (grey) serves as the dust matter source of the Linet–Tian vacuum solution.](3spherecyl.eps) In the resulting composite spacetime, the curvature singularity at ${\rho=0}$ is removed. As shown in figure \[f2\], it is replaced by the [*toroidal region*]{} ${\rho\in[\,\rho_0, \rho_1)}$ which is a part of uniform Einstein static space filled with dust — [*the matter source*]{} (this is regular at ${\rho_0}$ when ${C_1=1}$). In the [*external region*]{} ${\rho\in[\,\rho_1,\pi/\sqrt{3\Lambda})}$ there is the [*Linet–Tian static vacuum solution*]{}. However, there still remains the curvature singularity at ${\rho=\pi/\sqrt{3\Lambda}}$ (which could alternatively be removed by a complementary toroidal dust matter source, keeping the singularity at ${\rho=0}$). It is now straightforward to calculate the total mass of such a toroidal dust source of density ${\mu=\frac{\Lambda}{4\pi}}$, yielding ${\int_{\rho_0}^{\rho_1}\!\!\int_0^{2\pi}\!\!\int_0^{2\pi}\mu\,\sqrt{g_3}\,\d\rho\,\d\psi\,\d\phi =\frac{2\pi B_1C_1}{\sqrt\Lambda}\,\frac{\sigma(1-\sigma)}{(1-4\sigma^2)}}$. The [*mass per unit length*]{} of the toroid is thus ${\sigma(1-\sigma)/(1-4\sigma^2)}$, i.e., it is determined just by the parameter $\sigma$. This demonstrates that $\sigma$ in the Linet–Tian class of solutions with ${\Lambda\not=0}$ retains its physical meaning known from the Levi-Civita spacetime (for the discussion of possible shell sources see \[6\]). Interestingly, the “no source” limit ${\sigma=0}$ [*is not the (anti-)de Sitter space*]{}, as one would naturally expect! 
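The quoted value of the total mass is easily reproduced by computer algebra; the sketch below (assuming ${\sigma\in(0,{1\over4})}$, substituting the second of the matching conditions (\[match12\]) for ${\rho_1}$, and integrating in the variable ${x=\sqrt\Lambda\,(\rho-\rho_{0})}$, so that ${\d\rho=\d x/\sqrt\Lambda}$) confirms the result:

```python
import sympy as sp

sigma, Lam, B1, C1 = sp.symbols('sigma Lambda B_1 C_1', positive=True)
x = sp.symbols('x', positive=True)          # x = sqrt(Lambda)*(rho - rho_0)

sqrt_g3 = (B1*C1/Lam)*sp.cos(x)*sp.sin(x)   # sqrt of the spatial metric determinant of (10)
mu = Lam/(4*sp.pi)                          # dust density of the Einstein static universe
x1 = sp.atan(sp.sqrt(4*sigma*(1 - sigma)/(1 - 4*sigma)))   # second matching condition in (9)

# total mass: psi and phi each run over [0, 2*pi); d(rho) = dx/sqrt(Lambda)
M = (2*sp.pi)**2 * mu * sp.integrate(sqrt_g3, (x, 0, x1)) / sp.sqrt(Lam)
M_quoted = 2*sp.pi*B1*C1/sp.sqrt(Lam) * sigma*(1 - sigma)/(1 - 4*sigma**2)
print(sp.simplify(M - M_quoted))            # should print 0
```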
For ${\sigma=0}$, the Linet–Tian metric reduces to $$\d s^2 =p^2(-\d t^2+B^{2}\,\d\psi^2) +\frac{4C^2}{3\Lambda}\frac{(1-p^3)}{p}\,\d\phi^2 +\frac{3}{\Lambda}\frac{p}{(1-p^3)}\,\d p^2, \label{nosource}$$ where the coordinate $p\equiv\cos^{2/3}\Big(\frac{\sqrt{3\Lambda}}{2}\,\rho\Big)$ was introduced. This spacetime of algebraic type D belongs to the Plebański–Demiański family and also to the Kundt family of solutions. In fact, it is a generalization of the BIII metric \[3,7\]. Its geometrical properties still need to be investigated. Extension to higher dimensions ============================== The Linet–Tian class of solutions described above can be extended to any higher $D$-dimensions. Such toroidally symmetric static vacuum metrics read $$\d s^2=R(\rho)^\alpha\bigg( -S(\rho)^{2p_0}\,\d t^2 +\sum_{i=1}^{D-2}C_i^{2}\,S(\rho)^{2p_i}\,\d\phi_i^{\,2} \bigg)+\d\rho^2, \label{HDG}$$ where $$R(\rho)=\cos(\beta\rho), \qquad S(\rho)=\tan(\beta\rho), \qquad \alpha=\frac{4}{D-1}, \qquad \beta=\sqrt{\frac{(D-1)\Lambda}{2(D-2)}}, \label{alphabeta}$$ ${\phi_i\in[0,2\pi)}$, $C_i$ are the corresponding conicity parameters, and the constants $p_i$ satisfy $$\sum_{i=0}^{D-2}\,p_i=1, \qquad \sum_{i=0}^{D-2}\,p_i^{\,2}=1. \label{pis}$$ For ${D=4}$ this reduces to the Linet–Tian solution (\[LinetTianmetric\]), (\[PQdefs\]) with the identification of parameters $$p_0=\frac{2\sigma}{\Sigma}, \qquad p_1=\frac{2\sigma(2\sigma-1)}{\Sigma}, \qquad p_2=\frac{1-2\sigma}{\Sigma} . \label{identif}$$ The ${\Lambda<0}$ counterpart of the metric (\[HDG\])–(\[pis\]) has been recently presented in \[8,9\]. We acknowledge financial support from the grants GAČR 202/08/0187, GAČR 202/09/0772, and the Project No. LC06014 of the Czech Ministry of Education. [9]{} Levi-Civita T 1919 [*Rend. Acc. Lincei*]{} [**28**]{} 101 Griffiths J B and Podolský J 2009 [*Exact space-times in Einstein’s general relativity*]{} (Cambridge: Cambridge University Press) Linet B 1986 [*J. Math. Phys.*]{} [**27**]{} 1817 Tian Q 1986 [*Phys. Rev.*]{} D [**33**]{} 3549 Griffiths J B and Podolský J 2010 [*Phys. Rev.*]{} D [**81**]{} 064015 Žofka M and Bičák J 2008 [*Class. Quantum Grav.*]{} [**25**]{} 015011 Bonnor W B 2008 [*Class. Quantum Grav.*]{} [**25**]{} 225005 Sar[i]{}oğlu Ö and Tekin B 2009 [*Phys. Rev.*]{} D [**79**]{} 087502 Sar[i]{}oğlu Ö and Tekin B 2009 [*Class. Quantum Grav.*]{} [**26**]{} 048001
{ "pile_set_name": "ArXiv" }
--- abstract: 'We introduce a method to provide vectorial representations of visual classification tasks which can be used to reason about the nature of those tasks and their relations. Given a dataset with ground-truth labels and a loss function defined over those labels, we process images through a “probe network” and compute an embedding based on estimates of the Fisher information matrix associated with the probe network parameters. This provides a fixed-dimensional embedding of the task that is independent of details such as the number of classes and does not require any understanding of the class label semantics. We demonstrate that this embedding is capable of predicting task similarities that match our intuition about semantic and taxonomic relations between different visual tasks ([*e.g.*]{}, tasks based on classifying different types of plants are similar). We also demonstrate the practical value of this framework for the meta-task of selecting a pre-trained feature extractor for a new task. We present a simple meta-learning framework for learning a metric on embeddings that is capable of predicting which feature extractors will perform well. Selecting a feature extractor with task embedding obtains a performance close to the best available feature extractor, while costing substantially less than exhaustively training and evaluating on all available feature extractors.' author: - | Alessandro Achille\ UCLA and AWS\ [[email protected]]{} - | Michael Lam\ AWS\ [[email protected]]{} - | Rahul Tewari\ AWS\ [[email protected]]{} - | Avinash Ravichandran\ AWS\ [[email protected]]{} - | Subhransu Maji\ UMass and AWS\ [[email protected]]{} - | Charless Fowlkes\ UCI and AWS\ [[email protected]]{} - | Stefano Soatto\ UCLA and AWS\ [[email protected]]{} - | Pietro Perona\ Caltech and AWS\ [[email protected]]{} bibliography: - 'references.bib' title: '<span style="font-variant:small-caps;">Task2Vec:</span> Task Embedding for Meta-Learning' --- ![image](figures/tsne_combined.pdf){width=".8\linewidth"} Introduction ============ The success of Deep Learning hinges in part on the fact that models learned for one task can be used on other related tasks. Yet, no general framework exists to describe and learn relations between tasks. We introduce the <span style="font-variant:small-caps;">task2vec</span> embedding, a technique to represent tasks as elements of a vector space based on the Fisher Information Matrix. The norm of the embedding correlates with the complexity of the task, while the distance between embeddings captures semantic similarities between tasks (Fig. \[fig:taxonomic\_qualitative\]). When other natural distances are available, such as the taxonomical distance in biological classification, we find that the embedding distance correlates positively with it (Fig. \[fig:taxonomical\_distance\]). Moreover, we introduce an asymmetric distance on tasks which correlates with the transferability between tasks. Computation of the embedding leverages a duality between network parameters (weights) and outputs (activations) in a deep neural network (DNN): Just as the activations of a DNN trained on a complex visual recognition task are a rich representation of the input images, we show that the gradients of the weights relative to a task-specific loss are a rich representation of the task itself. 
Specifically, given a task defined by a dataset ${\mathcal{D}}={\{{(x_i,y_i)}\}}_{i=1}^N$ of labeled samples, we feed the data through a pre-trained reference convolutional neural network which we call “*probe network*”, and compute the diagonal Fisher Information Matrix (FIM) of the network filter parameters to capture the structure of the task (Sect. \[sec:fisher\]). Since the architecture and weights of the probe network are fixed, the FIM provides a fixed-dimensional representation of the task. We show this embedding encodes the “difficulty” of the task, characteristics of the input domain, and which features of the probe network are useful to solve it (Sect. \[sec:task2vec\]). Our task embedding can be used to reason about the space of tasks and solve meta-tasks. As a motivating example, we study the problem of selecting the best pre-trained feature extractor to solve a new task. This can be particularly valuable when there is insufficient data to train or fine-tune a generic model, and transfer of knowledge is essential. [<span style="font-variant:small-caps;">task2vec</span> ]{}depends solely on the task, and ignores interactions with the model which may however play an important role. To address this, we learn a joint task and model embedding, called [<span style="font-variant:small-caps;">model2vec</span>]{}, in such a way that models whose embeddings are close to a task exhibit good perfmormance on the task. We use this to select an expert from a given collection, improving performance relative to fine-tuning a generic model trained on ImageNet and obtaining close to ground-truth optimal selection. We discuss our contribution in relation to prior literature in Sect. \[sec:related-work\], after presenting our empirical results in Sect. \[sec:experiments\]. Task Embeddings via Fisher Information {#sec:fisher} ====================================== Given an observed input $x$ ([*e.g.*]{}, an image) and an hidden task variable $y$ ([*e.g.*]{}, a label), a deep network is a family of functions $p_w(y|x)$ parametrized by weights $w$, trained to approximate the posterior $p(y|x)$ by minimizing the (possibly regularized) cross entropy loss $H_{p_w,\hat{p}}(y|x) = {\mathbb{E}}_{x,y \sim \hat{p}}[-\log p_w(y|x)]$, where $\hat{p}$ is the empirical distribution defined by the training set ${\mathcal{D}}={\{{(x_i,y_i)}\}}_{i=1}^N$. It is useful, especially in transfer learning, to think of the network as composed of two parts: a feature extractor which computes some representation $z=\phi_w(x)$ of the input data, and a “head,” or classifier, which encodes the distribution $p(y|z)$ given the representation $z$. Not all network weights are equally useful in predicting the task variable: the importance, or “informative content,” of a weight for the task can be quantified by considering a perturbation $w'=w + \delta w$ of the weights, and measuring the average Kullbach-Leibler (KL) divergence between the original output distribution $p_{w}(y|x)$ and the perturbed one $p_{w'}(y|x)$. To second-order approximation, this is $${\mathbb{E}}_{x\sim \hat{p}} {\mathop{KL}({\textstyle p_{w'}(y|x)}\,\|\,{{\textstyle p_w(y|x)}})} = \delta w \cdot F \delta w + o(\delta w^2),$$ where $F$ is the Fisher information matrix (FIM): $$F = {\mathbb{E}}_{x, y \sim \hat{p}(x) p_w (y|x)} \left[ \nabla_w \log p_w(y|x) \nabla_w \log p_w(y|x)^T \right].$$ that is, the expected covariance of the scores (gradients of the log-likelihood) with respect to the model parameters. 
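In practice this expectation can be estimated by Monte Carlo over the training inputs, with the label $y$ sampled from the model's own predictive distribution $p_w(y|x)$. The snippet below is a minimal PyTorch-style sketch of such an estimate of the diagonal of $F$ for the probe-network parameters; the `probe`/`head` split, the data loader, and the omission of the filter-wise averaging described below are simplifications of ours, not the authors' reference implementation.

```python
import torch
import torch.nn.functional as TF

def diagonal_fim(probe, head, loader):
    """Monte Carlo estimate of diag(F) for the probe (feature extractor) parameters.

    Assumes `loader` yields one example at a time; with mini-batches one would
    need per-example gradients for a faithful estimate of E[score^2].
    """
    params = list(probe.parameters())            # must have requires_grad=True
    fim = [torch.zeros_like(p) for p in params]
    n = 0
    for x, _ in loader:                          # ground-truth labels are not used
        logits = head(probe(x))
        y = torch.distributions.Categorical(logits=logits).sample()
        loss = TF.cross_entropy(logits, y)       # -log p_w(y|x)
        grads = torch.autograd.grad(loss, params)
        fim = [f + g.detach() ** 2 for f, g in zip(fim, grads)]
        n += 1
    return [f / n for f in fim]                  # one entry per parameter tensor
```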
The FIM is a Riemannian metric on the space of probability distributions [@amari2000methods], and provides a measure of the information a particular parameter (weight or feature) contains about the joint distribution $p_w(x,y)=\hat{p}(x)p_w(y|x)$: If the classification performance for a given task does not depend strongly a parameter, the corresponding entry in the FIM will be small. The FIM is also related to the (Kolmogorov) complexity of a task, a property that can be used to define a computable metric of the learning distance between tasks [@achille2018kolmogorov]. Finally, the FIM can be interpreted as an easy-to-compute positive semidefinite upper-bound to the Hessian of the cross-entropy loss, and coincides with it at local minima [@martens14new]. In particular, “flat minima” correspond to weights that have, on average, low (Fisher) information [@achille2018emergence; @hochreiter1997flat]. [<span style="font-variant:small-caps;">task2vec</span> ]{}embedding using a probe network {#sec:task2vec} ------------------------------------------------------------------------------------------ While the network activations capture the information in the input image which are needed to infer the image label, the FIM indicates the set of feature maps which are more informative for solving the current task. Following this intuition, we use the FIM to represent the task itself. However, the FIMs computed on different networks are not directly comparable. To address this, we use single “probe” network pre-trained on ImageNet as a feature extractor and re-train only the classifier layer on any given task, which usually can be done efficiently. After training is complete, we compute the FIM for the feature extractor parameters. Since the full FIM is unmanageably large for rich probe networks based on CNNs, we make two additional approximations. First, we only consider the diagonal entries, which implicitly assumes that correlations between different filters in the probe network are not important. Second, since the weights in each filter are usually not independent, we average the Fisher Information for all weights in the same filter. The resulting representation thus has fixed size, equal to the number of filters in the probe network. We call this embedding method <span style="font-variant:small-caps;">task2vec</span>. #### Robust Fisher computation Since the FIM is a local quantity, it is affected by the local geometry of the training loss landscape, which is highly irregular in many deep network architectures [@li2017visualizing], and may be too noisy when trained with few samples. To avoid this problem, instead of a direct computation, we use a more robust estimator that leverages connections to variational inference. Assume we perturb the weights $\hat{w}$ of the network with Gaussian noise ${\mathcal{N}}(0, \Lambda)$ with precision matrix $\Lambda$, and we want to find the optimal $\Lambda$ which yields a good expected error, while remaining close to an isotropic prior ${\mathcal{N}}(\hat{w}, \lambda^2 I)$. That is, we want to find $\Lambda$ that minimizes: $$\begin{gathered} L(\hat{w}; \Lambda) = {\mathbb{E}}_{w\sim {\mathcal{N}}(\hat{w},\Lambda)} [H_{p_w,\hat{p}}p(y|x)] \\ + \beta {\mathop{KL}({\textstyle {\mathcal{N}}(0, \Lambda)}\,\|\,{{\textstyle {\mathcal{N}}(0, \lambda^2 I)}})},\end{gathered}$$ where $H$ is the cross-entropy loss and $\beta$ controls the weight of the prior. Notice that for $\beta=1$ this reduces to the Evidence Lower-Bound (ELBO) commonly used in variational inference. 
Approximating to the second order, the optimal value of $\Lambda$ satisfies (see Supplementary Material): $$\frac{\beta}{2N} \Lambda = F + \frac{\beta\lambda^2}{2N} I.$$ Therefore, $\frac{\beta}{2N} \Lambda \sim F + o(1)$ can be considered as an estimator of the FIM $F$, biased towards the prior $\lambda^2 I$ in the low-data regime instead of being degenerate. In case the task is trivial (the loss is constant or there are too few samples) the embedding will coincide with the prior $\lambda^2 I$, which we will refer to as the **trivial embedding**. This estimator has the advantage of being easy to compute by directly minimizing the loss $L(\hat{w}; \Sigma)$ through Stochastic Gradient Variational Bayes [@kingma2015variational], while being less sensitive to irregularities of the loss landscape than direct computation, since the value of the loss depends on the cross-entropy in a neighborhood of $\hat{w}$ of size $\Lambda^{-1}$. As in the standard Fisher computation, we estimate one parameter per filter, rather than per weight, which in practice means that we constrain $\Lambda_{ii} = \Lambda_{jj}$ whenever $w_i$ and $w_j$ belongs to the same filter. In this case, optimization of $L(\hat{w}; \Lambda)$ can be done efficiently using the local reparametrization trick of [@kingma2015variational]. Properties of the [<span style="font-variant:small-caps;">task2vec</span> ]{}embedding -------------------------------------------------------------------------------------- The task embedding we just defined has a number of useful properties. For illustrative purposes, consider a two-layer sigmoidal network for which an analytic expression can be derived (see Supplementary Materials). The FIM of the feature extractor parameters can be written using the Kronecker product as $$F = {\mathbb{E}}_{x,y\sim \hat{p}(x)p_w(y|x)} [(y-p)^2 \cdot S \otimes xx^T]$$ where $p = p_w(y = 1 | x)$ and the matrix $S = ww^T \odot zz^T \odot (1-z)(1-z)^T$ is an element-wise product of classifier weights $w$ and first layer feature activations $z$. It is informative to compare this expression to an embedding based only on the dataset domain statistics, such as the (non-centered) covariance $C_0 = {\mathbb{E}}\left[ xx^T\right]$ of the input data or the covariance $C_1 = {\mathbb{E}}\left[ zz^T\right]$ of the feature activations. One could take such statistics as a representative [*domain embedding*]{} since they only depend on the marginal distribution $p(x)$ in contrast to the FIM [*task embedding*]{}, which depends on the joint distribution $p(x,y)$. These simple expressions highlight some important (and more general) properties of the Fisher embedding we now describe. **Invariance to the label space:** The task embedding does not directly depend on the task labels, but only on the predicted distribution $ p_w(y| x)$ of the trained model. Information about the ground-truth labels $y$ is encoded in the weights $w$ which are a sufficient statistic of the task [@achille2018emergence]. In particular, the task embedding is invariant to permutations of the labels $y$, and has fixed dimension (number of filters of the feature extractor) regardless of the output space (e.g., k-way classification with varying k). **Encoding task difficulty:** As we can see from the expressions above, if the fit model is very confident in its predictions, ${\mathbb{E}}[(y-p)^2]$ goes to zero. Hence, the norm of the task embedding $\|F\|_\star$ scales with the difficulty of the task for a given feature extractor $\phi$. 
Figure \[fig:taxonomical\_distance\] (Right) shows that even for more complex models trained on real data, the FIM norm correlates with test performance. **Encoding task domain:** Data points $x$ that are classified with high confidence, i.e., $p$ is close to 0 or 1, will have a lower contribution to the task embedding than points near the decision boundary since $p(1-p)$ is maximized at $p=1/2$. Compare this to the covariance matrix of the data, $C_0$, to which all data points contribute equally. Instead, in [<span style="font-variant:small-caps;">task2vec</span> ]{}information on the domain is based on data near the decision boundary (task-weighted domain embedding). **Encoding useful features for the task:** The FIM depends on the curvature of the loss function with the diagonal entries capturing the sensitivity of the loss to model parameters. Specifically, in the two-layer model one can see that, if a given feature is uncorrelated with $y$, the corresponding blocks of $F$ are zero. In contrast, a domain embedding based on feature activations of the probe network (e.g., $C_1$) only reflects which features vary over the dataset without indication of whether they are relevant to the task. Similarity Measures on the Space of Tasks ========================================= What metric should be used on the space of tasks? This depends critically on the meta-task we are considering. As a motivation, we concentrate on the meta-task of selecting the pre-trained feature extractor from a set in order to obtain the best performance on a new training task. There are several natural metrics that may be considered for this meta-task. In this work, we mainly consider: #### Taxonomic distance For some tasks, there is a natural notion of semantic similarity, for instance defined by sets of categories organized in a taxonomic hierarchy where each task is classification inside a subtree of the hierarchy (*e.g.*, we may say that classifying breeds of dogs is closer to classification of cats than it is to classification of species of plants). In this setting, we can define $$D_\text{tax}(t_a,t_b) = \min_{i\in S_a, j\in S_b} d(i,j),$$ where $S_a,S_b$ are the sets of categories in task $t_a,t_b$ and $d(i,j)$ is an ultrametric or graph distance in the taxonomy tree. Notice that this is a proper distance, and in particular it is symmetric. #### Transfer distance. We define the transfer (or fine-tuning) gain from a task $t_a$ to a task $t_b$ (which we improperly call distance, but is not necessarily symmetric or positive) as the difference in expected performance between a model trained for task $t_b$ from a fixed initialization (random or pre-trained), and the performance of a model fine-tuned for task $t_b$ starting from a solution of task $t_a$: $$D_{\text{ft}}(t_a \to t_b) = \frac{{\mathbb{E}}[\ell_{a\to b}] - {\mathbb{E}}[\ell_b]}{{\mathbb{E}}[\ell_b]},$$ where the expectations are taken over all trainings with the selected architecture, training procedure and network initialization, $\ell_{b}$ is the final test error obtained by training on task $b$ from the chosen initialization, and $\ell_{a \to b}$ is the error obtained instead when starting from a solution to task $a$ and then fine-tuning (with the selected procedure) on task $t_b$. 
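As a concrete illustration of $D_\text{tax}$, the graph distance can be computed with a plain breadth-first search on the taxonomy tree. The toy taxonomy below is a made-up fragment of ours (not one of the datasets used in the paper) and is only meant to show the min-over-pairs structure of the definition.

```python
from collections import deque

# Toy taxonomy: child -> parent (an illustrative fragment only)
parent = {"tabby": "cat", "siamese": "cat", "beagle": "dog", "husky": "dog",
          "cat": "carnivora", "dog": "carnivora", "oak": "plantae",
          "carnivora": "root", "plantae": "root"}

def graph_distance(a, b):
    """Hop distance between two nodes of the (undirected) taxonomy tree."""
    adj = {}
    for c, p in parent.items():
        adj.setdefault(c, set()).add(p)
        adj.setdefault(p, set()).add(c)
    seen, frontier = {a}, deque([(a, 0)])
    while frontier:
        node, dist = frontier.popleft()
        if node == b:
            return dist
        for nxt in adj.get(node, ()):
            if nxt not in seen:
                seen.add(nxt)
                frontier.append((nxt, dist + 1))
    return float("inf")

def d_tax(task_a, task_b):
    """D_tax(t_a, t_b) = min over category pairs of the graph distance."""
    return min(graph_distance(i, j) for i in task_a for j in task_b)

print(d_tax({"tabby", "siamese"}, {"beagle", "husky"}))   # 4: cats vs dogs
print(d_tax({"tabby", "siamese"}, {"oak"}))               # 5: cats vs a plant task
```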
Symmetric and asymmetric [<span style="font-variant:small-caps;">task2vec</span> ]{}metrics ------------------------------------------------------------------------------------------- By construction, the Fisher embedding on which [<span style="font-variant:small-caps;">task2vec</span> ]{}is based captures fundamental information about the structure of the task. We may therefore expect that the distance between two embeddings correlate positively with natural metrics on the space of tasks. However, there are two problems in using the Euclidean distance between embeddings: the parameters of the network have different scales, and the norm of the embedding is affected by complexity of the task and the number of samples used to compute the embedding. #### Symmetric [<span style="font-variant:small-caps;">task2vec</span> ]{}distance To make the distance computation robust, we propose to use the cosine distance between normalized embeddings: $$d_{\text{sym}}(F_a, F_b) = d_\text{cos} \Big(\frac{F_a}{F_a + F_b}, \frac{F_b}{F_a + F_b}\Big),$$ where $d_\text{cos}$ is the cosine distance, $F_a$ and $F_b$ are the two task embeddings ([*i.e.*]{}, the diagonal of the Fisher Information computed on the same probe network), and the division is element-wise. This is a symmetric distance which we expect to capture semantic similarity between two tasks. For example, we show in Fig. \[fig:taxonomical\_distance\] that it correlates well with the taxonomical distance between species on iNaturalist. On the other hand, precisely for this reason, this distance is ill-suited for tasks such as model selection, where the (intrinsically asymmetric) transfer distance is more relevant. #### Asymmetric [<span style="font-variant:small-caps;">task2vec</span> ]{}distance In a first approximation, that does not consider either the model or the training procedure used, positive transfer between two tasks depends both on the similarity between two tasks and on the complexity of the first. Indeed, pre-training on a general but complex task such as ImageNet often yields a better result than fine-tuning from a close dataset of comparable complexity. In our case, complexity can be measured as the distance from the trivial embedding. This suggests the following asymmetric score, again improperly called a “distance” despite being asymmetric and possibly negative: $$d_{\text{asym}}(t_a \to t_b) = d_{\text{sym}}(t_a, t_b) - \alpha d_\text{sym}(t_a, t_0),$$ where $t_0$ is the trivial embedding, and $\alpha$ is an hyperparameter. This has the effect of bring more complex models closer. The hyper-parameter $\alpha$ can be selected based on the meta-task. In our experiments, we found that the best value of $\alpha$ ($\alpha=0.15$ when using a ResNet-34 pretrained on ImageNet as the probe network) is robust to the choice of meta-tasks. <span style="font-variant:small-caps;">model2vec</span>: task/model co-embedding {#sec:model_embedding} ================================================================================ By construction, the [<span style="font-variant:small-caps;">task2vec</span> ]{}distance ignores details of the model and only relies on the task. If we know what task a model was trained on, we can represent the model by the embedding of that task. However, in general we may not have such information ([*e.g.*]{}, black-box models or hand-constructed feature extractors). We may also have multiple models trained on the same task with different performance characteristics. 
To model the joint interaction between task and model ([*i.e.*]{}, architecture and training algorithm), we aim to learn a joint embedding of the two. We consider for concreteness the problem of learning a joint embedding for model selection. In order to embed models in the task space so that those near a task are likely to perform well on that task, we formulate the following meta-learning problem: Given $k$ models, their [<span style="font-variant:small-caps;">model2vec</span>]{} embedding are the vectors $m_i = F_i + b_i$, where $F_i$ is the task embedding of the task used to train model $m_i$ (if available, else we set it to zero), and $b_i$ is a learned “model bias” that perturbs the task embedding to account for particularities of the model. We learn $b_i$ by optimizing a $k$-way cross entropy loss to predict the best model given the task distance (see Supplementary Material): $$\mathcal{L} = {\mathbb{E}}[ -\log p(m\,|\,d_\text{asym}(t, m_0), \ldots, d_\text{asym}(t, m_k))].$$ After training, given a novel query task $t$, we can then predict the best model for it as the $\arg\max_i d_\text{asym}(t, m_i)$, that is, the model $m_i$ embedded closest to the query task. ![image](figures/violin_plot_full.pdf){height=".38\linewidth"} Experiments {#sec:experiments} =========== We test [<span style="font-variant:small-caps;">task2vec</span> ]{}on a large collection of tasks and models, related to different degrees. Our experiments aim to test both qualitative properties of the embedding and its performance on meta-learning tasks. We use an off-the-shelf ResNet-34 pretrained on ImageNet as our probe network, which we found to give the best overall performance (see Sect. \[sec:model-selection\]). The collection of tasks is generated starting from the following four main datasets. **iNaturalist** [@van2018inaturalist]: Each task extracted corresponds to species classification in a given taxonomical order. For instance, the *“Rodentia task”* is to classify species of rodents. Notice that each task is defined on a separate subset of the images in the original dataset; that is, the domains of the tasks are disjoint. **CUB-200** [@WahCUB_200_2011]: We use the same procedure as iNaturalist to create tasks. In this case, all tasks are classifications inside orders of birds (the *aves* taxonomical class), and have generally much less training samples than corresponding tasks in iNaturalist. **iMaterialist** [@iMatFGVC5] and **DeepFashion** [@liu2016deepfashion]: Each image in both datasets is associated with several binary attributes ([*e.g.*]{}, style attributes) and categorical attributes ([*e.g.*]{}, color, type of dress, material). We binarize the categorical attributes, and consider each attribute as a separate task. Notice that, in this case, all tasks share the same domain and are naturally correlated. In total, our collection of tasks has 1460 tasks (207 iNaturalist, 25 CUB, 228 iMaterialist, 1000 DeepFashion). While a few tasks have many training examples ([*e.g.*]{}, hundred thousands), most have just hundreds or thousands of samples. This simulates the heavy-tail distribution of data in real-world applications. Together with the collection of tasks, we collect several “expert” feature extractors. These are ResNet-34 models pre-trained on ImageNet and then fine-tuned on a specific task or collection of related tasks (see Supplementary Materials for details). We also consider a “generic”expert pre-trained on ImageNet without any finetuning. 
Finally, for each combination of expert feature extractor and task, we trained a linear classifier on top of the expert in order to solve the selected task using the expert. In total, we trained 4,100 classifiers, 156 feature extractors and 1,460 embeddings. The total effort to generate the final results was about 1,300 GPU hours. #### Meta-tasks. In Sect. \[sec:model-selection\], for a given task we aim to predict, using [<span style="font-variant:small-caps;">task2vec</span> ]{}, which expert feature extractor will yield the best classification performance. In particular, we formulate two model selection meta-tasks: **iNat + CUB** and **Mixed**. The first consists of 50 tasks and experts from iNaturalist and CUB, and aims to test fine-grained expert selection in a restricted domain. The second contains a mix of 26 curated experts and 50 random tasks extracted from all datasets, and aims to test model selection between different domains and tasks (see Supplementary Material for details). Task Embedding Results ---------------------- #### Task Embedding qualitatively reflects taxonomic distance for iNaturalist For tasks extracted from the iNaturalist dataset (classification of species), the taxonomical distance between orders provides a natural metric of the semantic similarity between tasks. In Figure \[fig:taxonomical\_distance\] we compare the symmetric [<span style="font-variant:small-caps;">task2vec</span> ]{}distance with the taxonomical distance, showing strong agreement. #### Task embedding for iMaterialist In Fig. \[fig:taxonomic\_qualitative\] we show a t-SNE visualization of the embedding for iMaterialist and iNaturalist tasks. Task embedding yields interpretable results: Tasks that are correlated in the dataset, such as binary classes corresponding to the same categorical attribute, may end up far away from each other and close to other tasks that are semantically more similar ([*e.g.*]{}, the *jeans* category task is close to the *ripped* attribute and the *denim* material). This is reflected in the mixture of colors of semantically related nearby tasks, showing non-trivial grouping. We also compare the [<span style="font-variant:small-caps;">task2vec</span> ]{}embedding with a domain embedding baseline, which only exploits the input distribution $p(x)$ rather than the task distribution $p(x,y)$. While some tasks are highly correlated with their domain ([*e.g.*]{}, tasks from iNaturalist), other tasks differ only on the labels ([*e.g.*]{}, all the attribute tasks of iMaterialist, which share the same clothes domain). Accordingly, the domain embedding recovers similar clusters on iNaturalist. However, on iMaterialst domain embedding collapses all tasks to a single uninformative cluster (not a single point due to slight noise in embedding computation). #### Task Embedding encodes task difficulty The scatter-plot in Fig. \[fig:model\_recommendation\] compares the norm of embedding vectors vs. performance of the best expert (or task specific model for cases where we have the diagonal computed). As shown analytically for the two-layers model, the norm of the task embedding correlates with the complexity of the task also on real tasks and architectures. ![**[<span style="font-variant:small-caps;">task2vec</span> ]{}improves results at different dataset sizes and training conditions:** Performance of model selection on a subset of 4 tasks as a function of the number of samples available to train relative to optimal model selection (dashed orange). 
Training a classifier on the feature extractor selected by [<span style="font-variant:small-caps;">task2vec</span> ]{}(solid red) is always better than using a generic ImageNet feature extractor (dashed red). The same holds when allowed to fine-tune the feature extractor (blue curves). Also notice that in the low-data regime fine-tuning the ImageNet feature extractor is more expensive and has a worse performance than accurately selecting a good fixed feature extractor. []{data-label="fig:data_efficiency"}](figures/data_efficiency.pdf){width="0.8\columnwidth"}

  Probe network   Top-10       All
  --------------- ------------ ------------
  Chance          +13.95%      +59.52%
  VGG-13          +4.82%       +38.03%
  DenseNet-121    +0.30%       +10.63%
  ResNet-34       **+0.00%**   **+9.97%**

  : **Choice of probe network.** Mean relative error increase over the ground-truth optimum on the iNat+CUB meta-task for different choices of the probe network. We also report the performance on the top 10 tasks with more samples to show how data size affects different architectures.[]{data-label="fig:probe_network_choice"}

Model Selection {#sec:model-selection}
---------------

Given a task, our aim is to select an expert feature extractor that maximizes the classification performance on that task. We propose two strategies: (1) embed the task and select the feature extractor trained on the most similar task, and (2) jointly embed the models and tasks, and select a model using the learned metric (see Section \[sec:model\_embedding\]). Notice that (1) does not use knowledge of the model performance on various tasks, which makes it more widely applicable but requires that we know what task a model was trained for, and it may ignore the fact that models trained on slightly different tasks may still provide an overall better feature extractor (for example by over-fitting less to the task they were trained on).

In Table \[fig:model\_recommendation\_table\] we compare the overall results of the various proposed metrics on the model selection meta-tasks. On both the iNat+CUB and Mixed meta-tasks, the Asymmetric [<span style="font-variant:small-caps;">task2vec</span> ]{}model selection is close to the ground-truth optimal, and significantly improves over both chance and over using a generic ImageNet expert. Notice that our method has $O(1)$ complexity, while searching over a collection of $N$ experts is $O(N)$.

#### Error distribution

In Fig. \[fig:model\_recommendation\] we show in detail the error distribution of the experts on multiple tasks. It is interesting to notice that the classification error obtained using most experts clusters around some mean value, and little improvement is observed over using a generic expert. On the other hand, a few optimal experts can obtain a largely better performance on the task than a generic expert. This confirms the importance of having access to a large collection of experts when solving a new task, especially if few training data are available. But such a collection can only be exploited efficiently if an algorithm is available to find one of the few good experts for a given task, which is what we propose.
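Strategy (1) amounts to a nearest-neighbour search in embedding space. The sketch below computes the symmetric and asymmetric scores defined in the previous section and picks the expert whose training-task embedding is most similar to the query task; the embeddings, expert names and dimensionality are placeholders of ours, and $\alpha=0.15$ is the value reported above for the ResNet-34 probe.

```python
import numpy as np

def d_sym(Fa, Fb):
    """Cosine distance between the (elementwise) normalized task embeddings."""
    a, b = Fa / (Fa + Fb), Fb / (Fa + Fb)
    return 1.0 - a.dot(b) / (np.linalg.norm(a) * np.linalg.norm(b))

def d_asym(Fa, Fb, F0, alpha=0.15):
    """d_asym(t_a -> t_b) = d_sym(t_a, t_b) - alpha * d_sym(t_a, t_0)."""
    return d_sym(Fa, Fb) - alpha * d_sym(Fa, F0)

rng = np.random.default_rng(0)
dim = 512                                        # number of probe-network filters
experts = {name: rng.uniform(0.1, 1.0, dim) for name in
           ("aves", "rodentia", "fabric", "imagenet")}
F0 = np.full(dim, 1e-2)                          # trivial embedding (the prior)
F_task = rng.uniform(0.1, 1.0, dim)              # embedding of the novel task

# Strategy (1): expert trained on the most similar task (symmetric distance)
print(min(experts, key=lambda name: d_sym(F_task, experts[name])))
# The asymmetric score additionally discounts "complex" source tasks
print({name: round(d_asym(F, F_task, F0), 3) for name, F in experts.items()})
```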
  Meta-task    Optimal   Chance     ImageNet   [<span style="font-variant:small-caps;">task2vec</span> ]{}   Asymmetric [<span style="font-variant:small-caps;">task2vec</span> ]{}   [<span style="font-variant:small-caps;">model2vec</span>]{}
  ------------ --------- ---------- ---------- ------------------------------------------------------------- ------------------------------------------------------------------------ -------------------------------------------------------------
  iNat + CUB   31.24     +59.52%    +30.18%    +42.54%                                                        +9.97%                                                                   **+6.81%**
  Mixed        22.90     +112.49%   +75.73%    +40.30%                                                        +29.23%                                                                  **+27.81%**

#### Dependence on task dataset size

Finding experts is especially important when the task we are interested in has relatively few samples. In Fig. \[fig:data\_efficiency\] we show how the performance of [<span style="font-variant:small-caps;">task2vec</span> ]{}varies on a model selection task as the number of samples varies. At all sample sizes [<span style="font-variant:small-caps;">task2vec</span> ]{}is close to the optimum, and improves over selecting a generic expert (ImageNet), both when fine-tuning and when training only a classifier. We observe that the best choice of experts is not affected by the dataset size, and that even with few examples [<span style="font-variant:small-caps;">task2vec</span> ]{}is able to find the optimal experts.

#### Choice of probe network

In Table \[fig:probe\_network\_choice\] we show that DenseNet [@huang2017densely] and ResNet architectures [@he2016deep] perform significantly better when used as probe networks to compute the [<span style="font-variant:small-caps;">task2vec</span> ]{}embedding than a VGG [@simonyan2014very] architecture.

Related Work {#sec:related-work}
============

#### Task and Domain embedding.

Tasks distinguished by their domain can be understood simply in terms of image statistics. Due to the bias of different datasets, sometimes a benchmark task may be identified just by looking at a few images [@torralba2011unbiased]. The question of determining what summary statistics are useful (analogous to our choice of probe network) has also been considered, for example [@edwards2016towards] train an autoencoder that learns to extract fixed dimensional summary statistics that can reproduce many different datasets accurately. However, for general vision tasks which apply to all natural images, the domain is the same across tasks. Taskonomy [@zamir2018taskonomy] explores the structure of the space of tasks, focusing on the question of effective knowledge transfer in a curated collection of 26 visual tasks, ranging from classification to 3D reconstruction, defined on a common domain. They compute pairwise transfer distances between pairs of tasks and use the results to compute a directed hierarchy. Introducing novel tasks requires computing the pairwise distance with tasks in the library. In contrast, we focus on a larger library of 1,460 fine-grained classification tasks both on same and different domains, and show that it is possible to represent tasks in a topological space with a constant-time embedding. The large task collection and cheap embedding costs allow us to tackle new meta-learning problems.

#### Fisher kernels

Our work takes inspiration from Jaakkola and Hausler [@jaakkola1999exploiting].
They propose the “Fisher Kernel”, which uses the gradients of a generative model score function as a representation of similarity between data items $$K(x^{(1)}, x^{(2)}) = \nabla_\theta \log P (x^{(1)}|\theta)^{T} F^{-1}\nabla_\theta \log P (x^{(2)}|\theta).$$ Here $P(x | \theta)$ is a parameterized generative model and $F$ is the Fisher information matrix. This provides a way to utilize generative models in the context of discriminative learning. Variants of the Fisher kernel have found wide use as a representation of images [@perronnin2010improving; @sanchez2013image], and other structured data such as protein molecules [@jaakkola1999using] and text [@saunders2003string]. Since the generative model can be learned on unlabelled data, several works have investigated the use of Fisher kernel for unsupervised learning [@holub2005combining; @seeger2000learning]. [@van2011learning] learns a metric on the Fisher kernel representation similar to our metric learning approach. Our approach differs in that we use the FIM as a representation of a whole dataset (task) rather than using model gradients as representations of individual data items. #### Fisher Information for CNNs Our approach to task embedding makes use of the Fisher Information matrix of a neural network as a characterization of the task. Use of Fisher information for neural networks was popularized by Amari [@amari1998natural] who advocated optimization using natural gradient descent which leverages the fact that the FIM is an appropriate parameterization-independent metric on statistical models. Recent work has focused on approximates of FIM appropriate in this setting (see e.g., [@heskes2000natural; @finn2017model; @martens2015optimizing]). FIM has also been proposed for various regularization schemes [@achille2018emergence; @arora2018stronger; @liang2017fisher; @mroueh2017fisher], analyze learning dynamics of deep networks [@achille2017critical], and to overcome catastrophic forgetting [@kirkpatrick2017overcoming]. #### Meta-learning and Model Selection The general problem of meta-learning has a long history with much recent work dedicated to problems such as neural architecture search and hyper-parameter estimation. Closely related to our problem is work on selecting from a library of classifiers to solve a new task [@smith2014recommending; @abdulrahman2018speeding; @leite2012selecting]. Unlike our approach, these usually address the question via land-marking or active testing, in which a few different models are evaluated and performance of the remainder estimated by extension. This can be viewed as a problem of completing a matrix defined by performance of each model on each task. A similar approach has been taken in computer vision for selecting a detector for a new category out of a large library of detectors [@matikainen2012model; @zhang2014predicting; @wang2015model]. Discussion ========== [<span style="font-variant:small-caps;">task2vec</span> ]{}is an efficient way to represent a task, or the corresponding dataset, as a fixed dimensional vector. It has several appealing properties, in particular its norm correlates with the test error obtained on the task, and the cosine distance between embeddings correlates with natural distances between tasks, when available, such as the taxonomic distance for species classification, and the fine-tuning distance for transfer learning. Having a representation of tasks paves the way for a wide variety of meta-learning tasks. 
In this work, we focused on the selection of an expert feature extractor in order to solve a new task, especially when little training data is present, and showed that using [<span style="font-variant:small-caps;">task2vec</span> ]{}to select an expert from a collection can noticeably improve test performance while adding only a small overhead to the training process.

Meta-learning on the space of tasks is an important step toward general artificial intelligence. In this work, we introduce a way of dealing with thousands of tasks, enough to enable reconstructing a topology on the task space and to test meta-learning solutions. The current experiments highlight the usefulness of our methods. Even so, our collection does not capture the full complexity and variety of tasks that one may encounter in real-world situations. Future work should further test the effectiveness, robustness, and limitations of the embedding on larger and more diverse collections.
{ "pile_set_name": "ArXiv" }
--- abstract: 'There exists two types of semi-direct products between a Lie group $G$ and a vector space $V$. The left semi-direct product, $G \ltimes V$, can be constructed when $G$ is equipped with a left action on $V$. Similarly, the right semi-direct product, $G \rtimes V$, can be constructed when $G$ is equipped with a right action on $V$. In this paper, we will construct a new type of semi-direct product, $G \Join V$, which can be seen as the ‘sum’ of right and left semi-direct products. We then proceed to the parallel existing semi-direct product Euler-Poincaré theory. We find that the group multiplication, the Lie bracket, and the diamond operator can each be seen as a sum of the associated concepts in right and left semi-direct product theory. Finally, we conclude with a toy example and the group of $2$-jets of diffeomorphisms above a fixed point. This final example has potential use in the creation of particle methods for problems on diffeomorphism groups.' author: - 'Leonardo Colombo & Henry O. Jacobs' date: 15 March 2013 title: 'Lagrangian mechanics on centered semi-direct products' --- Introduction ============ Let $G$ be a Lie group and $V$ be a vector space on which $G$ acts by a left action. Given these ingredients, we may form the Lie group $G \ltimes V$, which is isomorphic to $G \times V$ as a set, but equipped with the composition $$(g,v) \cdot_{\ltimes} (h,w) = (g \cdot h, g \cdot w + v) \quad , \quad \forall (g,v), (h,w) \in G \ltimes V.$$ A standard example of a system which evolves on a left semi-direct product is the heavy top, where $G = \operatorname{SO}(3)$ and $V = \mathbb{R}^3$. In contrast, if $G$ acts on $V$ by a right action, we may form the right semi-direct product $G \rtimes V$ defined by the composition $$(g,v) \cdot_{\rtimes} (h,w) = (g \cdot h , w + v \cdot h ).$$ A standard example of such a system is a fluid with a vector-valued advected parameter [@HoMaRa]. In any case, it seems natural to surmise that the composition law $$\begin{aligned} (g,v) \cdot_{\Join} (h,w) = (g \cdot h , g \cdot w + v \cdot h) \label{eq:comp}\end{aligned}$$ yields a new type of semi-direct product. The first result of this article is that is a valid composition law in some circumstances, where the resulting group is dubbed a *centered semi-direct product*. The second result is that the 2nd order Taylor expansions (or $2$-jets) of diffeomorphisms over a fixed point is a centered semi-direct product. The main motivation behind understanding this example is to allow us to develop particle-based methods for fluid simulation, with applications ranging from medical imaging to simulation of complex fluids. Background ---------- The semi-direct product is a standard tool used in the construction of new Lie groups and plays an interesting role in geometric mechanics when the normal subgroup is interpreted as an advected parameter. A standard example is the modeling of the ‘heavy-top’, wherein the configuration space is the left semi-direct product $\operatorname{SO}(3) \ltimes \mathbb{R}^3$ [@HoMaRa]. Another standard example is the modeling of liquid crystals, in which we consider the right semi-direct product $\operatorname{SDiff}(M) \rtimes V$. In this case, $\operatorname{SDiff}(M)$ is the set of volume-preserving diffeomorphisms of a volume manifold $M$, and $V$ is a vector space of Lie algebra valued one-forms on $M$ upon which $\operatorname{SDiff}(M)$ acts by pullback [@Holm2002; @GayBalmaz2009]. 
Of course, the tangent bundle of a Lie group, $TG$, is isomorphic to a left semi-direct product $G \ltimes \mathfrak{g}$ by left-trivializing the group structure of $TG$. Additionally, $TG$ is isomorphic to a right semi-direct product $G \rtimes \mathfrak{g}$ when the group structure of $TG$ is right trivialized [@MTA see §5.3]. Thus, we see that this method of constructing groups can be found in a number of instances. In this article, we introduce a new type of semi-direct product which extends the existing semi-direct product theory. A motivating example will be a desire to understand the Jet-groupoid of a manifold $M$. In particular, it is known that the jet-functor sends the diffeomorphism group to a groupoid known as the Jet groupoid [@KMS see §12]. As will be illustrated in §\[sec:jets\], the isotropy groups of the jet groupoid for $2$-jets have a group structure which can be written as a centered semi-direct product. A thorough understanding of the Jet groupoid can be useful for the creation of new particle-based methods wherein the particles carry jet data in addition to position and velocity data. The advantage of such a particle method is the possibility for a discrete form of Kelvin’s circulation theorem [@JaRaDe2011]. Building such particle methods can be useful in scenarios in which one desires to work with the material representation of a fluid. This occurs in the image registration method known as ‘Large Deformation Diffeomorphic Matching,’ which is used in the field of computational anatomy [@Bruveris2011]. Moreover, the potential energy associated with the advected parameters in complex fluids is often a function of certain gradients which require jet data in order to be advected by a diffeomorphism. Thus, keeping track of jet data may play a significant role in the construction of particle-based variational integrators. Main Contributions ------------------ In this paper, we accomplish a sequence of goals, each building upon the previous. In particular: 1. In section \[sec:CSD\], we define a new type of semi-direct product that we dub a *centered semi-direct product*. 2. In proposition \[prop:algebra\], we derive the Lie algebra of a centered semi-direct product and its associated structures. 3. In section \[sec:EP\], we develop the Euler-Poincaré theory of centered semi-direct product in parallel with the existing theory of semi-direct product reduction [@HoMaRa]. 4. In section \[sec:examples\], we describe the centered semi-direct product Euler-Poincaré equations for a few examples. We present one toy example before presenting the theory for an isotropy group of the 2-Jet groupoid. Combined, these items allow for a computationally tractable algebraic understanding of $2$-Jets and perhaps open the door to applications which were previously overlooked by geometric mechanicians. Acknowledgements ---------------- We would like to thank Darryl D. Holm for providing the initial stimulus for this project. The work of L.C has been supported by MICINN (Spain) Grant MTM2010-21186-C02-01, MTM 2011-15725-E, ICMAT Severo Ochoa Project SEV-2011-0087 and IRSES-project "Geomech-246981”. L.C owes additional thanks to CSIC and the JAE program for a JAE-Pre grant. The work of H.O.J. was supported by European Research Council Advanced Grant 267382 FCCA. A motivating example {#sec:jets} -------------------- Let $\operatorname{Diff}(M)$ denote the diffeomorphisms group of a manifold $M$. 
For a fixed $x \in M$ we may define the isotropy subgroup $$\operatorname{Iso}(x) = \{ \varphi \in \operatorname{Diff}(M) \quad \vert \quad \varphi(x) = x \}.$$ Let $\varphi \in \operatorname{Iso}(x)$ and note that $T_x \varphi$ is a linear automorphism of the vector-space $T_xM$. In particular: \[prop:jets\] The functor “$T_x$” is a group homomorphism from $\operatorname{Iso}(x)$ to $\operatorname{GL}( T_x M)$. Clearly $\operatorname{Iso}(x)$ and $\operatorname{GL}(T_xM)$ are both Lie groups. Let $\varphi, \psi \in \operatorname{Iso}(x)$. Then $T_x \varphi \circ T_x \psi = T_x( \varphi \circ \psi)$. This observation has implications for computation for the following reason: By definition, $T_x \varphi$ approximates $\varphi$ in a neighborhood of $x \in M$. Thus, if one desired to model a continuum with activity at $x$, then $T_x \varphi$ carries some of the crucial data to do this task. In particular, this is computationally tractable as the dimension of $\operatorname{GL}(T_x M)$ is equal to $\dim(M)^2$. If $\dim(M) = n$ then $\operatorname{GL}( T_xM) \equiv \operatorname{GL}(n)$ is a Lie group of dimension $n^2$. ![Depicted is a diffeomorphism with a trivial $2$-jet (i.e. a linear transformation) and diffeomorphism with a nontrivial $2$-jet.[]{data-label="fig:jets"}](smileybn.pdf){width="2.5in"} However, the group $\operatorname{GL}(n)$ only captures the linearization of a diffeomorphism. If we desire to capture some of the nonlinearity then we might consider looking into the second jet of these diffeomorphisms (see figure \[fig:jets\]). We can do so by considering the functor $TT_x$. Let $\varphi \in \operatorname{Iso}(x)$ so that $TT_x \varphi$ is a map from $T(T_xM)$ to $T(T_xM)$. However, $T_xM$ is a vector-space so that $T(T_xM) \approx T_x M \times T_xM$. The second component represents the vertical component and the isomorphism between $TT_xM$ and $T_x M \times T_xM$ is given by the vertical lift $$v^{\uparrow}(v_1,v_2) = \left. \frac{d}{d \epsilon} \right|_{\epsilon = 0} ( v_1 + \epsilon v_2).$$ We can therefore represent $TT_x \varphi$ as $(T_x \varphi, A_{\varphi})$ where $A_{\varphi}: T_x M \times T_xM \to T_xM$ is the symmetric $(1,2)$ tensor $$\begin{aligned} A_{ij}^{k} = \frac{\partial^2 \varphi^k}{\partial x_i \partial x_j }(x) \label{eq:12_tensor}\end{aligned}$$ where $\varphi^k$ is the $k$th component of $\varphi$. In other words, we have the 1-1 correspondence $$TT_x \varphi \leftrightarrow (A_1,A_2)$$ where $A_1 = T_x \varphi$ and $A_2$ is given by . If we denote the set of symmetric $T_x M$-valued $2$-tensors on $T_x M$ by ${\ensuremath{\mathcal{S}}}^1_2(x)$, then this correspondence is given by a map $$\Psi : \left. \mathcal{J}^2 \right|_{x}^{x}( \operatorname{Diff}(M) ) \to \operatorname{GL}( T_x M) \times {\ensuremath{\mathcal{S}}}^1_2(x)$$ where $\left. \mathcal{J}^2 \right|_{x}^{x}( \operatorname{Diff}(M) )$ is the group of second order taylor expansions about $x$ of diffeomorphisms which send $x$ to itself (these are called $2$-jets). This allows us to write the Lie group structure of $ \left. \mathcal{J}^2 \right|_{x}^{x} ( \operatorname{Diff}(M) )$ as a type of semi-direct product. 
In particular: \[prop:2jets\] If we represent $TT_x \varphi$ and $TT_x \psi$ as $(A_1,A_2)$ and $(B_1,B_2)$ where $A_1 = T_x \varphi, B_2 = T_x \psi, A_2 = \frac{ \partial^2 \varphi^k}{ \partial x^i \partial x^j}$, and $B_2 = \frac{ \partial^2 \psi^k}{\partial x^i \partial x^j }$, then $TT_x \varphi \circ TT_x \psi \equiv TT_x( \varphi \circ \psi)$ is given by the composition $$(A_1,A_2) \circ (B_1,B_2) = (A_1 \circ B_1 , A_1 \circ B_2 + A_2 \circ (B_1 \times B_1) ).$$ We find that $${\ensuremath{\frac{\partial }{\partial x_i} } } ( \varphi^k \circ \psi) = {\ensuremath{\frac{\partial \varphi^k}{\partial x_l} } } \cdot {\ensuremath{\frac{\partial \psi^l}{\partial x_i} } } \circ \psi$$ and the second derivative is $$\begin{aligned} {\ensuremath{\frac{\partial }{\partial x_j} } } {\ensuremath{\frac{\partial }{\partial x_i} } } (\varphi^k \circ \psi) &= {\ensuremath{\frac{\partial }{\partial x_j} } } \left( {\ensuremath{\frac{\partial \varphi^k}{\partial x_l} } } \cdot {\ensuremath{\frac{\partial \psi^l}{\partial x_i} } } \circ \psi \right) \\ &= \left( \frac{\partial^2 \varphi^k}{ \partial x_l \partial x_m } {\ensuremath{\frac{\partial \psi^l}{\partial x_i} } } {\ensuremath{\frac{\partial \psi^m}{\partial x_j} } } + {\ensuremath{\frac{\partial \varphi^k}{\partial x_l} } } \frac{ \partial^2 \psi^l}{\partial x_i \partial x_j} \right) \circ \psi \end{aligned}$$ Noting that $\psi(x) = x$ we can set $$\begin{aligned} A_1 = \left. {\ensuremath{\frac{\partial \varphi^k}{\partial x_l} } } \right|_{x} , \quad A_2 = \left. \frac{\partial^2 \varphi^k}{\partial x_i \partial x_j} \right|_{x} \\ B_1 = \left. {\ensuremath{\frac{\partial \psi^k}{\partial x_l} } } \right|_{x} , \quad B_2 = \left. \frac{\partial^2 \psi^k}{\partial x_i \partial x_j} \right|_{x} \end{aligned}$$ and rewrite the equations in the form $$\begin{aligned} {\ensuremath{\frac{\partial }{\partial x_i} } } ( \varphi^k \circ \psi) &= A_1 \cdot B_1\\ {\ensuremath{\frac{\partial }{\partial x_j} } } {\ensuremath{\frac{\partial }{\partial x_i} } } (\varphi^k \circ \psi) &= A_1 \cdot B_2 + A_2 \circ (B_1 \times B_1) \end{aligned}$$ Therefore if we define the composition $$(A_1, A_2) \cdot (B_1, B_2) := (A_1 \cdot B_1 , A_1 \cdot B_2 + A_2 \circ (B_1 \times B_1) )$$ on the manifold $\operatorname{GL}(T_x M) \times \vee^2( T_x M ; T_x M)$, then $\Psi: \left. \mathcal{J}^2 \right|_{x}^{x}( \operatorname{Diff}(M) ) \to \operatorname{GL}(T_x M ) \times {\ensuremath{\mathcal{S}}}^1_2$ is a Lie group isomorphism by construction. We see that the composition law of Proposition \[prop:jets\] is of the form described in equation . In this paper, we will condense the composition law for 2-jets to the algebraic level and study in the abstract Lie group setting. Of course, one would naturally like to consider diffeomorphisms which are not contained in $\operatorname{Iso}(x)$. However, this extension brings us into the realm of Lie groupoid theory and will need to be addressed in future work. A centered semi-direct product theory {#sec:CSD} ===================================== In this section, we will discover a new type of semi-direct product. We will outline the necessary ingredients for the construction of such a Lie group and we will derive the corresponding structures on the Lie algebra. Preliminary material on Lie groups ---------------------------------- Let $G$ be a Lie group with identity $e\in G$ and Lie algebra $\mathfrak{g}$. In this subsection we will establish notation and recall relevant notions related to Lie groups and Lie algebras. 
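Before fixing notation for the abstract construction, it is worth checking the composition law of Proposition \[prop:2jets\] numerically. The sketch below builds two quadratic maps on $\mathbb{R}^2$ out of random jet data (an illustrative choice of ours), differentiates their composition by finite differences at the fixed point, and compares the result with $(A_1 B_1,\, A_1\cdot B_2 + A_2\circ(B_1\times B_1))$.

```python
import numpy as np

rng = np.random.default_rng(1)
n = 2

def random_jet():
    A1 = rng.normal(size=(n, n))
    A2 = rng.normal(size=(n, n, n))
    return A1, 0.5 * (A2 + A2.transpose(0, 2, 1))     # A2[k,i,j] symmetric in (i,j)

def as_map(A1, A2):
    # phi(x) = A1 x + (1/2) A2(x, x), so that D phi(0) = A1 and D^2 phi(0) = A2
    return lambda x: A1 @ x + 0.5 * np.einsum('kij,i,j->k', A2, x, x)

(A1, A2), (B1, B2) = random_jet(), random_jet()
phi, psi = as_map(A1, A2), as_map(B1, B2)
comp = lambda x: phi(psi(x))

# First and second derivatives of the composition at 0, by central differences
h, e = 1e-3, np.eye(n)
D1 = np.array([(comp(h*e[i]) - comp(-h*e[i])) / (2*h) for i in range(n)]).T
D2 = np.array([[(comp(h*(e[i]+e[j])) - comp(h*(e[i]-e[j]))
                 - comp(h*(e[j]-e[i])) + comp(-h*(e[i]+e[j]))) / (4*h*h)
                for j in range(n)] for i in range(n)]).transpose(2, 0, 1)

# Composition rule of Proposition [prop:2jets]
C1 = A1 @ B1
C2 = np.einsum('kl,lij->kij', A1, B2) + np.einsum('klm,li,mj->kij', A2, B1, B1)

print(np.allclose(D1, C1, atol=1e-4), np.allclose(D2, C2, atol=1e-3))  # True True
```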
### Group actions: Let $V$ be a vector space. A *left action* of $G$ on $V$ is a smooth map $\rho_{L}:G\times V{\rightarrow}V$ for which: $$\rho_{L}(e,v)=v \text{ and } \rho_{L}(g,\rho_{L}(h,v))=\rho_{L}(gh,v) \quad , \quad \forall g,h\in G , \forall v\in V.$$ As using the symbol ‘$\rho_L$’ can become cumbersome and since we will only need a one left Lie group action in a given context, we will opt to use the notation $g\cdot v := \rho_{L}(g,v).$ Finally, the *induced infinitesimal left action* of $\mathfrak{g}$ on $V$ is $$\xi\cdot v := \frac{d}{d\epsilon}\Big{|}_{\epsilon=0} \exp( \epsilon \cdot \xi ) \cdot v \quad , \quad \forall \xi \in \mathfrak{g} , v \in V.$$ Similarly, a *right action* of $G$ on $V$ is the smooth map $\rho_{R}:V\times G{\rightarrow}V$ for which: $$\rho_{R}(v,e)=v \text{ and } \rho_{L}(\rho_{L}(v,g),h)=\rho_{L}(v,gh) \quad , \quad \forall g,h\in G , \forall v\in V.$$ Again, we will primarily use the notation $v\cdot g:= \rho_{R}(v,g)$ for right actions. The *induced infinitesimal right action* of $\mathfrak{g}$ on $V$ is given by $$v\cdot\xi =\frac{d}{d\epsilon}\Big{|}_{\epsilon=0}v\cdot \exp( \epsilon \cdot \xi) \quad, \quad \forall \xi \in \mathfrak{g}, v \in V$$ Lastly, we say that the left action and the right action *commute* if $$(g \cdot v) \cdot h = g \cdot (v \cdot h)$$ for any $g,h \in G$ and $v \in V$. ### Adjoint and coadjoint operators: In this section we will recall the “$\operatorname{AD}, \operatorname{Ad}, \operatorname{ad}$”-notation used in [@Holm_GM]. For $g\in G$ we define the *inner automorphism* $\operatorname{AD}:G \times G{\rightarrow}G$ as $\operatorname{AD}(g,h) \equiv \operatorname{AD}_{g}(h)= g h g^{-1}$. Differentiating $\operatorname{AD}$ with respect to the second argument along curves through the identity produces the *Adjoint representation* of $G$ on $\mathfrak{g}$ denoted $\operatorname{Ad}:G\times\mathfrak{g}{\rightarrow}\mathfrak{g}$ and given by $$\operatorname{Ad}_{g}(\eta)= \left. \frac{d}{d\epsilon}\right|_{\epsilon=0} \left( \operatorname{AD}_g( \exp(\epsilon \eta) ) \right) = g \cdot \eta \cdot g^{-1},$$ for $g\in G$ and $\xi\in\mathfrak{g}$. Differentiating $\operatorname{Ad}$ with respect to the first argument along curves through the identity produces the *adjoint* operator $\operatorname{ad}:\mathfrak{g}\times\mathfrak{g}{\rightarrow}\mathfrak{g}$ given by $$\operatorname{ad}_{\xi}(\eta)= \left. \frac{d}{d\epsilon} \right|_{\epsilon = 0} ( \operatorname{Ad}_{ \exp( \epsilon \xi) }(\eta) ) = \xi \cdot \eta - \eta \cdot \xi.$$ The $\operatorname{ad}$-map is an alternative notation for the Lie bracket of $\mathfrak{g}$ in the sense that $$\operatorname{ad}(\xi,\eta) \equiv \operatorname{ad}_{\xi}(\eta) \equiv [\xi,\eta].$$ For each $\xi \in \mathfrak{g}$ the map $\operatorname{ad}_{\xi} : \mathfrak{g} \to \mathfrak{g}$ is linear and therefore has a formal dua $\operatorname{ad}_\xi^*: \mathfrak{g}^* \to \mathfrak{g}^*$ which we call the *coadjoint operator*. Explicitly, $\operatorname{ad}_{\xi}^*$ is defined by the relation $$\begin{aligned} \langle \operatorname{ad}_{\xi}^{*} (\mu) ,\eta \rangle= \langle \mu,\operatorname{ad}_{\xi}(\eta)\rangle \label{eq:ad_star}\end{aligned}$$ for each $\eta \in \mathfrak{g}$ and $\mu \in \mathfrak{g}^*$. Centered semi-direct products {#subsec:CSD} ----------------------------- In this subsection, we will construct a semi-direct product which can be thought of as a ‘sum’ of a right semi-direct product and a left semi-direct product. 
Let $G$ be a Lie group which acts on a vector-space $V$ via left and right group actions. Then, the product $G\times V$ with the composition law $$\label{cs-dp} (g_{1},v_{1})\cdot(g_{2},v_{2}):=(g_{1}g_{2},g_{1}\cdot v_{2}+v_{1}\cdot g_{2})$$ is a Lie group if and only if the left and right actions of $G$ commute. It is clear that $G \times V$ is a smooth manifold and that the composition law \[cs-dp\] is a smooth map. We must prove that this composition makes $G \times V$ a group. - That the composition map produces another element of $G \times V$ can be observed directly. Thus ‘closure’ is satisfied. - The identity element is given by $(e,0) \in G \times V$ where $e\in G$ is the identity of $G$. - The inverse element of an arbitrary $(g,v)\in G\times V$ is $(g^{-1},-g^{-1}vg^{-1})$ where $g^{-1}$ is the inverse of $g \in G.$ - Given three elements of $G \times V$ we find $$\begin{aligned} (g_1,v_1)\cdot \left( (g_2,v_2)\cdot(g_3,v_3) \right) = (g_1,v_1) \cdot (g_2 g_3 , g_2 \cdot v_3 + v_2 \cdot g_3 ) \\ = \left( g_1 g_2 g_3,g_1\cdot(g_2\cdot v_3+v_2 \cdot g_3)+v_1\cdot(g_2g_3) \right) \\ = \left( (g_1 g_2) g_3, (g_1g_2)\cdot v_3+g_1\cdot(v_2\cdot g_3)+(v_1\cdot g_2)\cdot g_3 \right). \end{aligned}$$ By the commutativity of the group actions we may equate the above line with: $$\begin{aligned} &=& ((g_1g_2)g_3, (g_1g_2)\cdot v_3+(g_1\cdot v_2)\cdot g_3+(v_1\cdot g_2)\cdot g_3)\\ &=& ((g_1g_2)g_3, (g_1g_2)\cdot v_3+(g_1\cdot v_2+v_1\cdot g_2)\cdot g_3)\\ &=& ((g_1g_2), g_1\cdot v_2+v_1\cdot g_2)\cdot(g_3,v_3)\\ &=& ((g_1,v_1)\cdot (g_2,v_2))\cdot(g_3,v_3). \end{aligned}$$ Thus, the associative property is satisfied. Moreover, all maps in sight including the inverse map are smooth. In conclusion we see that $G \times V$ with the composition \[cs-dp\] defines a Lie group. Moreover, if the left and right actions of $G$ on $V$ do *not* commute, then we can observe that associativity is violated. \[def-bowtie-Lie group\] Given commuting left and right representations of a group $G$ on a vector space $V$, the Lie group $G\times V$ with the composition is denoted $G \Join V$ and called the *centered semi-direct product* of $G$ and $V.$ It customary to denote the left semi-direct product using the symbol $\ltimes$ and the right semi-direct product via the symbol $\rtimes$. We justify our use of the symbol $\Join $ in that the concept of centered semi-direct product is merely a ‘sum’ of a left and a right semi-direct product. The formula $\Join = \rtimes + \ltimes$ can be used as a heuristic throughout the paper. In particular, this heuristic applies to the Lie algebra. \[prop:algebra\] Let $G \Join V$ be a centered-semi direct product Lie group. The Lie algebra $\mathfrak{g} \Join V$ is given by the set $\mathfrak{g} \times V$ with the Lie bracket $$\label{bracket} \left[(\xi_1, v_1),(\xi_2, v_2)\right]_{\Join } =\left([\xi_1,\xi_2]_{\mathfrak{g}}, (\xi_1\cdot v_2+v_1\cdot\xi_2)-(\xi_2\cdot v_1+v_2\cdot \xi_1)\right),$$ for $\xi_1,\xi_2\in\mathfrak{g},$ $v_1,v_2\in V$. Firstly, it is simple to verify that the tangent space at the identity, $(e,0) \in G \times V$, is $\mathfrak{g}\times V$. To derive the Lie bracket, we will derive the the $\operatorname{ad}$-map via the $\operatorname{Ad}$ and $\operatorname{AD}$-maps. 
For $(g,v), (h,w)\in G\Join V$ we find $$\begin{aligned} \operatorname{AD}_{(g,v)}(h,w)&=&(gh,v\cdot h + g\cdot w)\cdot(g^{-1},-g^{-1}\cdot v\cdot g^{-1})\\ &=&(\operatorname{AD}_{g}(h), v\cdot h\cdot g^{-1}+g\cdot w\cdot g^{-1}-\operatorname{AD}_{g}(h)\cdot v\cdot g^{-1}).\end{aligned}$$ If we substitute $(h,w)$ with the $\epsilon$-dependent curve $( \exp( \epsilon \cdot \xi_2) , \epsilon \cdot v_2)$ we can calculate the *Adjoint operator* $\operatorname{Ad}:(G\Join V)\times(\mathfrak{g}\Join V)\rightarrow \mathfrak{g}\Join V$, given by $$\begin{aligned} \operatorname{Ad}_{(g,v)}(\xi_2,v_2)&=& \left. \frac{d}{d \epsilon }\right|_{\epsilon = 0}\operatorname{AD}_{(g,v)}(\exp(\epsilon \cdot \xi_2) , \epsilon \cdot v_2)\\ &=&(\operatorname{Ad}_{g}(\xi_2), v\cdot \xi_2\cdot g^{-1}+g\cdot v_2\cdot g^{-1}-\operatorname{Ad}_{g}(\xi_2)\cdot v\cdot g^{-1}).\end{aligned}$$ If we substitute $(g,v)$ with the $t$-dependent curve $( \exp( t \xi_1) , t v_1)$ we can differentiate with respect to $t$ to produce the adjoint operator $\operatorname{ad}:(\mathfrak{g}\Join V)\times (\mathfrak{g}\Join V)\rightarrow \mathfrak{g}\Join V$. Specifically, the adjoint operator is given by $$\begin{aligned} \operatorname{ad}_{(\xi_1,v_1)}(\xi_2,v_2)&=&\frac{d}{dt}\Big{|}_{t=0}(\operatorname{Ad}_{(\exp(t \cdot \xi_1),t \cdot v_1)}(\xi_2,v_2))\\ &=&\frac{d}{dt}\Big{|}_{t=0}(g\xi_2g^{-1},v\cdot\xi_2\cdot g^{-1}-g\xi_2g^{-1}\cdot v\cdot g^{-1}+ g\cdot v_2\cdot g^{-1})\\ &=&( \operatorname{ad}_{\xi_1}( \xi_2) , \xi_1 \cdot v_2 + v_1 \cdot \xi_2 - \xi_2 \cdot v_1 - v_2 \cdot \xi_1)\\ &=&([\xi_1,\xi_2]_{\mathfrak{g}}, (\xi_1\cdot v_2+v_1\cdot \xi_2) - (\xi_2\cdot v_1 + v_2 \cdot \xi_1 ) ).\end{aligned}$$ Here, in the second line, $g=\exp(t \cdot \xi_1)$ and $v=t\cdot v_1$. Noting that the $\operatorname{ad}$-map is merely an alternative notation for the Lie bracket completes the proof.

We complete this section by defining operations designed to express interaction terms between momenta in $V$ and momenta in $G$ in mechanical systems. The *heart operator* $\heartsuit : \mathfrak{g}\times V^{*}\rightarrow V^{*}$ is defined by $$\label{triangle} \langle \xi \heartsuit \alpha, v\rangle_{V}:=\langle \alpha,\xi\cdot v-v\cdot\xi\rangle_{V}.$$ The *diamond operator*, $\diamondsuit:V\times V^{*}{\rightarrow}\mathfrak{g}^{*}$, is defined as $$\begin{aligned} \langle v \diamondsuit \alpha,\xi\rangle_{\mathfrak{g}}:=\langle\alpha, v\cdot\xi-\xi\cdot v\rangle_{V}.\end{aligned}$$ The diamond operator can be seen as the sum of a diamond operator of a left semi-direct product and that of a right semi-direct product  [@HoMaRa]. The heart operator will allow us to express how momenta in $V$ impact motion in $G$, while the diamond operator will allow us to express the converse.

Euler-Poincaré theory {#sec:EP}
=====================

The Euler-Lagrange equations on a Lie group $\tilde{G}$ can be expressed by a vector field over $T\tilde{G}$. If the Lagrangian is $\tilde{G}$-invariant then the equations of motion are $\tilde{G}$-invariant as well and the evolution equations can be reduced. While the unreduced system evolves by the *Euler-Lagrange* equations on $T \tilde{G}$, the reduced dynamics evolve on the quotient $T\tilde{G} / \tilde{G}$.
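Before carrying out this reduction, here is a small numerical sanity check (ours, not part of the original text) of the structures just introduced, again for $G=\operatorname{GL}(n)$ acting on $V=\operatorname{Mat}(n)$ by matrix multiplication: the group commutator of two curves through the identity reproduces the bracket of Proposition \[prop:algebra\] at second order, and the heart and diamond operators can be computed mechanically as adjoints (vectorise the map appearing in the defining pairing and transpose) rather than from closed-form expressions. The dimension and test data are arbitrary choices.

```python
import numpy as np

rng = np.random.default_rng(1)
n, eps = 3, 1e-4

# --- the centered semi-direct product GL(n) |><| Mat(n) ------------------------
def compose(a, b):
    # (g1, v1).(g2, v2) = (g1 g2, g1 v2 + v1 g2)
    (g1, v1), (g2, v2) = a, b
    return g1 @ g2, g1 @ v2 + v1 @ g2

def inverse(a):
    # (g, v)^(-1) = (g^(-1), -g^(-1) v g^(-1))
    g, v = a
    gi = np.linalg.inv(g)
    return gi, -gi @ v @ gi

xi1, v1, xi2, v2 = (rng.standard_normal((n, n)) for _ in range(4))
A = (np.eye(n) + eps * xi1, eps * v1)   # curve through the identity, tangent (xi1, v1)
B = (np.eye(n) + eps * xi2, eps * v2)

# group commutator A B A^-1 B^-1 = identity + eps^2 [(xi1,v1),(xi2,v2)] + O(eps^3)
K = compose(compose(A, B), compose(inverse(A), inverse(B)))
br_g = xi1 @ xi2 - xi2 @ xi1
br_v = (xi1 @ v2 + v1 @ xi2) - (xi2 @ v1 + v2 @ xi1)
print(np.max(np.abs((K[0] - np.eye(n)) / eps**2 - br_g)),
      np.max(np.abs(K[1] / eps**2 - br_v)))                 # both of order eps

# --- heart and diamond via the trace pairing <a, b> = tr(a^T b) ----------------
basis = [np.eye(n * n)[:, k].reshape(n, n) for k in range(n * n)]
matrix_of = lambda f: np.column_stack([f(b).ravel() for b in basis])

xi, v, alpha = (rng.standard_normal((n, n)) for _ in range(3))
heart   = (matrix_of(lambda w: xi @ w - w @ xi).T @ alpha.ravel()).reshape(n, n)
diamond = (matrix_of(lambda e: v @ e - e @ v).T @ alpha.ravel()).reshape(n, n)

# check the defining dualities on random test vectors
w, eta = rng.standard_normal((n, n)), rng.standard_normal((n, n))
print(abs(np.trace(heart.T @ w)     - np.trace(alpha.T @ (xi @ w - w @ xi))),
      abs(np.trace(diamond.T @ eta) - np.trace(alpha.T @ (v @ eta - eta @ v))))  # ~1e-14
```

For this example the adjoint construction reduces to the closed forms $A\heartsuit w = A^{T}w-w A^{T}$ and $v \diamondsuit w = v^T w - w v^T$ derived in the toy example below.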
However, $T\tilde{G} / \tilde{G}$ is just an alternative description of the Lie algebra $\tilde{\mathfrak{g}}$ and so the reduced equations of motion can be described on $\tilde{\mathfrak{g}}$ where we call them the *Euler-Poincaré equations.* This reduction procedure is summarized by the commutative diagram: (GL) [$T\tilde{G}$]{}; (GR) \[right of=GL\][$T\tilde{G}$]{}; (gL) \[below of=GL\] [$\tilde{\mathfrak{g}}$]{}; (gR) \[right of=gL\][$\tilde{\mathfrak{g}}$]{}; (GL) to node [flow by ‘EL’]{} (GR); (gL) to node \[swap\] [flow by ‘EP’]{} (gR); (GL) to node \[swap\] [$/ \tilde{G}$]{} (gL); (GR) to node [$ / \tilde{G}$]{} (gR); To be even more specific. A Lagrangian $L: T\tilde{G} \to \mathbb{R}$ is said to be *(right) $\tilde{G}$-invariant* if $$L( (\tilde{g}, \dot{\tilde{g}}) \cdot h) = L(\tilde{g} , \dot{\tilde{g}})$$ for all $h \in \tilde{G}$. If $L$ is $\tilde{G}$-invariant, then $L$ is uniquely specified by its restriction $\ell = \left. L \right|_{ \tilde{ \mathfrak{g}}} : \tilde{\mathfrak{g}} \to \mathbb{R}$. The Euler-Poincare theorem states the Euler Lagrange equations $$\frac{d}{dt} \left( {\ensuremath{\frac{\partial L}{\partial \dot{\tilde{g}}} } } \right) - {\ensuremath{\frac{\partial L}{\partial \tilde{g}} } } = 0$$ on $T\tilde{G}$ are equivalent the Euler-Poincaré equations and reconstruction formula $$\frac{d}{dt} \left( {\ensuremath{\frac{\partial \ell}{\partial \tilde{\xi} } } } \right) = - \operatorname{ad}_{\xi}^* \left( {\ensuremath{\frac{\partial \ell}{\partial \xi} } } \right) \quad , \quad \tilde{\xi} := \dot{\tilde{g}} \cdot \tilde{g}^{-1}.$$ A review of Euler-Poincaré reduction is given in  [@MandS Ch 13] while a specialization to the case of semidirect products with advected parameters is described in  [@HoMaRa]. In this section we will specialize the Euler-Poincaré theorem to the case of centered semi-direct products by setting $\tilde{G} = G \Join V$. To begin let us compute how variations of curves in the group induce variations on the trivializations of the velocities to the Lie algebra. Studying such variations will allow us to transfer the variational principles on the group to variational principles on the Lie algebra. \[prop:variations\] Let $G \Join V$ be a centered semi-direct product and consider a curve $(g,v)(t) \in G \Join V$. Let $(\xi_{g}(t),\xi_{v}(t)):=(\dot{g}(t),\dot{v}(t))\cdot(g(t),v(t))^{-1}\in\mathfrak{g}\Join V$ be the right trivialization of $(\dot{g},\dot{v})(t)$. An arbitrary variation of $(g,v)(t)$ is given by $$(\delta g , \delta v)(t) = (\eta_g , \eta_v)(t) \cdot (g,v)(t) \in T_{(g,v)(t)} (G \Join V),$$ where $(\eta_g,\eta_v)(t) \in \mathfrak{g} \Join V$. Given such a variation, the induced variation on $(\xi_g,\xi_v)$ is given by $$\begin{aligned} \label{variationsright} (\delta\xi_g, \delta\xi_v) &= (\dot{\eta}_{g}-\operatorname{ad}_{\xi_{g}}\eta_{g}, \dot{\eta}_{v}+ (\eta_g \xi_v + \eta_v \xi_g) - (\xi_{g}\eta_{v}+\xi_{v}\eta_{g})) \\ &= \frac{d}{dt} (\eta_v,\eta_v) - [ (\xi_g,\xi_v) , (\eta_g, \eta_v) ]_\Join. \nonumber\end{aligned}$$ For any Lie group, $\tilde{G}$, and any curve $\tilde{g}(t) \in \tilde{G}$, the variation of $\tilde{\xi}(t) := \dot{\tilde{g}}(t) \cdot \tilde{g}^{-1}(t)$ induced by the variation $\delta \tilde{g}(t) = \tilde{\eta}(t) \cdot \tilde{g}(t)$ is $\delta \tilde{\xi} = \dot{\tilde{\eta}} - [ \tilde{\xi} , \tilde{\eta} ]$. For matrix groups see [@MandS Theorem 13.5.3] and [@BlKrMaRa] for the general case. 
If we set $\tilde{G} = G \Join V$ and use the bracket derived in Proposition \[prop:algebra\] then the theorem follows. Now that we understand the relationship between variations of curves in $G \Join V$ and the induced variations in $\mathfrak{g} \Join V$ we can state the Euler-Poincaré theorem for centered semi-direct products. \[thm:ep\] Let $L: G \Join V \to \mathbb{R}$ be (right) $G \Join V$-invariant, and let $\ell: \mathfrak{g} \Join V \to \mathbb{R}$ be its reduced Lagrangian. Let $(g,v)(t) \in G \Join V$ and denote the right trivialized velocity by $(\xi_g, \xi_v)(t) := (\dot{g} , \dot{v})(t) \cdot (g,v)(t)^{-1}$. Then the following statements are equivalent: (i) Hamilton’s principle holds. That is, $$\label{action1}\delta\int_{t_0}^{t_1}L(g(t),\dot{g}(t),v(t))dt=0$$ for variations of $(g,v)(t)$ with fixed endpoints. (ii) $(g, v)(t)$ satisfies the Euler-Lagrange equations for $L$. (iii) The constrained variational principle $$\label{action2} \delta\int_{t_0}^{t_1} \ell (\xi_{g}(t),\xi_v(t))dt=0$$ holds on $\mathfrak{g}\times V$ for variations of the form $$\label{eq:variations2L} (\delta\xi_g, \delta\xi_v)=(\dot{\eta}_{g}-\operatorname{ad}_{\xi_{g}}\eta_{g}, \dot{\eta}_{v}+\eta_g\xi_v-\xi_{v}\eta_{g}+\eta_{v}\xi_{g}-\xi_{g}\eta_{v}).$$ where $(\eta_{g}, \eta_{v})(t)$ is an arbitrary curve in $\mathfrak{g}\Join V$ which vanishes at the endpoints. (iv) The Euler-Poincaré equations $$\begin{aligned} \label{EPeq1} \frac{d}{dt}\left(\frac{\delta \ell}{\partial\xi_{g}}\right) + \operatorname{ad}_{\xi_g}^{*}\left(\frac{\delta \ell}{\delta\xi_g}\right) +\xi_{v} \diamondsuit \frac{\delta \ell}{\delta \xi_v}&=&0,\\ \frac{d}{dt}\left(\frac{\delta \ell}{\partial\xi_{v}}\right) + \xi_{g} \heartsuit \frac{\delta \ell}{\delta\xi_v}&=&0\nonumber\end{aligned}$$ hold on $\mathfrak{g}\Join V$. The equivalence [*(i)*]{} and [*(ii)* ]{} holds for any configuration manifold and so, in particular it holds in this case. Next we show the equivalence [*(iii)*]{} and [*(iv)*]{}. 
We compute the variations of the action integral to be $$\begin{aligned} \delta \int_{t_0}^{t_1}l(\xi_{g}(t),\xi_v(t))dt =& \int_{t_0}^{t_1}\Big{\langle}\frac{\delta \ell}{\delta\xi_{g}},\delta\xi_g\Big{\rangle}+\Big{\langle}\frac{\partial \ell}{\partial\xi_{v}},\delta \xi_v\Big{\rangle}dt\\ =&\int_{t_0}^{t_1}\Big{\langle}\frac{\delta \ell}{\delta\xi_{g}},\dot{\eta}_{g}-\operatorname{ad}_{\xi_{g}}\eta_{g}\Big{\rangle}+\Big{\langle}\frac{\delta \ell}{\delta \xi_{v}},\dot{\eta}_{v}+\eta_{g}\xi_{v}-\xi_{v}\eta_{g}+\eta_{v}\xi_{g}-\xi_{g}\eta_{v}\Big{\rangle}dt\\ \intertext{ and applying integration by parts and equation \eqref{eq:ad_star} we find}\\ =&\int_{t_{0}}^{t_1}{\Big{\langle}}-\frac{d}{dt}\left(\frac{\delta \ell}{\ell \xi_{g}}\right) -\operatorname{ad}_{\xi_{g}}^{*}\left(\frac{\delta \ell}{\delta \xi_{g}}\right) , \eta_{g}{\Big{\rangle}}+{\Big{\langle}}-\frac{d}{dt}\frac{\partial l}{\partial\xi_{v}} , \eta_{v}{\Big{\rangle}}\\ &+{\Big{\langle}}\frac{\delta \ell}{\delta \xi_{v}} , \eta_{g}\xi_{v} - \xi_{v}\eta_g{\Big{\rangle}}+{\Big{\langle}}\frac{\delta \ell}{\delta\xi_{v}} , \eta_{v}\xi_{g} - \xi_g\eta_v{\Big{\rangle}}dt \\ &+ {\Big{\langle}}\frac{\partial l}{\partial\xi_{g}} , \eta_{g}{\Big{\rangle}}\Big{|}_{t_0}^{t_1}+{\Big{\langle}}\frac{\delta \ell}{\delta \xi_{v}} , \eta_{v}{\Big{\rangle}}\Big{|}_{t_0}^{t_1}\\ =&\int_{t_0}^{t_1}\Big{\langle}-\frac{d}{dt}\left(\frac{\delta l}{\delta\xi_g}\right)-\operatorname{ad}_{\xi_g}^{*}\left(\frac{\delta \ell}{\delta\xi_g}\right)-\left(\xi_{v}\diamondsuit \frac{\delta \ell}{\delta \xi_{v}}\right), \eta_g\Big{\rangle}\\ &+\Big{\langle}-\frac{d}{dt}\left(\frac{\delta \ell}{\delta\xi_v}\right)- \xi_{g}\heartsuit \frac{\delta \ell}{\delta\xi_v}, \eta_v\Big{\rangle}dt.\end{aligned}$$ By noting that $(\eta_g,\eta_v)(t)$ is arbitrary on the interior of the integration domain, the result follows. Finally, we show that [*(i)*]{} and [*(iii)*]{} are equivalent. The $G-$invariance of $L$ implies that the integrands in and are equal. However, by Proposition \[prop:variations\] all the variations of $(g,v)(t)$ with fixed endpoints induce, and are induced by, variations $(\delta\xi_g,\delta\xi_v)(t)\in\mathfrak{g}\Join V$ of the form given in equation . Conversely if [*(i)*]{} holds with respect to arbitrary variations $(\delta g, \delta v)$, we define $$(\eta_{g},\eta_{v})(t) = (\delta g , \delta v) \cdot (g,v)^{-1},$$ to produce the variation of $(\xi_g, \xi_v)$ given in equation . There is a left invariant version of theorem in which $(\xi_{g} , \xi_v) :=(g,v)^{-1} \cdot (\dot{g},\dot{v})$ and $L$ is left $G \Join V$-invariant. In this case the Euler-Poincaré equations take the form $$\begin{aligned} \frac{d}{dt}\left(\frac{\delta \ell}{\partial\xi_{g}}\right) - \operatorname{ad}_{\xi_g}^{*}\left(\frac{\delta \ell}{\delta\xi_g}\right) - \xi_{v} \diamondsuit \frac{\delta \ell}{\delta \xi_v}&=0,\\ \frac{d}{dt}\left(\frac{\delta \ell}{\partial\xi_{v}}\right) - \xi_{g} \heartsuit \frac{\delta \ell}{\delta\xi_v}&=0.\end{aligned}$$ There is a version of semi-direct product mechanics wherein the vector-space $V$ is a set of *advected parameters* as in [@HoMaRa]. 
In this case we impose the holonomic constraint $$\dot{v} = \dot{g} \cdot v + v \cdot \dot{g}$$ and the set of admissible variations in $\mathfrak{g} \Join V$ become $$\begin{aligned} \delta \xi_g = \dot{\eta}_g - [\xi_g , \eta_g] \quad , \quad \delta v = \eta_g \cdot v + v \cdot \eta_g.\end{aligned}$$ If we do this, the $\heartsuit$-term is removed and $\frac{ \delta \ell}{\delta v }$ equation is replaced with a holonomic constraint. In particular we find that $$\begin{aligned} \frac{d}{dt}\left(\frac{\delta \ell}{\partial\xi_{g}}\right) \pm \operatorname{ad}_{\xi_g}^{*}\left(\frac{\delta \ell}{\delta\xi_g}\right) \pm \xi_{v} \diamondsuit \frac{\delta \ell}{\delta \xi_v}&=0 \\ \frac{d v}{dt} = \xi_g \cdot v + v \cdot \xi_g.\end{aligned}$$ where we use a plus sign for right trivialization and a minus sign for left trivialization. Examples {#sec:examples} ======== In this section we will present two examples of Euler-Poincaré equations on centered semidirect products. This first is a toy example designed to illustrate how computations of the diamond and heart operators can be done in practice. The second example is designed to 2-Jets as described in subsection \[sec:jets\]. A toy example ------------- Consider the group $\operatorname{GL}(n)$ and let $\operatorname{Mat}(n)$ denote the vector space of $n \times n$ real matrices. Noting that $\operatorname{GL}(n)$ acts on $\operatorname{Mat}(n)$ by left and right multiplication, we can define the composition law on the Lie group $GL(n) \Join \operatorname{Mat}(n)$ by: $$(A,v)\cdot (B,w)=(AB,Aw+vB).$$ Moreover, we can identify $\mathfrak{gl}^*(n)$ with $\mathfrak{gl}(n)$ and $\operatorname{Mat}(n)^*$ with $\operatorname{Mat}(n)$ by the matrix trace pairing $\langle A , B \rangle=Tr( A^{T} B)$. This allows us to calculate the heart operator $\heartsuit:\mathfrak{gl}(n)\times \operatorname{Mat}(n)^* \to \operatorname{Mat}(n)$ as $$\begin{aligned} \langle A\heartsuit w,v\rangle&=&\langle w,A\cdot v-v \cdot A\rangle\\ &=&\operatorname{trace}\left(w^{T}(A\cdot v-v \cdot A)\right)\\ &=&\operatorname{trace}\left(w^{T}\cdot (A\cdot v)-w^{T}(v\cdot A)\right)\\ &=&\operatorname{trace}\left((w^{T}\cdot A)v-(A\cdot w^{T})\cdot v\right)\\ &=&\operatorname{trace}\left((w^{T}\cdot A-A\cdot w^{T})\cdot v\right)\\ &=&\operatorname{trace}\left((A^{T}w-w\cdot A^{T})^{T}\cdot v\right)\\ &=&\langle A^{T}w-w A^{T}, v\rangle\end{aligned}$$ Therefore, $$A\heartsuit w = A^{T}w-w A^{T}.$$ By a similar calculation, diamond operator is found to be $$v \diamondsuit w = v^T w - w v^T,$$ and the coadjoint action on $\operatorname{GL}(n)$ is given by $$\operatorname{ad}_{A}^{*}(\alpha_A )= A^{T} \cdot \alpha_A-\alpha_A \cdot A^{T}.$$ Now, we have all the ingredients to write the Euler-Poincaré equations. Given a reduced Lagrangian $\ell: \mathfrak{gl}(n) \Join \operatorname{Mat}(n) \to \mathbb{R}$ we may denote the reduced momenta by $$\mu = \frac{ \delta \ell}{\delta \xi} , \quad \gamma = \frac{ \delta \ell}{\delta v}.$$ where $(\xi, v) \in \mathfrak{gl}(n) \Join \operatorname{Mat}(n)$. The Euler-Poincaré equations can be written as $$\begin{aligned} \dot{\mu}&=&(\xi^{T}\mu-\mu\xi^{T})+v^{T}\gamma-\gamma v^{T}\\ \dot{\gamma}&=&\xi^{T}\gamma-\gamma \xi^{T}.\end{aligned}$$ An isotropy group of a 2-Jet groupoid ------------------------------------- In proposition \[prop:jets\] we illustrated how the set of $2$-jets of diffeomorphisms of the stabilizer group of a point $x \in M$ is identifiable with a centered semidirect product. 
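Before turning to this 2-jet example, we record a quick numerical check (ours, not part of the original text) of the $\operatorname{GL}(n)\Join\operatorname{Mat}(n)$ toy example above. For the purely quadratic reduced Lagrangian $\ell(\xi,v)=\tfrac{1}{2}(\|\xi\|_F^{2}+\|v\|_F^{2})$ one has $\mu=\xi$ and $\gamma=v$, and a short computation with the trace pairing shows that $\ell$ itself is conserved along the Euler-Poincaré equations as printed above; the RK4 integrator, step size and random initial data below are our arbitrary choices.

```python
import numpy as np

rng = np.random.default_rng(3)
n, dt, steps = 3, 1e-3, 5000

def rhs(state):
    # toy-example Euler-Poincare equations with the quadratic reduced Lagrangian
    # l(xi, v) = (|xi|_F^2 + |v|_F^2)/2, for which mu = xi and gamma = v
    mu, gam = state
    xi, v = mu, gam
    dmu  = (xi.T @ mu - mu @ xi.T) + v.T @ gam - gam @ v.T
    dgam = xi.T @ gam - gam @ xi.T
    return dmu, dgam

def rk4(state, dt):
    k1 = rhs(state)
    k2 = rhs(tuple(s + 0.5 * dt * k for s, k in zip(state, k1)))
    k3 = rhs(tuple(s + 0.5 * dt * k for s, k in zip(state, k2)))
    k4 = rhs(tuple(s + dt * k for s, k in zip(state, k3)))
    return tuple(s + dt / 6 * (a + 2 * b + 2 * c + d)
                 for s, a, b, c, d in zip(state, k1, k2, k3, k4))

energy = lambda st: 0.5 * (np.sum(st[0] ** 2) + np.sum(st[1] ** 2))

state = (rng.standard_normal((n, n)), rng.standard_normal((n, n)))
E0 = energy(state)
for _ in range(steps):
    state = rk4(state, dt)
print(abs(energy(state) - E0) / E0)   # tiny (RK4 discretisation error only): l is conserved
```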
In particular, if $\dim(M) = n$ we can consider the group $\operatorname{GL}(n) \Join {\ensuremath{\mathcal{S}}}^1_2$, where ${\ensuremath{\mathcal{S}}}^1_2$ is the set of $(1,2)$-tensors which are symmetric in the covariant part. For the moment we shall consider the larger space of all $(1,2)$-tensors denoted ${\ensuremath{\mathcal{T}}}^1_2$. If we let ${\ensuremath{ {\mathbf e} } }_1, \dots, {\ensuremath{ {\mathbf e} } }_n \in \mathbb{R}^n$ be a basis with dual basis ${\ensuremath{ {\mathbf e} } }^1, \dots , {\ensuremath{ {\mathbf e} } }^n \in (\mathbb{R}^n)^*$ we can write an arbitrary element of ${\ensuremath{\mathcal{T}}}^1_2$ as $$T = T^i_{jk} {\ensuremath{ {\mathbf e} } }_i \otimes {\ensuremath{ {\mathbf e} } }^j \otimes {\ensuremath{ {\mathbf e} } }^k .$$ The left action of $\operatorname{GL}(n)$ on ${\ensuremath{\mathcal{T}}}^1_2$ is $$g \cdot T := T^{i}_{jk} (g \cdot {\ensuremath{ {\mathbf e} } }_i) \otimes {\ensuremath{ {\mathbf e} } }^j \otimes {\ensuremath{ {\mathbf e} } }^k \equiv T^{i}_{jk} g^{l}_i {\ensuremath{ {\mathbf e} } }_l \otimes {\ensuremath{ {\mathbf e} } }^j \otimes {\ensuremath{ {\mathbf e} } }^k$$ while the right action is $$T \cdot g := T^{i}_{jk} {\ensuremath{ {\mathbf e} } }_i \otimes ( g^T \cdot {\ensuremath{ {\mathbf e} } }^j ) \otimes (g^T \cdot {\ensuremath{ {\mathbf e} } }^k).$$ Clearly these actions commute, and so we may form the centered semidirect product Lie group $\operatorname{GL}(n) \Join {\ensuremath{\mathcal{T}}}^1_2$. Let us now focus on the Lie algebra. The Lie algebra $\mathfrak{gl}(n)$ is equivalent to ${\ensuremath{\mathcal{T}}}^1_1$ and the Lie bracket is then given in the bases ${\ensuremath{ {\mathbf e} } }_i \otimes {\ensuremath{ {\mathbf e} } }^j$ by $$[\xi , \eta] = (\xi^i_k \eta^k_j - \eta^i_k \xi^k_j) {\ensuremath{ {\mathbf e} } }_i \otimes {\ensuremath{ {\mathbf e} } }^j,$$ where $\xi = \xi^i_j {\ensuremath{ {\mathbf e} } }_i \otimes {\ensuremath{ {\mathbf e} } }^j$ and $\eta = \eta^i_j {\ensuremath{ {\mathbf e} } }_i \otimes {\ensuremath{ {\mathbf e} } }^j$. We can use the dual basis ${\ensuremath{ {\mathbf e} } }^i \otimes {\ensuremath{ {\mathbf e} } }_j$ to see that the coadjoint action of $\xi$ on $\mu = \mu_i^j {\ensuremath{ {\mathbf e} } }^i \otimes {\ensuremath{ {\mathbf e} } }_j$ is given by $$\operatorname{ad}_\xi^* \mu = (\mu^j_k \xi^k_i - \mu^k_i \xi^j_k) {\ensuremath{ {\mathbf e} } }^i \otimes {\ensuremath{ {\mathbf e} } }_j.$$ By differentiation we see that the infinitesimal left and right actions of $\mathfrak{gl}(n)$ on ${\ensuremath{\mathcal{T}}}^1_2$ are given by $$\begin{aligned} \xi \cdot T &= T^{i}_{jk} \xi^{l}_i {\ensuremath{ {\mathbf e} } }_l \otimes {\ensuremath{ {\mathbf e} } }^j \otimes {\ensuremath{ {\mathbf e} } }^k \\ T \cdot \xi &= T^{i}_{lk} \left[ {\ensuremath{ {\mathbf e} } }_i \otimes ( \xi^j_l \cdot {\ensuremath{ {\mathbf e} } }^l ) \otimes {\ensuremath{ {\mathbf e} } }^k + {\ensuremath{ {\mathbf e} } }_i \otimes {\ensuremath{ {\mathbf e} } }^j \otimes (\xi^k_l \cdot {\ensuremath{ {\mathbf e} } }^l) \right] \\ &= (T_{lk}^{i} \xi^l_j + T_{jl}^i \xi^{l}_k ) {\ensuremath{ {\mathbf e} } }_i \otimes {\ensuremath{ {\mathbf e} } }^j \otimes {\ensuremath{ {\mathbf e} } }^k. 
\end{aligned}$$ If we choose an arbitrary element $\alpha \in ({\ensuremath{\mathcal{T}}}^1_2)^* \equiv {\ensuremath{\mathcal{T}}}_1^2$ given by $$\alpha = \alpha_i^{jk} {\ensuremath{ {\mathbf e} } }^i \otimes {\ensuremath{ {\mathbf e} } }_j \otimes {\ensuremath{ {\mathbf e} } }_k$$ we find that $$\begin{aligned} \langle \alpha , \xi \cdot T \rangle &= (\alpha_l^{jk} \xi^l_i) T_{jk}^i = (\alpha^{lk}_{i} T^{j}_{lk}) \xi^{i}_j\\ \langle \alpha , T \cdot \xi \rangle &= ( \alpha^{lk}_{i} \xi^j_l + \alpha^{jl}_{i} \xi^{k}_{l}) T_{jk}^i = ( \alpha^{jk}_l T^l_{ik} + \alpha^{kj}_l T^l_{ki}) \xi^{i}_{j}.\end{aligned}$$ Therefore the heart operator is given by $$\xi \heartsuit \alpha = ( \xi^l_i \alpha_l^{jk} - \alpha^{lk}_{i} \xi^j_l - \alpha^{jl}_{i} \xi^{k}_{l}) {\ensuremath{ {\mathbf e} } }^i \otimes {\ensuremath{ {\mathbf e} } }_j \otimes {\ensuremath{ {\mathbf e} } }_k$$ and the diamond operator is $$\alpha \diamondsuit T = ( \alpha^{jk}_l T^l_{ik} + \alpha^{kj}_l T^l_{ki} - \alpha^{lk}_{i} T^{j}_{lk} ) {\ensuremath{ {\mathbf e} } }^i \otimes {\ensuremath{ {\mathbf e} } }_j.$$ Given a reduced Lagrangian $\ell: \mathfrak{gl}(n) \Join {\ensuremath{\mathcal{T}}}^1_2 \to \mathbb{R}$ we can denote $\mu = \frac{ \delta \ell}{\delta \xi}$ and $\gamma = \frac{ \delta \ell}{\delta T}$. In terms of the basis ${\ensuremath{ {\mathbf e} } }^i \otimes {\ensuremath{ {\mathbf e} } }_j$ and ${\ensuremath{ {\mathbf e} } }_i \otimes {\ensuremath{ {\mathbf e} } }^j \otimes {\ensuremath{ {\mathbf e} } }^k$ we may write the (right) Euler-Poincaré equations as: $$\begin{aligned} \dot{\mu}^j_i &= \alpha^{lk}_{i} T^{j}_{lk} + \mu^j_k \xi^k_i - \mu_i^k \xi^j_k - \alpha^{jk}_l T^l_{ik} - \alpha^{kj}_l T^l_{ki} \\ \dot{T}^i_{jk} &= \xi^l_i \alpha_l^{jk} - \alpha^{lk}_{i} \xi^j_l - \alpha^{jl}_{i} \xi^{k}_{l}.\end{aligned}$$ By restricting ${\ensuremath{\mathcal{T}}}^1_2$ to the subspace ${\ensuremath{\mathcal{S}}}^1_2$, we can obtain a Lie group which models 2-jets of diffeomorphisms as demonstrated in proposition \[prop:2jets\]. This example provides a first step towards the creation of higher-order, spatially accurate particle methods [@JaRaDe2011 section 4.2]. Moreover, the data of $2$-jets is necessary for the advection of quantities seen in complex fluids in which the advected parameters depend on gradients of the flow [@GayBalmaz2009; @Holm2002]. Therefore, the structures described here may prove useful in the construction of particle-based integrators for complex fluids as well. Conclusion ========== In this paper, we have presented a variant of traditional semi-direct products, dubbed centered semi-direct products, and we have illustrated the associated Euler-Poincaré theory. The diamond operator, the group multiplication, and the Lie bracket can all be seen as sums of the associated concepts for left and right semi-direct products. As a result, the Euler-Poincaré theory associated with centered semi-direct products can also be seen as a sum of the left and right invariant Euler-Poincaré theories for semi-direct products. Presently, many of these constructions remain fairly theoretical. However, an isotropy group of the groupoid of $2$-jets of diffeomorphisms of a manifold can be seen as a centered semi-direct product. This has potential applications in simulation of complex fluids. We hope this paper provides a stepping stone towards realizing this application. 
[99]{}

Abraham, R and Marsden, J E and Ratiu, T S, *Manifolds, Tensor Analysis, and Applications,* 3rd Edition, Springer, Applied Mathematical Sciences **75**, 2009.

Bloch, A M and Krishnaprasad, P S and Marsden, J E and Ratiu, T S, *The Euler-Poincaré equations and double bracket dissipation,* Comm. Math. Phys. **15** (1996), 1-42.

Bruveris, M and Gay-Balmaz, F and Holm, D D and Ratiu, T S, *The Momentum Map Representation of Images,* Journal of Nonlinear Science **21** (2011), Issue 1, 115-150.

Gay-Balmaz, F and Ratiu, T S, *The geometric structure of complex fluids,* Advances in Applied Mathematics **42** (2009), N.2, 176-275.

Holm, D D, *Euler-Poincaré dynamics of perfect complex fluids,* pp. 113-167, Fields Institute, 2002.

Holm, D D, *Geometric mechanics: parts I and II,* 2nd ed., Imperial College Press, 2008.

Holm, D D and Marsden, J E and Ratiu, T S, *The Euler-Poincaré equations and semidirect products with applications to continuum theories,* Advances in Mathematics **137** (1998), 1-81.

Jacobs, H O and Ratiu, T S and Desbrun, M, *On the coupling between an ideal fluid and immersed particles,* to appear in Physica D: Nonlinear Phenomena, <arXiv:1208.6561>.

Kolar, I and Michor, P W and Slovak, J, *Natural Operations in Differential Geometry,* Springer, 1999.

Marsden, J E and Ratiu, T S, *Introduction to Mechanics and Symmetry,* 2nd ed., Texts in Applied Mathematics, vol. 17, Springer Verlag, 1999.
{ "pile_set_name": "ArXiv" }
--- abstract: | We consider numerical algorithms for the simulation of the rheology of two-dimensional vesicles suspended in a viscous Stokesian fluid. The vesicle evolution dynamics is governed by hydrodynamic and elastic forces. The elastic forces are due to local inextensibility of the vesicle membrane and resistance to bending. Numerically resolving vesicle flows poses several challenges. For example, we need to resolve moving interfaces, address stiffness due to bending, enforce the inextensibility constraint, and efficiently compute the (non-negligible) long-range hydrodynamic interactions. Our method is based on the work of [*Rahimian, Veerapaneni, and Biros, “Dynamic simulation of locally inextensible vesicles suspended in an arbitrary two-dimensional domain, a boundary integral method”, Journal of Computational Physics, 229 (18), 2010*]{}. It is a boundary integral formulation of the Stokes equations coupled to the interface mass continuity and force balance. We extend the algorithms presented in that paper to increase the robustness of the method and enable simulations with concentrated suspensions. In particular, we propose a scheme in which both intra-vesicle and inter-vesicle interactions are treated semi-implicitly. In addition we use special integration for near-singular integrals and we introduce a spectrally accurate collision detection scheme. We test the proposed methodologies on both unconfined and confined flows for vesicles whose internal fluid may have a viscosity contrast with the bulk medium. Our experiments demonstrate the importance of treating both intra-vesicle and inter-vesicle interactions accurately. address: | Institute of Computational Engineering and Sciences,\ The University of Texas at Austin, Austin, TX, 78712. author: - Bryan Quaife - George Biros bibliography: - 'refs.bib' title: 'High-volume fraction simulations of two-dimensional vesicle suspensions' --- Stokes flow ,Suspensions ,Particulate flows ,Vesicle simulations ,Boundary integral method ,Fluid membranes ,Semi-implicit algorithms ,Fluid-structure interaction ,Spectral collision detection , Fast multipole methods Introduction\[s:intro\] ======================= introduction.tex Formulation\[s:formulation\] ============================ formulation.tex Method\[s:method\] ================== method.tex Computing Local Averages of Pressure and Stress\[s:aver\] ========================================================= aver.tex Results\[s:results\] ==================== results.tex Conclusions\[s:conclusions\] ============================ conclusions.tex Error estimates for near-singular integration \[A:AppendixA\] ============================================================= appen1.tex Jumps in pressure and stress \[A:AppendixB\] ============================================ appen2.tex Variable curvature formulation \[A:AppendixC\] ============================================== appen3.tex
{ "pile_set_name": "ArXiv" }
--- author: - 'C. Ferrari' - 'H. T. Intema[^1]' - 'E. Orrù' - 'F. Govoni' - 'M. Murgia' - 'B. Mason' - 'H. Bourdin' - 'K. M. Asad' - 'P. Mazzotta' - 'M. W. Wise' - 'T. Mroczkowski' - 'J. H. Croston' bibliography: - 'RXJ1347.bib' date: 'Received 29 July 2011; accepted 6 October 2011' title: 'Discovery of the correspondence between intra-cluster radio emission and a high pressure region detected through the Sunyaev-Zel’dovich effect' --- Introduction ============ The existence of a non-thermal component (GeV electrons and $\mu$G magnetic fields) of the intra-cluster medium (ICM) has been revealed by the detection of diffuse radio sources that are not associated with active galaxies, but with the ICM. Through non-thermal studies of galaxy clusters we can estimate the cosmic-ray and magnetic field energy budget and pressure contribution to the ICM and also obtain clues about the cluster dynamical state and energy redistribution during merging events [e.g. @2004JKAS...37..433S]. Up to now, only $\lesssim$ 10% of known clusters are “radio-loud”, i.e. show evidence of a diffuse non-thermal component in the radio band [see @2009ASPC..407..223C; @2011IAUS..274..340F for recent reviews]. Based on their physical properties, diffuse cluster radio sources are usually divided into three categories: halos, relics and mini-halos [see, e.g., @2008SSRv..134...93F]. Radio halos are low surface brightness sources with a regular morphology that permeate the central cluster region and extend out to $\gtrsim$ 1 Mpc. Relics have generally been found at the periphery of clusters and exhibit a wider range of morphologies. Mini-halos have sizes smaller than 500 kpc, and have been detected in the central regions of cool-core galaxy clusters, generally surrounding a powerful radio galaxy. A common property of these three classes of objects is that the radiative lifetime of their relativistic electrons is much shorter than the timescale on which the radio-emitting plasma can fill the whole radio source volume [e.g. @2001MNRAS.320..365B]. Different models have been proposed to explain the presence of cosmic-ray electrons in radio-loud clusters . Observational results are at present in favor of intra-cluster electron re-acceleration by shocks in the volume of radio relics, or turbulence in the case of halos and mini-halos [see @2008SSRv..134...93F and references therein]. Most Mpc-scale radio sources have been detected in luminous merging systems. Their radio power is generally correlated to the X-ray luminosity of the host cluster . The energy required to produce radio-emitting cosmic-rays comes therefore most likely from the huge gravitational energy released during cluster mergers ($\approx 10^{64}$ ergs). This is different for mini-halos, in which it has been suggested that a population of relic electrons ejected by the central AGN are most likely re-accelerated by MHD turbulence within the central cold cluster region ; this turbulence is possibly related to gas “sloshing” [i.e. the oscillatory motion of the lowest entropy gas within the gravitational potential of merging clusters, @2008ApJ...675L...9M]. Unfortunately, our current observational knowledge of mini-halos is limited to only a handful of well-studied clusters . More statistics as well as complementary detailed physical analyses of clusters hosting radio mini-halos are therefore required. We analyzed new GMRT observations of the most X-ray luminous cluster known – RXJ1347-1145 (hereafter RXJ1347) – that hosts a radio mini-halo . 
Our radio results are compared to millimeter and X-ray data. Particularly interesting for this work is the dynamical state of this cluster, for which a wealth of observational data exists at optical, X-ray, radio and mm wavelengths [@2011arXiv1106.3489J and references therein]. Initially considered as the prototype of a relaxed cooling-flow cluster, RXJ1347 has subsequently shown signatures of merging coming from millimeter observations of the Sunyaev-Zel’dovich effect [SZE, e.g. @1999ApJ...519L.115P; @2004PASJ...56...17K] and from higher sensitivity X-ray analyses . The presence of a southeast (SE) substructure, characterized by a hot ICM (T $>$ 20 keV), has been pointed out and analyzed in detail through joint multi-wavelength and numerical studies [@2011arXiv1106.3489J and references therein]. High resolution (10$''$) MUSTANG 90 GHz observations have recently confirmed the SZ signal at 20$''$ to the SE from the cluster center, indicating a strong, localized SZE decrement [@2010ApJ...716..739M; @2011ApJ...734...10K] that is possibly associated to an ICM shock. The adopted cosmological parameters are $\Lambda$CDM (${\rm H}_0$=71 km ${\rm s}^{-1} {\rm Mpc}^{-1}$, $\Omega_{\rm m} = 0.27$, $\Omega_{\Lambda} = 0.73$). At the redshift of the cluster ($z$=0.451) 1$''$ corresponds to 5.74 kpc. Radio data reduction ==================== ![Azimuthally averaged brightness profile of the radio emission in RXCJ1347 at 614 MHz. The profile is calculated in concentric annuli, as shown in the inset panel. The horizontal dashed-dotted line indicates the 3$\sigma_{\rm 614 MHz}$ noise level of the radio image. In our analysis we considered all data points above the $3\sigma_{\rm 614 MHz}$ noise level. The black line indicates the best-fit profile described by an exponential law (dashed line, Eq.\[eq:MH\]) representing the mini-halo emission, and by a central Gaussian profile (dotted line, Eq.\[eq:PS\]) representing the central point source. []{data-label="fig:profilo"}](Fig2.ps){width="30.00000%"} GMRT observations of RXJ1347 were obtained in the 240/610 MHz dual frequency mode. Visibilities were recorded every 16.8 seconds in 128 frequency channels covering 32 MHz of bandwidth at both frequencies. Data reduction was performed using the AIPS and SPAM software packages . After flagging, the remaining effective bandwidths are 6.25 and 13.5 MHz, centered on 237 and 614 MHz, respectively. The total effective time on-target is 12 hours. We used 3C147 as the primary flux and bandpass calibrator, adopting flux levels of 59.5 and 39.7 Jy at 237 and 614 MHz, respectively. The secondary calibrator 3C283 was used to determine slow-gain amplitude variations. The amplitude calibration results were applied to the target field data, followed by additional RFI flagging and frequency averaging to 25 channels of 0.25 MHz each at 237 MHz, and 18 channels of 0.75 MHz each at 614 MHz. ![image](Fig3_a.eps){width="31.00000%"} ![image](Fig3_b.ps){width="31.00000%"} The target field data were phase-calibrated against a simple point source model derived from NVSS [@1998AJ....115.1693C] and WENSS , followed by several rounds of wide-field imaging, CLEAN deconvolution and self-calibration. Bright sources in the 614 MHz data were peeled to decrease the overall noise level. For the 237 MHz data we applied ionospheric calibration as implemented in SPAM. In Fig.\[fig:Radio\] we show the final (uniform weighted) images at 237 and 614 MHz. 
Their respective synthesized beams and noise levels are $11.7 '' \times 9.3 ''$ and $\sigma_{\rm 237~MHz}$ = 0.9 mJy/beam, and $4.8 '' \times 3.5 ''$ and $\sigma_{\rm 614~MHz}$ = 0.1 mJy/beam. Results ======= Radio emission at the center of RXJ1347 results from a combination of a central point source and surrounding diffuse emission . In order to carefully separate the contribution of the mini-halo from that of the central radio galaxy and estimate the radio power of the diffuse source, we followed . The total brightness profile of the radio emission at the center of the cluster was fitted taking into account a central point source ($I_{\rm PS}$) plus the radio mini-halo diffuse emission ($I_{\rm MH}$): $$I(r)=I_{\rm PS}(r)+I_{\rm MH}(r).$$ The profile of the point and diffuse sources were adopted to be a Gaussian and an exponential law, respectively: $$I_{\rm PS}(r)=I_{0_{\rm PS}}~{\rm e}^{-(r^2/2\sigma_{\rm PS}^2)} \label{eq:PS},$$ $$I_{\rm MH}(r)=I_{0_{\rm MH}}~{\rm e}^{-r/r_{e}} \label{eq:MH}.$$ In Fig.\[fig:profilo\] we show the azimuthally averaged radio brightness profile at 614 MHz (i.e. the higher resolution of the two GMRT maps) traced down to a level of 3$\sigma_{\rm 614~MHz}$. The radio image was convolved to 5$''$ resolution. The annuli, as shown in the inset panel, are as wide as the half FWHM beam. The S/N ratio of this map is sufficient to allow a very good separation between the point source and diffuse emission. The best-fit model is shown as a continuous black line in the right panels of Fig.\[fig:profilo\]. The mini-halo contribution is indicated by the dashed line. Overall the mini-halo clearly extends from the central point source and it is well fitted by the exponential model. The best fit of the exponential model yields a central brightness of $I_{\rm 0}$=$286_{-53}^{+2}$ $\mu$Jy/arcsec$^{2}$ and $r_{\rm e}$=$33_{-2}^{+1}$ kpc. The flux density of the mini-halo at 614 MHz integrated up to 3 $r_{\rm e}$ is $S_{\rm 614 MHz}$=$48 \pm 2$ mJy, while the flux density calculated up to the size of the diffuse brightness emission containing the 3$\sigma_{\rm 614 MHz}$ radio isophotes is $S_{\rm 614 MHz}$=$50 \pm 2$ mJy. We estimated the flux density at 237 MHz up to 3$\sigma_{\rm 237 MHz}$ level from the map, resulting in $S_{\rm 237 MHz}$=$131 \pm 6$ mJy. Following @2011arXiv1106.6228V, we also subtracted the central point source in frequency space by obtaining flux measures that agree very well with those derived through the fitting procedure. We then estimated the 237 and 614 MHz fluxes of the diffuse source from the point source subtracted maps within the 614 MHz 3$\sigma$ contours and derived a mean spectral index of $\alpha_{237}^{614} \simeq 0.98 \pm 0.05$[^2] for the mini-halo. The central point source has a flux of $55 \pm 4$ mJy at 237 MHz and $32 \pm 2$ mJy at 614 MHz. We compared our radio observations to cluster gas brightness and temperature maps obtained from archival X-ray observations through B2-spline wavelet imaging and spectral imaging analyses, as detailed in . We used the Chandra observation to map the gas brightness at a 1$''$ angular resolution. We took advantage of the larger effective area of [*XMM-Newton*]{} at high energy to map the gas temperature from 3 $\sigma$ thresholding of the wavelet coefficients, investigating significant features within a resolution range of 4 to 32 arcsec. An elongation in the radio mini-halo morphology is evident both in the 614 MHz and 237 MHz GMRT maps at more than 5$\sigma$ level (see Fig.\[fig:Radio\]). 
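Before turning to the morphology of this SE elongation, the flux bookkeeping quoted above can be reproduced in a few lines (our sketch, not part of the original analysis): integrating the best-fit exponential mini-halo model out to $3\,r_e$ recovers the quoted $\sim$48 mJy at 614 MHz, and the integrated flux densities give a two-point spectral index close to the quoted $\alpha_{237}^{614} \simeq 0.98\pm0.05$ (the published value was derived from the point-source-subtracted maps, so the two numbers differ slightly).

```python
import numpy as np
from scipy.integrate import quad

kpc_per_arcsec = 5.74                      # scale at z = 0.451 quoted in the introduction
I0  = 286e-6                               # best-fit central MH brightness [Jy / arcsec^2]
r_e = 33.0 / kpc_per_arcsec                # best-fit e-folding radius [arcsec]

# flux of the exponential mini-halo model I_MH(r) = I0 exp(-r/r_e), integrated to 3 r_e
S_model, _ = quad(lambda r: I0 * np.exp(-r / r_e) * 2.0 * np.pi * r, 0.0, 3.0 * r_e)
print("S_614(model, r < 3 r_e) = %.1f mJy" % (1e3 * S_model))        # ~48 mJy, as quoted

# two-point spectral index S_nu ~ nu^(-alpha) from the integrated flux densities
S_237, S_614 = 131e-3, 50e-3               # Jy
alpha = np.log(S_237 / S_614) / np.log(614.0 / 237.0)
print("alpha(237-614 MHz) = %.2f" % alpha)  # ~1.0, consistent with 0.98 +/- 0.05
```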
It lies in the SE X-ray substructure, which is evident in [*Chandra*]{} data (black contours in the left panel of Fig.\[fig:multi-L\]). The radio excess, particularly clear and resolved for the first time in the 614 MHz map, corresponds exactly to the position of the hottest region SE of the cluster core detected by MUSTANG SZE imaging (inner green contour in the right panel of Fig.\[fig:multi-L\], which indicates the strongest SZ decrement). In the same region, our [*XMM-Newton*]{} temperature map reveals a hot (T $\gtrsim$ 17 keV) structure, delimited to the SE by a surface brightness edge that might be a shock front (see left panel of Fig.\[fig:multi-L\]). These results agree with previous X-ray analyses [@2002MNRAS.335..256A] in concluding that the strong SZ decrement region of RXJ1347 is presumably shock-heated. Aside from the SE excess, the rest of the mini-halo radio emission seems to be well confined within the cold central part of the cluster, surrounded by the high-pressure gas pointed out by SZE observations, and shown in the right panel of Fig.\[fig:multi-L\] by the two external contours [see also Fig. 6 in @2010ApJ...716..739M]. ![614 MHz map of RXJ1347 convolved to 5$''$ resolution and total intensity contours starting from 3 $\sigma$ level. The mean surface brightness of the annulus indicated in this image was estimated excluding (or including only) the excess region within the rectangular region (see text).[]{data-label="fig:annulus"}](Fig4.ps){width="31.00000%"} To obtain a rough estimate of the radio flux in the SE excess region of the mini-halo, we considered an annulus containing the radio elongation (see Fig.\[fig:annulus\]). We then estimated the mean surface brightness in the higher-resolution 614 MHz map a) within the annulus, excluding the excess region ($<I> = 0.25 \pm 0.02$ mJy/beam over $\sim$37.1 beam area), and b) only within the excess region (rectangular area in Fig.\[fig:annulus\], $<I> = 0.63 \pm 0.03$ mJy/beam over $\sim$8.8 beam area). The net excess in the radio surface brightness of the SE cluster region is therefore $<I> = 0.38 \pm 0.04$ mJy/beam, corresponding to a radio flux of $3.3 \pm 0.3$ mJy at 614 MHz. Conclusions =========== We showed for the first time a clear correspondence between an excess emission in the radio mini-halo at the center of RXJ1347 and a high pressure ICM region revealed by SZE observations at a similarly high angular resolution [$\lesssim$ 10$''$, @2010ApJ...716..739M]. Possible evidence of radio emission excess in the SE direction were pointed out by higher radio frequency observations , but without the sensitivity and resolution offered by GMRT data. Our radio observations uniquely allow us to probe the exact spatial coincidence with the SZ decrement detected at 90 GHz by @2010ApJ...716..739M and with the hot ICM region shown in our temperature map. If we exclude the SE elongation in the mini-halo morphology, the rest of the diffuse radio source is confined within the colder central region. Electron re-acceleration in the mini-halo at the center of RXJ1347 can be related to turbulence produced by gas sloshing that is typical of disturbed clusters [@2008ApJ...675L...9M]. However, additional physical mechanisms are needed to explain the detected radio excess. The possible SE shock – most likely confirmed by our X-ray analysis – could be responsible for local intra-cluster electron re-acceleration . 
An alternative hypothesis is that the hot gas in the SE of the cluster is related to adiabatic gas compression, which amplifies the intra-cluster magnetic field intensity and increases the energy of radio emitting electrons, resulting in a higher synchrotron emissivity . We estimate that in this second case, a 15% volume compression is required to justify the observed surface brightness excess. The diffuse radio source at the center of RXJ1347 presents intermediate properties between “classical” radio mini-halos and relics, cosmic-ray acceleration in this system resulting from the combination of different physical mechanisms. Other mini-halos could present similar properties when analyzed through such detailed multi-wavelength observations. The implications of this study indeed go beyond the single case of RXJ1347, because we clearly show the perspectives opened by new high-resolution multi-wavelength observations for cluster studies. In particular our study highlights the importance of combining good resolution ($\sim$ 5$''$-10$''$) observations at a) low radio frequencies ($<$ 1 GHz), which are best suited for the detection of diffuse intra-cluster radio sources that are characterized by steep synchrotron spectra, and b) millimeter observations, which are favored for depicting shock regions, given the linear dependence to the ICM pressure. In the next few years joint higher-sensitivity and/or resolution observations derived from X-ray, mm and radio facilities (e.g. NuSTAR, MUSTANG–2, Planck, LOFAR, EVLA, …) will allow us to surpass phenomenological comparisons of thermal and non-thermal cluster properties, and to achieve a clearer characterization of the physics driving non-thermal intra-cluster phenomena. Most likely, traditional classifications of diffuse intra-cluster radio sources will lead to a more general view of multi-scale, complex radio emission, which is difficult to classify in a general scheme and deeply connected to the dynamical history and thermal evolution of each cluster. We thank the anonymous referee for her/his useful comments. We are grateful to Giulia Macario for helpful suggestions in the radio data reduction phase. We warmly thank Monique Arnaud for very useful discussions. We would like to thank the staff of the GMRT that made these observations possible. GMRT is run by the National Centre for Radio Astrophysics of the Tata Institute of Fundamental Research. CF acknowledges financial support by the “[*Agence Nationale de la Recherche*]{}” through grant ANR-09-JCJC-0001-01. This work was supported by the BQR program of [*Observatoire de la Côte dÕAzur*]{} and by [*Laboratoire Cassiopée*]{} (UMR6202). [^1]: Jansky Fellow of the National Radio Astronomy Observatory [^2]: $S_{\nu} \propto \nu^{-\alpha}$
{ "pile_set_name": "ArXiv" }
--- abstract: 'We discuss a few mathematical aspects of random dynamical decoupling, a key tool procedure in quantum information theory. In particular, we place it in the context of discrete stochastic processes, limit theorems and CPT semigroups on matrix algebras. We obtain precise analytical expressions for expectation and variance of the density matrix and fidelity over time in the continuum-time limit depending on the system Lindbladian, which then lead to rough short-time estimates depending only on certain coupling strengths. We prove that dynamical decoupling does not work in the case of intrinsic (i.e., not environment-induced) decoherence, and together with the above-mentioned estimates this yields a novel method of partially identifying intrinsic decoherence.' author: - | \ [Robin Hillier]{}$^{1}$, [Christian Arenz]{}$^2$, [Daniel Burgarth]{}$^{2}$\ \ ${}^1$ Department of Mathematics and Statistics\ Lancaster University\ Lancaster LA1 4YF, UK\ \ ${}^2$ Department of Mathematics\ Aberystwyth University\ Aberystwyth SY23 2BZ, UK title: | A Continuous-Time Diffusion Limit Theorem\ for Dynamical Decoupling and\ Intrinsic Decoherence --- Introduction {#sec:one} ============ The aim of this article is two-fold: first, to provide an analytical description of random dynamical decoupling because analytical expressions are often more manageable than combinatoric-numerical ones; second, to use this description to propose a partial method of detecting intrinsic decoherence of quantum systems. Dynamical decoupling is a method applied to stabilise states of quantum registers against undesired time-evolution. Originally invented in NMR technology, it has been generalised to a wider context, in particular in quantum information theory [@LB; @VKL]. It works by application of repeated instantaneous unitary correction pulses on the quantum register, perturbing the original time-evolution. The procedure is particularly interesting and effective when performed in a random way [@vi3; @vi4]. While several general estimates and specialisations of the procedure have been proposed in the past ([[cf.]{} ]{}[@LB] and the references therein), our focus here is on finding handy analytical descriptions of the time-evolution of expectation and distribution of physically interesting quantities like the density matrix process or the gate fidelity process arising from this random time-evolution. We believe that such descriptions are a valuable tool in future computations and enable predictions in experiments. We would like to provide now a rough overview of the content of this article. Let $({\mathcal{H}},{\mathcal{L}})$ stand for a generic finite-dimensional quantum system, with ${\mathcal{H}}$ a finite-dimensional complex Hilbert space and ${\mathcal{L}}$ a possibly time-dependent Lindblad generator ([[cf.]{} ]{}[@pb; @wo] for general background information). We start in Section \[sec:two\] by introducing dynamical decoupling and show that the decoupling condition can be satisfied only if ${\mathcal{L}}={\operatorname{i}}[H,\cdot]$ for some Hamiltonian $H$. This might sound like a contradiction since dynamical decoupling aims to eliminate decoherence (noise) arising from open systems. 
It can be resolved by differentiating between *intrinsic* and *extrinsic decoherence*: the latter one is where decoherence arises from interaction with an actual quantum heat bath or environment such that the total space time-evolution is unitary, the former one is where decoherence is actually the time-evolution of a closed system [@Adler]. It is unclear whether intrinsic decoherence may appear in nature, and it would, of course, contradict the axiom of unitary time-evolution. But in order to find out whether it may exist or whether the axiom of unitarity is always verified, one has to perform experiments and develop mathematical tools. To this end, Sections \[sec:three\] and \[sec:four\] provide a probabilistic-analytical approach to dynamical decoupling, namely: we set up a probabilistic description of a random walk in the completely positive trace-preserving (CPT) maps of the quantum system arising from the random correction pulses; then we study the continuum-limit of this random walk under a suitable scaling, which becomes a Gaussian (Markov) process in the CPT maps. We use this to determine the expectation and higher moments of the density matrix process $\rho_t$. In the \[sec:four\]th section we then compute the expectation of the gate fidelity, which might be regarded as a mean fidelity when averaging over all states on $B({\mathcal{H}})$ in a suitable manner. We illustrate all constructions and considerations with an easy example that shall accompany us through the paper. Up to this point, things were quite general, but this is where we can turn to our second aim: distinguishing between intrinsic and extrinsic decoherence (with bounded Hamiltonian dilations, [[cf.]{} ]{}[@ABH App.]). We therefore specialise in the final Section \[sec:five\] on providing approximative bounds for the gate fidelity in these two extremal cases together with a recipe which should enable the experimenter to determine the type of decoherence present in his setting. Ideally he should just know the pulse length $\tau$, the total time $t$ of evolution, and the coupling strength of the undesired decoherence. In some cases unfortunately some further input is needed. However, the overall moral is roughly speaking the following: the rate of decoherence decreases to $0$ when $\tau{\rightarrow}0$ if decoherence is extrinsic and due to bounded interaction [@vi3; @ABH]; it remains essentially unaffected by the decoupling procedure if decoherence is intrinsic. In other words, if random dynamical decoupling with $\tau{\rightarrow}0$ does not eliminate decoherence, then it was intrinsic or due to unbounded interaction! Model studies and illustrations of this procedure can be found in the companion article [@ABH]. [ **Acknowledgements.** We would like to thank Michał  Gnacik, Lorenza Viola and the referee for useful discussions and/or comments on the manuscript. Moreover, RH would like to thank Gernot Alber and Burkhard Kümmerer for guidance in his master thesis several years ago, in which a special case of Section \[sec:three\] had been developed. ]{} The concept of dynamical decoupling {#sec:two} =================================== Let us start with some notation used throughout the article. 
We shall denote our quantum system in question by $({\mathcal{H}},{\mathcal{L}})$, with ${\mathcal{H}}$ a finite-dimensional complex Hilbert space (of dimension $d_{\mathcal{H}}$) and we denote the adjoint of linear maps on it by ‘$^*$’; ${\mathcal{L}}$ is the Lindblad operator on $B({\mathcal{H}})$, generating the completely positive trace-preserving (CPT) time evolution maps $\alpha_t={\operatorname{e}}^{t{\mathcal{L}}}$, $t\in{\mathbb{R}}_+$, of the quantum system ([[cf.]{} ]{}[@pb; @wo] for general background information). Let us abbreviate $A:=B({\mathcal{H}})$, which has dimension $d=d_{\mathcal{H}}^2$ and which becomes a Hilbert space again with scalar product $(x,y)\in A\times A \mapsto \langle x, y\rangle := {\operatorname{tr }}(x^*y)$, and we denote adjoints of maps on this Hilbert space by ‘$^\dagger$’. We write ${\operatorname{Ad }}$ (or ${\operatorname{ad }}$) for the adjoint representation of the unitary group (or its Lie algebra, respectively) on $A$, [[i.e.,]{} ]{}${\operatorname{Ad }}(v)(x)=v x v^*$ and ${\operatorname{ad }}(H)(x)=[H,x]$, for $v,H,x\in A$ with $v$ unitary and $H$ selfadjoint. By a *decoupling set* in $A$ we mean a finite group of unitaries $V:=(v_j)_{j\in J}\subset A$ such that $0\in J$ and $$\label{eq:deccond1} v_0={\mathbf{1}}, \quad \sum_{j\in J} {\operatorname{Ad }}(v_j)(x) \in {\mathbb{C}}{\mathbf{1}}, \quad \forall x\in A.$$ Notice that in this case we automatically have $\frac{1}{|J|}\sum_{j\in J} v_j x v_j^*=\frac{1}{d_{\mathcal{H}}}{\operatorname{tr }}(x){\mathbf{1}}$. \[ex:1\] The standard illustrative example of a finite-dimensional quantum system to keep in mind throughout this paper is an $N$-qubit quantum system, so ${\mathcal{H}}=({\mathbb{C}}^2)^{\otimes N}$ and $d=2^{2N}$; there typically the decoupling set $V$ consists of the $4^N$ different combinations of Pauli matrices $\{{\mathbf{1}}, \sigma_1,\sigma_2,\sigma_3\}$ on the tensor factors. Here $v_j^*=v_j$., for all $j$. Given the CPT semigroup of time evolution maps $(\alpha_t)_{t\in{\mathbb{R}}_+}$ of our system and a (“short") time $\tau$, consider the externally modified time evolution $$\label{eq:randomwalk} \alpha^{(\tau)}_{(n+1)\tau}= {\operatorname{Ad }}(v_{j_0}^* v_{j_n})\circ\alpha_\tau \circ {\operatorname{Ad }}(v_{j_n}^*v_{j_{n-1}})\circ \alpha_\tau\circ \ldots \circ \alpha_\tau \circ{\operatorname{Ad }}(v_{j_1}^*v_{j_0}),$$ where $n\in{\mathbb{N}}$, and $(j_i)_{i\in{\mathbb{N}_0}}$ forms a certain sequence in $J$ with $j_0 =0$, meaning we apply *instantaneous decoupling* or *correction pulses* $v_{j_i}^*v_{j_{i-1}}$ at time $i\tau$; set $\alpha^{\tau}_t=\alpha_{t-n\tau}\circ\alpha^{\tau}_{n\tau}$ whenever $t\in [n\tau,(n+1)\tau)$. The sequence $j_i$ can be fixed or random, leading to *deterministic* or *random dynamical decoupling*. It turns out that random decoupling has many advantages [@vi3; @vi4; @kas] and moreover is mathematically more interesting, and that is why we want to investigate it here. Our first goal is to find an analytical description of the externally modified time evolution $\alpha^{(\tau)}_t$. In the random setting, $(\alpha^{(\tau)}_t)_{t\in\tau{\mathbb{N}}}$ becomes a stochastic process (a random walk with steps lasting time $\tau$) induced by the process $(v_{j_t})_{t\in\tau{\mathbb{N}}}$ with independent identically distributed (iid) and equidistributed [@Fel] increments in $V$, and we are interested in the limit $\tau{\rightarrow}0$, which would enable nice analytical expressions. 
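As a minimal numerical illustration of this random walk (our addition, with arbitrary choices of Hamiltonian, total time and pulse spacings), consider a single qubit, i.e. the $N$-qubit example above with $N=1$ and the Pauli decoupling set, together with a purely unitary generator ${\mathcal{L}}={\operatorname{i}}[H,\cdot]$. Regrouping the pulses pairwise, each step of the walk acts as ${\operatorname{Ad }}(v_{j})\circ\alpha_\tau\circ{\operatorname{Ad }}(v_{j}^*)$ with $v_j$ drawn uniformly from $V$, and the overlap of the evolved state with the initial one approaches $1$ as $\tau\to0$:

```python
import numpy as np
from scipy.linalg import expm

rng = np.random.default_rng(4)

# single qubit (N = 1 in the example above): decoupling set V = {1, s_x, s_y, s_z}
sx = np.array([[0, 1], [1, 0]], dtype=complex)
sy = np.array([[0, -1j], [1j, 0]])
sz = np.array([[1, 0], [0, -1]], dtype=complex)
V = [np.eye(2, dtype=complex), sx, sy, sz]
H = 1.0 * sx + 0.5 * sz                   # undesired Hamiltonian, L = i[H, .] (our choice)

def evolve(rho0, t_total, tau):
    # random decoupling walk: regrouping the pulses pairwise, each step acts as
    # Ad(v_j) o e^{tau L} o Ad(v_j^*) with v_j drawn uniformly from V
    U = expm(1j * H * tau)                # e^{tau L}(x) = U x U^dagger
    rho = rho0.copy()
    for _ in range(int(round(t_total / tau))):
        v = V[rng.integers(len(V))]
        rho = v @ (U @ (v.conj().T @ rho @ v) @ U.conj().T) @ v.conj().T
    return rho

rho0 = np.array([[1, 0], [0, 0]], dtype=complex)     # initial state |0><0|
for tau in (0.2, 0.02, 0.002):
    overlaps = [np.real(np.trace(rho0 @ evolve(rho0, 2.0, tau))) for _ in range(20)]
    print("tau = %.3f   mean overlap with rho0 = %.3f" % (tau, np.mean(overlaps)))
# without pulses the overlap oscillates far below 1 at generic times;
# with random pulses it approaches 1 as tau -> 0 (the state is approximately frozen)
```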
Since $\alpha_t = \exp( t{\mathcal{L}})$, we find $$\label{eq:v-alpha} \alpha^{(\tau)}_{(n+1)\tau}= \exp(\tau {\operatorname{Ad }}(v_{j_n})\circ {\mathcal{L}}\circ {\operatorname{Ad }}(v_{j_n}^*)) \circ \alpha^{(\tau)}_{n\tau},$$ so the increment during the time interval $(t-\tau,t)$ is given by $\exp(\tau {\operatorname{Ad }}(v_{j_t})\circ {\mathcal{L}}\circ {\operatorname{Ad }}(v_{j_t}^*))$. We say that $V$ satisfies the *decoupling condition* for $({\mathcal{H}},{\mathcal{L}})$ if $$\label{eq:deccond2} \sum_{j\in J} {\operatorname{Ad }}(v_j)\circ {\mathcal{L}}\circ {\operatorname{Ad }}(v_j^*)=0.$$ The idea behind this condition is that it ensures cancellation of interaction at first order in $\tau\|{\mathcal{L}}\|$, [[i.e.,]{} ]{}for short time $\tau$, and thus higher order terms contribute. We say (the time evolution of) our quantum system $({\mathcal{H}},{\mathcal{L}})$ is *purely unitary* if ${\mathcal{L}}={\operatorname{i}}{\operatorname{ad }}(H)$, with $H\in A$ selfadjoint, because in this case $\alpha_t={\operatorname{e}}^{t{\mathcal{L}}}$ is induced by a one-parameter family of unitary matrices; in this case ${\mathcal{L}}^\dagger=-{\mathcal{L}}$. The “opposite case”, namely where ${\mathcal{L}}^\dagger={\mathcal{L}}$ we call *purely dephasing*. \[lem:deccond-unitary\] A decoupling set for $({\mathcal{H}},{\mathcal{L}})$ satisfies the decoupling condition iff $({\mathcal{H}},{\mathcal{L}})$ is purely unitary. We write out the generator ${\mathcal{L}}$ in Christensen-Evans form [@ce; @wo] $${\mathcal{L}}(x) = \Psi(x) + a x + x a^*, \quad \forall x\in A,$$ with a certain $a\in A$ and completely positive $\Psi$ which is not a multiple of ${\operatorname{id}}_A$ (w.l.o.g., because adding $2\lambda {\operatorname{id}}_A$ to $\Psi$ has the same result on ${\mathcal{L}}$ as adding $\lambda {\mathbf{1}}$ to $a$). Suppose first that $\alpha$ is purely unitary; then $\Psi=0$ and $a^*=-a$. From , with $J$ indexing the decoupling set $V=\{v_j:j\in J\}$ as above, we obtain $$\begin{aligned} \frac{1}{|J|}\sum_{j\in J} {\operatorname{Ad }}(v_j)\circ {\mathcal{L}}\circ{\operatorname{Ad }}(v_j^*)(x) =& \frac{1}{|J|}\sum_{j\in J} {\operatorname{Ad }}(v_j)(a {\operatorname{Ad }}(v_j^*)(x)+{\operatorname{Ad }}(v_j^*)(x)a^*)\\ =& \frac{1}{|J|} \sum_{j\in J} ({\operatorname{Ad }}(v_j)(a) x + x {\operatorname{Ad }}(v_j)(a^*))\\ =& \frac{1}{d_{\mathcal{H}}}{\operatorname{tr }}(a+a^*)x =0,\quad x\in A, \end{aligned}$$ so $V$ satisfies the decoupling condition. If instead $\alpha$ is not purely unitary, we have $\Psi\not=0$ and hence $$\Phi:= \frac{1}{|J|}\sum_{j\in J} {\operatorname{Ad }}(v_j)\circ \Psi \circ {\operatorname{Ad }}(v_j^*)$$ is completely positive and nonzero. Suppose $\Phi(x)$ equals $$- \frac{1}{|J|} \sum_{j\in J} ({\operatorname{Ad }}(v_j)(a {\operatorname{Ad }}(v_j^*)(x))+{\operatorname{Ad }}(v_j)({\operatorname{Ad }}(v_j^*)(x)a^*)) = - \frac{1}{d_{\mathcal{H}}}{\operatorname{tr }}(a+a^*) x, \quad x\in A.$$ Then, for every rank-one projection $p\in A$, we have $\Phi(p)=-\frac{1}{d_{\mathcal{H}}}{\operatorname{tr }}(a+a^*) p$. But for every $\xi\in{\mathcal{H}}$ with $p\xi=0$, we have $$0= -\frac{1}{d_{\mathcal{H}}}{\operatorname{tr }}(a+a^*) \langle \xi, p\xi\rangle = \langle \xi, \Phi(p)\xi\rangle = \frac{1}{|J|} \sum_{j\in J} \langle \xi, {\operatorname{Ad }}(v_j)\circ\Psi\circ {\operatorname{Ad }}(v_j^*)(p)\xi\rangle,$$ with each single term $\ge 0$, due to the positivity of $\Psi$ and the scalar product, and thus actually $=0$. 
In particular, since $v_0={\mathbf{1}}$, we have $\langle \xi, \Psi(p)\xi\rangle=0$, so $\Psi(p) \in {\mathbb{R}}_+p$. Let us write $\Psi$ in (minimal) Kraus form with $\operatorname{rank}(\Psi)$ its Kraus rank and certain $b_i\in A$ [@wo]: $$\Psi(x) = \sum_{i=1}^{\operatorname{rank}(\Psi)} b_i x b_i^*, \quad x\in A.$$ This entails then, for any two mutually orthogonal vectors $\eta,\xi\in H$, $$0 = \langle \xi, \Psi(|\eta\rangle\langle\eta |) \xi \rangle = \sum_{i=1}^{\operatorname{rank}(\Psi)} | \langle \xi, b_i \eta \rangle |^2.$$ Hence, for every $i$, we see that $b_i \eta\in {\mathbb{C}}\eta$, or in other words, $\eta$ must be an eigenvector of $b_i$. This holds for every $\eta\in{\mathcal{H}}$, so $b_i\in {\mathbb{C}}{\mathbf{1}}$, and thus $\Psi$ is a multiple of ${\operatorname{id}}_A$, which contradicts our initial assumptions. Therefore, $\Phi(x)\not= -\frac{1}{d_{\mathcal{H}}}{\operatorname{tr }}(a+a^*) x$, and $$\frac{1}{|J|}\sum_{j\in J} {\operatorname{Ad }}(v_j)\circ {\mathcal{L}}\circ {\operatorname{Ad }}(v_j^*)\not= 0,$$ [[i.e.,]{} ]{}$V$ on $({\mathcal{H}},{\mathcal{L}})$ does not satisfy the decoupling condition. Despite this result, it will turn out in the course of this paper that dynamical decoupling is still interesting beyond the unitary case. The continuous-time limit of random dynamical decoupling {#sec:three} ======================================================== We continue with the notation and concepts introduced in the previous section. Let us, in particular, assume all increments $v_{j_i}$, with $i\in{\mathbb{N}}$, in our random walk $(v_{j_i})_{i\in{\mathbb{N}_0}}$ of decoupling pulses to be iid and equidistributed in $V$ as in the preceding section. The induced random walk $(\alpha^{(\tau)}_{(n+1)\tau})_{n\in{\mathbb{N}}}$ lies in the completely positive maps on $A$ according to and . Moreover, since completely positive maps are linear maps of the Hilbert space $A$ and since all the increments are invertible, the random walk actually lies in the group ${\operatorname{GL}}(A)$ of invertible linear maps of $A$, and ${\mathcal{L}}\in{\mathfrak{gl}}(A)$, the Lie algebra of ${\operatorname{GL}}(A)$. This induced random walk has again iid increments and is described by the measure $$\label{eq:mutau} \mu ^{(\tau)}:=\frac{1}{|J|} \sum_{j\in J} \delta _{\exp(\tau {\operatorname{Ad }}(v_j)\circ {\mathcal{L}}\circ {\operatorname{Ad }}(v_j^*))}.$$ We would like to investigate it in the limit $\tau{\rightarrow}0$. However, since $\tau$ is an actual physical quantity in our set-up, we keep it and instead consider a fictitious limit, which should be good for small $\tau$, as explained below. Considering simply $\mu^{(\tau)}$ and the limit of $\tau{\rightarrow}0$, we would obtain a drift-like expression without fluctuations, which is not a really physical result but a good first approximation, [[cf.]{} ]{} and Remark \[rem:drift-limit\]. In fact, the well-known Donsker invariance principle [@Fel] basically says that the limit of a classical random walk is described suitably well by Brownian motion, [[i.e.,]{} ]{}by scaling length increments with the square-root of time increments, supposed that the expectation of every increment is $0$. The following kind of central limit theorem helps us to treat these dissipation-fluctuation terms in the present noncommutative setting, and will thus become a building stone in our construction; it has first been stated in [@we] but can also be found in the textbook [@gr Th.4.4.2]. 
The necessary notation and concepts in Lie groups and stochastic processes on Lie groups are lined out in Appendix \[sec:app\], and we suggest the reader to go through it before continuing here. \[theowehn\] Let $G$ be an $N$-dimensional Lie group, with ${\mathbf{1}}$-chart $(U,x)$, Lie algebra basis $(X_k)_{1\le k\le N}$ and coordinate mappings $x_k:U{\rightarrow}{\mathbb{R}}$ extended to functions in ${C^\infty}_c(G)$ and hence to the one-point compactification $G_c$. Let $(\mu_n)_{n\in {\mathbb{N}}}$ be a family of probability measures on $G$ converging to $\delta _{{\mathbf{1}}}$. Suppose there are numbers $a_k,a_{kl} \in {\mathbb{R}}$ such that $(a_{kl})_{k,l=1\ldots N}$ is positive semi-definite and, for all $k,l=1,...,N$ and $n{\rightarrow}\infty$: - $ \int_{G} x_k(g) d\mu _n(g) = a_k/n + o(1/n),$ - $\int_{G} x_k(g)x_l(g) {\operatorname{d}}\mu _n(g) = a_{kl}/n + o(1/n),$ - $\mu _n(\tilde{U}^c)=o(1/n)$ for every ${\mathbf{1}}$-neighbourhood $\tilde{U}\subset G_c$. Then the sequence $((\mu _n) ^{*n})_{n \in {\mathbb{N}}}$ converges \*-weakly to a measure $\nu_1$ on $G_c$ which belongs to the convolution semigroup $(\nu_t)_{t \in {\mathbb{R}}_+}$ whose corresponding operator semigroup $(T_t)_{t\in{\mathbb{R}}_+}$ on $C(G_c)$ has infinitesimal generator $$L:= \frac{{\operatorname{d}}}{{\operatorname{d}}t} T_t\restriction_{t=0} =\sum_{k=1}^N a_k D_{X_k} + \sum_{k,l=1}^N a_{kl}D_{X_k} D_{X_l}$$ with ${\operatorname{dom}}(L)= C^2(G_c)$. We would like to apply this theorem to our setting, namely where $G={\operatorname{GL}}(A)$ and ${\mathfrak{g}}={\mathfrak{gl}}(A)$ regarded as (real!) linear Lie group and algebra, respectively, and subspaces of $B(A)$. The plan is as follows: in a first step we shall construct a continuous-time stochastic process in $G$, and in a second step use this to obtain a description of the induced behaviour of the density matrix process $(\rho_t)_{t\in{\mathbb{R}}_+}$. In our setting this means we first have to define suitable and physically realistic measures $\mu_n$ to which to apply our limit procedure of Theorem \[theowehn\]. The drift part should correspond to the original drift part resulting from . Putting $$\bar{{\mathcal{L}}} := \frac{1}{|J|}\sum_{j\in J}{\operatorname{Ad }}(v_j)\circ {\mathcal{L}}\circ{\operatorname{Ad }}(v_j^*)$$ and $${\mathcal{L}}_j:= {\operatorname{Ad }}(v_j)\circ ({\mathcal{L}}-\bar{{\mathcal{L}}})\circ{\operatorname{Ad }}(v_j^*),$$ let us define the measures $$\label{eq:mu-n} \mu_n := \frac{1}{|J|}\sum_{j\in J}\delta _{\exp\big(\frac{\tau}{n^{1/2}} {\mathcal{L}}_j + \frac{\tau}{n} (\bar{{\mathcal{L}}} - \frac{\tau}{2} {\mathcal{L}}_j^2)\big)}, \quad n\in{\mathbb{N}},$$ which conceptually imitate a diffusion part for the variation around the mean $\bar{{\mathcal{L}}}$ and a drift part for the mean movement. Apart from being mathematically clear and plausible from the classical Donsker invariance principle, the meaningfulness of this limit shall moreover be confirmed by numerical analysis carried out partially in the final section and mainly in [@ABH]. Let us drop a quick side remark: as a rough first approximation for $\mu_n$ we might also study the purely drift-like $$\label{eq:mu-n2} \mu_n^{({\operatorname{drift}})} := \frac{1}{|J|}\sum_{j\in J}\delta _{\exp(\frac{\tau}{n} {\operatorname{Ad }}(v_j)\circ {\mathcal{L}}\circ {\operatorname{Ad }}(v_j^*))},$$ similar to but with scaling variable $\tau/n$ instead of $\tau$ as we now would like to keep $\tau$ fixed. 
In analogy to the law of large numbers this would lead to even nicer expressions and CPT dynamics but less faithful modelling; sometimes we will consider it briefly for comparison reasons, [[cf.]{} ]{}Remark \[rem:drift-limit\]. It can be shown that all other types of scaling ([[i.e.,]{} ]{}others than $1/n$ or $1/\sqrt{n}$) essential lead to either trivial or singular not well-defined expressions. We shall therefore stick to $\mu_n$ henceforth if not explicitly mentioned otherwise. Moreover, it is clear that ${\operatorname{Ad }}(v_j)\circ\bar{{\mathcal{L}}}\circ{\operatorname{Ad }}(v_j^*) = \bar{{\mathcal{L}}}$ (as $V$ is a group) and hence $\sum_{j\in J} {\mathcal{L}}_j = 0$. We have $\bar{{\mathcal{L}}}=0$ iff $V$ satisfies the decoupling condition for $({\mathcal{H}},{\mathcal{L}})$. Using now the defining property of the coordinate maps in the limit $n{\rightarrow}\infty$, (i) is obtained with a series expansion of $\exp\big(\frac{\tau}{n^{1/2}} {\mathcal{L}}_j + \frac{\tau}{n} (\bar{{\mathcal{L}}} - \frac{\tau}{2} {\mathcal{L}}_j^2)\big)$, namely: $$\begin{aligned} a_k=& \lim_{n{\rightarrow}\infty} \frac{n}{\tau} \int_{G_c} x_k(g) {\operatorname{d}}\mu_n(g) \\ =& \lim_{n{\rightarrow}\infty} \frac{n}{\tau|J|} \sum_{j\in J} x_k\Big(\exp\big(\frac{\tau}{n^{1/2}} {\mathcal{L}}_j + \frac{\tau}{n} (\bar{{\mathcal{L}}} - \frac{\tau}{2} {\mathcal{L}}_j^2)\big)\Big)\\ =& \lim_{n{\rightarrow}\infty} \frac{n}{\tau |J|} \sum_{j\in J} \Big( \frac{\tau}{n^{1/2}}\langle {\mathcal{L}}_j, X_k\rangle_{\mathfrak{g}}+ \frac{\tau}{n}\langle \bar{{\mathcal{L}}} - \frac{\tau}{2}{\mathcal{L}}_j^2, X_k \rangle_{\mathfrak{g}}+ \frac{\tau^2}{2 n}\langle {\mathcal{L}}_j^2, X_k\rangle_{\mathfrak{g}}+ O\big(\frac{1}{n^{3/2}}\big)\Big)\\ =& \langle \bar{{\mathcal{L}}}, X_k \rangle_{\mathfrak{g}}.\end{aligned}$$ Analogously, for (ii) we have $$\begin{aligned} a_{kl}=& \lim_{n{\rightarrow}\infty} \frac{n}{\tau} \int_{G_c} x_k(g)x_l(g) {\operatorname{d}}\mu_n(g) \\ =& \lim_{n{\rightarrow}\infty} \frac{n}{\tau |J|}\sum_{j\in J} x_k\Big(\exp\big(\frac{\tau}{n^{1/2}} {\mathcal{L}}_j + \frac{\tau}{n} (\bar{{\mathcal{L}}} - \frac{\tau}{2} {\mathcal{L}}_j^2)\big)\Big)x_l\Big(\exp\big(\frac{\tau}{n^{1/2}} {\mathcal{L}}_j + \frac{\tau}{n} (\bar{{\mathcal{L}}} - \frac{\tau}{2} {\mathcal{L}}_j^2)\big)\Big)\\ =& \lim_{n{\rightarrow}\infty} \frac{n}{\tau |J|}\sum_{j\in J}\Big( \frac{\tau^2}{n}\langle {\mathcal{L}}_j, X_k\rangle_{\mathfrak{g}}\langle {\mathcal{L}}_j, X_l\rangle_{\mathfrak{g}}+ O\big(\frac{1}{n^{3/2}}\big)\Big)\\ =& \frac{\tau}{|J|}\sum_{j\in J} \langle {\mathcal{L}}_j, X_k\rangle_{\mathfrak{g}}\langle {\mathcal{L}}_j, X_l\rangle_{\mathfrak{g}},\end{aligned}$$ for every $k,l=1,\ldots ,N$. Finally, it is easy to see that condition (iii) is satisfied as $\mu_n$ has discrete support in $|J|$ points only which converge to $0$ as $n{\rightarrow}\infty$. Thus we get $$\label{eq:CLTresult} L= \sum_{k=1}^N a_k D_{X_k} + \sum_{k,l=1}^N a_{kl} D_{X_k}D_{X_l} = D_{\bar{{\mathcal{L}}}} + \frac{\tau}{|J|}\sum_{j\in J} D_{{\mathcal{L}}_j}^2$$ for the generator of the limit convolution semigroup $(\nu_t)_{t\in{\mathbb{R}}_+}$ on $G_c$, which can be interpreted as a combination of drift and diffusion on $G_c$. This has been the first big step in our construction, namely the construction of the convolution semigroup of measures $(\nu_t)_{t\in{\mathbb{R}}_+}$ on $G$; it implicitly describes a stochastic process $(\alpha'_t)_{t\in{\mathbb{R}}_+}$ on $G$ (according to Theorem \[theowehn\]) with $\alpha_0'={\operatorname{id}}_A$. 
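As a quick numerical sanity check of the ingredients entering the drift and diffusion parts of this generator (again a sketch of our own, for a single qubit with the Pauli decoupling set and two toy generators), one can compute $\bar{{\mathcal{L}}}$ and the centred parts ${\mathcal{L}}_j$ as $4\times 4$ superoperator matrices, verify $\sum_{j\in J}{\mathcal{L}}_j=0$, and observe that the decoupling condition $\bar{{\mathcal{L}}}=0$ holds for a purely unitary generator but fails for a purely dephasing one, in accordance with Theorem \[lem:deccond-unitary\].

```python
# Sketch (our own illustration): the averaged generator Lbar and the centred parts L_j
# for a single qubit with the Pauli decoupling set, as 4x4 superoperator matrices.
import numpy as np

I2 = np.eye(2)
sx = np.array([[0, 1], [1, 0]], dtype=complex)
sy = np.array([[0, -1j], [1j, 0]])
sz = np.array([[1, 0], [0, -1]], dtype=complex)
V = [I2, sx, sy, sz]

ad = lambda v: np.kron(v.conj(), v)                 # x -> v x v* (column-stacking vec)

def hamiltonian_part(H):                            # purely unitary: L(x) = i[H, x]
    return 1j * (np.kron(I2, H) - np.kron(H.T, I2))

def dephasing_part(c, rate=0.2):                    # purely dephasing: L(x) = rate*(c x c* - x)
    return rate * (np.kron(c.conj(), c) - np.eye(4))

for name, L in [("purely unitary  ", hamiltonian_part(sx + 0.3 * sz)),   # toy Hamiltonian
                ("purely dephasing", dephasing_part(sz))]:               # toy dephasing generator
    Lbar = sum(ad(v) @ L @ ad(v.conj().T) for v in V) / len(V)
    Lj = [ad(v) @ (L - Lbar) @ ad(v.conj().T) for v in V]
    print(name,
          "| decoupling condition Lbar = 0:", np.allclose(Lbar, 0),
          "| sum_j L_j = 0:", np.allclose(sum(Lj), 0))
```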
Our second step shall be to calculate the time evolution of the density matrix and related physically significant quantities out of the stochastic process $(\alpha_t')_{t\in{\mathbb{R}}_+}$. This is slightly involved, but can be done using some tools which we are now going to derive. A general fact is that, for every $f\in C(G_c)$, we have $$\label{eq:CLTexp} {\mathbb{E}}[ f\circ\alpha'_t] = \int_{G_c} f(g) {\operatorname{d}}\nu_t (g) = T_t f ({\mathbf{1}}).$$ We define the subsemigroup $G^{[1]}:=\{g\in G: \|g\|\le 1\}\subset G\subset G_c$. Since $\alpha^{(\tau)}_t$ are contractions, the measures $\mu_n$ must all be supported in $G^{[1]}$. This implies that the convolutions $\mu_n^{*m}$ of those measures are supported in $G^{[1]}$ ([[cf.]{} ]{}Appendix \[sec:app\] for a proof). For given $t>0$, choosing a sequence $(m_n)_{n\in{\mathbb{N}}}$ such that $m_n/n{\rightarrow}t$, one can check that $\mu_n^{*m_n}{\rightarrow}\nu_t$. Hence the limit semigroup $(\nu_t)_{t\in{\mathbb{R}}_+}$ is supported in $G^{[1]}$, meaning the Gaussian process $(\alpha'_t)_{t\in{\mathbb{R}}_+}$ stays almost surely in $G^{[1]}$. Then it follows that $T_t$ preserves the closed subspace $C_{0,b}(G^{[1]c})\subset C(G_c)$ of bounded continuous functions on the complement $G^{[1]c}$ of $G^{[1]}$ vanishing at the boundary $\{g\in G: \|g\|=1\}$: $$T_t f(g) = \int_{G^{[1]}} f(hg) {\operatorname{d}}\nu_t (h) = 0,$$ for every $g\in G^{[1]}$, as $f(hg)=0$ for $h,g\in G^{[1]}$, [[i.e.,]{} ]{}$T_tf$ has support in $G^{[1]c}$ and is bounded by $\|f\|_\infty$. The corresponding quotient Banach space $C(G_c)/C_{0,b}(G^{[1]c})$ can be identified with $C_b(G^{[1]})$: namely, $f\in C(G_c)$ induces a function $f\restriction_{G^{[1]}}\in C_b(G^{[1]})$ and, vice versa, two extensions $f_1,f_2$ of a function $f\in C_b(G^{[1]})$ to $G_c$ lead to $f_1-f_2\in C_{0,b}(G^{[1]c})$, thus a unique element in $C(G_c)/C_{0,b}(G^{[1]c})$. Write $q$ for the corresponding quotient map and $f^{[1]}:= q(f)$, for every $f\in C(G_c)$, so that $f^{[1]}(g)= f(g)$ if $g\in G^{[1]}$. Then we get the quotient semigroup $(T_t^{[1]})_{t\in{\mathbb{R}}_+}$ as in Appendix \[sec:app\], with infinitesimal generator $K= q(L q^{-1}( \cdot))$ and ${\operatorname{dom}}(K) = q({\operatorname{dom}}(L)) \simeq C_b^2(G^{[1]})$, the twice differentiable functions on $G^{[1]}$ which, together with their first and second order derivatives, are bounded. In order to achieve a description of the time evolution $(\rho_t)_{t\in{\mathbb{R}}_+}$ of the density matrix, the idea is to study every entry of $\rho_t$ in a certain orthonormal basis. To this end, let $(e_k)_{k=1...d}$ be an arbitrary fixed orthonormal basis of $A$. We consider, for every $k,l$, the function $$f_{kl}: g \in G \mapsto \langle e_l, g (e_k)\rangle,$$ which is $\nu_t$-integrable (because bounded by $1$ on the support) and which lies in ${C^\infty}(G)$ but not in $C_c(G)$. Write $f_{kl}^{[\infty]}$ for an arbitrary but fixed function in ${C^\infty}_c(G)$ (and hence ${C^\infty}(G_c)$) coinciding with $f_{kl}$ on $G^{[1]}$, which can always be achieved, [[e.g.]{} ]{}by multiplying with a smoothed indicator function on $G^{[1]}$ (easy exercise); moreover, following the notation of the preceding paragraph we write $f_{kl}^{[\infty,1]}:= q(f_{kl}^{[\infty]})$. 
Then $${\mathbb{E}}[f_{kl}(\alpha'_t(\cdot)g)]={\mathbb{E}}[f_{kl}^{[\infty]}(\alpha'_t(\cdot)g)] = T_t f_{kl}^{[\infty]} (g) = T_t^{[1]} f_{kl}^{[\infty,1]} (g), \quad t\in{\mathbb{R}}_+, g\in G^{[1]}.$$ Noticing furthermore that $f_{kl}^{[\infty,1]}\in C_b^2(G^{[1]})$, we have, for every $g\in G^{[1]}$, $$\label{eq:LL} \begin{aligned} K f_{kl}^{[\infty,1]}(g) =& q(L f_{kl}^{[\infty]})(g) = L f_{kl}^{[\infty]}(g)\\ =& \frac{1}{|J|}\sum_{j\in J} \Big( D_{\bar{{\mathcal{L}}}}+ \tau D_{{\mathcal{L}}_j}^2\Big) f_{kl}^{[\infty]}(g)\\ =& \frac{{\operatorname{d}}}{{\operatorname{d}}t}\langle e_l,{\operatorname{e}}^{t\bar{{\mathcal{L}}}}g (e_k) \rangle \restriction_{t=0} +\frac{\tau}{|J|}\sum_{j\in J} \frac{{\operatorname{d}}^2}{{\operatorname{d}}t^2}\langle e_l,{\operatorname{e}}^{t{\mathcal{L}}_j}g (e_k) \rangle \restriction_{t=0} \\ =& \langle e_l , \Big(\bar{{\mathcal{L}}} +\frac{\tau}{|J|}\sum_{j\in J} {\mathcal{L}}_j^2 \Big)(g e_k) \rangle \\ =& \langle e_l, \hat{L}(g e_k)\rangle \end{aligned}$$ with $$\hat{L} := \bar{{\mathcal{L}}} + \frac{\tau}{|J|} \sum_{j\in J} {\mathcal{L}}_j^2 \in B(A).$$ Analogously $K^n f_{kl}^{[\infty,1]}(g) =\langle e_l, \hat{L}^n(g e_k)\rangle$, which is bounded by $\|\hat{L}\|^n$ uniformly in $g\in G^{[1]}$. Therefore, $$z\in {\mathbb{C}}\mapsto \sum_{n=0}^\infty \frac{z^n}{n!} K^n f_{kl}^{[\infty,1]} = \sum_{n=0}^\infty \frac{z^n}{n!} \langle e_l, \hat{L}^n (\cdot e_k)\rangle = \langle e_l, {\operatorname{e}}^{z \hat{L}} (\cdot e_k)\rangle \in {C^\infty}(K)$$ converges and is an analytic continuation of $t\mapsto T_t^{[1]}f_{kl}^{[\infty,1]}$, so $f_{kl}^{[\infty,1]}\in{C^\infty}(K)$ is an entire analytic vector for $T_t^{[1]}$ ([[cf.]{} ]{}Appendix \[sec:app\]). Recalling the first identity above and noticing that $\frac12{\mathbf{1}}\in G^{[1]}$ and that $f_{kl}$ is linear in its argument, this enables us to compute the expectation value $$\begin{aligned} {\mathbb{E}}[ \langle e_l, \alpha'_t(e_k) \rangle] =& {\mathbb{E}}[f_{kl}(\alpha'_t(\cdot))] = 2 {\mathbb{E}}[f_{kl}(\frac12\alpha'_t(\cdot))] = 2 {\mathbb{E}}[f_{kl}^{[\infty]}(\frac12\alpha'_t(\cdot))] \\ =& 2 T_t f_{kl}^{[\infty]}\Big(\frac12 {\mathbf{1}}\Big) = 2 T_t^{[1]} f_{kl}^{[\infty,1]} \Big(\frac12 {\mathbf{1}}\Big) = \langle e_l, {\operatorname{e}}^{t \hat{L}} (e_k) \rangle,\end{aligned}$$ for every $k,l$, so ${\mathbb{E}}[\alpha'_t(e_k)]= {\operatorname{e}}^{t\hat{L}}(e_k)$. Since this holds for every basis vector $e_k$, it holds for all elements in $A$. Applying it to the $A$-valued “density matrix stochastic process" $(\rho_t:=\alpha'_t(\rho_0))_{t\in{\mathbb{R}}_+}$, we find $${\mathbb{E}}[\rho_t] = {\operatorname{e}}^{t \hat{L}}(\rho_0),$$ concluding our second step, too. We summarize this all in \[th:cont-limit\] The continuous-time limit $(\alpha'_t)_{t\in{\mathbb{R}}_+}$ of the above random walk, determined by a quantum system $({\mathcal{H}},{\mathcal{L}})$ and random dynamical decoupling with decoupling set $V=(v_j)_{j\in J}$, exists and leads to a contraction semigroup with the generator $L$ constructed above. 
The density matrix $(\rho_t)_{t\in{\mathbb{R}}_+}$ is then a stochastic process in $A$ with expectation $${\mathbb{E}}[\rho_t] = {\operatorname{e}}^{t \hat{L}}(\rho_0), \quad \forall t\ge 0,$$ where $$\hat{L} = \bar{{\mathcal{L}}} + \frac{\tau}{|J|} \sum_{j\in J} {\mathcal{L}}_j^2.$$ \[rem:non-const-evo\] If the intrinsic time evolution is not constant (but still continuously differentiable), then the continuous-time limit can be carried out in the same way, resulting in a time-dependent generator $$\hat{L}(t) = \bar{{\mathcal{L}}}(t) + \frac{\tau}{|J|} \sum_{j\in J} {\operatorname{Ad }}(v_j)\circ ({\mathcal{L}}(t)-\bar{{\mathcal{L}}}(t))^2 \circ {\operatorname{Ad }}(v_j^*), \quad \forall t\in{\mathbb{R}}_+,$$ and just a time-ordered integral [@LB] $$\label{eq:time-order} {\mathbb{E}}[\rho_t] = \mathcal{T} {\operatorname{e}}^{\int_0^t \hat{L}(t'){\operatorname{d}}t'}(\rho_0) = \sum_{n=0}^\infty \int_0^{t}\int_0^{t'_{n}} \ldots \int_0^{t'_2} \hat{L}(t'_n)\ldots \hat{L}(t'_1) {\operatorname{d}}t'_1 \ldots {\operatorname{d}}t'_n (\rho_0)$$ instead of the semigroup. However, this analytic expression will be a good approximation of the original random walk usually only if $\tau$ is sufficiently small such that $$\tau \Big\| \frac{{\operatorname{d}}}{{\operatorname{d}}t'} {\mathcal{L}}(t')\Big\| \ll \|{\mathcal{L}}(t')\|, \quad \forall t'\in [0,t].$$ For simplicity we shall only deal with the time-independent version here below. \[rem:drift-limit\] Let us write $\hat{L}^{({\operatorname{drift}})}$ for the generator and $(\nu_t^{({\operatorname{drift}})})_{t\in{\mathbb{R}}_+}$ for the convolution semigroup of measures corresponding to the drift-like continuous-time limit of the random walk with $\mu_n^{({\operatorname{drift}})}$ instead of $\mu_n$, and accordingly ${\mathbb{E}}^{({\operatorname{drift}})}$ and ${\operatorname{Var}}^{({\operatorname{drift}})}$ for expectations and variances with respect to $(\nu_t^{({\operatorname{drift}})})_{t\in{\mathbb{R}}_+}$. Then going through the construction of Theorem \[th:cont-limit\], we see that the generator of $(T_t^{({\operatorname{drift}})})_{t\in{\mathbb{R}}_+}$ becomes $L^{({\operatorname{drift}})}=D_{\bar{{\mathcal{L}}}}$. Hence $\hat{L}^{({\operatorname{drift}})}= \bar{{\mathcal{L}}}$, which vanishes iff the decoupling condition is fulfilled iff the original time evolution $\alpha$ was unitary, according to Theorem \[lem:deccond-unitary\]. In this case $T_t^{({\operatorname{drift}})}={\operatorname{id}}$, for all $t\in{\mathbb{R}}_+$, hence ${\mathbb{E}}^{({\operatorname{drift}})}[\rho_t]=\rho_0$. \[ex:2\] (1) We continue our Example \[ex:1\] from the preceding section, the $N$-qubit system, with $V$ the group of tensor products of $N$ Pauli matrices. Suppose our time evolution is unitary, so ${\mathcal{L}}= {\operatorname{i}}{\operatorname{ad }}(H)$ with $H$ the system Hamiltonian. Then we find $\hat{L}^{({\operatorname{drift}})}=\bar{{\mathcal{L}}}=0$, so ${\mathcal{L}}_j= {\operatorname{i}}[v_j H v_j,\cdot]$ and $$\hat{L} = -\frac{\tau}{|J|} \sum_{j\in J} [v_jH v_j,[v_j H v_j, \cdot]].$$ Now a variety of special cases may be investigated. If [[e.g.]{} ]{}$H$ acts only on the first qubit, [[i.e.,]{} ]{}it can be written as $H=H_1 \otimes {\mathbf{1}}^{\otimes (N-1)}$, then so does $\hat{L}$. If moreover $\rho_0$ splits as a product state on the tensor factors, then so does ${\mathbb{E}}[\rho_t]$, for all $t>0$, with only the first tensor factor changing over time. 
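For the single-qubit case ($N=1$) of this example, the expected evolution can be evaluated in a few lines. The sketch below (our own illustration; the Hamiltonian is a hypothetical choice, and column-stacking vectorisation is assumed) builds $\hat{L}=-\frac{\tau}{|J|}\sum_{j\in J}[v_jHv_j,[v_jHv_j,\cdot]]$ as a $4\times 4$ matrix and computes ${\mathbb{E}}[\rho_t]={\operatorname{e}}^{t\hat{L}}(\rho_0)$; replacing $\hat{L}$ by the general formula $\bar{{\mathcal{L}}}+\frac{\tau}{|J|}\sum_j{\mathcal{L}}_j^2$ covers non-unitary generators such as the dissipative example that follows.

```python
# Sketch of the N = 1 case of this example (hypothetical Hamiltonian, our own illustration):
# Lhat = -(tau/|J|) sum_j [v_j H v_j, [v_j H v_j, . ]] and E[rho_t] = e^{t Lhat}(rho_0).
import numpy as np
from scipy.linalg import expm

I2 = np.eye(2)
sx = np.array([[0, 1], [1, 0]], dtype=complex)
sy = np.array([[0, -1j], [1j, 0]])
sz = np.array([[1, 0], [0, -1]], dtype=complex)
V = [I2, sx, sy, sz]                              # |J| = 4

def comm_super(a):                                # matrix of x -> [a, x] (column-stacking vec)
    return np.kron(I2, a) - np.kron(a.T, I2)

H = 0.7 * sx + 0.2 * sz                           # hypothetical system Hamiltonian
tau = 0.01
Lhat = -(tau / len(V)) * sum(comm_super(v @ H @ v) @ comm_super(v @ H @ v) for v in V)

rho0 = 0.5 * (I2 + sx)                            # initial state |+><+|
t = 2.0
E_rho_t = (expm(t * Lhat) @ rho0.reshape(-1, order="F")).reshape(2, 2, order="F")
print(np.round(E_rho_t, 4))                       # expected density matrix E[rho_t] (Hermitian, trace one)
```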
\(2) Another example, which shall turn up in Figure \[fig1\] and which is treated in detail in [@ABH] is the amplitude-damping model. In this setting ${\mathcal{H}}$ is the one-qubit Hilbert space ${\mathbb{C}}^2$, $A= {\operatorname{M}}_2({\mathbb{C}})$ and $${\mathcal{L}}(x)= -\gamma \big( 2x - {\operatorname{i}}\sigma_3 x - {\operatorname{i}}x \sigma_3 - \sigma_1 x \sigma_1 -\sigma_2 x \sigma_2 - {\operatorname{i}}\sigma_1 x \sigma_2 - {\operatorname{i}}\sigma_2 x \sigma_1 \big), \quad x\in A,$$ with a certain coefficient $\gamma \in {\mathbb{R}}_+$. The Pauli matrices constitute the decoupling set $V=\{v_0={\mathbf{1}}, v_j=\sigma_j: j=1,2, 3\}$. In order to compute the generator $\hat{L}= \bar{{\mathcal{L}}} + \frac{\tau}{4} \sum_{j=0}^3 {\mathcal{L}}_j^2$, one checks: $$\bar{{\mathcal{L}}}(x) = - \gamma \big( 2x - \sigma_1 x \sigma_1 - \sigma_2 x \sigma_2 \big)$$ and $${\mathcal{L}}_0(x)= -{\mathcal{L}}_1(x)= -{\mathcal{L}}_2(x) = {\mathcal{L}}_3(x) = -{\operatorname{i}}\gamma \big( \sigma_3 x + x \sigma_3 + \sigma_1 x\sigma_2 - \sigma_2 x \sigma_1 \big), \quad x\in A.$$ A computer can now easily calculate ${\mathbb{E}}[\rho_t]= {\operatorname{e}}^{t\hat{L}}(\rho_0)$, for any given $t>0$ and initial density matrix $\rho_0\in A$. The result should be a good approximation for the actual random walk if $\tau\ll 1/\|{\mathcal{L}}\|$. Important related quantities like gate fidelity shall be computed in the following section. Before concluding the present section let us derive here a tool that shall allow us to compute higher moments (including variance) of random variables, beyond the present linear ones (expectation value). \[prop:higher-mom\] In the setting of Theorem \[th:cont-limit\], for all $x_1,\ldots,x_n, y_1,\ldots y_n \in A $, let $$f_{x_1\ldots x_n,y_1\ldots y_n}(g) := \langle y_1, g(x_1)\rangle \cdots\langle y_n, g(x_n)\rangle = (f_{x_1,y_1}\cdots f_{x_n,y_n})(g), \quad g\in G,$$ and define the linear operator $\hat{L}^{(n)}$ on $A ^{{\otimes}n}$ by $$\begin{split} \hat{L}^{(n)} (x_1 {\otimes}... {\otimes}x_n) :=& \sum_{l=1}^n x_1{\otimes}...{\otimes}\bar{{\mathcal{L}}}(x_l) {\otimes}...{\otimes}x_n\\ &+ \frac{\tau}{|J|} \sum_{j\in J} \Big(\sum_{l=1}^n x_1{\otimes}...{\otimes}{\mathcal{L}}_j^2(x_l) {\otimes}...{\otimes}x_n\\ & \quad + 2 \sum_{k=1,l>k}^{n} x_1 {\otimes}...{\otimes}{\mathcal{L}}_j(x_k){\otimes}...{\otimes}{\mathcal{L}}_j(x_l) {\otimes}...{\otimes}x_n \Big) \end{split}$$ and linear extension. Then $${\mathbb{E}}[f_{x_1 \ldots x_n,y_1 \ldots y_n}\circ\alpha_t'] = \langle(y_1{\otimes}...{\otimes}y_n), {\operatorname{e}}^{t\hat{L}^{(n)}} (x_1{\otimes}...{\otimes}x_n)\rangle.$$ Following the notation and the truncation and quotient space procedure exactly as in the case of $f_{kl}$, we can define (not uniquely) a smooth function $f_{x_1 \ldots x_n,y_1 \ldots y_n}^{[\infty]} \in {C^\infty}(G)$ from $f_{x_1 \ldots x_n,y_1 \ldots y_n}$ and hence a function $f^{[\infty,1]}_{x_1 \ldots x_n,y_1 \ldots y_n}\in C_b^2(G^{[1]})$, which is analytic for $K$, [[i.e.,]{} ]{}in ${C^\infty}(K)$; we can and do choose it such that $f^{[\infty,1]}_{x_1 \ldots x_n,y_1\ldots y_n}=f^{[\infty]}_{x_1,y_1} \cdots f^{[\infty]}_{x_n,y_n}$. 
Exploiting then the product rule for differentiation, we obtain $$\begin{split} K f^{[\infty,1]}_{x_1 \ldots x_n,y_1\ldots y_n}({\mathbf{1}}) =& L f^{[\infty]}_{x_1 \ldots x_n,y_1\ldots y_n}({\mathbf{1}})\\ =& L \big(f^{[\infty]}_{x_1,y_1} \cdots f^{[\infty]}_{x_n,y_n} \big)({\mathbf{1}})\\ =& D_{\bar{{\mathcal{L}}}} \big(f^{[\infty]}_{y_1,x_1} \cdots f^{[\infty]}_{y_n,x_n}\big)({\mathbf{1}}) + \frac{\tau}{|J|}\sum_{j \in J} D_{{\mathcal{L}}_j}^2 \big(f^{[\infty]}_{y_1,x_1}\cdots f^{[\infty]}_{y_n,x_n}\big)({\mathbf{1}})\\ =& \sum_{l=1}^n \langle y_1,x_1\rangle \cdots \langle y_l,\bar{{\mathcal{L}}}(x_l) \rangle \cdots \langle y_n,x_n\rangle \\ &+ \frac{\tau}{|J|}\sum_{j \in J} \Big( \sum_{l=1}^n \langle y_1,x_1\rangle \cdots \langle y_l,{\mathcal{L}}_j^2(x_l) \rangle \cdots \langle y_n,x_n\rangle \\ & \quad + \sum_{k=1,l>k}^{n} 2 \langle y_1, x_1\rangle \cdots \langle y_k, {\mathcal{L}}_j(x_k)\rangle \cdots \langle y_l, {\mathcal{L}}_j(x_l) \rangle \cdots \langle y_n, x_n\rangle\Big) \\ =& \langle (y_1{\otimes}...{\otimes}y_n), \hat{L}^{(n)} (x_1{\otimes}...{\otimes}x_n)\rangle. \end{split}$$ Analogously, for higher powers we have $$K ^k f^{[\infty,1]}_{x_1,...,x_n,y_1,...y_n}({\mathbf{1}}) = \langle(y_1{\otimes}...{\otimes}y_n), (\hat{L}^{(n)})^k (x_1{\otimes}...{\otimes}x_n)\rangle$$ whence $\exp(t\hat{L}^{(n)})$ is well-defined on $A ^{{\otimes}n}$. Thus we find $${\mathbb{E}}[f_{x_1,...,x_n,y_1,...y_n}\circ \alpha_t'] = T^{[1]}_t f^{[\infty,1]}_{x_1,...,x_n,y_1,...y_n} ({\mathbf{1}}) = \langle(y_1{\otimes}...{\otimes}y_n), {\operatorname{e}}^{t\hat{L}^{(n)}} (x_1{\otimes}...{\otimes}x_n)\rangle.$$ Analogously, one can prove \[prop:modulus\] In the setting of Theorem \[th:cont-limit\], for all $x,x_i,y,y_i \in A $, let \^[(2)]{} (x y) :=& |[Ł]{} (x)y + x|[Ł]{}\^(y)\ &+ \_[jJ]{} (Ł\_j\^2(x)y + 2 Ł\_j (x)Ł\_j\^(y) + x(Ł\_j\^)\^2(y)) and $$\begin{aligned} \check{L}^{(4)} &(x_1{\otimes}y_1 {\otimes}x_2{\otimes}y_2) := \bar{{\mathcal{L}}} (x_1){\otimes}y_1{\otimes}x_2{\otimes}y_2 + x_1{\otimes}\bar{{\mathcal{L}}} (y_1){\otimes}x_2{\otimes}y_2 \\ &+ x_1{\otimes}y_1 {\otimes}\bar{{\mathcal{L}}}^\dagger (x_2){\otimes}y_2 +x_1{\otimes}y_1 {\otimes}x_2{\otimes}\bar{{\mathcal{L}}}^\dagger (y_2)\\ & + \frac{\tau}{|J|} \sum_{j\in J} \Big({\mathcal{L}}_j^2(x_1){\otimes}y_1{\otimes}x_2{\otimes}y_2 + x_1{\otimes}{\mathcal{L}}_j^2(y_1) {\otimes}x_2{\otimes}y_2 \\ &\quad + x_1{\otimes}y_1{\otimes}({\mathcal{L}}_j^\dagger)^2(x_2) {\otimes}y_2 + x_1{\otimes}y_1{\otimes}x_2 {\otimes}({\mathcal{L}}_j^\dagger)^2(y_2) \\ &\quad + 2 ({\mathcal{L}}_j (x_1){\otimes}y_1 + x_1{\otimes}{\mathcal{L}}_j(y_1)){\otimes}({\mathcal{L}}_j^\dagger (x_2){\otimes}y_2 + x_2{\otimes}{\mathcal{L}}_j^\dagger(y_2))\\ &\quad +2 {\mathcal{L}}_j (x_1){\otimes}{\mathcal{L}}_j(y_1) {\otimes}x_2{\otimes}y_2 + 2 x_1{\otimes}y_1 {\otimes}{\mathcal{L}}_j^\dagger (x_2){\otimes}{\mathcal{L}}_j^\dagger (y_2) \Big).\end{aligned}$$ Then $$L (|f^{[\infty]}_{x,y}|^2)({\mathbf{1}}) = \langle(y{\otimes}x), \check{L}^{(2)} (x{\otimes}y) \rangle$$ and $$L (|f^{[\infty]}_{x,y}|^4)({\mathbf{1}}) = \langle(y{\otimes}x{\otimes}y{\otimes}x), \check{L}^{(4)} (x{\otimes}y{\otimes}x{\otimes}y) \rangle.$$ Distribution of the gate fidelity {#sec:four} ================================= The most interesting quantity in control theory of a quantum system is its fidelity; as we want to decouple independently of the state, we consider the *gate fidelity* [@LB], which is given by the random variable $$F_t := 1- \frac{1}{d} \sum_{k,l=1}^d |\langle e_l, 
({\operatorname{id}}- \alpha'_t)(e_k) \rangle|^2,$$ independent of the actual choice of the orthonormal basis $(e_k)_{k=1\ldots d}$ of $A$. Most other versions of fidelity can be treated using similar ideas. We are interested in ${\mathbb{E}}[F_t]$ and ${\operatorname{Var}}[F_t]$. \[prop:EFtVarFt\] In the setting of Theorem \[th:cont-limit\], the expectation and variance of the gate fidelity of the quantum system $({\mathcal{H}},{\mathcal{L}})$ with decoupling set $V$ are given by $$1-{\mathbb{E}}[F_t] = \frac{1}{d}\sum_{k,l=1}^d \Big( \delta_{k,l} - \delta_{k,l} \langle e_l, ({\operatorname{e}}^{t\hat{L}}+{\operatorname{e}}^{t\hat{L}^\dagger})(e_k) \rangle +\langle e_l\otimes e_k, {\operatorname{e}}^{t\check{L}^{(2)}}(e_k\otimes e_l) \rangle \Big)$$ and $$\begin{aligned} {\operatorname{Var}}[F_t]=& \frac{1}{d^2}\sum_{i,j,k,l=1}^d \Big( \delta_{k,l}\delta_{i,j} - 2 \delta_{i,j}\delta_{k,l} \langle e_l, ({\operatorname{e}}^{t\hat{L}}+{\operatorname{e}}^{t\hat{L}^\dagger})(e_k) \rangle \\ &\qquad+2 \delta_{i,j} \langle e_l{\otimes}e_k, {\operatorname{e}}^{t\check{L}^{(2)}}(e_k{\otimes}e_l) \rangle\\ &\qquad+ \delta_{i,j}\delta_{k,l} \langle e_i{\otimes}e_k, ({\operatorname{e}}^{t\check{L}^{(1,1)}}+{\operatorname{e}}^{t\check{L}^{(1,2)}} +{\operatorname{e}}^{t\check{L}^{(1,2)\dagger}}+{\operatorname{e}}^{t\check{L}^{(1,1)\dagger}}) (e_i{\otimes}e_k) \rangle \\ &\qquad - 2 \delta_{i,j} \langle e_i{\otimes}e_l{\otimes}e_k, ({\operatorname{e}}^{t\check{L}^{(3,1)}}+{\operatorname{e}}^{t\check{L}^{(3,2)}})(e_i{\otimes}e_k{\otimes}e_l) \rangle \\ &\qquad+ \langle e_j{\otimes}e_i{\otimes}e_l{\otimes}e_k, {\operatorname{e}}^{t\check{L}^{(4)}}(e_i{\otimes}e_j{\otimes}e_k{\otimes}e_l) \rangle\Big)\\ &-\frac{1}{d^2}\Big(\sum_{k,l=1}^d \Big(\delta_{k,l} - 2\delta_{k,l} \langle e_l, ({\operatorname{e}}^{t\hat{L}}+{\operatorname{e}}^{t\hat{L}^\dagger})(e_k) \rangle + \langle e_l\otimes e_k, {\operatorname{e}}^{t\check{L}^{(2)}}(e_k\otimes e_l) \rangle \Big)\Big)^2.\end{aligned}$$ with $$\begin{aligned} \check{L}^{(1,1)}(x{\otimes}y) =& \bar{{\mathcal{L}}}(x){\otimes}y+ x{\otimes}\bar{{\mathcal{L}}}(y)\\ &+\frac{\tau}{|J|}\sum_{j\in J} {\mathcal{L}}_j^2(x){\otimes}y + 2{\mathcal{L}}_j(x){\otimes}{\mathcal{L}}_j(y) + x{\otimes}{\mathcal{L}}_j^2(y)\\ \check{L}^{(1,2)}(x{\otimes}y) =& \bar{{\mathcal{L}}}^\dagger(x){\otimes}y + x{\otimes}\bar{{\mathcal{L}}}(y)\\ &+\frac{\tau}{|J|}\sum_{j\in J} ({\mathcal{L}}_j^\dagger)^2(x){\otimes}y + 2{\mathcal{L}}_j^\dagger(x){\otimes}{\mathcal{L}}_j(y) + x{\otimes}{\mathcal{L}}_j^2(y)\\ \check{L}^{(3,1)}(x{\otimes}y{\otimes}z) =& \bar{{\mathcal{L}}}(x){\otimes}y{\otimes}z + x{\otimes}\bar{{\mathcal{L}}}(y) {\otimes}z + x{\otimes}y{\otimes}\bar{{\mathcal{L}}}^\dagger(z)\\ &+\frac{\tau}{|J|}\sum_{j\in J} \Big( {\mathcal{L}}_j^2(x){\otimes}y{\otimes}z + 2{\mathcal{L}}_j(x){\otimes}{\mathcal{L}}_j(y) {\otimes}z + 2{\mathcal{L}}_j(x){\otimes}y {\otimes}{\mathcal{L}}_j^\dagger(z)\\ &\quad + 2x{\otimes}{\mathcal{L}}_j(y) {\otimes}{\mathcal{L}}_j^\dagger(z) + x{\otimes}{\mathcal{L}}_j^2(y) {\otimes}z +x{\otimes}y{\otimes}({\mathcal{L}}_j^\dagger)^2(z) \Big)\\ \check{L}^{(3,2)}(x{\otimes}y{\otimes}z) =& \bar{{\mathcal{L}}}^\dagger(x){\otimes}y{\otimes}z + x{\otimes}\bar{{\mathcal{L}}}(y) {\otimes}z + x{\otimes}y{\otimes}\bar{{\mathcal{L}}}^\dagger(z)\\ &+\frac{\tau}{|J|}\sum_{j\in J} \Big(({\mathcal{L}}_j^\dagger)^2(x){\otimes}y{\otimes}z + 2{\mathcal{L}}_j^\dagger(x){\otimes}{\mathcal{L}}_j(y) {\otimes}z + 2{\mathcal{L}}_j^\dagger(x){\otimes}y {\otimes}{\mathcal{L}}_j^\dagger(z)\\ 
&\quad + 2x{\otimes}{\mathcal{L}}_j(y) {\otimes}{\mathcal{L}}_j^\dagger(z) + x{\otimes}{\mathcal{L}}_j^2(y) {\otimes}z +x{\otimes}y{\otimes}({\mathcal{L}}_j^\dagger)^2(z)\Big).\end{aligned}$$ Since we know $\alpha'_t$, we find: $$\label{eq:EFtProof} \begin{aligned} 1- {\mathbb{E}}[F_t] =& \frac{1}{d}\sum_{k,l=1}^d {\mathbb{E}}[|\langle e_l, ({\operatorname{id}}- \alpha'_t)(e_k) \rangle|^2] \\ =& \frac{1}{d}\sum_{k,l=1}^d {\mathbb{E}}[\delta_{k,l} - 2\delta_{k,l} \Re \langle e_l, \alpha'_t(e_k) \rangle + |\langle e_l, \alpha'_t(e_k) \rangle|^2] \\ =& \frac{1}{d}\sum_{k,l=1}^d \Big(\delta_{k,l} - 2\delta_{k,l} T_t \Big(\Re f^{[\infty,1]}_{kl} ({\mathbf{1}}) \Big) + T_t |f^{[\infty,1]}_{kl}|^2 ({\mathbf{1}})\Big) \\ =& \frac{1}{d}\sum_{k,l=1}^d \Big(\delta_{k,l} - \delta_{k,l} \sum_{n=0}^\infty \frac{t^n}{n!} K^n\Big(f^{[\infty,1]}_{kk} + \overline{f^{[\infty,1]}_{kk}}\Big) ({\mathbf{1}}) + \sum_{n=0}^\infty \frac{t^n}{n!} K^n \Big(|f^{[\infty,1]}_{kl}|^2\Big) ({\mathbf{1}})\Big)\\ =&\frac{1}{d}\sum_{k,l=1}^d \Big(\delta_{k,l} - \delta_{k,l} \sum_{n=0}^\infty \frac{t^n}{n!} (\langle e_k, \hat{L}^n(e_k) \rangle + \langle e_k, (\hat{L}^\dagger)^n(e_k)\rangle \\ &+ \sum_{n=0}^\infty \frac{t^n}{n!} \langle e_l\otimes e_k, (\check{L}^{(2)})^m (e_k\otimes e_l) \rangle\Big)\\ =& \frac{1}{d}\sum_{k,l=1}^d \Big( \delta_{k,l} - \delta_{k,l} \langle e_k, ({\operatorname{e}}^{t\hat{L}} +{\operatorname{e}}^{t\hat{L}^\dagger})(e_k) \rangle + \langle e_l\otimes e_k, {\operatorname{e}}^{t\check{L}^{(2)}}(e_k\otimes e_l) \rangle \Big). \end{aligned}$$ Here the third equality follows from and the quotient procedure; the fifth from the Leibniz rule and Proposition \[prop:modulus\], noticing that $\overline{f_{kk}}(g) = \langle g e_k, e_k \rangle$ and $L^n \overline{f_{kk}}({\mathbf{1}}) = \langle \hat{L}^n(e_k), e_k \rangle = \langle e_k, (\hat{L}^\dagger)^n (e_k) \rangle$. 
The variance is obtained analogously: $$\begin{aligned} {\operatorname{Var}}[F_t] =& {\mathbb{E}}[F_t^2]- {\mathbb{E}}[F_t]^2 = {\mathbb{E}}[(1-F_t)^2]- {\mathbb{E}}[1-F_t]^2\\ =& \frac{1}{d^2}\sum_{i,j,k,l=1}^d {\mathbb{E}}[|\langle e_i, ({\operatorname{id}}- \alpha'_t)(e_j) \rangle|^2|\langle e_l, ({\operatorname{id}}- \alpha'_t)(e_k) \rangle|^2] \\ &- \Big( \frac{1}{d}\sum_{k,l=1}^d {\mathbb{E}}[|\langle e_l, ({\operatorname{id}}- \alpha'_t)(e_k) \rangle|^2]\Big)^2\\ =& \frac{1}{d^2}\sum_{i,j,k,l=1}^d {\mathbb{E}}\Big[ \delta_{k,l}\delta_{i,j} - 2 \delta_{i,j}\delta_{k,l} \langle e_l, (\alpha'_t+\alpha'^\dagger_t)(e_k) \rangle\\ &\qquad +2 \delta_{i,j} \langle e_l{\otimes}e_k, (\alpha'_t{\otimes}\alpha'^\dagger_t)(e_k{\otimes}e_l) \rangle\\ &\qquad+ \delta_{i,j}\delta_{k,l} \langle e_i{\otimes}e_k, (\alpha'_t{\otimes}\alpha'_t+\alpha'^\dagger_t{\otimes}\alpha'_t+\alpha'_t{\otimes}\alpha'^\dagger_t +\alpha'^\dagger_t{\otimes}\alpha'^\dagger_t) (e_i{\otimes}e_k) \rangle \\ &\qquad - 2 \delta_{i,j} \langle e_i{\otimes}e_l{\otimes}e_k, ((\alpha'_t+\alpha'^\dagger_t){\otimes}\alpha'_t{\otimes}\alpha'^\dagger_t)(e_i{\otimes}e_k{\otimes}e_l) \rangle \\ &\qquad+ \langle e_j{\otimes}e_i{\otimes}e_l{\otimes}e_k, (\alpha'_t{\otimes}\alpha'^\dagger_t{\otimes}\alpha'_t{\otimes}\alpha'^\dagger_t)(e_i{\otimes}e_j{\otimes}e_k{\otimes}e_l) \rangle\Big]\\ &-\frac{1}{d^2}\Big(\sum_{k,l=1}^d \Big(\delta_{k,l} - 2\delta_{k,l} \langle e_l, ({\operatorname{e}}^{t\hat{L}}+{\operatorname{e}}^{t\hat{L}^\dagger})(e_k) \rangle + \langle e_l\otimes e_k, {\operatorname{e}}^{t\check{L}^{(2)}}(e_k\otimes e_l) \rangle \Big)\Big)^2.\end{aligned}$$ The terms in the first sum are all $0,1,2,3,4$-(anti-)linear expressions, respectively, of the type investigated in Propositions \[prop:higher-mom\] and \[prop:modulus\]. 
Following the proof there, we have $$\begin{aligned} {\mathbb{E}}[\langle e_i{\otimes}e_k, (\alpha'^\dagger_t{\otimes}\alpha'_t)(e_i{\otimes}e_k) \rangle] =& {\mathbb{E}}[ \overline{f_{ii}^{[\infty]}}f_{kk}^{[\infty]} \circ\alpha'_t]\\ =& \sum_{n=0}^\infty \frac{t^n}{n!} L^n \big(\overline{f_{ii}^{[\infty]}}f_{kk}^{[\infty]} \big)({\mathbf{1}})\\ =& \sum_{n=0}^\infty \frac{t^n}{n!} \langle e_i{\otimes}e_k, (\check{L}^{(1,2)})^n(e_i{\otimes}e_k) \rangle\end{aligned}$$ with $$\begin{aligned} \check{L}^{(1,2)}(x{\otimes}y) =& \bar{{\mathcal{L}}}^\dagger(x){\otimes}y + x{\otimes}\bar{{\mathcal{L}}}(y)\\ &+\frac{\tau}{|J|}\sum_{j\in J} ({\mathcal{L}}_j^\dagger)^2(x){\otimes}y + 2{\mathcal{L}}_j^\dagger(x){\otimes}{\mathcal{L}}_j(y) + x{\otimes}{\mathcal{L}}_j^2(y)\end{aligned}$$ because $$\begin{aligned} L \big(\overline{f_{ii}^{[\infty]}}f_{kk}^{[\infty]} \big)({\mathbf{1}}) =& (D_{\bar{{\mathcal{L}}}}\overline{f_{ii}^{[\infty]}}) f_{kk}^{[\infty]}({\mathbf{1}}) + \overline{f_{ii}^{[\infty]}} (D_{\bar{{\mathcal{L}}}}f_{kk}^{[\infty]})({\mathbf{1}})\\ &+ \frac{\tau}{|J|} \sum_{j\in J} (D_{{\mathcal{L}}_j}^2\overline{f_{ii}^{[\infty]}})f_{kk}^{[\infty]}({\mathbf{1}}) + 2 (D_{{\mathcal{L}}_j}\overline{f_{ii}^{[\infty]}})(D_{{\mathcal{L}}_j}f_{ii}^{[\infty]})({\mathbf{1}}) + \overline{f_{kk}^{[\infty]}} (D_{{\mathcal{L}}_j}^2f_{kk}^{[\infty]})({\mathbf{1}})\\ =& \frac{{\operatorname{d}}}{{\operatorname{d}}t}\overline{f_{ii}^{[\infty]}({\operatorname{e}}^{t\bar{{\mathcal{L}}}})} f_{kk}^{[\infty]}({\mathbf{1}}) + \overline{f_{ii}^{[\infty]}({\mathbf{1}})} \frac{{\operatorname{d}}}{{\operatorname{d}}t}f_{kk}^{[\infty]}({\operatorname{e}}^{t\bar{{\mathcal{L}}}})\\ &+ \frac{\tau}{|J|} \sum_{j\in J} \frac{{\operatorname{d}}^2}{{\operatorname{d}}t{\operatorname{d}}s} \overline{f_{ii}^{[\infty]}({\operatorname{e}}^{s{\mathcal{L}}_j}{\operatorname{e}}^{t{\mathcal{L}}_j})}f_{kk}^{[\infty]}({\mathbf{1}})\\ &\qquad + 2\frac{{\operatorname{d}}^2}{{\operatorname{d}}t{\operatorname{d}}s} \overline{f_{ii}^{[\infty]}({\operatorname{e}}^{s{\mathcal{L}}_j})} f_{ii}^{[\infty]}({\operatorname{e}}^{t{\mathcal{L}}_j}) + \overline{f_{kk}^{[\infty]}({\mathbf{1}})} \frac{{\operatorname{d}}^2}{{\operatorname{d}}t{\operatorname{d}}s}f_{kk}^{[\infty]}({\operatorname{e}}^{s{\mathcal{L}}_j}{\operatorname{e}}^{t{\mathcal{L}}_j}) \restriction_{s=t=0}\\ =& \langle e_i{\otimes}e_k, \bar{{\mathcal{L}}}^\dagger(e_i){\otimes}e_k + e_i{\otimes}\bar{{\mathcal{L}}}(e_k)\rangle\\ &+ \frac{\tau}{|J|} \sum_{j\in J} \langle e_i{\otimes}e_k, ({\mathcal{L}}_j^\dagger)^2(e_i){\otimes}e_k + 2 {\mathcal{L}}_j^\dagger(e_i){\otimes}{\mathcal{L}}_j(e_k) + e_i{\otimes}{\mathcal{L}}_j^2(e_k)\rangle\\ =& \langle e_i{\otimes}e_k, \check{L}^{(1,2)}(e_i{\otimes}e_k) \rangle.\end{aligned}$$ For the other 2-(anti-)linear expressions we obtain similar results but with operators $\check{L}^{(1,1)},\check{L}^{(1,1)\dagger},\check{L}^{(1,2)\dagger}$ instead. 
The remaining terms are treated analogously, by letting $L$ act on the corresponding $m$-(anti-)linear functions, [[e.g.]{} ]{}the 3-(anti-)linear case is obtained writing $$\begin{aligned} {\mathbb{E}}[ (f_{ii}^{[\infty]}+\overline{f_{ii}^{[\infty]}}) f_{kl}^{[\infty]} \overline{f_{kl}^{[\infty]}} \circ \alpha'_t] =& \sum_{n=0}^\infty \frac{t^n}{n!}L^n ((f_{ii}^{[\infty]}+\overline{f_{ii}^{[\infty]}}) f_{kl}^{[\infty]} \overline{f_{kl}^{[\infty]}})\\ =& \sum_{n=0}^\infty \frac{t^n}{n!} \langle e_i{\otimes}e_l{\otimes}e_k, ((\check{L}^{(3,1)})^n+ (\check{L}^{(3,2)})^n)(e_i{\otimes}e_k{\otimes}e_l \rangle.\end{aligned}$$ Putting together all of this and expressing the power series back again as exponential functions, we finally obtain the statement in the proposition: $$\begin{aligned} {\operatorname{Var}}[F_t] =&\frac{1}{d^2}\sum_{i,j,k,l=1}^d {\mathbb{E}}\Big[ \delta_{k,l}\delta_{i,j} - 2 \delta_{i,j}\delta_{k,l} \langle e_l, (\alpha'_t+\alpha'^\dagger_t)(e_k) \rangle\\ &\qquad +2 \delta_{i,j} \langle e_l{\otimes}e_k, (\alpha'_t{\otimes}\alpha'^\dagger_t)(e_k{\otimes}e_l) \rangle\\ &\qquad+ \delta_{i,j}\delta_{k,l} \langle e_i{\otimes}e_k, (\alpha'_t{\otimes}\alpha'_t+\alpha'^\dagger_t{\otimes}\alpha'_t+\alpha'_t{\otimes}\alpha'^\dagger_t +\alpha'^\dagger_t{\otimes}\alpha'^\dagger_t) (e_i{\otimes}e_k) \rangle \\ &\qquad - 2 \delta_{i,j} \langle e_i{\otimes}e_l{\otimes}e_k, ((\alpha'_t+\alpha'^\dagger_t){\otimes}\alpha'_t{\otimes}\alpha'^\dagger_t)(e_i{\otimes}e_k{\otimes}e_l) \rangle \\ &\qquad+ \langle e_j{\otimes}e_i{\otimes}e_l{\otimes}e_k, (\alpha'_t{\otimes}\alpha'^\dagger_t{\otimes}\alpha'_t{\otimes}\alpha'^\dagger_t)(e_i{\otimes}e_j{\otimes}e_k{\otimes}e_l) \rangle\Big]\\ &-\frac{1}{d^2}\Big(\sum_{k,l=1}^d \Big(\delta_{k,l} - 2\delta_{k,l} \langle e_l, ({\operatorname{e}}^{t\hat{L}}+{\operatorname{e}}^{t\hat{L}^\dagger})(e_k) \rangle + \langle e_l\otimes e_k, {\operatorname{e}}^{t\check{L}^{(2)}}(e_k\otimes e_l) \rangle \Big)\Big)^2\\ =& \frac{1}{d^2}\sum_{i,j,k,l=1}^d \Big( \delta_{k,l}\delta_{i,j} - 2 \delta_{i,j}\delta_{k,l} \langle e_l, ({\operatorname{e}}^{t\hat{L}}+{\operatorname{e}}^{t\hat{L}^\dagger})(e_k) \rangle \\ &\qquad+2 \delta_{i,j} \langle e_l{\otimes}e_k, {\operatorname{e}}^{t\check{L}^{(2)}}(e_k{\otimes}e_l) \rangle\\ &\qquad+ \delta_{i,j}\delta_{k,l} \langle e_i{\otimes}e_k, ({\operatorname{e}}^{t\check{L}^{(1,1)}}+{\operatorname{e}}^{t\check{L}^{(1,2)}} +{\operatorname{e}}^{t\check{L}^{(1,2)\dagger}}+{\operatorname{e}}^{t\check{L}^{(1,1)\dagger}}) (e_i{\otimes}e_k) \rangle \\ &\qquad - 2 \delta_{i,j} \langle e_i{\otimes}e_l{\otimes}e_k, ({\operatorname{e}}^{t\check{L}^{(3,1)}}+{\operatorname{e}}^{t\check{L}^{(3,2)}})(e_i{\otimes}e_k{\otimes}e_l) \rangle \\ &\qquad+ \langle e_j{\otimes}e_i{\otimes}e_l{\otimes}e_k, {\operatorname{e}}^{t\check{L}^{(4)}}(e_i{\otimes}e_j{\otimes}e_k{\otimes}e_l) \rangle\Big)\\ &-\frac{1}{d^2}\Big(\sum_{k,l=1}^d \Big(\delta_{k,l} - 2\delta_{k,l} \langle e_l, ({\operatorname{e}}^{t\hat{L}}+{\operatorname{e}}^{t\hat{L}^\dagger})(e_k) \rangle + \langle e_l\otimes e_k, {\operatorname{e}}^{t\check{L}^{(2)}}(e_k\otimes e_l) \rangle \Big)\Big)^2.\end{aligned}$$ For comparison reasons and some applications in [@ABH], we would like to state the analogous formulae for the case of the drift-like continuous-time limit in the sense of Remark \[rem:drift-limit\]. 
Since $L^{({\operatorname{drift}})}$ can be regarded as a special case of $L$ with vanishing ${\mathcal{L}}_j$, the expressions in Proposition \[prop:EFtVarFt\] simplify significantly and we obtain: \[prop:EFtVarFtdrift\] In the setting of Proposition \[prop:EFtVarFt\] but with $L^{({\operatorname{drift}})}$ instead of $L$, we obtain $${\mathbb{E}}^{({\operatorname{drift}})}[F_t] = 1- \frac{1}{d}\sum_{k,l=1}^d |\langle e_l, ({\operatorname{id}}- {\operatorname{e}}^{t\hat{L}^{({\operatorname{drift}})}})(e_k) \rangle |^2$$ and ${\operatorname{Var}}^{({\operatorname{drift}})}[F_t] = 0$. Some readers might find the vanishing variance intuitively expected, given that the limiting procedure corresponds somehow to the classical law of large numbers where convergence is almost surely to the (non-constant but time-dependent) expectation value. The expression for ${\mathbb{E}}^{({\operatorname{drift}})}$ follows immediately from that of ${\mathbb{E}}$ in the preceding proof, specialising to $\hat{L}^{({\operatorname{drift}})}$: since $$\langle e_l{\otimes}e_k, {\operatorname{e}}^{t (\bar{{\mathcal{L}}}{\otimes}{\operatorname{id}}+ {\operatorname{id}}{\otimes}\bar{{\mathcal{L}}}^\dagger)}(e_k{\otimes}e_l)\rangle = \langle e_l {\otimes}{\operatorname{e}}^{t \bar{{\mathcal{L}}}}(e_k), {\operatorname{e}}^{t \bar{{\mathcal{L}}}}(e_k) {\otimes}e_l \rangle,$$ the last line in becomes simply $$\langle e_l{\otimes}({\operatorname{id}}+ {\operatorname{e}}^{t \bar{{\mathcal{L}}}})(e_k), ({\operatorname{id}}+{\operatorname{e}}^{t \bar{{\mathcal{L}}}})(e_k){\otimes}e_l)\rangle = |\langle e_l{\otimes}({\operatorname{id}}+ {\operatorname{e}}^{t \bar{{\mathcal{L}}}})(e_k)|^2.$$ For ${\operatorname{Var}}^{({\operatorname{drift}})}$, we analogously compute: $$\begin{aligned} {\operatorname{Var}}^{({\operatorname{drift}})}[F_t] =& {\mathbb{E}}^{({\operatorname{drift}})}[F_t^2]- {\mathbb{E}}^{({\operatorname{drift}})} [F_t]^2 = {\mathbb{E}}^{({\operatorname{drift}})}[(1-F_t)^2]- {\mathbb{E}}^{({\operatorname{drift}})} [1-F_t]^2\\ =& \frac{1}{d^2}\sum_{i,j,k,l=1}^d {\mathbb{E}}^{({\operatorname{drift}})} [|\langle e_i, ({\operatorname{id}}- \alpha'_t)(e_j) \rangle|^2|\langle e_l, ({\operatorname{id}}- \alpha'_t)(e_k) \rangle|^2] \\ &- \Big( \frac{1}{d}\sum_{k,l=1}^d {\mathbb{E}}^{({\operatorname{drift}})} [|\langle e_l, ({\operatorname{id}}- \alpha'_t)(e_k) \rangle|^2]\Big)^2\\ =& \frac{1}{d^2}\sum_{i,j,k,l=1}^d |\langle e_i, ({\operatorname{id}}- {\operatorname{e}}^{t\hat{L}^{({\operatorname{drift}})}})(e_j) \rangle|^2 |\langle e_l, ({\operatorname{id}}- {\operatorname{e}}^{t\hat{L}^{({\operatorname{drift}})}})(e_k) \rangle|^2 \\ &- \Big(\frac{1}{d}\sum_{k,l=1}^d |\langle e_l, ({\operatorname{id}}- {\operatorname{e}}^{t\hat{L}^{({\operatorname{drift}})}})(e_k) \rangle|^2\Big)^2\\ =& 0.\end{aligned}$$ We return to our former illustrative Example \[ex:1\]. Since ${\mathcal{L}}= {\operatorname{i}}{\operatorname{Ad }}(H)$, we find that ${\mathcal{L}}^\dagger=-{\mathcal{L}}$ and hence $$\hat{L}^\dagger= \hat{L}, \quad \check{L}^{(2)} = \hat{L}^{(2)},$$ as follows immediately from the respective definition in Propositions \[prop:higher-mom\] and \[prop:modulus\]. 
For short times $t$ the results of Proposition \[prop:EFtVarFt\] become: $$\begin{aligned} 1-{\mathbb{E}}[F_t] \approx& -\frac{1}{4^N}\sum_{k=1}^{4^N} 2 t \langle e_k, \hat{L}(e_k)\rangle +\frac{1}{4^N}\sum_{k,l=1}^{4^N} t \langle e_k{\otimes}e_l, \hat{L}^{(2)}(e_l{\otimes}e_k)\rangle \\ =& \frac{2t\tau}{4^N|J|} \sum_{j\in J} \sum_{k,l=1}^{4^N} |\langle e_k, [v_j H v_j, e_l]\rangle|^2\\ =& \frac{2t\tau}{4^N|J|} \sum_{j\in J} \sum_{k=1}^{4^N} \|[v_j H v_j, e_k]\|^2,\end{aligned}$$ which for special cases of $H$ can be further simplified, but in general will be used in this form for a computer and is of order $O(\tau t \|H\|)$. A similar procedure may be applied to the variance. In contrast, in the case of the drift-like limit, we would simply get $${\mathbb{E}}^{({\operatorname{drift}})}[F_t]=1, \quad {\operatorname{Var}}^{({\operatorname{drift}})}[F_t] = 0, \quad t\in{\mathbb{R}}_+,$$ which is obviously less realistic than the diffusion-like limit, but on the other hand confirms that for unitary time-evolution $(\alpha_t)_{t\in{\mathbb{R}}_+}$ dynamical decoupling works ([[i.e.,]{} ]{}decouples) optimally, in contrast to other types of $\alpha$, [[cf.]{} ]{}also Theorem \[lem:deccond-unitary\]! Theorem \[th:cont-limit\] gives us the expectation of our quantities in the continuum limit, but we must ask two questions: - How big is the difference between the continuum limit and the original discrete random paths? - What is the distribution of the actual (continuum-limit) paths around the expectation value? These two error contributions add up to give the total maximal error, which we have to estimate now. Concerning (1), one has to work with a kind of Berry-Esseen theorem [@Fel] on the approximation of random walks by Brownian motion. This is quite complicated, but we content ourselves here with the fact that this error tends to $0$ as $\tau{\rightarrow}0$. Concerning (2), the deviation around the expectation value is expressed in the quantiles, which can be efficiently estimated using Chebyshev’s inequality [@Fel] together with the variance expression. Estimates and application: intrinsic/extrinsic decoherence {#sec:five} ========================================================== Suppose a given quantum system $({\mathcal{H}},{\mathcal{L}})$ undergoes decoherence caused by interaction with an external quantum heat bath described by another quantum system $({\mathcal{H}}_1, {\operatorname{i}}{\operatorname{ad }}(H_1))$. Then according to standard axioms of quantum mechanics, time evolution of the total (closed) system is unitary, thus described by a one-parameter automorphism family on the operators of the total Hilbert space ${\mathcal{H}}'={\mathcal{H}}\otimes{\mathcal{H}}_1$, namely $$t\in{\mathbb{R}}_+ \mapsto {\mathcal{T}}{\operatorname{e}}^{{\operatorname{i}}\int_0^t {\operatorname{ad }}(H'(t')){\operatorname{d}}t'},$$ where $H'$ is the (possibly time-dependent) Hamiltonian of the total system on ${\mathcal{H}}'$ and the time-ordered integral is defined in analogy to Remark \[rem:non-const-evo\]. The heat bath may be infinite-dimensional (but separable), while the involved Hamiltonian $H'$ is henceforth supposed to be uniformly bounded on compact intervals. It is unclear whether dynamical decoupling works without this assumption, and maybe alternative requirements would have to be made in case of unboundedness, [[cf.]{} ]{}[@ABH App.] for further discussion. 
The actual dynamics perceived on the subsystem ${\mathcal{H}}$ is given by $$t\in{\mathbb{R}}_+ \mapsto \alpha_t := {\mathcal{E}}\circ {\mathcal{T}}{\operatorname{e}}^{{\operatorname{i}}\int_0^t {\operatorname{ad }}(H'(t')){\operatorname{d}}t'}(\cdot\otimes\rho^\theta),$$ where ${\mathcal{E}}:B({\mathcal{H}}'){\rightarrow}B({\mathcal{H}})$ is the partial trace (conditional expectation) onto the subsystem and $\rho^\theta$ the initial state of the heat bath [@LB; @pb]. The resulting perceived dynamics $(\alpha_t)_{t\in{\mathbb{R}}_+}$ then becomes a family of CPT maps. Under special assumptions on $H'$, it actually produces the CPT semigroup with infinitesimal generator ${\mathcal{L}}$, the Lindblad operator, but usually $\alpha_t$ is no longer an automorphism. We call this phenomenon, where a CPT semigroup time evolution arises from interaction with an external quantum heat bath and unitary time evolution on the total system, *extrinsic decoherence* because the non-unitarity of time evolution of the original system is caused by interaction with the external heat bath. In contrast to this, *intrinsic decoherence* we call the situation where time evolution of a *closed* system $({\mathcal{H}},{\mathcal{L}})$ is no longer unitary and the non-unitarity is intrinsic to the system, [[i.e.,]{} ]{}does not arise from (unitary) interaction with a heat bath. It is a fundamental question whether this actually occurs in nature or whether the axiom of unitarity is always fulfilled – on a sufficiently large total system. Mathematically the two cases are described in the same way (by CPT semigroups with unitary dilations), and also physically with usual observations they seem to be indistinguishable. However, applying dynamical decoupling in the case of the above type of extrinsic decoherence, the time evolution of the total system is unitary, and so the perceived evolution on the subsystem is given by the discrete stochastic process $$\alpha^{(\tau)}_{n\tau} = {\mathcal{E}}\circ \prod_{i=1}^n {\operatorname{Ad }}(v_{j_i}\otimes{\mathbf{1}}) \circ {\mathcal{T}}{\operatorname{e}}^{{\operatorname{i}}\int_{(i-1)\tau}^{i\tau} {\operatorname{ad }}(H'(t')){\operatorname{d}}t'} \circ {\operatorname{Ad }}(v_{j_i}^*\otimes {\mathbf{1}})(\cdot\otimes\rho^\theta).$$ Now we notice that, if is satisfied for all $x\in B({\mathcal{H}})$, then it is also satisfied for all $x\in B({\mathcal{H}}')$ modulo ${\mathbf{1}}\otimes B({\mathcal{H}}_1)$. In fact, $x\in B({\mathcal{H}}')$ can be written as a finite sum $\sum_k y_k\otimes z_k + {\mathbf{1}}\otimes \tilde{z}$, with certain traceless $y_k\in B({\mathcal{H}})$ and with $z_k,\tilde{z}\in B({\mathcal{H}}_1)$, and then $$\frac{1}{|J|}\sum_{j\in J}\sum_{k} (v_j\otimes {\mathbf{1}})(y_k\otimes z_k)(v_j\otimes {\mathbf{1}})^* +{\mathbf{1}}\otimes \tilde{z} = \frac{1}{|J|}\sum_k \Big(\sum_{j\in J} v_j y_k v_j^*\Big) \otimes z_k + {\mathbf{1}}\otimes \tilde{z} = {\mathbf{1}}\otimes \tilde{z}.$$ Consider now for $x$ the (possibly time-dependent) Hamiltonian $H'=\sum_k H_{0,k}\otimes H_{1,k} + {\mathbf{1}}\otimes H_1$. The heat bath is by definition in a thermal equilibrium state $\rho^\theta$ independent of time, [[i.e.,]{} ]{}${\operatorname{ad }}(H_1)(\rho^\theta)=0$. Let ${\mathcal{L}}':={\operatorname{i}}{\operatorname{ad }}(H')$ be the (purely unitary) Lindbladian of the total system and thus $\bar{{\mathcal{L}}}'={\operatorname{i}}{\operatorname{ad }}({\mathbf{1}}{\otimes}H_1)$, so $\bar{{\mathcal{L}}}(x\otimes\rho^\theta)=0$, for all $x\in A$. 
Then we obtain $$\hat{L}'= \bar{{\mathcal{L}}}' + \frac{\tau}{|J|}\sum_{j\in J} {\operatorname{Ad }}(v_j\otimes{\mathbf{1}}) \circ ({\mathcal{L}}'-\bar{{\mathcal{L}}}')^2 \circ {\operatorname{Ad }}(v_j^*\otimes{\mathbf{1}}),$$ and hence $${\mathbb{E}}[\rho_t] = {\mathcal{E}}\circ {\mathcal{T}}{\operatorname{e}}^{\int_0^t\hat{L}'(t'){\operatorname{d}}t'}(\rho_0\otimes \rho^\theta).$$ The main dynamics comes from $\bar{{\mathcal{L}}}'$, which leaves $\rho\otimes\rho^\theta$ invariant, but ${\mathcal{L}}_j'$ changes it, so that higher-order terms disturb the invariance of the state. We can conclude: if the system dynamics is determined by extrinsic decoherence then the decoupling condition is satisfied in first-order approximation and the total time evolution under decoupling in first-order approximation in $\tau$ and $t$ is described as in the unitary case; ${\mathbf{1}}{\otimes}\rho^\theta$ will in general not be invariant under ${\mathcal{L}}_j'$, but those effects are of higher order in $\tau$. We would like to have an estimate of ${\mathbb{E}}[F_t]$ that depends only on $t$ and the coupling strength, distinguishing between the two extremal cases of purely extrinsic decoherence ([[i.e.,]{} ]{}purely unitary on the dilation: $\Psi'=0$ and $a'=-a'^*$ in the notation of Theorem \[lem:deccond-unitary\]) and purely intrinsic decoherence ( $\Psi'\not=0$ and $a=a^*$). \[th:EFt-bounds\] Given a quantum system $({\mathcal{H}},{\mathcal{L}})$ with decoupling set $V$ and the previous notation, write $\Gamma:=\max\{\|{\mathcal{L}}\|,\|{\mathcal{L}}'\|,\|\bar{{\mathcal{L}}}\|\}$. Then in the drift-like limit of Remark \[rem:drift-limit\], for purely extrinsic decoherence we have $${\mathbb{E}}^{({\operatorname{drift}})}[F_t^{(extr)}] = 1.$$ An approximate upper bound for the expectation of the fidelity in the case of purely intrinsic decoherence, in the limit of $\tau\ll t\ll 1/\Gamma$, is asymptotically given by $${\mathbb{E}}^{({\operatorname{drift}})}[F_t^{(intr)}] \lesssim 1 - \frac{1}{d}t^2 \|\hat{L}^{({\operatorname{drift}})}\|^2$$ If in addition ${\mathcal{L}}={\mathcal{L}}^\dagger$ (so-called purely intrinsic dephasing) this can be made more precise: $${\mathbb{E}}^{({\operatorname{drift}})}[F_t^{(intr)}] \le 1 - \frac{1}{d}(1- {\operatorname{e}}^{-t \|{\mathcal{L}}\|/|J|})^2.$$ In the (physically more realistic) diffusion-like limit of Theorem \[th:cont-limit\], a lower bound for the fidelity of purely extrinsic decoherence is asymptotically given by $${\mathbb{E}}[F_t^{(extr)}] \gtrsim 1 -\frac{2d}{|J|} \tau \int_0^t\|{\mathcal{L}}'_0(t')\|^2{\operatorname{d}}t',$$ in terms of the (possibly time-dependent) Lindbladian ${\mathcal{L}}'$ on the dilated system, while an upper bound in the case of purely intrinsic decoherence is asymptotically given by $${\mathbb{E}}[F_t^{(intr)}] \lesssim 1- \frac{2}{d\, |J|} \tau t \|{\mathcal{L}}-\bar{{\mathcal{L}}}\|^2 - \frac{1}{d} t^2 \|\bar{{\mathcal{L}}}\|^2.$$ We appeal to Proposition \[prop:EFtVarFt\] for notation and the exact formulae underlying our estimates here. For the case of pure dephasing ([[i.e.,]{} ]{}${\mathcal{L}}^\dagger={\mathcal{L}}$), we first notice that $\hat{L}^{({\operatorname{drift}})}$, being a sum of double commutators, is a selfadjoint operator on the Hilbert space $A$. Moreover, it must be negative since ${\operatorname{e}}^{t\hat{L}^{({\operatorname{drift}})}}$ is a contraction. 
We may suppose that the orthonormal basis $(e_k)_{k=1\ldots d}$ consists of the eigenvectors of $\hat{L}^{({\operatorname{drift}})}$ in decreasing order of eigenvalues. The smallest eigenvalue of $\hat{L}^{({\operatorname{drift}})}$ is less than $-\|{\mathcal{L}}\|/|J|$: in fact, the smallest eigenvalue of ${\mathcal{L}}$ is $-\|{\mathcal{L}}\|$, and $\hat{L}^{({\operatorname{drift}})} \le \frac{1}{|J|}{\mathcal{L}}$. Then we find $$\begin{aligned} 1-{\mathbb{E}}^{({\operatorname{drift}})}[F_t] =& \frac{1}{d}\sum_{k,l=1}^d \langle e_l, ({\operatorname{id}}- {\operatorname{e}}^{t\hat{L}^{({\operatorname{drift}})}})(e_k) \rangle^2\\ \ge& \frac{1}{d} \langle e_1, ({\operatorname{id}}- {\operatorname{e}}^{t\hat{L}^{({\operatorname{drift}})}})(e_1) \rangle^2 = \frac{1}{d} (1- {\operatorname{e}}^{-t\|{\mathcal{L}}\|/|J|})^2.\end{aligned}$$ For general ${\mathcal{L}}$, we can say at least $$\begin{aligned} 1-{\mathbb{E}}^{({\operatorname{drift}})}[F_t] =& \frac{1}{d}\sum_{k=1}^d \langle ({\operatorname{id}}- {\operatorname{e}}^{t\hat{L}^{({\operatorname{drift}})}})(e_k) , ({\operatorname{id}}- {\operatorname{e}}^{t\hat{L}^{({\operatorname{drift}})}})(e_k) \rangle\\ \ge& \frac{1}{d} \| {\operatorname{id}}- {\operatorname{e}}^{t\hat{L}^{({\operatorname{drift}})}} \|^2 \approx \frac{1}{d} t^2 \|\hat{L}^{({\operatorname{drift}})}\|^2.\end{aligned}$$ Let us come to the actual diffusion-like limit. Using the power series of the exponential function and neglecting higher order terms, we obtain $$\begin{aligned} 1- {\mathbb{E}}[F_t] =& \frac{1}{d}\sum_{k,l=1}^d \Big( \delta_{k,l} -\delta_{k,l} \langle e_l, ({\operatorname{e}}^{t\hat{L}} +{\operatorname{e}}^{t\hat{L}^\dagger})(e_k) \rangle + \langle e_l\otimes e_k, {\operatorname{e}}^{t\check{L}^{(2)}}(e_k\otimes e_l) \rangle \Big)\\ \approx& t\Big( \frac{\tau}{d\, |J|} \sum_{k,l=1}^d \sum_{j\in J} \langle e_l\otimes e_k, ({\mathcal{L}}_j{\otimes}{\mathcal{L}}_j^\dagger + {\mathcal{L}}_j^\dagger{\otimes}{\mathcal{L}}_j)(e_k\otimes e_l) \rangle\Big)\\ &+t^2 \Big( \frac{1}{d}\sum_{k,l}^d \langle e_l\otimes e_k, (\bar{{\mathcal{L}}}{\otimes}\bar{{\mathcal{L}}}^\dagger)(e_k\otimes e_l) \rangle \Big)\\ =& t\Big( \frac{\tau}{d\, |J|} \sum_{j\in J} 2 {\operatorname{tr }}_{A{\otimes}A} ({\mathcal{L}}_j{\otimes}{\mathcal{L}}_j^\dagger \circ \phi)\Big)\\ &+t^2 \Big( \frac{1}{d} {\operatorname{tr }}_{A{\otimes}A} ( \bar{{\mathcal{L}}}{\otimes}\bar{{\mathcal{L}}}^\dagger \circ\phi)\Big)\\ \geq & \frac{2}{d\, |J|} \tau t \|{\mathcal{L}}_j\|^2 + \frac{1}{d} t^2 \|\bar{{\mathcal{L}}}\|^2, \end{aligned}$$ where $j\in J$ is arbitrary, $\phi$ denotes the flip unitary on $A{\otimes}A$, and ${\otimes}_{A{\otimes}A}$ the standard (non-normalised) trace on $B(A{\otimes}A)$. For the case of extrinsic decoherence, we have to work in the dilation algebra $A{\otimes}B({\mathcal{H}}_1)$. Let ${\mathcal{L}}'$ be the corresponding (possibly time-dependent) Lindbladian on that algebra, corresponding to unitary time evolution. Then after decoupling we obtain $$\hat{L}' = \bar{{\mathcal{L}}}' + \frac{\tau}{|J|} \sum_{j\in J} ({\mathcal{L}}'_j)^2, \quad {\mathcal{L}}'_j:= {\operatorname{Ad }}(v_j{\otimes}{\mathbf{1}})\circ ({\mathcal{L}}'-\bar{{\mathcal{L}}}') \circ {\operatorname{Ad }}(v_j^*{\otimes}{\mathbf{1}}),$$ with the commutator $\bar{{\mathcal{L}}}'$ vanishing on $A{\otimes}\rho^\theta$. 
As in the preceding case, we find $$\begin{aligned} 1- {\mathbb{E}}[F_t]=& \frac{1}{d}\sum_{k,l=1}^d {\mathcal{T}}\Big(\delta_{k,l} -\delta_{k,l} \langle e_l{\otimes}{\mathbf{1}}, ({\operatorname{e}}^{\int_0^t \hat{L}'(t'){\operatorname{d}}t'} +{\operatorname{e}}^{\int_0^t \hat{L}'^\dagger (t'){\operatorname{d}}t'})(e_k{\otimes}\rho^\theta) \rangle \\ &\quad + \langle e_l{\otimes}{\mathbf{1}}{\otimes}e_k{\otimes}{\mathbf{1}}, {\operatorname{e}}^{\int_0^t\check{L}'^{(2)}(t'){\operatorname{d}}t'}(e_k{\otimes}\rho^\theta{\otimes}e_l{\otimes}\rho^\theta) \rangle \Big)\\ \approx& \frac{\tau}{d\, |J|} \sum_{k,l=1}^d \sum_{j\in J} \Big\langle e_l{\otimes}{\mathbf{1}}{\otimes}e_k{\otimes}{\mathbf{1}}, \int_0^t({\mathcal{L}}_j'{\otimes}{\mathcal{L}}_j'^\dagger + {\mathcal{L}}_j'^\dagger{\otimes}{\mathcal{L}}_j')(t'){\operatorname{d}}t' (e_k{\otimes}\rho^\theta{\otimes}e_l{\otimes}\rho^\theta) \Big\rangle\\ =& \frac{\tau}{d\, |J|} \sum_{j\in J} 2 {\operatorname{tr }}_{A{\otimes}A{\otimes}B({\mathcal{H}}_1) {\otimes}B({\mathcal{H}}_1)} \Big( \int_0^t({\mathcal{L}}_j'{\otimes}{\mathcal{L}}_j'^\dagger)(t'){\operatorname{d}}t' \circ \phi^{{\otimes}2} ({\mathbf{1}}{\otimes}\rho^\theta{\otimes}{\mathbf{1}}{\otimes}\rho^\theta)\Big)\\ \leq & \, 2d \tau \int_0^t\|{\mathcal{L}}'_0(t')\|^2{\operatorname{d}}t', \end{aligned}$$ because ${\operatorname{tr }}_{A{\otimes}B({\mathcal{H}}_1)}({\mathbf{1}}{\otimes}\rho^\theta)=d$ and $\bar{{\mathcal{L}}}'(x\otimes\rho^\theta)=0$ for all $x\in A$. Notice that these bounds are probably not sharp at all, but they should rather serve as an inspiration and starting point for finding more specific and sharper bounds. The interesting fact, in any case, is that they separate the fidelity of intrinsic and extrinsic decoherence dynamics, respectively, in the region $\tau\ll t\ll 1/\Gamma$, [[cf.]{} ]{}Figure \[fig1\]. Moreover, under further assumptions on ${\mathcal{L}}$ like [[e.g.]{} ]{}${\mathcal{L}}^\dagger={\mathcal{L}}$ we can try to use a similar procedure in order to achieve better bounds involving directly $\|{\mathcal{L}}\|$. ![Numerical evaluation of average $\overline{F_t^{ext}}$ (dotted blue line) coinciding with the lower bound for ${\mathbb{E}}^{({\operatorname{drift}})}[F_t^{ext}]\equiv 1$, average $\overline{F_t^{int}}$ (solid orange line), and the upper bound for ${\mathbb{E}}^{({\operatorname{drift}})}[F_t^{int}]$ from Theorem \[th:EFt-bounds\] (dashed red line) as a function of $t$, for $\Gamma\tau=10^{-3}$ and the so-called amplitude-damping channel model with coupling strength $\gamma$ ([[cf.]{} ]{}Example \[ex:2\](2) and [@ABH] for further explanation). The average was taken over 25 paths, one of them illustrated for $F_t^{ext}$ in the inset plot.[]{data-label="fig1"}](fig1.pdf){width="10cm"} If $\tau$ is sufficiently small, then for suitable $t$ the last two bounds in Theorem \[th:EFt-bounds\] provide a separation into two disjoint ranges of the fidelity in the two intrinsic interaction cases, which allows the experimenter to identify the type of decoherence. He would have to proceed as follows: - Given the intrinsic or extrinsic coupling strength $\Gamma:=\max\{\|{\mathcal{L}}\|,\|{\mathcal{L}}'\|,\|\bar{{\mathcal{L}}}\|\}$, choose $t\ll 1/\Gamma$. - Choose and vary $\tau\ll t$ in that range. - Compute the fidelity of many decoupling pulse sample paths for these given values of $\Gamma$ and varying $\tau,t$, then average and extrapolate to get his averaged fidelity $\bar{F}_t$ as a function of $\tau$ and $t$. 
- Compare it with the bounds for these given values of $\Gamma,\tau$: then in the above limit, he will find either $$\bar{F}_t \gtrsim 1 - 2d \tau \int_0^t\|{\mathcal{L}}'_0(t')\|^2{\operatorname{d}}t' \quad \mbox{or} \quad \bar{F}_t \lesssim 1- \frac{2}{d\, |J|} \tau t \|{\mathcal{L}}-\bar{{\mathcal{L}}}\|^2 - \frac{1}{d} t^2 \|\bar{{\mathcal{L}}}\|^2,\quad t\in{\mathbb{R}}_+.$$ the first case corresponding to extrinsic, the second to intrinsic dephasing. If he cannot carry out many runs, then it would also be necessary to take into account the quantiles from above in order to understand how well his experimental mean value $\bar{F}_t$ describes the analytical ${\mathbb{E}}[F_t]$. This can be done by considering higher moments ${\mathbb{E}}[F_t^n]$ and ${\operatorname{Var}}[F_t]$ as in Proposition \[prop:EFtVarFt\] For an arbitrarily large number of runs, however, this is not necessary. Figure \[fig1\] illustrates the bounds with an average of concrete sample paths. Although the precise relation between $\|{\mathcal{L}}\|, \|{\mathcal{L}}_0\|, \|\bar{{\mathcal{L}}}\|, \|{\mathcal{L}}'\|$ etc. is not clear and it is therefore difficult to compare the above bounds quantitatively, these values depend somehow monotonically on one another, [[i.e.,]{} ]{}increase or decrease synchronically. In any case, when $\tau{\rightarrow}0$ then the higher-order terms can be neglected and the difference between discrete random walk and continuum-limit tends to $0$. At this point boundedness of $H'$ is needed; otherwise alternative assumptions would have to be made that lead to future work, [[cf.]{} ]{}also [@ABH]. Moreover, it follows from the above bounds then that $\bar{F}_t{\rightarrow}1$ in the extrinsic case, whereas $\bar{F}_t$ converges to some function $f(t)\lesssim 1-t^2\|\bar{{\mathcal{L}}}\|^2/d$ in the intrinsic case. If extrinsic and intrinsic decoherence appear together: since the respective coupling strengths will not be known, it is impossible to compute the above bounds; yet, for fixed $t$, letting $\tau{\rightarrow}0$, the experimenter can check whether or not $\bar{F}_t$ goes to $1$, meaning pure extrinsic or intrinsic/mixed decoherence, respectively. [$\Box$]{} Lie groups and convolution semigroups {#sec:app} ===================================== The aim of this appendix section is to sketch the necessary definitions and facts about Lie groups and convolution semigroups necessary to understand the third and fourth section. For a comprehensive study and the notation used in Section \[sec:three\] we refer to any suitable textbook: [[e.g.]{} ]{}[@FH] for (linear) Lie groups and algebras, [@da; @EN] for one-parameter semigroups, [@gr; @he Ch.4] for probability and convolution measures on Lie groups. An *$N$-dimensional (real) Lie group* $G$ is an $N$-dimensional smooth manifold which has a group structure with neutral element $\mathbf{1}$ such that multiplication and inversion are smooth maps. In this paper $G$ is always a *linear algebraic Lie group*, [[i.e.,]{} ]{}a group of linear mappings on a finite-dimensional real vector space. The tangent space $T_\mathbf{1} G$ of $G$ in $\mathbf{1}$ forms a Lie algebra and is called the *Lie algebra of $G$*, denoted $\mathfrak{g}$ with scalar product (nondegenerate bilinear form) $\langle\cdot,\cdot\rangle_\mathfrak{g}$. 
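For orientation, a minimal illustration (added here, not drawn from the cited references): take the linear group $G=SU(2)\subset GL(2,\mathbb{C})$, viewed as a real Lie group with $N=3$. Its Lie algebra $\mathfrak{g}=\mathfrak{su}(2)$ consists of the traceless anti-Hermitian $2\times 2$ matrices and is spanned by $X_k=\mathrm{i}\sigma_k$, $k=1,2,3$, with $\sigma_k$ the Pauli matrices. With the scalar product $\langle X,Y\rangle_\mathfrak{g}:=-\tfrac{1}{2}\mathrm{tr}(XY)$ this basis is orthonormal, since $-\tfrac{1}{2}\mathrm{tr}\big((\mathrm{i}\sigma_k)(\mathrm{i}\sigma_l)\big)=\tfrac{1}{2}\mathrm{tr}(\sigma_k\sigma_l)=\delta_{kl}$.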
There is a canonical diffeomorphism $\exp$ from a $0$-neighbourhood in $\mathfrak{g}$ to some $\mathbf{1}$-neighbourhood $U\subset G$ mapping $0$ to $\mathbf{1}$ and called the exponential map, which in the present case can be identified with the standard exponential function of matrices. Given an orthonormal basis $(X_k)_{k=1\ldots N}$ of $\mathfrak{g}$, the corresponding *(coordinate) $\mathbf{1}$-chart* is the smooth function $x: U{\rightarrow}\mathbb{R}^N$ such that $$g= \exp \Big( \sum_{k=1}^N x_k(g) X_k \Big), \quad g\in U,$$ and $x_k:U{\rightarrow}\mathbb{R}$ is called the *$k$-th coordinate map*. One may extend the functions $x_k\in C^\infty(U)$ to functions in $C^\infty_c(G)$ denoted again by $x_k$; write $G_c$ for the one-point compactification of $G$ if $G$ is noncompact, otherwise we take $G_c=G$, and every function $f\in C_c(G)$ is extended by $f(\infty):=0$ to $G_c$; this is needed in section \[sec:three\] for technical reasons. One notes that $$\frac{{\operatorname{d}}}{{\operatorname{d}}t} x_k({\operatorname{e}}^{t Y}) \restriction_{t=0} = \langle Y, X_k\rangle_\mathfrak{g},$$ for every $k=1,\ldots,N$ and $Y\in\mathfrak{g}$. The directional derivative $D_Y$ for $Y\in\mathfrak{g}$ is defined by $$D_Y f(g) := \frac{{\operatorname{d}}}{{\operatorname{d}}t} f({\operatorname{e}}^{t Y}g) \restriction_{t=0}, \quad f\in C^1_c(G), \, g\in G,$$ and one has $D_{X+\lambda Y} = D_X + \lambda D_Y$, for $X,Y\in\mathfrak{g}$ and $\lambda\in\mathbb{R}$. The [*convolution*]{} of two probability measures $\mu_1,\mu_2$ on $G$ (with usual Borel $\sigma$-algebra $\mathfrak{B}(G)$) is defined by $$\mu_1*\mu_2 (A) := (\mu_1\times\mu_2)\{(g,h)\in G\times G : gh \in A\}, \quad A\in \mathfrak{B}(G).$$ Suppose that $\mu_1$ and $\mu_2$ are supported in a subsemigroup $H\subset G$. Then for every $A\in \mathfrak{B}(G)$, we have $$\begin{aligned} \mu_1*\mu_2 (A) =& (\mu_1\times\mu_2)\{(g,h)\in G\times G : gh \in A\}\\ =& (\mu_1\times\mu_2)\{(g,h)\in H\times H : gh \in A\}\\ =& (\mu_1\times\mu_2)\{(g,h)\in H\times H : gh \in A\cap H\}\\ =&\mu_1*\mu_2 (A\cap H),\end{aligned}$$ so $\mu_1*\mu_2$ is supported in $H$, too. Measures and convolution on $G$ can be trivially extended to $G_c$ by setting $g\infty:=\infty g := \infty$, for all $g\in G_c$. The set of probability measures on $G_c$, equipped with the \*-weak topology and convolution as multiplication, constitutes a topological monoid, where the Dirac measure $\delta _{\mathbf{1}}$ serves as the neutral element. Here the [*-weak topology*]{} on $G_c$ is defined as follows: a net of measures $(\mu_i)_{i \in I}$ converges to a limit measure $\mu$ if for all $f \in C(G_c)$ (the continuous ${\mathbb{R}}$-valued functions on $G_c$) the condition $\int_{G_c} f {\operatorname{d}}\mu_i {\rightarrow}\int_{G_c} f {\operatorname{d}}\mu$ holds. A [*continuous convolution semigroup of probability measures on $G$*]{} is a set $(\mu_t)_{t \in {\mathbb{R}}_+}$ of probability measures on $G$ (trivially extended to $G_c$) such that $\mu_s*\mu_t =\mu_{s+t}$ for every $s,\ t \in {\mathbb{R}}_+$ and $\lim_{t{\rightarrow}0}\mu_t=\mu_0=\delta _{{\mathbf{1}}}$ \*-weakly. Let $(\mu _t)_{t\in{\mathbb{R}}_+}$ be a continuous convolution semigroup of probability measures on $G$. For $t\in {\mathbb{R}}_+$ define the operator $$T_t: C(G_c) {\rightarrow}C(G_c), \; (T_t f) (g) := \int_{G_c} f(gh) d\mu _t(h),\quad g\in G_c.$$ Then $(T_t)_{t \in {\mathbb{R}}_+}$ forms a [*strongly continuous one-parameter contraction semigroup on $C(G_c)$*]{}. 
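A standard example, included only for orientation: let $G=({\mathbb{R}},+)$, realised as a linear group via $t\mapsto\left(\begin{smallmatrix}1&t\\0&1\end{smallmatrix}\right)$, and let $\mu_t$ be the centred Gaussian measure of variance $t$ (with $\mu_0=\delta_0$). Since sums of independent Gaussians are Gaussian with added variances, $\mu_s*\mu_t=\mu_{s+t}$, and $\mu_t{\rightarrow}\delta_0$ \*-weakly as $t{\rightarrow}0$, so $(\mu_t)_{t\in{\mathbb{R}}_+}$ is a continuous convolution semigroup. The associated operators $(T_tf)(g)=\int f(g+h)\,{\operatorname{d}}\mu_t(h)$ form the classical heat semigroup, whose infinitesimal generator (introduced next) is $L=\tfrac{1}{2}\frac{{\operatorname{d}}^2}{{\operatorname{d}}g^2}$ on a suitable domain, the simplest instance of the Gaussian generators appearing at the end of this appendix.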
To $(T_t)_{t\in {\mathbb{R}}_+}$ there corresponds an [*infinitesimal generator*]{} $$L := \lim_{t {\rightarrow}0} \frac{T_t -{\operatorname{id}}}{t}$$ (in the strong operator topology) on a suitable $T_t$-invariant dense domain ${\operatorname{dom}}(L)\subset C(G_c)$. Given a strongly continuous one-parameter semigroup $(T_t)_{t\in{\mathbb{R}}_+}$ with generator $(L,{\operatorname{dom}}(L))$ on a Banach space $E$, we write ${C^\infty}(L):=\bigcap_{n\in{\mathbb{N}}} {\operatorname{dom}}(L^n)$. A vector $f\in{C^\infty}(L)$ is called *entire analytic for $L$* if $$z\in {\mathbb{C}}\mapsto \sum_{n=0}^\infty \frac{z^n}{n!} L^n f\in {C^\infty}(L)$$ is analytic, in which case it extends $t\in{\mathbb{R}}_+\mapsto T_t f\in E$ to an entire analytic function. Nonzero analytic vectors need not exist for one-parameter semigroups, whereas for one-parameter groups they do. If $(T_t)_{t\in{\mathbb{R}}_+}$ is a strongly continuous one-parameter semigroup on a Banach space $E$ with generator $(L,{\operatorname{dom}}(L))$ which leaves invariant a closed subspace $E_0\subset E$, then it induces a strongly continuous one-parameter semigroup $(S_t)_{t\in{\mathbb{R}}_+}$ on the quotient Banach space $E/E_0$ with infinitesimal generator $(K,{\operatorname{dom}}(K))$ as follows: denote the quotient map by $q:E{\rightarrow}E/E_0$, then $$S_t q(f) := q(T_t f), \quad f\in E,$$ and $K q(f) = q(L f)$ with dense ${\operatorname{dom}}(K) = q({\operatorname{dom}}(L))\subset E/E_0$. Given a continuous convolution semigroup of probability measures $(\mu_t)_{t \in {\mathbb{R}}_+}$ on $G$, there exists a probability space and a $G$-valued Markov process on this space such that its transition probabilities from $(g,0)\in G_c \times {\mathbb{R}}_+$ to $(A,t) \in \mathfrak{B} (G_c)\times {\mathbb{R}}_+$ are given by $(\mu_t * \delta _{g})(A)$. The most interesting processes on $G$ are the ones we encounter in Section \[sec:three\], the so-called [*Gaussian processes*]{}, whose contraction semigroups have generators of the form $$L=\sum_{k=1}^N a_k D_{X_k} +\sum_{k,l=1}^N a_{kl} D_{X_k} D_{X_l},$$ with $a_k\in {\mathbb{R}}$ and $(a_{kl})_{k,l=1...N}$ forms a positive-definite matrix, and with ${\operatorname{dom}}(L)=C^2(G_c)$. [ABH14]{} S. L. Adler: [*Quantum theory as an emergent phenomenon*]{}, Cambridge University Press (2004) C. Arenz, R. Hillier, D. Burgarth: Distinguishing decoherence from alternative quantum theories by dynamical decoupling, [*arXiv*]{}:1405.7644v3 \[quant-ph\] (2014) H. Breuer, F. Petruccione: [*The theory of open quantum systems*]{}, Oxford University Press (2002) E. Christensen, D. E. Evans: Cohomology of operator algebras and quantum dynamical semigroups, [*J. London Math. Soc.*]{} [**2**]{}, 2, 358-368 (1979) E. B. Davies: [*One-parameter semigroups*]{}, Academic Press (1980) K. J. Engel, R. Nagel: [*One-parameter semigroups for linear evolution equations*]{}, Springer (2000) W. Fulton, J. Harris: [*Representation theory. A first course*]{}, Springer (1991) U. Grenander: [*Probabilities on algebraic structures*]{}, Courier Dover Publications (2008) H. Heyer: [*Probability measures on locally compact groups*]{}, Springer (1979) O. Kern, G. Alber, D. L. Shepelyansky: Quantum error correction of coherent errors by randomization, [*Eur. Phys. J. D.*]{} [**22**]{}, 153 (2005) D. A. Lidar, T. A. Brun: [*Quantum Error Correction*]{}, Cambridge University Press (2013) L. Santos, L. Viola: Dynamical control of qubit coherence: Random versus deterministic schemes, [*Phys. Rev. 
A*]{} [**72**]{}, 062303 (2005) A. N. Shiryaev: [*Probability*]{}, Springer (1996) L. Viola, E. Knill and S. Lloyd: Dynamical decoupling of open quantum systems, [*Phys. Rev. Lett.*]{} [**82**]{}, 2417 (1999) L. Viola, E. Knill: Random decoupling schemes for quantum dynamical control and error suppression, [*Phys. Rev. Lett.*]{} [**94**]{}, 060502 (2005) D. Wehn: Probability on Lie groups, [*Proc. Nat. Acad. Sci. USA*]{} [**48**]{}, 791-795 (1962) M. M. Wolf: Quantum channels and operations. Guided Tour. (Lecture notes on [http://www-m5.ma.tum.de/Allgemeines/MichaelWolfEn]{}) (2011)
--- abstract: 'We investigate the Generalized Parton Distributions (GPDs) in impact parameter space using the explicit light front wave functions (LFWFs) for the two-particle Fock state of the electron in QED. The Fourier transform (FT) of the GPDs gives the distribution of quarks in the transverse plane for zero longitudinal momentum transfer ($\xi=0$). We study the relationship of the spin flip GPD $E(x,0,-\vec{\Delta}_\perp^2)$ with the distortion of unpolarized quark distribution in the transverse plane when the target nucleon is transversely polarized and also determine the sign of distortion from the sign of anomalous magnetic moment. To verify the sign of distortion, we also compute it directly from the LFWFs by performing a FT in position space coordinate $\vec{f}_\perp$. The explicit relation between the deformation in the two spaces can also be obtained using the convolution integrals. To show the relation of the model LFWFs to a realistic model of nucleon physics, we have designed a specific weight function of our model LFWFs and integrated it over the mass parameter. Also we have simulated the form factor of the nucleon in the AdS/QCD holographic LFWFs model and studied the power-law behaviour at short distances.' address: | Department of Physics\ Dr. B. R. Ambedkar National Institute of Technology\ Jalandhar-144011, India author: - Narinder Kumar and Harleen Dahiya title: Transverse distortion of a relativistic composite system in impact parameter space --- Introduction ============ Deep virtual compton scattering (DVCS) [@dvcs; @dvcs1; @dvcs2; @dvcs3] is the main process to probe the internal structure of hadrons. Recently, the Generalized Parton Distributions (GPDs) [@gpds; @gpds1; @gpds2; @gpds3; @gpds4; @miller1; @miller2; @miller3; @miller4; @miller5; @pire] have attracted a considerable amount of interest towards this. GPDs allow us to access partonic configurations not only with a given longitudinal momentum fraction but also at a specific (transverse) location inside the hadron. GPDs can be related to the angular momentum carried by quarks in the nucleon and the distribution of quarks can be described in the longitudinal direction as well as in the impact parameter space [@longit; @imp2; @imp1; @imp3; @imp4; @imp0]. When integrated over $x$ the GPDs reduce to the form factors which are the non-forward matrix element of the current operator and they describe how the forwards matrix element (charge) is distributed in position space. The GPDs are the off-forward matrix elements and it is well known that they reduce to Parton Distribution Functions (PDFs) in the forward limit. On the other hand, Fourier transform (FT) of GPDs w.r.t. transverse momentum transfer gives the distribution of partons in transverse position space [@imp2; @imp1]. Therefore, their should be some connection between transverse position of partons and FT of GPDs w.r.t. transverse momentum transfer. With the help of impact parameter dependent parton distribution function (ipdpdf) one can obtain the transverse position of partons in the transverse plane. However, it is not possible to measure the longitudinal position of partons. In order to measure the transverse position with the longitudinal momentum simultaneously we can consider the polarized nucleon state in the transverse direction which leads to distorted unpolarized ipdpdf in the tranverse plane [@ipd_buk1; @ipd_buk2; @ipd11; @ipd12]. 
Distortion obtained in the transverse plane also leads to single spin asymmetries (SSA) and it has been shown that such asymmetries can be explained by final state interactions (FSI) [@ssa; @ssa1; @fsi]. This mechanism gives us a good interpretation of SSAs which arises from the asymmetry (left-right) of quarks distribution in impact parameter space. To study the GPDs, we use light front wave functions (LFWFs) which give a very simple representation of GPDs. Impact parameter dependent parton distribution functions have been investigated by using the explicit LFWFs for the two-particle Fock state of the electron in QED [@model1; @kumar1; @kumar2; @kumar3]. In the present study we use the model consisting of spin-$\frac{1}{2}$ system as a composite of spin-$\frac{1}{2}$ fermion and spin-1 vector boson. We have generalized the framework of QED by assigning a mass $M$ to external electrons in the Compton scattering process, but a different mass $m$ to the internal electron line and a mass $\lambda$ to the internal photon line. The idea behind this is to model the structure of a composite fermion state with a mass $M$ by a fermion and a vector constituent with respective masses $m$ and $\lambda$ [@model; @model1]. In our case we take $\xi=0$ [@ipd2; @ipd3] which represents the momentum transfer exclusively in transverse direction leading to the study of ipdpdfs in transverse impact parameter space. In order to show the relation of the LFWFs in the two-particle Fock state of the electron in QED to a realistic model of nucleon physics, we have designed a specific weight function of our model LFWFs and integrated it over the mass parameter. The Dirac and Pauli form factors have been simulated to obtain the correct perturbative QCD fall-off of the form factors at large $q^2$. Also we have simulated the form factor of the nucleon in the AdS/QCD holographic LFWFs model [@ads_qcd; @ads_elec_gravit; @ads_gravit] and studied the power-law behaviour of wavefunction at short distances. For the case of spin flip GPD $E(x,0,-\vec{\Delta}_\perp^2)$, the parton distribution is distorted in the transverse plane when the target has a transverse polarization and when integrated over $x$, $E(x,0,-\vec{\Delta}_\perp^2)$ yields the Pauli form factor $F_2(t)$. The study of the Fourier transformed GPD $\mathcal{E}(x,b_\perp)$ is important for a transversely polarized target since it measures the distortion of the parton distribution in the transverse plane. Integrating ipdpdf $\mathcal{E}(x,b_\perp)$ over $b_\perp$ and $x$ gives us the magnetic moment. The sign of distortion can be concluded from the sign of the magnetic moment of the nucleon. We extend the calculations to determine this sign of distortion from the unintegrated momentum space distribution obtained directly from the LFWFs which can be obtained after performing a FT to relative position space coordinate $\vec{f}_\perp$. This is the another direct way to determine the sign of distortion from the LFWFs. The explicit relation between the deformation calculated from GPDs in the impact parameter space and the deformation calculated directly from the LFWFs can also be obtained. Generalized Parton Distributions (GPDs) ======================================== The GPDs $H,E$ are defined through matrix elements of the bilinear vector currents on the light cone [@gpds2; @gpds5; @imp1]: && e\^[i x P\^+ y\^-/2]{} P’||(0) \^+ (y)| P|\_[y\^+=0,y\_=0]{}\ &=& |[U]{}(P’)\[H(x,,t) \^+ +E(x,,t) \^[+ ]{}(\_)\] U(P). 
\[e1\] Here, $\bar{P}=\frac{1}{2}(P'+P)$ is the average momentum of the initial and final hadron and $\xi$ is the skewness parameter. Since we are considering the case where momentum transfer is purely transverse, we take the skewness parameter $\xi=0$ and in that case $t= - \vec{\Delta}_\perp^2$ is the invariant momentum transfer. The off-forward matrix elements can be expressed as overlaps of the light front wave functions (LFWFs) for the two-particle Fock state of the electron in QED. We consider here a spin-$\frac{1}{2}$ system as a composite of spin-$\frac{1}{2}$ fermion and spin-1 vector boson. The details of the model have been presented in Ref. [@model1], however, for the sake of completeness we present here the essential two-particle wave functions for spin up and spin down electron expressed as $$\begin{aligned} &&\psi_{+\frac{1}{2}+1}^{\uparrow}(x,\vec{k}_\perp)=-\sqrt{2}\frac{-k^1+i k^2}{x(1-x)}\varphi,\nonumber\\ && \psi_{+\frac{1}{2}-1}^{\uparrow}(x,\vec{k}_\perp)=-\sqrt{2}\frac{k^1+ ik^2}{(1-x)}\varphi,\nonumber\\ && \psi_{-\frac{1}{2}+1}^{\uparrow}(x,\vec{k}_\perp)=-\sqrt{2}\left(M-\frac{m}{x}\right)\varphi,\nonumber\\ &&\psi_{-\frac{1}{2}-1}^{\uparrow}(x,\vec{k}_\perp)=0 \,, \label{spinup}\end{aligned}$$ and $$\begin{aligned} &&\psi_{+\frac{1}{2}+1}^{\downarrow}(x,\vec{k}_\perp)=0,\nonumber\\&& \psi_{+\frac{1}{2}-1}^{\downarrow}(x,\vec{k}_\perp)=-\sqrt{2}\left(M-\frac{m}{x}\right)\varphi,\nonumber\\ &&\psi_{-\frac{1}{2}+1}^{\downarrow}(x,\vec{k}_\perp)=-\sqrt{2}\frac{-k^1+i k^2}{(1-x)}\varphi,\nonumber\\ &&\psi_{-\frac{1}{2}-1}^{\downarrow}(x,\vec{k}_\perp)=-\sqrt{2}\frac{k^1+i k^2}{x(1-x)}\varphi \,, \label{spindown}\end{aligned}$$ where $$\begin{aligned} \varphi(x, \vec{k}_{\perp}) =\frac{e}{\sqrt {1-x}} \frac{1}{M^2-\frac{\vec{k}^2_{\perp}+m^2}{x}-\frac{\vec{k}_{\perp}^{2}+\lambda^2}{1-x}}\,.\end{aligned}$$ The framework of QED has been generalized by assigning a mass $M$ to external electrons in the Compton scattering process, but a different mass $m$ to the internal electron line and a mass $\lambda$ to the internal photon line. Using the above wavefunctions, the helicity non-flip and flip GPDs can be expressed as H(x,0,-\_\^2)&=& , \[h2\] E(x,0,-\_\^2)= . \[e2\] Using eqs. (\[spinup\]) and (\[spindown\]) as well as the relation $\vec{k}'_\perp=\vec{k}_\perp-(1-x)\vec{\Delta}_\perp$, we get E(x,0,-\_\^2)=- 2 M (M-)x\^2(1-x) I\_1 , \[e3\] where I\_1= \_[0]{}\^[1]{} , \[e4\] and D= (1-) (1-x)\^2 \^2\_-M\^2 x (1-x)+ m\^2 (1-x)+\^2 x . \[e5\] Since the FT diagonalizes the convolution integral, we switch to transverse position space representation of the LFWF by taking FT in $\vec{\Delta}_\perp$ as (x,\_)&=&d\^2\_e\^[-i \_\_]{} H(x,0,-\_\^2)= d J\_0(b) H(x,0,-\_\^2),\ (x,\_)&=&d\^2\_e\^[-i \_\_]{} E(x,0,-\_\^2)= d J\_0(b) E(x,0,-\_\^2), \[fourierHE\] where $J_0(\Delta b)$ is the Bessel function and $\vec{b}_\perp$ is the impact parameter conjugate to $\vec{\Delta}_\perp$ representing the transverse distance between the active quark and the center of mass momentum. 
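For a purely illustrative numerical check of this last step, note that the azimuthally symmetric two-dimensional Fourier transform reduces to the one-dimensional Bessel (Hankel) integral quoted above, which is straightforward to evaluate by quadrature. The sketch below is not part of the original analysis: it uses a simple dipole placeholder for $E(x,0,-\vec{\Delta}_\perp^2)$ and assumes a $1/(2\pi)^2$ normalisation of the $d^2\vec{\Delta}_\perp$ measure (the prefactor of eq. (\[fourierHE\]) should be matched in practice), and the names `E_dipole`, `Lambda2`, and `cutoff` are ours, not the paper's.

```python
import numpy as np
from scipy.special import j0
from scipy.integrate import quad

def E_dipole(x, Delta2, Lambda2=1.0):
    """Hypothetical dipole placeholder for E(x, 0, -Delta_perp^2), NOT the paper's model;
    the expression of eq. (e3) would be substituted for an actual calculation."""
    return x * (1.0 - x) / (1.0 + Delta2 / Lambda2) ** 2

def calE_impact(x, b, E_func=E_dipole, cutoff=50.0):
    """calE(x, b_perp) = (1/2pi) * int_0^inf dDelta Delta J0(Delta*b) E(x, 0, -Delta^2),
    assuming a 1/(2pi)^2 normalisation of the d^2Delta_perp integral."""
    integrand = lambda Delta: Delta * j0(Delta * b) * E_func(x, Delta * Delta)
    value, _ = quad(integrand, 0.0, cutoff, limit=400)
    return value / (2.0 * np.pi)

if __name__ == "__main__":
    # profile in b_perp (in units of 1/Lambda) at fixed momentum fraction x
    for b in np.linspace(0.0, 4.0, 9):
        print(f"b = {b:4.1f}   calE = {calE_impact(0.3, b): .4e}")
```

The same quadrature applies to $\mathcal{H}(x,b_\perp)$ with $H(x,0,-\vec{\Delta}_\perp^2)$ in place of $E$.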
![Plots of $\mathcal{E}(x,b_\perp)$ as a function of $ b_\perp $ and $x$ for three different values of $x$ and $b_\perp$ respectively.[]{data-label="impactE"}](ipdpdf.eps "fig:"){width="5.8cm"} ![Plots of $\mathcal{E}(x,b_\perp)$ as a function of $ b_\perp $ and $x$ for three different values of $x$ and $b_\perp$ respectively.[]{data-label="impactE"}](ipdpdf1.eps "fig:"){width="5.8cm"} \ ![Plots of ipdpdf $\mathcal{E}(x,b_\perp)$ for three different values of $x$.[]{data-label="contour"}](a.eps "fig:"){width="6cm"} ![Plots of ipdpdf $\mathcal{E}(x,b_\perp)$ for three different values of $x$.[]{data-label="contour"}](b.eps "fig:"){width="6cm"} ![Plots of ipdpdf $\mathcal{E}(x,b_\perp)$ for three different values of $x$.[]{data-label="contour"}](d.eps "fig:"){width="6cm"} In fig. \[impactE\](a), we present the impact parameter dependent parton distribution function $\mathcal{E}(x,b_\perp)$ as a function of ${b_\perp}$ for different fixed values of $x$. We have taken $M=0.5$MeV, $m=0.5$MeV and $\lambda=0.02$MeV for our numerical calculations. We can see that $\mathcal{E}(x,\vec{b}_\perp)$ decreases as $\vec{b}_\perp$ increases. Since $\xi=0$ in the present study, it clearly implies that there is no finite momentum transfer in the longitudinal direction. This in turn suggests that the initial and final transverse positions of the proton remain the same and the probability interpretation is now possible. It is important to note here that $\mathcal{E}(x,\vec{b}_\perp)$ for a free Dirac particle is a delta function and the smearing observed in the $|{b_\perp}|$ space is due to the spin correlation in the two particle LFWFs. It is clear from the plots that the partons are distributed mostly near $b_\perp=0$ which is the center of momentum. As we move away from the center of momentum towards larger values of $b_\perp$, the density of partons decreases. Further, the magnitude of $\mathcal{E}(x,\vec{b}_\perp)$ increases with the increasing value of the momentum fraction $x$. In fig. \[impactE\](b), we have plotted the $\mathcal{E}(x,b_\perp)$ as a function of $x$ with three different values of $b_\perp$. It can be clearly seen that it increases as the value of $x$ increases and tends to zero at $x \rightarrow 1$. In fig. \[contour\] we present quark distribution in the transverse plane for $x$= 0.1, 0.3 and 0.8. It describes the quark distribution for the unpolarized nucleon and it is clear from the plot that for $x$=0.1 the distribution is spread over the whole region but as the value of $x$ increases it gets denser near the center. In order to have a deep insight of this model in context of the well know nucleon properties, we design the model of LFWFs integrated over the mass parameter dM\^2 (M\^2) M\^2, \[simulate\] where $\rho(M^2)$ is the weight function. We have chosen $\rho(M^2)= e^{- \frac{M^2}{\Lambda^2_{QCD}(1-x)^2}}$, which is not only consistent with the $x \rightarrow 1$ and $x \rightarrow 0$ constraints but also its integration gives the correct perturbative QCD fall-off of the Dirac and Pauli form factors at large $q^2$. We would like to emphasize here that even though earlier studies have already produced this behavior without any weight function [@ma-ivan], the ad hoc weighting function is introduced so that a relation to a realistic model of nucleon physics can be shown. The form factors obtained from LFWFs have been simulated using eq. (\[simulate\]). We have taken an arbitrary parameter $y=x M^2$ to solve the integration given in eq. 
(\[simulate\]) and the results for the Dirac and Pauli form factors have been obtained following Ref. [@model1]. In fig. \[simulated\], we have plotted the simulated Dirac and Pauli form factors as a function of $q^2$ and it is clear from the plots that, as expected, both form factors fall off at large $q^2$. ![ Plots of simulated Dirac and Pauli form factors as a function of $q^2$ after integrating over $M^2$.[]{data-label="simulated"}](F1_simulate.eps "fig:"){width="8cm"} ![ Plots of simulated Dirac and Pauli form factors as a function of $q^2$ after integrating over $M^2$.[]{data-label="simulated"}](F2_simulate.eps "fig:"){width="8cm"} In addition to this, we have simulated the form factor of the nucleon in the AdS/QCD holographic LFWFs model [@ads_qcd; @ads_elec_gravit; @ads_gravit] where the LFWFs encode all the properties of hadron like bound state quark and gluon properties. The holographic model is quite successful in explaining the hadron spectrum and can act as template for composite systems describing the partonic structure. Following Ref. [@ads_qcd], we define a string amplitude $\Phi(z)$ on the fifth dimension in AdS$_5$ space which easily maps to the LFWFs of the hadrons and allow us to calculate the structure functions, form factors, DVCS constants etc.. Further, with $z\rightarrow 0$, the scale dependence determines the power-law behaviour of wavefunction at short distances and predicted behaviour matches with the available perturbative QCD results [@ads_pqcd]. A correspondence exists between the fifth dimensional holographic variable $z$ and a impact separation variable $\zeta$. The form factor in AdS is given by the overlap of normalizable modes dual to the incoming and outgoing hadrons and is given as F(q\^2)=2 \_[0]{}\^[1]{} dJ\_0( q  ) |(x,)|\^2, where the normalized light front wavefunction for two particle state follows from [@ads_qcd] \_[L,k]{}(x,)=B\_[L,k]{} J\_L(\_[L,k]{} \_[QCD]{}) (z\_[QCD]{}\^[-1]{}), where B\_[L,k]{}=\_[QCD]{} \[(-1)\^L J\_[1+L]{}(\_[L,k]{}  \_[QCD]{}) J\_[1-L]{}(\_[L,k]{}  \_[QCD]{})\]\^[-]{}, and $\beta_{L,k}$ is the $k$th zero of the Bessel function $J_L$. We have obtained the nucleon form factor from normalized LFWFs $\Psi_{L,k}$ as a function of $q^2$. In this case also, we have simulated the nucleon form factor by taking the integration over the parameter $M^2$ as described in eq. (\[simulate\]). The form factor for the ground ($L=0, k=1$) and the first orbital excited state ($L=1, k=1$) are presented in fig. \[ads\]. It is clear from the plots that the magnitude of the form factor falls-off at large value of $q^2$ and the light cone composite model used in the present work matches the power-law fall-off of form factors in perturbative QCD. 
![Nucleon form factor as a function of $q^2$ for the ground ($L=0, k=1$) and the first orbital excited state ($L=1, k=1$) obtained from AdS/QCD model of LFWFs.[]{data-label="ads"}](ads_qcd_simulate.eps "fig:"){width="8cm"} ![Nucleon form factor as a function of $q^2$ for the ground ($L=0, k=1$) and the first orbital excited state ($L=1, k=1$) obtained from AdS/QCD model of LFWFs.[]{data-label="ads"}](ads_qcd_simulate1.eps "fig:"){width="8cm"} Transverse distortion of the wave function ========================================== To understand the physical significance of ipdpdf $\mathcal{E}(x,\vec{b}_\perp)$, we consider a state polarized in the $+\hat{y}$ direction with it’s transverse center of momentum at the origin |P\^+,\_=\_,+ =(|P\^+,\_=\_,+i |P\^+,\_=\_,, \[e6\] where |P\^+,\_=\_,= d\^2\_|P\^+,\_,. \[e7\] $\mathcal{N}$ is the normalization factor and it is chosen such that we get the parton distributions when the impact parameter dependent distributions are integrated over $d^2\vec{b}_\perp$. The transverse distance from center of momentum can defined using the light cone momentum density component of the energy momentum tensor and can be expressed as \_ d\^2\_dx\^- T\^[++]{} \_= \_[i=q,g]{} x\_i \_[,i]{}. Here, $x_i$ are the light cone momentum fractions carried by each parton and the sum in the parton representation of $\vec{R}_\perp$ extends over the transverse positions $\vec{r}_{\perp,i}$ of all quarks and gluons in the target. Using the operator \_q(x,\_)=| ( - \_) \^+ ( \_) e\^[ixP\^+y\^-]{}, and the light front gauge $A^+=0$ for a state polarized in $+\hat{y}$ direction, we get the unpolarized quark distribution in impact parameter space [@ipd11; @ipd12] expressed as q\_(x,\_)&=& P\^+, \_=\_, + | \_q(x,\_)| P\^+, \_=\_, +\ &=& e\^[-i \_\_]{} \[H(x,0,-\_\^2)\]+i E(x,0,-\_\^2)\]. \[e8\] Using eq. (\[fourierHE\]), we get the unpolarized quark distribution in terms of the Fourier transformed GPDs as follows q\_(x,\_)&=& (x,\_)+ (x,\_). \[e9\] It is clear from the above expression that the parton distribution of quarks in the transverse plane is distorted for the target having transverse polarization when the $b^x$ derivative of $\mathcal{E}(x,b_\perp)$ is added provided the spin flip GPD $E(x,0,t)$ is non zero. On the one hand, integrating the spin-flip GPD $E(x,0,t)$ over $x$ gives the Pauli form factor $F_2(t)$ whereas on the other hand, integrating $\mathcal{E}(x,\vec{b}_\perp)$ over both $x$ and $\vec{b}_\perp$ gives the quark contribution to anomalous magnetic moment as follows dx d\^2 \_(x,\_)=. \[e10\] The sign of anomalous magnetic moment is important since it determines the sign of distortion of quark distribution in impact parameter space. It is well known that a Fourier transformed function may have a maxima (minima) at the origin. From eq. (\[e9\]) it can be clearly seen that when $\kappa$ is taken to be positive, the $b^x$ derivative of a smooth positive function $\mathcal{E}(x,b_\perp)$, with a maxima at the origin, is positive for negative $b^x$ and negative for positive $b^x$. The situation reverses for the negative values of $\kappa$. As a result, when $\kappa>0$, the nucleon which is polarized in the $\hat{y}$ direction, the distortion is towards negative $\hat{x}$ for positive $b^x$ and towards positive $\hat{x}$ for negative $b^x$. Similarly, when $\kappa<0$, the nucleon which is polarized in the $\hat{y}$ direction, the distortion is towards positive $\hat{x}$ for positive $b^x$ and towards negative $\hat{x}$ for negative $b^x$. 
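Before turning to the model results, the sign argument just given can be checked with a throwaway numerical toy that is not part of the paper's calculation: replace $\mathcal{H}$ and $\mathcal{E}$ by smooth positive functions peaked at the origin (Gaussians here), add an illustrative multiple of $\partial\mathcal{E}/\partial b^x$ in the spirit of eq. (\[e9\]) without attempting the exact $\kappa/2M$ normalisation, and track the centroid $\langle b^x\rangle$ of the resulting density.

```python
import numpy as np

# Toy 2D grid; Gaussians stand in for the smooth, origin-peaked impact-parameter
# distributions.  The coefficient kappa_eff is purely illustrative and does not
# reproduce the kappa/(2M) normalisation of the text.
b = np.linspace(-4.0, 4.0, 401)
bx, by = np.meshgrid(b, b, indexing="ij")
H = np.exp(-(bx**2 + by**2))      # stands in for calH(x, b_perp)
E = np.exp(-(bx**2 + by**2))      # stands in for calE(x, b_perp)

dE_dbx = np.gradient(E, b, axis=0)

def centroid_bx(density):
    """First moment <b^x> of a transverse density on the grid."""
    return np.sum(bx * density) / np.sum(density)

for kappa_eff in (+0.1, -0.1):
    q = H + kappa_eff * dE_dbx    # schematic analogue of eq. (e9)
    print(f"kappa_eff = {kappa_eff:+.1f}   <b^x> = {centroid_bx(q):+.3f}")
```

A positive coefficient drags the centroid toward negative $b^x$ and a negative coefficient toward positive $b^x$, which is the qualitative statement made above.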
We can verify the above assumptions in the present study. The distortion in impact parameter space for a polarized nucleon in the present study is given as (x,\_)&=& - \^2 J\_1(b) E(x,0,-\_\^2) d. \[e11\] To have a deeper understanding we have plotted, in fig. \[newx\](a), the distortion in impact parameter space $\frac{\partial}{\partial b^x} \mathcal{E}(x,b_\perp)$ for a nucleon polarized in the $\hat{y}$ direction as a function of $b_\perp$. We have taken three different values of $x$ and it is clear from plot that magnitude of distortion increases as the value of $x$ increases. The distortion obtained is in negative direction because the anomalous magnetic moment is positive. This is in agreement with the results of Ref. [@ipd3] where a model of spin-$\frac{1}{2}$ system namely an electron dressed with a photon in QED has been used to study the distortion in impact parameter space. Again we have taken $M=0.5$MeV, $m=0.5$MeV and $\lambda=0.02$MeV for our numerical calculations. In fig. \[newx\](b), we present the $\frac{\partial}{\partial b^x} \mathcal{E}(x,b_\perp)$ vs $x$ for three different values of $b_\perp$. It represents the distortion of ipdpdf in the transverse plane for a transversely polarized target. It is clear from the plot that distortion increases as the value of $x$ increases but at $x \rightarrow 1$ it decreases. Further, the magnitude of distortion decreases as the value of $b_\perp$ increases. The study of transverse distortion is significant in the context of developing an intuitive explanation for transverse SSAs [@ipd11; @ipd12]. ![Plots of $\frac{\partial}{\partial b^x} \mathcal{E}(x,b_\perp)$ as a function of $ b_\perp $ and $x$ for different values of $x$ and $b_\perp$ respectively.[]{data-label="newx"}](distortion.eps "fig:"){width="5.8cm"} ![Plots of $\frac{\partial}{\partial b^x} \mathcal{E}(x,b_\perp)$ as a function of $ b_\perp $ and $x$ for different values of $x$ and $b_\perp$ respectively.[]{data-label="newx"}](distortion1.eps "fig:"){width="5.8cm"} In addition to the sign of distortion in impact parameter space obtained above, there is infact an alternate way to determine this sign from the unintegrated momentum space distribution obtained directly from the LFWFs. This can be achieved by performing a FT in position space coordinate $\vec{f}_\perp$. One can then explicitly show the relation between the deformation obtained from GPDs in the impact parameter space and as calculated directly from the LFWFs in the calculations presented below using the convolution integrals. These relations will also provide insight into the phenomena of shifting from the impact parameter space to the transverse position space representation. To this end, we start by taking the wavefunctions for a nucleon polarized in $+\hat{y}$ direction as follows \^[+ ]{}\_[++1]{}(x,\_)&& \[\^\_[+1]{}(x,\_)+ i \^\_[+1]{}(x,\_)\],\ \^[+ ]{}\_[+-1]{}(x,\_)&& \[\^\_[-1]{}(x,\_)+ i \^\_[-1]{}(x,\_)\],\ \^[+ ]{}\_[-+1]{}(x,\_)&& \[\^\_[-+1]{}(x,\_)+ i \^\_[-+1]{}(x,\_)\],\ \^[+ ]{}\_[--1]{}(x,\_)&& \[\^\_[--1]{}(x,\_)+ i \^\_[--1]{}(x,\_)\].\ \[e12\] Using eqs. (\[spinup\]) and (\[spindown\]), we have \^[+ ]{}\_[++1]{}(x,\_)&=& ,\ \^[+ ]{}\_[+-1]{}(x,\_)&=& - ( + i (M-)),\ \^[+ ]{}\_[-+1]{}(x,\_)&=& - ((M-) + i ) ,\ \^[+ ]{}\_[--1]{}(x,\_)&=& - i . \[e13\] The unintegrated momentum space distribution, which is even in $k_\perp$, can be obtained by squaring the above equations q\_(x,\_)&=&\ &=& \^2 , \[e14\] where $\varphi$ is supposed to be real. 
We can prove explicitly that there is an asymmetry in the $\hat{x}$ direction in the state corresponding to eq. (\[e13\]). For this purpose, we perform a Fourier transformation to the transverse position space coordinate say $\vec{f}_\perp$. We now have \^[+ ]{}\_[++1]{}(x,\_)&& e\^[i \_\_]{} \^[+ ]{}\_[++1]{}(x,\_)\ &=& (- i -) (f\_) , \^[+ ]{}\_[+-1]{}(x,\_)&& e\^[i \_\_]{} \^[+ ]{}\_[+-1]{}(x,\_)\ &=& -(f\_) , \^[+ ]{}\_[-+1]{}(x,\_)&& e\^[i \_\_]{} \^[+ ]{}\_[-+1]{}(x,\_)\ &=& -(f\_) , \^[+ ]{}\_[--1]{}(x,\_)&& e\^[i \_\_]{} \^[+ ]{}\_[--1]{}(x,\_)\ &=& (-i +)(f\_) , where (\_)&& e\^[i \_\_]{} (\_)\ &=& - e x e\^[i \_\_]{}\ &=& - x K\_0(|f\_|), and C=m\^2(1-x)- M\^2 x (1-x)+ \^2 x, ![Plot of $q_{\hat{y}}(x,\vec{k}_\perp)$ vs $k_\perp$ for three different values of $x=(0.1,0.4,0.8)$ .[]{data-label="3d-distribution"}](q1.eps "fig:"){width="6cm"} ![Plot of $q_{\hat{y}}(x,\vec{k}_\perp)$ vs $k_\perp$ for three different values of $x=(0.1,0.4,0.8)$ .[]{data-label="3d-distribution"}](q2.eps "fig:"){width="6cm"} ![Plot of $q_{\hat{y}}(x,\vec{k}_\perp)$ vs $k_\perp$ for three different values of $x=(0.1,0.4,0.8)$ .[]{data-label="3d-distribution"}](q4.eps "fig:"){width="6cm"} Using the relations $$\begin{aligned} |\psi^{+ \hat{y}}_{+\frac{1}{2}+1}(x,\vec{f}_\perp)|^2 &=& |\psi^{+ \hat{y}}_{-\frac{1}{2}-1}(x,\vec{f}_\perp)|^2, \nonumber\\ |\psi^{+ \hat{y}}_{+\frac{1}{2}-1}(x,\vec{f}_\perp)|^2 &=& |\psi^{+ \hat{y}}_{-\frac{1}{2}+1}(x,\vec{f}_\perp)|^2, \label{e15}\end{aligned}$$ the unpolarized quark distribution in transverse coordinate space $\vec{f}_\perp$ can be expressed as q\_(x,\_)&=&\ &=&\ &&-(M-)( ). \[quark\_distribution\] From this equation we observe that the last term is odd under $f^x \rightarrow - f^x$ and it defines the deformation in transverse coordinate space. This deformation is in agreement with the deformation predicted in eq. \[e9\] in the impact parameter space. One can explicitly show the relation between the deformations in both the spaces using the convolution integrals. Since the convolution integrals are diagonalized by the FT, one can shift from the impact parameter space to the transverse position space representation as already shown. The transverse momentum $k_\perp$ in the two-particle Fock component is the Fourier conjugate of the distance $\vec{f}_\perp = \vec{r}_{\perp 1} - \vec{r}_{\perp 2}$ between the active quark and spectator system. The two variables $\vec{b}_\perp$ (transverse distance between the active quark and the center of mass momentum) and $\vec{f}_\perp$ can be related to each other by the relation $\vec{b}_\perp = (1-x) \vec{f}_\perp$ [@kumar2] and one can write (x,\_) &=& ,\ (x,b\_)&=& ,\ (x, b\_) &=& (M-) . In fig. \[3d-distribution\] we present the unintegrated momentum space distribution obtained from the light front wavefunctions for different values of $x$, to check the sign of distortion for a polarized nucleon in impact parameter space. It is clear from the plots that at $x=0.1$ there is maxima at origin but as the value of $x$ is increased some distortion is observed. It can be easily seen from the plot that as the value at $x$ is further increased the distortion also increases towards negative direction. These plots helps us to determine the distortion sign directly from the light front wavefunctions. Conclusions =========== In this present work we have studied the GPDs in impact parameter space obtained from LFWFs. We consider the spin-$\frac{1}{2}$ system consist of spin-$\frac{1}{2}$ fermion and spin-1 vector boson. 
We have shown that if spin flip GPD is non-zero then parton distribution is distorted in the transverse plane when the target nucleon has transverse polarization. We have obtained the sign of the distortion from the sign of anomalous magnetic moment and our results are in agreement with the expected results. Since the LFWFs can also be used directly to find the sign of distortion in impact parameter space, we have performed a FT of LFWFs in position space coordinate $\vec{f}_\perp$ and then explicitly shown the relation between the deformation obtained from GPDs in the impact parameter space and the deformation calculated directly from the LFWFs in the position space coordinate using the convolution integrals. These relations will also provide insight into the phenomena of shifting from the impact parameter space to the transverse position space representation. We consider the nucleon polarized in $+\hat{y}$ direction and then obtain the unintegrated momentum space distribution which is even in $k_\perp$. The deformation obtained in the impact parameter space is in agreement with the deformation predicted in transverse position space. We have designed a specific weight function of our model LFWFs and integrated it over the mass parameter to relate the LFWFs in the two-particle Fock state of the electron in QED to a realistic model of nucleon physics. The simulated Dirac and Pauli form factors obtained from LFWFs fall off at large $q^2$. In addition to this, we have simulated the form factor of the nucleon in the AdS/QCD holographic LFWFs model and studied the power-law behaviour of wavefunction at short distances. The magnitude of the form factor falls-off at large value of $q^2$. The light cone composite model used in the present work matches the power-law fall-off of form factors in perturbative QCD. Acknowledgement =============== Authors acknowledge helpful discussion with S.J. Brodsky. HD would like to thank Department of Science and Technology (Ref No. SB/S2/HEP-004/2013), Government of India for financial support. [99]{} X. Ji, Phys. Rev. D [**55**]{}, 7114 (1997). S.J. Brodsky, H.C. Pauli, S.S. Pinsky, Phys. Rep. [**301**]{}, 299 (1998). S.J. Brodsky, M. Diehl, D.S. Hwang, Nucl. Phys. B [**596**]{}, 99 (2001). M. Guidal, H. Moutarde, Marc Vanderhaeghen, Rept. Prog. Phys. [**76**]{}, 066202 (2013). X. Ji, Phys. G [**24**]{}, 1181 (1998). K. Goeke, M.V. Polyakov, M. Vanderhaeghen, Prog. Part. Nucl. Phys. [**47**]{}, 401 (2001). M. Diehl, Phys. Rept, [**388**]{}, 41 (2003). A.V. Radyushkin, Phys. Part. Nucl. [**44**]{}, 469 (2013). M. Diehl, P. Kroll, Eur. Phys. J. C [**73**]{}, 2397 (2013). J.P. Ralston, B. Pire, Phys. Rev. D [**66**]{}, 111501 (2002). M. Burkardt, G.A. Miller, Phys. Rev. D [**74**]{}, 034015 (2006). G.A. Miller, Phys. Rev. Lett. [**99**]{}, 112001 (2007). G.A. Miller, Phys. Rev. C [**80**]{}, 045210 (2009). G.A. Miler, Annu. Rev. Nucl. Part. Sci. [**60**]{}, 1 (2010). G.A. Miller, Phys. Rev. D [**90**]{}, 113001 (2014). S.J. Brodsky, D. Chakrabarti, A. Harindranath, A. Mukherjee, J.P. Vary, Phys. Rev. D [**75**]{}, 014003 (2007). R. Manohar, A. Mukherjee, D. Chakrabarti, Phy. Rev. D [**83**]{}, 014004 (2011). M. Diehl, Eur. Phys. J. C [**25**]{}, 223 (2002). H. Dahiya, A. Mukherjee, S. Ray, Phys. Rev. D [**76**]{}, 034010 (2007). D. Chakrabarti, R. Manohar, A. Mukherjee, Phys. Rev. D [**79**]{}, 034006 (2009). D. Chakrabarti, R. Manohar, A. Mukherjee, Phys. Lett. B [**682**]{}, 428 (2010). M. Burkardt, Phys. Rev. D [**62**]{} 071503 (2000). M. Burkardt, Int. J. Mod. 
Phys. A [**18**]{}, 173 (2003). M. Burkardt, Prog. Part. Nucl. Phys. [**67**]{}, 260 (2012). M. Burkardt, Phys. Rev. D [**66**]{}, 114005 (2002). M. Burkardt, D.S. Hwang, Phys. Rev. D [**69**]{}, 074032 (2004). S.J. Brodsky, D.S. Hwang, I. Schmidt, Phys. Lett. B [**530**]{}, 99 (2002). S.J. Brodsky, D.S. Hwang, Y.V. Kovchegov, I. Schmidt and M.D. Sievert, Phys. Rev. D [**88**]{}, 014032 (2013). D.S. Hwang, Nucl. Phys. B (Proc. Suppl.) [**214**]{}, 173 (2011). S.J. Brodsky, S.D. Drell, Phys. Rev. D [**22**]{}, 2236 (1980). S.J. Brodsky, D.S. Hwang, B. Ma, I. Schmidt, Nucl. Phys. B [**593**]{}, 311 (2001). N. Kumar, H. Dahiya, Mod. Phys. Lett. A [**29**]{}, 1450118 (2014). N. Kumar, H. Dahiya, Phys. Rev. D [**90**]{}, 094030 (2014). N. Kumar, H. Dahiya, Int. J. Mod. Phys. A [**30**]{}, 1550010 (2015). M. Burkardt, Phys. Rev. D [**62**]{}, 071503 (2000). D. Chakrabarti, A. Mukherjee, Phys. Rev. D [**71**]{}, 014038 (2005). S.J. Brodsky, Guy F. de T$\acute{e}$ramond, Phys. Rev. Lett. [**96**]{}, 201601 (2006). Z. Abidin, Carl E. Carlson, Phys. Rev. D [**79**]{}, 115003 (2009). S.J. Brodsky, Guy F. de T$\acute{e}$ramond, Phys. Rev. D [**78**]{}, 025032 (2008). B. Ma, D. Qing, I. Schmidt, Phys. Rev. C [**65**]{}, 035205 (2002). X. Ji, J.P. Ma, F. Yuan, Phys. Rev. Lett. [**90**]{}, 241601 (2003).
--- abstract: | Chemically peculiar stars define a class of stars that show unusual elemental abundances due to stellar photospheric effects and not due to natal variations. In this paper, we compare the elemental abundance patterns of the ultra metal-poor stars with metallicities \[Fe/H\] $\sim -5 $ to those of a subclass of chemically peculiar stars. These include post-AGB stars, RV Tauri variable stars, and the Lambda Bootis stars, which range in mass, age, binarity, and evolutionary status, yet can have iron abundance determinations as low as \[Fe/H\] $\sim -5$. These chemical peculiarities are interpreted as due to the separation of gas and dust beyond the stellar surface, followed by the accretion of dust depleted-gas. Contrary to this, the elemental abundances in the ultra metal-poor stars are thought to represent yields of the most metal-poor supernova and, therefore, observationally constrain the earliest stages of chemical evolution in the Universe. Detailed chemical abundances are now available for HE1327-2326 and HE0107-5240, the two extreme ultra metal-poor stars in our Galaxy, and for HE0557-4840, another ultra metal-poor star found by the Hamberg/ESO survey. There are interesting similarities in their abundance ratios to those of the chemically peculiar stars, e.g., the abundance of the elements in their photospheres are related to the condensation temperature of that element. If HE1327-2326 and HE0107-5240 are ultra metal-poor due to the preferential removal of metals by dust grain formation or dilution through the accretion of metal-poor interstellar gas, then their CNO abundances suggest true metallicities of \[$X$/H\] $\sim -2$ rather than their present metallicities of \[Fe/H\] $\leq -5$, and, thus their status as truly ultra metal-poor stars is called into question. The initial abundance for HE0557-4840 would be \[$X$/H\] $\ge -4$. It is important to establish the nature of these stars since they are used as tests of the early chemical evolution of the Galaxy, yet if they are chemically peculiar, then those tests should be focused on stars in the metal-poor tail of the Galactic halo distribution. Many, but not all, chemically peculiar stars show a mid-infrared excess from the circumstellar dust. We examine the JHK fluxes for the ultra metal-poor stars but find no excesses. A more important test of the stars’ status as chemically peculiar is provided by the elemental abundances of sulphur and/or zinc. These two elements have low condensation temperatures and do not form dust grains easily; furthermore, the chemically peculiar stars universally show sulphur and zinc to be undepleted or nearly so. We show that near-infrared lines of S[i]{} offer a promising test of the possibility that HE1327-2326 may be chemically peculiar. Although there are some parallels between the compositions of the ultra metal-poor stars and chemically peculiar stars, a definitive ruling on whether the former are chemically peculiar requires additional information. author: - 'K. A. Venn' - 'David L. Lambert' title: 'Could the Ultra Metal-poor Stars be Chemically Peculiar and Not Related to the First Stars?' --- Introduction ============ A fossil record of the earliest episodes of stellar nucleosynthesis in the Universe and Galaxy should be revealed by the compositions of the most metal-poor Galactic stars (e.g., Tumlinson 2007a, 2007b; Tominaga [[*et al.*]{}]{}2007; Umeda & Nomoto 2003). The lure of this revelation has driven the search to find and analyse such Rosetta stones. 
A great leap forward was achieved recently by the discovery of two stars with iron abundances \[Fe/H\] $< -5.3$ (Christlieb [[*et al.*]{}]{}2002; Frebel [[*et al.*]{}]{}2005), a limit about 1.5 dex below the abundance of the previously known most metal-poor star. A third star with \[Fe/H\] $\simeq -4.8$ also beats the previous lower bound (Norris [[*et al.*]{}]{}2007). Prior to these remarkable discoveries, the most Fe-poor stars known were HR 4049 and HD 52961 with \[Fe/H\] $\simeq -4.8$ (e.g., Waelkens [[*et al.*]{}]{}1991). However, these and slightly more iron-rich examples were dismissed - correctly - as irrelevant to the issue of early stellar nucleosynthesis because they are ‘chemically peculiar’, i.e., their present surface compositions are far removed from their initial compositions. In particular, their compositions reflect that of gas from which refractory elements have been removed to varying degrees by a process dubbed ‘dust-gas separation’. The existence of HR 4049 and HD 52961 has led us to reexamine the question of whether the recently discovered ultra metal-poor stars may themselves be chemically peculiar. Two of the three ultra metal-poor stars in question are HE1327-2326 and HE0107-5240. HE1327-2326 was discovered by Frebel [[*et al.*]{}]{}(2005) and abundance analyses have been described by Aoki [[*et al.*]{}]{}(2006), Frebel [[*et al.*]{}]{}(2006), Collet [[*et al.*]{}]{}(2006), and most recently by Frebel & Christlieb (2007). The latter analysis based on the highest SNR spectra yields abundances for 11 elements and upper limits for an additional nine elements: the new iron abundance at \[Fe/H\] $=-5.9$ is even lower than the previous determination. HE0107-5240 was discovered by Christlieb [[*et al.*]{}]{}(2002) and abundance analyses have been reported by Christlieb [[*et al.*]{}]{}(2004), Bessell [[*et al.*]{}]{} (2004), and Collet [[*et al.*]{}]{}(2006) with the latter suggesting \[Fe/H\]$\simeq -5.6$. The third star HE0557-4840 was discovered and analysed by Norris [[*et al.*]{}]{}(2007): \[Fe/H\] $= -4.8$ places it between the two ultra metal-poor stars (HE1327-2326 and HE0107-5240) and the lower boundary of the metal-poor tail of Galactic stars. A marked characteristic of these three discoveries is that some abundance ratios are uncharacteristic of metal-poor stars of higher Fe abundance. Notably, the stars are C-rich for their Fe abundance, i.e., \[C/Fe\] $= 3.7$ for HE1327-2326, and also \[N/Fe\] $= 4.1$ and \[O/Fe\] $= 3.4$ (Frebel & Christlieb 2007). This property of unusual abundance ratios is shared in a qualitative sense with HR 4049 and HD 52961. In the following sections, we briefly review the classes of known chemically peculiar stars affected by dust-gas separation. Then, we discuss if the three stars HE1327-2326, HE0107-5240, and HE0557-4840 are chemically peculiar rather than true ultra metal-poor stars. In the final sections, we discuss possible tests of the hypothesis that ultra metal-poor stars may be chemically peculiar. Separation of gas and dust and chemically peculiar stars ======================================================== The chemically peculiar stars in question are those whose atmospheres betray the operation of the dust-gas separation process. In gas of a sufficiently low temperature, dust condenses out and the gas is depleted in those elements that form the dust. The local interstellar gas, for example, displays such depletion (Savage & Sembach 1996). 
The composition of the gas in a dust-gas mixture depends on several primary factors including the initial composition of the gas, the total pressure, and the history of the gas-dust mixture. If a star were to then accrete gas, preferentially over dust, and the accreted gas were to comprise a major fraction of the stellar photosphere, a star with striking abundance anomalies results. This scenario is one version of how a star could develop chemical peculiarities from dust-gas separation (or “dust-gas winnowing”). Anomalies plausibly attributable to dust-gas separation have now been reported for three kinds of stars: Lambda Bootis stars, post-AGB A-type spectroscopic binaries, and RV Tauri variables. The Lambda Bootis stars are main sequence stars, ranging from A- to mid F-types, and found at all evolutionary phases from very young (e.g., they are found in young open clusters, Gray & Corbally 1998), to the end of the main-sequence (Kamp & Paunzen 2002, Andrievsky [[*et al.*]{}]{}2002). They are expected to have a solar-like composition, but have long been known for their significant metal-deficiencies (Morgan, Keenan, & Kellman 1943; Burbidge & Burbidge 1956; Baschek & Searle 1969). These stars have effective temperatures of from about 6500 to 9500 K and surface gravities [log g]{}$\simeq 4$. Our (Venn & Lambert 1990) abundance analysis of three stars, including the eponym, showed that the compositions of the stars could be attributed to accretion of circumstellar gas, without dust. The iron deficiency in the case of $\lambda$ Boo was \[Fe/H\] $\simeq -2$, and slightly less severe for 29 Cyg, with normal abundances of C, N, O and S for both stars, in agreement with the expectations for an atmosphere contaminated with dust-free gas (see Figure 1). Even Vega, a ‘standard’ A0 star, was shown to be a mild Lambda Bootis star (Venn & Lambert 1990, Lemke & Venn 1996). Vega also shows an infrared excess due to a dusty circumstellar disk (Su [[*et al.*]{}]{}2005; Aumann [[*et al.*]{}]{}1984). The shallow convective envelope of early A-type stars is a key factor in the creation and maintenance of the abundance anomalies in the [*diffusion/accretion model*]{} (Turcotte & Charbonneau 1993). Abundance anomalies can persist ($\sim$10$^6$ yr), even after dispersal of the circumstellar dust and gas and, hence, removal of the infrared excess contributed by the dust. Thus, not all Lambda Bootis stars show an infrared excess; Paunzen [[*et al.*]{}]{}(2003) estimate that 23% of [*bona fide*]{} Lambda Bootis stars show evidence for circumstellar matter. On the other hand, the circumstellar matter may be of interstellar origin (Kamp & Paunzen 2002, Gáspár [[*et al.*]{}]{}2007). Movement of a star through the denser parts of the interstellar medium can create a bow shock which heats the interstellar dust causing an infrared excess. Meanwhile, radiation pressure from the star repels the grains, while gas is accreted onto the stellar surface. With return of the star to passage through low density gas, the infrared excess dissipates and accretion ceases. This alternate theory for the origin of the Lambda Bootis stars implies the chemical anomalies are transient, but could help to explain why the phenomenon is seen in such a wide range of main-sequence stars. Abundance anomalies should not survive the transition from the main sequence to the giant branch though. The deep convective envelope of giant stars will surely dilute abundance anomalies beyond recognition. 
Yet, a metal deficiency of even greater severity can be found among post-AGB stars in spectroscopic binaries (Waelkens [[*et al.*]{}]{}1991; Van Winckel 2003). These stars, like HR 4049 and HD 52961 discussed above, are supergiants with [T$_{\rm eff}$]{}in the range of 6000 to 7600 K and surface gravities [log g]{}$\simeq 1$. One must suppose that a new episode of dust-gas separation led to these abundance anomalies. The original quartet of post-AGB binaries (Van Winckel [[*et al.*]{}]{}1995) comprised HR 4049, HD 44179, HD 52961, and BD +39 4926. Their \[Fe/H\] values range from $-3.0$ to $-4.8$, but they show an abundance pattern reminiscent of interstellar gas, i.e., quasi-solar abundances of C, N, O, S, and Zn, but severe underabundances of, for example, Si, Ca, and Fe (see Figure 1). The pattern shows that it is the former set of elements that define the initial composition of these stars and not the latter set. Gas is thought to be accreted onto the star from a circumbinary disk, while radiation pressure exerted by the star on the dust grains inhibits the accretion of dust by the star and may also promote a separation of dust and gas in the disk. The dusty disk is betrayed by an infrared excess: HD 44179, also known as the Red Rectangle, is a rather special proto-planetary nebula with a striking infrared excess. On the other hand, BD +39 4926 lacks an infrared excess. Shallow convective envelopes in these extended stars are presumably a key factor in the appearance of their huge abundance anomalies. Chemical peculiarities of the post-AGB stars are presumably developed from less extreme peculiarities seen in their immediate progenitors, the RV Tauri variables. Discovery of this third category of star displaying the marks of dust-gas winnowing began with the analysis of the RV Tauri variable IW Car (Giridhar [[*et al.*]{}]{}1994). A RV Tauri star is a post-AGB star with an infrared excess (first noted by Gehrz 1972). Subsequent analyses (Giridhar [[*et al.*]{}]{}2005) showed that the effects of dust-gas separation among RV Tauri stars appear limited to the warmer stars ([T$_{\rm eff}$]{}$> 4500$ K) and to the stars with intrinsic metallicities \[Fe/H\] $\geq -1$. Affected stars have [T$_{\rm eff}$]{}$\simeq 4500 - 6500$ K (hotter stars fall outside the instability strip and appear as non-variable post-AGB stars) and [log g]{}$\simeq 0$ to 1. Gonzalez & Lambert (1997) also discussed the potential importance of the composition of the photosphere (e.g., C/O ratio), and environment (e.g., field vs globular cluster stars). The reasons for these effective temperature and metallicity boundaries are not entirely clear, but the cooler stars possess more extensive convective envelopes that dilute the accreted gas. Dust-gas winnowing may be impaired in low metallicity gas where the dust to gas ratio is necessarily lower. The winnowing site is again presumed to be a circumbinary disk; there is increasing evidence that the affected stars are spectroscopic binaries (Van Winckel 2007). How the gas is captured onto the star from the circumbinary disk in the presence of a stellar wind remains an unsolved problem, as it does for the post-AGB stars in binaries. A signature of a star that is affected seriously by dust-gas winnowing is a correlation between an element’s abundance and the predicted condensation temperature $T_{cond}$ (or the abundance in interstellar gas). 
Estimates of $T_{cond}$ depend on the initial composition and pressure in a gas and the assumption that cooling of the gas and condensation of grains occurs under equilibrium conditions. Lodders (2003) provides a comprehensive discussion of $T_{cond}$ estimates for all elements for the solar (O-rich) composition. In the post-AGB stars, and in some Lambda Bootis and RV Tauri stars, the abundance \[$X$/H\] is smoothly correlated with $T_{cond}$ (see Figure 1). In some Lambda Bootis and RV Tauri stars, \[$X$/H\] shows no obvious trend with the $T_{cond}$ (e.g., EQ Cas, Giridhar [[*et al.*]{}]{}2005). Rao & Reddy (2005) brought order to chaos by noting instead that the \[$X$/H\] for EQ Cas were correlated with the ionization potential of the neutral atom (‘the first ionization potential’ or FIP). This raises the intriguing possibility that an alternative or additional process is operating in this one star. The FIP effect is a well known phenomena in the solar corona thought to reflect the greater ease with which ions (low FIP elements) rather than neutral atoms (high FIP elements) are fed from the cool chromosphere into the corona. Perhaps, EQ Cas is a star where there is a selective feeding of the stellar wind. Thus, in single stars, the [*stellar wind*]{} may control the abundance anomalies but where the variable is in a binary the [*circumbinary disk*]{} may exert control. Another possibility for affecting the \[$X$/H\] vs $T_{cond}$ correlation (at high T$_{cond}$) is a competition between accretion of metal free gas and chemical separation due to, e.g., diffusion, gravitational settling, radiative acceleration, or rotational mixing above the convective zone. The diffusion/accretion model for Lambda Bootis stars by Turcotte & Charbonneau (1993) suggests timescales for chemical separation can vary by element, thus complicating the abundance pattern established earlier by accretion. The conclusion must be that the dust-gas separation mechanism does not have the same effects on the elemental abundances in the various stars where it appears to operate. Either the mechanism itself, and/or the re-accretion phase of the dust-depleted gas, and/or additional processes that affect chemical separation impact the observed chemical abundance pattern. In summary, stars with abundance anomalies attributable to dust-gas separation are seen in several parts of the HR-diagram. Associated properties of affected stars, such as age, mass, binarity, or evolutionary status, do not appear to be uniform across the cases. Details in the observed abundance patterns seem to vary between the cases, particularly for elements with high T$_{cond}$, possibly due to the mechanism itself or additional processes. An accompanying infrared excess may be a warning bell, but it is not a required signature of dust-gas separation. However, to date, the inferred intrinsic metallicities of affected stars are one-tenth solar or greater. Examination of dust-gas separation in the ultra metal-poor stars ================================================================ As the discussion turns to the ultra metal-poor stars, we emphasize the importance of the assumptions made in the determination of the $T_{cond}$ values. Throughout our discussion, the $T_{cond}$ values are for a solar composition gas. Thus, the values are determined for a gas that is O-rich. One of the ultra metal-poor stars (HE0107-5240) is presently severely C-rich, a second (HE1327-2326) has C/O $\simeq 1$, and the C/O ratio is unknown for the third (HE0557-4840). 
Condensation of grains from C-rich gas provides C-rich solids (e.g., graphite and carbides) and a different set of condensation temperatures. Calculations by Lodders & Fegley (1995, 1999) have discussed the condensation of grains in C-rich gas. For a gas with C/O $>$4, at a pressure representative of a circumstellar region in which dust forms via equilibrium chemistry (P$\sim 10^{-7}$ bar), the condensation temperatures $T_{cond}$ for graphite, TiC, and SiC (the first three condensates) are approximately 2000, 1560, and 1390 K, respectively. To lower temperatures, the condensation sequence is Fe, AlN, and CaS. However, the $T_{cond}$’s are not the whole story. While Ti is effectively removed from the gas at temperatures below its $T_{cond}$, Si’s removal is constrained by the fact that gaseous SiS is resistant to condensation. The $T_{cond}$ for graphite but not for TiC and SiC declines as C/O is decreased to near unity. Condensation temperatures for Al, Mg, Ca, and Fe are all several hundred degrees less than that of Ti. Thus, if the effective $T_{cond}$ in the grain forming region is 1500 K or so, dust-gas separation in a C-rich environment should provide a pronounced Ti deficiency and only for much lower temperatures will deficiencies for other elements be appreciable. Finally, it is possible that dust-gas separation may remove sufficient carbon as graphite to lower the C/O ratio towards unity (Lodders & Fegley 1999). In our figures, we adopt the T$_{cond}$ values determined for a solar (O-rich) composition gas, but discuss possible deviations due to the C/O ratio. Has dust-gas separation modified the composition of HE1327-2326? ---------------------------------------------------------------- Our discussion of HE1327-2326’s composition is based on the abundance analysis by Frebel & Christlieb (2007), which is drawn from new, high signal-to-noise VLT/UVES spectra, but confirms and extends previous detailed analyses (Aoki [[*et al.*]{}]{}2006; Frebel [[*et al.*]{}]{}2006). The abundances are plotted as \[$X$/H\] versus $T_{cond}$ in Figures 2 & 3, and assume the star is a subgiant ([T$_{\rm eff}$]{}= 6180 K, [log g]{}= 3.7; though our conclusions would be the same, and the following discussion negligibly affected, were we to adopt the abundances assuming the star to be a dwarf). While a classical model atmospheres LTE analysis is performed, predicted corrections are included for effects of stellar granulation. We also show abundances \[$X$/H\] for two extreme post-AGB stars (HR 4049 and HD 52961), the RV Tauri variables (HP Lyr and UY CMa), and two Lambda Bootis stars (29 Cyg and HD 106223). There is a resemblance between the compositions of HE1327-2326 and the chemically peculiar stars; as T$_{cond}$ increases, the elemental depletion increases relative to solar. However, HE1327-2326 is unique because of its extreme \[Mg/Fe\] ratio (see Figure 2) which is not seen in any of the chemically peculiar stars. Iron appears to be an outlier, since it is the most underabundant element in this star, which is rarely the case for the RV Tauri or Lambda Bootis stars. Notably, S and Zn, important elements of low condensation temperature, have not been detected in HE1327-2326. The S and Zn upper-limits provide no useful constraints on HE1327-2326, although an abundance estimate for S may be possible in the future (see below). 
Were we to insist that the abundance pattern for the RV Tauri star HP Lyr, with its smooth trend in \[$X$/H\] vs T$_{cond}$, be a fair template for testing the suggestion that dust-gas separation has affected HE1327-2326, we would be bound to note the scatter in Figure 2 for $T_{cond} \geq 1200$ K. In particular, the low abundance of Fe relative to Al and Ti in HE1327-2326 is the opposite in HP Lyr. It seems unlikely that a change in the applied 3D corrections for the (as yet) untested models of stellar granulation can reverse this trend. The 3D corrections are small for these elements and all of the same sign with an element-to-element scatter of less than 0.2 dex. Estimates of non-LTE corrections assembled by Aoki [[*et al.*]{}]{}(2006) for the 1D model also aggravate the situation in that the corrections raise the Al abundance by 0.4 dex relative to Fe. The non-LTE corrections applicable to the 3D model are unknown. HP Lyr (and the other RV Tauri comparison star, UY CMa, in Figure 2) may be an imperfect template for HE1327-2326. Perhaps, [*crucially*]{}, the $T_{cond}$ estimates are based on a solar composition and, in particular, on the fact that the Sun is O-rich (i.e., O/C $>1$). The measured abundances of C and O show that HE1327-2326 is nominally C-rich now as a subgiant and most likely the C/O ratio has been depressed in evolution to the subgiant branch. (The C abundance is a mere 0.06 dex greater than the O abundance, a difference less than the errors of measurement.) However, an alternative scenario, dust-gas separation in a C-rich environment also appears to fail to explain the observed composition. As noted above, a signature of such a separation should be an appreciable underabundance of Ti, which is not seen in Figure 2 relative to the other elements. A comparison to the Lambda Bootis stars is more intriguing. Other than the large \[Mg/Fe\] ratio already noted as peculiar to HE1327-2326, the relative abundances of Mg, Ca, and Ti to each other (all high T$_{cond}$ elements) are similar to the Lambda Bootis star HD106223 (see Figure 3). If the mechanism for dust-gas separation is more similar between HE1327-2326 and the Lambda Bootis stars, possibly complicated by diffusion or another process (see Section 2), then these stars may be a better comparison template. In Figure 3, both of the Lambda Bootis stars shown have similar depletions of Ca, Ti, and Fe. Unfortunately, Al is not available for either Lambda Bootis star. Al, with its very high T$_{cond}$ is severely depleted in the RV Tauri variables, but less so in HE1327-2326. If HE1327-2326 has undergone dust-gas separation, we can estimate the star’s intrinsic metallicity from the C, N, and O abundances: an initial metallicity of about $-2.0$ by the 3D-corrected C, N, and O abundances but about $-1.3$ without these corrections. The 3D corrections are large ($\simeq-0.7$ dex) for C, N, and O because their abundances are derived from molecules (CH, NH, and OH) whose formation is greatly enhanced by the presence of cooler regions in 3D models. At an initial metallicity of about $-1.3$, the star resembles some post-AGB and RV Tauri stars. If the initial metallicity of HE1327-2326 were in fact either \[Fe/H\] = $-2.0$ or $-1.3$, this has a small impact on our expectation of the initial abundance ratios for the other elements. For example, \[Ca/Fe\] $\simeq +0.3$ for [*normal*]{} metal-poor stars with \[Fe/H\] = $-2$, and this is approximately seen in Figure 2. A similar ratio can be expected of Mg and Ti. 
While \[Ti/Fe\] is slightly larger than expected, the uniquely high \[Mg/Fe\] ($\simeq +2$) ratio is quite out of line with expectations for normal metal-poor stars. Lastly, in normal metal-poor stars, \[Al/Fe\] $<0$, thus the high Al point is also peculiar to HE1327-2326. Inspection of the \[$X$/H\] for HE1327-2326 (and the other two stars) shows no evidence for a relationship between \[$X$/H\] and the FIP. Has dust-gas separation modified the composition of HE0107-5240? ---------------------------------------------------------------- The composition of HE0107-5240 was determined by Christlieb [[*et al.*]{}]{}(2004) and Bessell [[*et al.*]{}]{}(2002) who found the star to be a giant with $T_{\rm eff} = 5100$ K and [log g]{}=2.2. Their analyses adopted a classical (1D) atmosphere and LTE for the atmosphere and line formation. The derived abundances corrected to those for a 3D atmosphere by Collet [[*et al.*]{}]{}(2006) with retention of the assumption of LTE are plotted in Figures 4 & 5. The abundance pattern for HE0107-5240 is more similar to the chemically peculiar stars than was HE1327-2326 discussed above. The \[Mg/Fe\] ratio is similar to that of the chemically peculiar stars, as well as normal metal-poor stars. Ca and Ti also show depletions with respect to Fe that are in fair agreement with those of the RV Tauri stars. The upper-limits to Zn and S provide no useful constraint for examining the dust-gas scenario, however the very low upper-limit for Al is consistent with the predictions. This star is presently C-rich; C/O (by number) is about 20 for the 1D analysis and about 6 for the 3D analysis. An initial metallicity of $\simeq -4$ is crudely indicated by the 3D abundances of N and O, but $\simeq -2$ by the 1D C abundance. The result of the 1D to 3D corrections is, as indicated above, a large reduction in the C, N, and O abundances, all derived from diatomic molecules (CH, C$_2$, NH, CN, and OH). In 1D, the abundances are larger than those plotted by about 1.2, 1.1, and 0.7 dex for C, N, and O, respectively. It is an interesting point, possibly one of concern, that NH and CN (assuming the C abundance from CH and C$_2$) give a consistent N abundance using the 1D atmosphere but inconsistent abundances (difference is 0.9 dex) using the 3D atmosphere. The \[C/O\] ratio in HE0107-5240 is similar to the carbon-enhanced metal-poor stars which also range in metallicity from \[Fe/H\] = $-2$ to $-4$ (Sivarani [[*et al.*]{}]{}2006). The appropriateness of using $T_{cond}$ in Figures 4 & 5 is decidedly questionable due to the star being very C-rich. If the star was and is a single star and its evolution from the main sequence to its present status as a giant even approximately follows expectation, the star must have begun life even more C-rich. Then, if dust-gas separation occurs in either a protostellar, circumstellar or wind environment, it is the $T_{cond}$’s for C-rich gas that are the relevant quantities. As noted above, a significant deficiency of Ti is expected for the star accreting substantial amounts of gas. On the other hand, if the star has accreted significant amounts of material from the interstellar medium of the established Galaxy, including the present diffuse interstellar medium, the dust-gas separation was likely to have occured or is occuring in an O-rich medium and, then Figures 4 & 5 use approximately the appropriate $T_{cond}$ values. Accretion of interstellar gas that is O-rich requires then an explanation of how HE0107-5240 is so C-rich and metal-poor. 
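For orientation, the quoted shift in C/O between the 1D and 3D analyses of HE0107-5240 follows directly from the logarithmic corrections; a minimal arithmetic check, using only the rounded numbers given above:

```python
# Rounded values from the text: 1D -> 3D corrections of ~1.2 dex (C) and
# ~0.7 dex (O) for HE0107-5240, with C/O (by number) ~ 20 in the 1D analysis.
co_1d = 20.0
dC, dO = -1.2, -0.7                      # dex corrections applied to C and O
co_3d = co_1d * 10.0 ** (dC - dO)
print(f"C/O (3D) ~ {co_3d:.0f}")         # ~6, matching the 3D value quoted above
```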
Has dust-gas separation modified the composition of HE0557-4840? ---------------------------------------------------------------- The composition is taken from Norris [[*et al.*]{}]{}(2007) who determine that the star is a giant and offer spectroscopic and photometric estimates of the effective temperature of 5100 K and 4900 K, respectively. The surface gravity is given as [log g]{}= 2.2. The mean of their LTE abundances for a 1D atmosphere are plotted in Figure 6. Corrections for adoption of a 3D model atmosphere are small, except for C (and the upper-limits on N and O), and are plotted in Figure 7. The C/O ratio is undetermined for this star. If the initial composition were that suggested by the C abundance and N upper-limit for a 1D atmosphere, the initial \[Fe/H\] $\simeq -3$. Taking the 3D corrections into account, the initial metallicity is likely $\simeq -4$. This lower initial abundance also reduces the scatter in the \[$X$/H\] values, with a hint that \[$X$/H\] is independent of $T_{cond}$. There is no pattern in the elemental abundances for HE0557-4840 that would clearly suggest dust-gas separation. The \[$X$/H\] values for nearly all elements are flat, ranging from $-4.0 > [$X$/H] > -5.5$, and therefore showing similar depletions from a solar composition gas in nearly all elements. This might be unexpected since the normal metal-poor stars with \[Fe/H\] $\sim -4$ show higher $\alpha$-element abundance ratios, and usually an extremely wide range in the $r$-process elements (McWilliam 1997). However, the \[$\alpha$/Fe\] abundances in HE0557-4840 do not fall outside the range of normal metal-poor star abundances (e.g., \[Mg/Fe\] and \[Ti/Fe\] are within expectations). Other Metal-Poor Stars with \[Fe/H\] $\le -3.5$ ------------------------------------------------ It is natural to ask how ubiquitous the dust-gas separation pattern is amongst the extremely metal-poor stars with detailed elemental abundance determinations. An examination of the literature for stars with $-3.5 \ge$ \[Fe/H\] $\ge -4.0$ shows that none of them shows the dust-gas separation signature. Some examples: CS 29498-043 (Aoki [[*et al.*]{}]{}2004) shows an enhancement in Mg similar to those for CNO, and the Zn upper-limit is significantly lower than CNO which is not predicted in the dust-gas separation pattern. CS22949-037 (Depagne [[*et al.*]{}]{}2002) also shows an enhancement in Mg (and Na) that is close to that for CNO, and the Zn determination is signficantly lower whereas dust-gas separation would predict a very similar Zn enhancement to those other elements. Finally, for HE1300+0157 (Frebel [[*et al.*]{}]{}2007) and HE1424-0241 (Cohen [[*et al.*]{}]{}2007), several elements ($\alpha$ elements and iron group elements) have the same enhancements (or nearly so) as the CNO abundances. Of course, each of these stars has interesting and unusual abundance pattern(s), and while not similar to the predictions for dust-gas separation, they are interesting in terms of the progenitor mass or explosion characteristics of their (small number of) contributing supernovae. Infrared excesses ================= An association of circumstellar dust, i.e., an infrared excess, with a star that is a candidate for exposure to dust-gas separation strengthens the explanation for the abundance anomalies. The absence of an infrared excess is not fatal to the star’s candidacy, as shown by many of the Lambda Bootis stars, and the post-AGB star BD+39 4926. 
The broad-band colors and spectral energy distributions from model atmospheres are shown for the three of the ultra metal-poor stars in Figures 8 to 10. JHK magnitudes are from the 2MASS catalogue; UBVRI magnitudes are from Aoki [[*et al.*]{}]{}(2006) for HE1327-2326, Christlieb [[*et al.*]{}]{}(2002) for HE0107-5240, and Beers [[*et al.*]{}]{}(2007) for HE0557-4840. UBVRI magnitudes have been converted to fluxes using the Vega zero points by Colina [[*et al.*]{}]{}(1996) and JHK magnitudes converted using the 2MASS Explanatory Supplement (Sect VI.4a). Reddening has been taken from the reddening maps by Schlegel [[*et al.*]{}]{}(1998); E(B-V) = 0.08, 0.013, and 0.04 for HE1327-2326, HE0107-5240, and HE0557-4840, respectively. These were converted to A$_\lambda$ based on conversions in Cardelli [[*et al.*]{}]{}(1989). The observed fluxes are compared to model flux distributions from the online grid of available MARCS (Gustafsson [[*et al.*]{}]{}1975; Asplund [[*et al.*]{}]{}1997) and Kurucz models (Castelli & Kurucz 2003). The closest model parameters to those adopted for each star and available in both grids were [T$_{\rm eff}$]{}= 6250 K, [log g]{}= 4.0, and \[Fe/H\]=$-5.0$ for HE1327-2326, and [T$_{\rm eff}$]{}= 5000 K, [log g]{}= 2.0, and \[Fe/H\]=$-5.0$ for both HE0107-5240 and HE0557-4840. There are small differences in the models that account for their slight differences in spectral energy distributions, e.g., there are differences in the opacities used, and MARCS models vary stellar mass and \[$\alpha$/Fe\] ratios. The observed fluxes were scaled to overlay the model fluxes best at R, I, and J; the scale factor is estimated for each star, but would be equivalent to (distance/radius)$^2$. No infrared excess is seen in the colours of any of the ultra metal-poor stars compared to the model flux distributions. While this is an important test for the presence of a dusty disk, it is not definitive. Emission from a disk depends on composition, age, distance from the star, and dust temperature. For many post-AGB stars, like HR 4049, the infrared excess is strong in the K-band and accompanied by a suppression in the UV continuum (Figure 1 of Dominik [[*et al.*]{}]{}2003). On the other hand, the circumstellar disk around the post-AGB star HD56126 (Van Winckel & Reyniers 2000) is detected only beyond 4 microns, with no excess observed in the K-band or below (see Figure 1 in the review paper by van Winckel 2003). Furthermore, not all Lambda Bootis stars show evidence of their dusty disks either. Paunzen [[*et al.*]{}]{}(2003) estimated 23% of [*bona fide*]{} Lambda Bootis stars show evidence for circumstellar matter via infrared excesses or Infrared Space Observatory (ISO) and sub-mm CO(2-1) line emission. Also, the presence of dust around other A-type stars does not necessarily imply the presence of the Lambda Bootis abundance pattern (Kamp [[*et al.*]{}]{}2002, Dunkin [[*et al.*]{}]{}1997); in the study by Acke & Waelkens (2004) only one in 24 targets showed a clear Lambda Bootis abundance pattern. Obviously, the search for an infrared excess should, if possible, be extended to longer wavelengths. Although an infrared excess would clearly identify the presence of a dusty disk, the absence of infrared emission does not preclude the dust-gas separation scenario. Are these stars spectroscopic binaries? --------------------------------------- The presence of a companion seems a prerequisite for efficient dust-gas separation in a post-AGB A-type star and quite probably in the RV Tauri stars. 
(It is not seemingly a prerequisite for the Lambda Bootis stars.) For this and other reasons, it would be valuable to show whether all three stars are binaries. Presently, the available data are limited on this point. For HE0107-5240, Bessell [[*et al.*]{}]{}(2004) find no radial velocity variations larger than 0.5 km s$^{-1}$ over $\sim$100 days (or 373 days when including two spectra taken nearly a year earlier). Similarly for HE1327-2326, Frebel [[*et al.*]{}]{}(2006) find no variations larger than their measurement error of 0.7 to 1.0 km s$^{-1}$ over $\sim$100 days (or 383 days if 3 spectra taken nearly a year earlier are included). For HE0557-4840, Norris [[*et al.*]{}]{}(2007) find no velocity variations greater than $\pm0.4$ km s$^{-1}$ from observations spanning 40 days. These data do not find evidence for binarity. However, the gold standard for mass transfer across a binary creating abundance anomalies is McClure’s (1985) study of velocity variations for Barium stars, which included a 5 year campaign, with accuracies of 0.5 km s$^{-1}$. An intensive observing campaign may be needed to test convincingly the hypothesis that these ultra metal-poor stars are or are not spectroscopic binaries. The sulphur abundance --------------------- For HE1327-2326, but not for the cooler stars HE0107-5240 and HE0557-4840, it may be possible to detect S[i]{} lines and determine or at least set an interesting limit on the S abundance. Observations of Lambda Bootis stars, RV Tauri variables, and A-type post-AGB stars show that S (and Zn) do not share the depletions of iron-group elements (Figure 1). The upper-limit on the Zn abundance for HE1327-2326 is consistent with a smooth interpolation of the C, N, O, and Na abundance. The strongest S I lines of three different multiplets are observed around 10455, 9212, and 8695 Å (Caffau [[*et al.*]{}]{}2005, 2007). Given the atmospheric parameters of HE1327-2326, we have performed a spectrum synthesis using MOOG (Sneden 1973) and OSMARCS models (Plez [[*et al.*]{}]{}2000). If the star has undergone dust-gas separation, then the intrinsic metallicity as suggested by the C, N, and O abundance is \[$X$/H\] $\simeq -2$ according to the 3D models but $-1.3$ for 1D models. At \[$X$/H\]=$-2$, the strongest S[i]{} lines of the 10455, 9212, and 8695 Å multiplets are estimated to have LTE equivalent widths of 15, 25, and 1 mÅ. NLTE corrections are thought to strengthen the line from these predictions (Takeda [[*et al.*]{}]{}2005). Thus, the 10455 Å and 9212 Å lines should be detectable; however, the 9212 Å lines are in a region of strong telluric lines. Detection of the 10455 Å triplet was reported by Nissen [[*et al.*]{}]{}(2007). These authors used CRIRES on the VLT to detect with ease the S[i]{} lines in G29-23, a subdwarf with \[Fe/H\] $= -1.7$. Its atmospheric parameters ($T_{\rm eff} = 6200$ K, and [log g]{}= 4.0) are similar to those of HE1327-2326. Its \[Fe/H\] is within the range supposed for HE1327-2326 were the C, N, and O abundances indicative of the initial composition. The challenge will be to extend the CRIRES observation of G29-23 at V = 10.2 to HE1327-2326 at V = 13.5, a decrease of about a factor of 20 in flux. Concluding remarks ================== In this paper, we have outlined the existing evidence for and against the three ultra metal-poor stars being affected by dust-gas separation. 
The evidence for dust-gas separation having occurred at the stellar photosphere is primarily the abundance pattern, which resembles that of known chemically peculiar stars globally where dust-gas separation has been supported by other observations. The other observations usually include one or more of the following for at least one object within a sample: infrared excesses (usually beyond the K band at 2.2 microns) or other evidence of circumstellar material, evidence of binarity, or the observation that sulphur or zinc do not have the same depletions as the rest of the metals. The latter is a critical test because non-depleted abundances of these two elements are more consistent with dust formation, otherwise requiring random and puzzling variations in nucleosynthetic yields. Most of these observational tests do not yet exist for the ultra metal-poor stars, or the results are currently inconclusive regarding dust-gas separation. Searches for radial velocity variations are ongoing in the ultra metal-poor stars to determine if they are in binary systems. Currently, no variations are found, which suggests these stars are not binaries, but this evidence cannot be conclusive because of orbit inclinations and/or potentially small amplitude variations. Sulphur and zinc upper-limits exist for all three ultra metal-poor stars, however these upper-limits are presently too high to distinguish between nucleosynthetic origins or dust-gas separation. K band photometry exists for all three ultra metal-poor stars, and none of them show an infrared excess, however many RV Tau, post-AGB, and Lambda Boo chemically peculiar stars only show infrared excesses beyond the K band, if at all (many have no detectable infrared excesses). In addition, known chemically peculiar stars suggest that neither dust nor binarity is a necessary condition; the Lambda Bootis stars exist as single stars, often without dust; the peculiar post-AGB stars are binaries but one lacks dust; the affected RV Tauri stars have an infrared excess and, although binarity may be a necessary condition, it is as yet unknown observationally that all are binaries. Nevertheless, these observations are possible for the ultra metal-poor stars, e.g. Spitzer observations at mid-IR wavelengths, and examination of the S[I]{} lines in near-IR spectroscopy, but have yet to be performed. One remaining concern is that two of the ultra metal-poor stars do not occupy similar atmospheric parameter ranges to any of the known chemically peculiar stars. HE0107-5240 and HE0557-4840 have temperatures like the RV Tauri variables, yet higher gravities and lower intrinsic metallicities (if CNO are the appropriate proxies). Physically, these stars are expected to have atmospheres with deep convective envelopes, and thus, accreted gas from a stellar wind, a circumstellar shell, or a circumbinary disk should be diluted beyond detection. Existence of these envelopes makes it difficult to understand how dust-gas separation effects can be created and sustained in these giants. Indeed, the stars must have received several tenths of a solar mass of separated gas in order that the anomalies be detectable in these giants. Of course, this is a concern for the RV Tauri and post-AGB stars as well. These difficulties are less severe for HE1327-2326, which is significantly hotter and is expected to have a shallower convective envelope. HE1327-2326 occupies (nearly) the same stellar parameter range as the Lambda Bootis stars. 
Fortunately, detection and analysis of the S[i]{} lines in this star offers a critical test for any chemical peculiarities due to dust-gas separation. Stellar astronomers are conditioned to think about stars with very peculiar abundance anomalies - real or imagined. Commonsense is often a reliable guide to the boundary between real and imagined. In this regard, if the three stars under consideration here are truly stars of a higher intrinsic metallicity, one might ask why examples have not been seen in well studied samples of metal-poor stars. The globular clusters spring to mind. Strömgren photometry should be able to pick out those few stars with a reduced metallicity from the rest of the stars showing the monometallicity that is a mark of a globular cluster. Of course, if these stars are due to the random effects of passing through dense interstellar clouds, and the effects on the abundances are short lived, then one does not expect to find similar stars in globular clusters, and from objective prism surveys such stars would not stand out as peculiar if the resultant metallicity is greater than $\sim -3$. It is only because ultra metal-poor stars are so important as tests of early chemical evolution in the Galaxy that these stars were picked out and studied in detail from high resolution spectroscopy. It is important that the suspicion be laid to rest that they are not truly ultra metal-poor but chemically peculiar stars of a more common, if low, metallicity. Then, the focus may be placed exclusively on finding an explanation in terms of stellar nucleosynthesis and the chemical evolution of the young Galaxy (e.g., Iwamoto [[*et al.*]{}]{}2005). We are grateful to P. Bonifacio for valuable discussions on metal-poor stars and on observations and model calculations of the S[I]{} lines. We thank Anna Frebel for providing abundances in advance of publication and Katharina Lodders for helpful comments on dust formation in C-rich material. Thanks to Ian Roederer, Inga Kamp, and the anonymous referee for many helpful comments on this manuscript. DLL’s contributions have been supported by the Robert A. Welch Foundation of Houston, Texas. KAV would like to thank NSERC for support through a Discovery grant. Acke B., Waelkens C., 2004, , 427, 1009 Aoki W., Norris J.E., Ryan E.G., Beers T.C., Christlieb N., Tsangarides S., Ando H., 2004, , 608, 971 Aoki W., Frebel A., Christlieb N., [[*et al.*]{}]{}, 2006, , 639, 897 Andrievsky S.M., [[*et al.*]{}]{}, 2002, , 396 641 Asplund M., Gustafsson B., Kiselman D., Eriksson K., 1997, , 318, 521 Aumann H.H., Beichman C.A., Gillett F.C., de Jong T., Houck J.R., Low F.J., Neugebauer G., Walker R.G., Wesselius P.R., 1984, , 278, 23 Baschek B., Searle L., 1969, , 155, 537 Baum[ü]{}ller D., Gehren T., 1997, , 325, 1088 Beers T., [[*et al.*]{}]{}, 2007, , 168, 128 Bessell M.S., Christlieb N., Gustafsson B., 2004, , 612, L61 Burbidge E.M., Burbidge G.R., 1956, , 124, 116 Caffau E., [[*et al.*]{}]{}2007, , 470, 699 Caffau E., [[*et al.*]{}]{}2005, , 441, 533 Cardelli J., [[*et al.*]{}]{}1989, , 345, 245 Castelli R., Kurucz R.L., 2003, Proc. of IAU Symp. 210, Modelling of Stellar Asmospheres, eds. N. 
Piskunov et al., poster A20 on the enclosed CD-ROM (astro-ph/0405087) Christlieb N., [[*et al.*]{}]{}2002, Nature, 419, 904 Christlieb N., [[*et al.*]{}]{}2004, , 603, 708 Cohen J., McWilliam A., Christlieb N., Shectman S., Thompson I., Melendez J., Wisotzki L., Reimers D., 2007, , 659, L161 Colina L., Bohlin R., Castelli F., 1996, STScI Instrument Science Report CAL/SCS-008 Collet R., Asplund M., Trampedach R., 2006, , 644, 121 Depagne E., [[*et al.*]{}]{}, 2007, , 390, 187 Dominik C., Dullemond C.P., Cami J., van Winckel H., 2003, , 397, 595 Dunkin S.K., Barlow M.J., Ryan S.G. 1997, MNRAS, 286, 604 Frebel A., [[*et al.*]{}]{}, 2005, Nature, 434, 871 Frebel A., [[*et al.*]{}]{}, 2006, , 638, L17 Frebel A., [[*et al.*]{}]{}, 2007, , 658, 534 Frebel A., & Christlieb N., 2007, priv. comm. Gáspár, A., Su K.Y.L., Rieke G.H., Balog Z., Kamp I., Martínez-Galarza J.R., Stapelfeldt K., 2007, , in press (astro-ph/0709.4247) Gehrz R.D., 1972, , 178, 715 Giridhar S., Rao N.K., Lambert D.L., 1994, , 437, 476 Giridhar S., Lambert D.L., Reddy B.E., Gonzalez G., Yong D., 2005, , 627, 432 Gonzalez G., Lambert D.L., 1997, , 114, 341 Gratton R., Carretta E., Eriksson K., Gustafsson B., 1999, , 350, 955 Gray R.O., Corbally C.J., 1998, , 116, 2530 Gustafsson B., Bell R.A., Eriksson K., Nordlund A., 1975, , 42, 407 Kamp I., Paunzen E., 2002, MNRAS, 335, 45 Kamp I., Hempel M., Holweger H., 2002, , 388, 978 Lambert D.L., Hinkle K.H., Luck R.E., 1988, , 333, 917 Lemke M., Venn K.A., 1996, , 309, 558 Lodders K., Fegley B., 1995, Meteoritics, 30, 661 Lodders K., Fegley B., 1999, in IAU Symp. 191, Asymptotic Giant Branch Stars, ed. T. Le Bertre, A. Lébre, & C. Waelkens (San Francisco: ASP), 279 Lodders K., 2003, , 591, 1220 McClure R.D., 1985, in Cool Stars with Excesses of Heavy Elements, Proceedings of the Strasbourg Observatory Colloquium, (Dordrecht: Reidel), 327 McWilliam A., 1997, ARAA, 35, 503 Morgan W.W., Keenan P.C., Kellman E., 1943, An Atlas of Stellar Spectra with an Outline of Spectral Classification (Chicago: Univ. Chicago Press) Norris J.E., Christlieb N., Korn A.J., Eriksson K., Bessel M.S., Beers T.C. Wisotzki L., Reimers D., 2007 , in press (astro-ph/0707.2657v2) Paunzen E., Kamp I., Weiss W.W., Wiesemeyer H., 2003, , 404, 579 Plez B., 2000, in The Carbon Star Phenomenon, IAU 177 proceedings, p. 71 (Kluwer: Dordrecht) Ed. R.F. Wing. Rao N.K., Reddy B.E., 2005, MNRAS, 357, 235 Savage B., Sembach K., 1996, , 470, 893 Schlegel D.J., Finkbeiner D.P., Davis M., 1998, , 500, 525 Sivarani T., [[*et al.*]{}]{}, 2006, , 459, 125 Sneden C., 1973, , 184, 839 Spite M., [[*et al.*]{}]{}2005, , 430, 655 Su K.Y.L., [[*et al.*]{}]{}2005, , 628, 487 Suda T., Aikawa M., Machida M.N., Fujimoto M.Y., Iben I. 
Jr., 2004, , 611, 476 Takeda Y., [[*et al.*]{}]{}2002, PASJ, 54, 765 Takeda Y., Hashimoto O., Taguchi H., Yoshioka K., Takada-Hidai M., Saito Y., Honda S., 2005, PASJ, 57, 751 Tominaga N., Umeda H., Nomoto K., 2007, , 660, 516 Tumlinson J., 2007a, , 664, 63 Tumlinson J., 2007b, , 665, 1361 Turcotte S., Charbonneau P., 1993, , 413, 376 Umeda H, Nomoto K., 2003, Nature, 422, 871 van Winckel H., Waelkens C., Waters L.B.F.M., 1995, , 293, L25 van Winckel H., Mathis J.S., Waelkens C., 1992, Nature, 356, 500 van Winckel H., Reyniers M., 2000, , 354, 135 van Winckel H., 2003, ARAA, 41, 391 van Winckel H., 2007, in Baltic Astronomy, 16, 112 van Winckel H., Lloyd Evans T., Reyniers M., Deroo P., Gielen C., 2006, MmSAI, 77, 943 Venn K.A., Lambert D.L., 1990, , 363, 234 Waelkens C., van Winckel H., Bogaert E., Trams N.R., 1991, , 251, 495
--- abstract: 'The minimal coupling procedure, which is employed in standard Yang–Mills theories, appears to be ambiguous in the case of gravity. We propose a slight modification of this procedure, which removes the ambiguity. Our modification justifies some earlier results concerning the consequences of the Poincar[é]{} gauge theory of gravity. In particular, the predictions of the Einstein–Cartan theory with fermionic matter are rendered unique.' author: - Marcin Kaźmierczak title: 'Modified coupling procedure for the Poincar[é]{} gauge theory of gravity' --- Introduction {#section1} ============ Since the introduction by Yang and Mills of the non–Abelian gauge theories [@YaMi], attempts have been undertaken of describing all the known interactions as emerging from the localization of some fundamental symmetries of the laws of physics. It is now clear that all the non–gravitational fundamental interactions can be successfully given such an interpretation. The Yang–Mills (YM) theories constitute a formal basis for the standard model of particle physics. Although the attempts to describe gravity as a gauge theory were initiated by Utiyama [@Ut] within a mere two years after the pioneering work of Yang and Mills, the construction of this theory seems yet not to be satisfactorily completed. If a field theory in Minkowski space is given, this theory being symmetric under the global action of a representation of a Lie group, the natural way to introduce the corresponding interaction within the spirit of YM is to apply the minimal coupling procedure (MCP). However, trying to apply MCP in order to pass from a field theory in flat space to a Riemann–Cartan (RC) space (i.e. a manifold equipped with a metric tensor and a metric connection) results in difficulties. This is because adding a divergence to the flat space Lagrangian density, which is a symmetry transformation, leads to the non–equivalent theory in curved space after MCP is applied. Although this problem was observed already by Kibble[@Kib1], it has been largely ignored in the subsequent investigations concerning EC theory. The resulting ambiguity can be physically important for the standard Einstein–Cartan theory and its modifications [@Kazm1; @Kazm2]. It seems that MCP should be somehow modified for the sake of connections with torsion, so that it gives equivalent results for equivalent flat space Lagrangians. An attempt to establish such a modification was made by Saa [@Saa1; @Saa2]. Unfortunately, Saa’s solution results in significant departures from general relativity, which seem incompatible with observable data [@BFY][@FY], unless some additional assumptions of rather artificial nature are made, such as demanding a priori that part of the torsion tensor vanish [@RMAS]. The main purpose of this paper is to introduce an alternative modification of MCP, which also eliminates the ambiguity. Unlike Saa’s proposal, our approach does not lead to radical changes in the predictions of the theory. In the case of gravity with fermions, the procedure simply justifies the earlier results of [@HD; @HH; @Ker; @Rumpf; @PR]. These results were obtained partly ‘by chance’, as the flat space Dirac Lagrangian was randomly selected from the infinity of equally good possibilities. The gauge approach to gravity and the ambiguity of minimal coupling {#section2} =================================================================== Let us recall the classical formalism of a YM theory of a Lie group $G$. 
Let \[S\] $$S[\phi]=\int\mathcal{L}\left(\phi,\partial_{\mu}\phi\right)d^4x=\int\mathfrak{L}\left(\phi,d\phi\right)$$ represent the action of a field theory in Minkowski space $M$. Here $\mathcal{L}$ is a Lagrangian density and $\mathfrak{L}$ a Lagrangian four–form. Assume that $\mathcal{V}$ is a (finite dimensional) linear space in which fields $\phi$ take their values, $\phi:M\rightarrow \mathcal{V}$, and $\pi$ is a representation of $Lie(G)$ on $\mathcal{V}$. Let $\rho$ denote the corresponding representation of the group[^1], $\rho{\left(}{\exp (\mathfrak{g})}{\right)}=\exp{\left(}{\pi(\mathfrak{g})}{\right)}$. If the Lagrangian four–form is invariant under its global action $\phi\rightarrow\phi'=\rho(g)\phi$, one can introduce an interaction associated to the symmetry group $G$ by allowing the group element $g$ to depend on space–time point and demanding the theory to be invariant under the local action of $G$. This can be most easily achieved by performing the replacement \[MCPYM\] $$d\phi\rightarrow d\phi+\mathcal{A}\phi\ ,$$ where $\mathcal{A}$ is a $Lin(\mathcal{V})$–valued one–form field on $M$ ($Lin(\mathcal{V})$ being the set of linear maps of $\mathcal{V}$ into itself) which transforms under the local action of $G$ as \[gauge\] $$\mathcal{A}'=\rho(g)\mathcal{A}\rho^{-1}(g)-d\rho(g)\,\rho^{-1}(g)\ .$$ In the standard YM one requires that $\mathcal{A}$ takes values in a linear subspace $Ran(\pi):=\{\pi(\mathfrak{g}):\mathfrak{g}\in Lie(G)\}\subset Lin(\mathcal{V})$, but this requirement is not necessary to make the action invariant under local transformations. We shall adopt a more general approach, in which $\mathcal{A}$ assumes the form \[AandB\] $$\mathcal{A}=\mathbb{A}+\mathbb{B}(\mathbb{A},e)\ ,$$ where $\mathbb{A}$ is the usual YM connection taking values in $Ran(\pi)$ and transforming according to (\[gauge\]), $e$ denotes an orthonormal basis of one–form fields serving physically as a reference frame at each point of space–time[^2], $\mathbb{B}(\mathbb{A},e)$ is a $Ran(\pi)^{\perp}$–valued one–form on $M$. Here $\perp$ denotes the orthogonal complement with respect to some natural scalar product on $Lin(\mathcal{V})$. The simplest candidate for this scalar product is $\langle\langle X,Y \rangle\rangle=trace{\left(}{X^{\dag}Y}{\right)}$, where $\dag$ stands for Hermitian conjugation of a matrix. However, if $\mathcal{V}$ admits a $\rho$–invariant scalar product $\langle,\rangle_{\rho}$, such that $\forall v,w\in\mathcal{V}$, $g\in G$, $\langle\rho(g)v,\rho(g)w\rangle_{\rho}=\langle v,w\rangle_{\rho}$, then the use of the induced scalar product $\langle\langle,\rangle\rangle_{\rho}$ on $Lin(\mathcal{V})$ satisfying $\langle\langle\rho(g)X\rho^{-1}(g),\rho(g)Y\rho^{-1}(g)\rangle\rangle_{\rho}=\langle\langle X,Y\rangle\rangle_{\rho}$ may seem esthetically more appealing. This product may not be positive–definite, but if the subspace $Ran(\pi)\subset Lin(\mathcal{V})$ is nondegenerate with respect to $\langle\langle,\rangle\rangle_{\rho}$, then the space of linear maps decouples into a simple sum $Lin(\mathcal{V})=Ran(\pi)\oplus Ran(\pi)^{\perp}$ and hence $\mathbb{A}$ and $\mathbb{B}(\mathbb{A},e)$ are uniquely determined by $\mathcal{A}$. In order not to introduce additional fields, $\mathbb{B}$ is required to be determined by $\mathbb{A}$ and $e$. In order not to destroy the transformation law (\[gauge\]), it is also required that $\mathbb{B}(\mathbb{A}',e')=\rho(g)\mathbb{B}(\mathbb{A},e)\rho^{-1}(g)$. Our final requirement is that the coupling procedure thus obtained be free of the ambiguity corresponding to the possibility of the addition of a divergence to the initial matter action. 
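As a quick consistency check (using only the transformation rules just stated), the requirement on $\mathbb{B}$ indeed guarantees that the full $\mathcal{A}$ of (\[AandB\]) transforms according to (\[gauge\]): $$\mathcal{A}'=\mathbb{A}'+\mathbb{B}(\mathbb{A}',e')=\rho(g)\mathbb{A}\rho^{-1}(g)-d\rho(g)\,\rho^{-1}(g)+\rho(g)\mathbb{B}(\mathbb{A},e)\rho^{-1}(g)=\rho(g)\mathcal{A}\rho^{-1}(g)-d\rho(g)\,\rho^{-1}(g)\ .$$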
It is remarkable that in the case of the gravitational interaction and fermions these ideas, together with the natural requirement that the Leibniz rule holds for vector fields composed of spinors, fix the form of $\mathbb{B}(\mathbb{A},e)$ (up to terms that can be absorbed by other known fundamental interactions and do not influence the resulting connection on the base manifold), as we will see below. All the constructions of YM can be accomplished in terms of $\mathbb{A}$ and its curvature $\mathbb{F}=d\mathbb{A}+\mathbb{A}\wedge\mathbb{A}$. The role of $\mathbb{B}$ is only to modify the coupling procedure such that it is unique. In the case of gravity, it is not sufficient to perform the replacement (\[MCPYM\]) – one needs also to replace the Minkowski space (holonomic) basis of orthonormal one–forms $dx^{\mu}$ by the cotetrad $e^a$ and redefine the geometric structure of the base manifold such that the original Minkowski space $M$ becomes the RC space $\mathcal{M}(e,\omega)$ (here $\omega$ is a spin–connection that can be extracted out of $\mathbb{A}$). We shall use the Dirac field case as an instructive example. In particle physics, the most frequently used Lagrangian four–form for the Dirac field is \[LF0\] $$\mathfrak{L}_{F0}= -i\star\left(dx_{\mu}\right)\wedge\ov{\psi}\gamma^{\mu} d\psi-m\ov{\psi}\psi\, d^4x =\left(i\ov{\psi}\gamma^{\mu}\partial_{\mu}\psi-m\ov{\psi}\psi\right) d^4x\ .$$ Here $\gamma^{\mu}$ are the Dirac matrices obeying $\gamma^{\mu}\gamma^{\nu}+\gamma^{\nu}\gamma^{\mu}=2\eta^{\mu\nu}$, where $\eta=diag(1,-1,-1,-1)$ is the Minkowski matrix, and $\ov{\psi}:={\psi}^{\dagger}\gamma^0$, where ${\psi}^{\dagger}$ is a Hermitian conjugation of a column matrix (think of $\psi$ as a column of four complex–valued functions on space–time). This four–form is invariant under the global action of the Poincar[é]{} group $$x^{\mu}\rightarrow x'^{\mu}={\Lambda^{\mu}}_{\nu} x^{\nu}+a^{\mu},\qquad \psi\rightarrow\psi'=S(\Lambda)\psi,$$ $$S(\Lambda(\varepsilon)):=\exp\left(-\frac{i}{4}\varepsilon_{\mu\nu}\Sigma^{\mu\nu}\right),\qquad \Sigma^{\mu\nu}:=\frac{i}{2}[\gamma^{\mu},\gamma^{\nu}]\ ,$$ where $a^{\mu}$ and $\varepsilon_{\mu\nu}=-\varepsilon_{\nu\mu}$ are the parameters of the transformation. In order to make the symmetry local, it is sufficient to replace the differentials by covariant differentials (thus introducing the connection $\omega$), to replace the basis of one–forms $dx^{\mu}$ of $M$ by the cotetrad basis $e^a$ on the resulting RC space $\mathcal{M}(e,\omega)$, and to use the Hodge star operator $\star$ adapted to $\mathcal{M}$. The resulting Lagrangian four–form is \[LF0gr\] $$\mathfrak{L}_{F0}= -i\star\left(e_a\right)\wedge\ov{\psi}\gamma^aD\psi-m\ov{\psi}\psi\,\epsilon\ ,\qquad D=d-\frac{i}{4}\omega_{ab}\Sigma^{ab}$$ (the matrices $\gamma^a$, $a=0,\dots,3$ are just the same as $\gamma^{\mu}$, $\mu=0,\dots,3$). Here $\epsilon=e^0\wedge e^1\wedge e^2\wedge e^3$ is the canonical volume element on $\mathcal{M}$. The coupling procedure of this kind will be referred to as the minimal coupling procedure (MCP) for the gravitational interaction. The one–forms $\omega_{ab}=-\omega_{ba}$, which endow the space–time with the metric–compatible connection, may be interpreted as gauge–fields corresponding to Lorentz rotations. Although the relation of $e^a$ to the translational gauge–fields is more subtle, the procedure can be given interpretation in the framework of gauge theory of the Poincar[é]{} group (see [@Kazm3] for an exhaustive and simple treatment). In EC theory, the gauge–field–part of the Lagrangian is taken to be $\mathfrak{L}_G=-\frac{1}{4k}\epsilon_{abcd}e^a\wedge e^b\wedge \Omega^{cd}$, where $k$ is a constant and ${\Omega^a}_b=d{\omega^a}_b+{\omega^a}_c\wedge {\omega^c}_b$ the curvature two–form on $\mathcal{M}$. 
It is crucial that the first–order formulation of general relativity is much more adequate for gauge formulation than the standard second–order one. We shall now address the problem which the first–order approach entails. Let (\[S\]) denote the action functional of a classical field theory in Minkowski space $M$. It is well known that the transformation \[Lch\] $$\mathcal{L}\rightarrow\mathcal{L}'=\mathcal{L}+\partial_{\mu}V^{\mu}$$ of the Lagrangian density changes $\mathfrak{L}$ by a differential. When introducing a new interaction, it seems reasonable to require that the resulting theory be independent of whether we have added a divergence to the initial Lagrangian density or not. Let us now specialize again to the Dirac field and consider the effect of the transformation (\[Lch\]) of the initial Lagrangian on the final Lagrangian four–form on $\mathcal{M}$. We shall consider the vector field of the form \[V\] $$V^{\mu}=aJ_{(V)}^{\mu}+bJ_{(A)}^{\mu},\qquad a,b\in\mathbb{C},$$ where $J_{(V)}^{\mu}=\ov{\psi}\gamma^{\mu}\psi$ and $J_{(A)}^{\mu}=\ov{\psi}\gamma^{\mu}\gamma^5\psi$ are the Dirac vector and axial currents (this is the only possible form which is quadratic in $\psi$ and transforms as a vector under proper Lorentz transformations). It is straightforward to check that the following Leibniz rule applies \[Leib\] $$\left(D\ov{\psi}\right)C^a\psi+\ov{\psi}C^aD\psi=d\left(\ov{\psi}C^a\psi\right)+{\omega^a}_b\left(\ov{\psi}C^b\psi\right),$$ where $C^a:=a\gamma^a+b\gamma^a\gamma^5$. Hence under the minimal coupling $d\psi\rightarrow D\psi$ the differential $dV^{\mu}$ of (\[V\]) will pass into $DV^a=dV^a+{\omega^a}_bV^b$. Using the identity $\partial_{\mu}V^{\mu}d^4x=-\star (dx_{\mu})\wedge dV^{\mu}$ one can then conclude that the change in the resulting Lagrangian four–form on $\mathcal{M}$ (under the transformation (\[Lch\]) of the initial Lagrangian density) will be \[TVtetr\] $$\mathfrak{L}'-\mathfrak{L}=d\left(V\lrcorner\epsilon\right)-T_aV^a\,\epsilon,$$ where $T^a={T^{ba}}_b$ is the torsion trace (the components of the torsion tensor in the tetrad basis are given by the equation $\frac{1}{2}{T^a}_{bc}e^b\wedge e^c=de^a+{\omega^a}_b\wedge e^b$) and $\lrcorner$ denotes the internal product. When deriving (\[TVtetr\]), it is necessary to use metricity of $\omega$. Within the framework of classical general relativity, where the torsion of the connection is assumed to vanish, the result would be again a differential. In EC theory the torsion is determined by the spin of matter and does not vanish in general. Hence, equivalent theories of the Dirac field in flat space can lead to non–equivalent theories with gravitation. Surprisingly, this fact has been used by many authors to remove a serious pathology of the Lagrangian (\[LF0gr\]). This Lagrangian is neither real, nor does it differ by divergence from the real one. As a result, the equations obtained by varying with respect to $\psi$ and $\ov{\psi}$ are not equivalent and together impose too severe restrictions on the field. The commonly accepted solution is to adopt \[LFR\] $$\mathfrak{L}_{FR}= -\frac{i}{2}\star\left(dx_{\mu}\right)\wedge\left(\ov{\psi}\gamma^{\mu} d\psi-d\ov{\psi}\,\gamma^{\mu}\psi\right)-m\ov{\psi}\psi\,d^4x$$ as an appropriate flat space Lagrangian ((\[LFR\]) differs from (\[LF0\]) by a differential). The application of MCP yields $$\mathfrak{L}_{FR}= -\frac{i}{2}\star\left(e_a\right)\wedge\left(\ov{\psi}\gamma^a D\psi-D\ov{\psi}\,\gamma^a\psi\right)-m\ov{\psi}\psi\,\epsilon\ .$$ This choice of Lagrangian served as the basis for physical investigations in numerous papers. But the reality requirement does not fix the theory uniquely. We can next add to $\mathcal{L}_{FR}$ the divergence of a vector field of the form (\[V\]), where now the parameters $a$, $b$ are required to be real, since we do not want to destroy the reality of the Lagrangian. This may lead to meaningful physical effects [@Kazm1; @Kazm2]. 
Hence, the standard MCP for first–order gravity appears to involve an ambiguity. How to remove the ambiguity? {#ambr} ============================ For the Dirac field, the linear space of the representation of the gravitational gauge group is $\mathbb{C}^4$ and the space $Ran(\pi)$ is spanned by the matrices $\Sigma^{ab}$. The natural Lorentz invariant scalar product $\langle\phi,\psi\rangle_{\rho}=\phi^{\dag}\gamma^0\psi$ on $\mathbb{C}^4$ induces the product $\langle\langle X,Y\rangle\rangle_{\rho}=trace{\left(}{\gamma^0X^{\dag}\gamma^0Y}{\right)}$ on $Lin(\mathcal{V})$. For any representation of the matrices $\gamma^a$ that is unitarily equivalent to the Dirac representation, the orthogonal complement is spanned by ${\bf 1}$, $\gamma^5$, $\gamma^a$, $\gamma^5\gamma^a$. Hence we have $$\mathcal{D}=D+\mathbb{B},\qquad D=d+\omega,\qquad \omega=-\frac{i}{4}\omega_{ab}\Sigma^{ab},$$ $$\mathbb{B}=\chi+\kappa\gamma^5+\tau_a\gamma^a+\rho_a\gamma^5\gamma^a,$$ where $\chi$, $\kappa$, $\tau_a$, $\rho_a$ are complex valued one–forms on space–time. We will require that the Leibniz rule hold for the Dirac vector and axial currents, $$\left(\mathcal{D}\ov{\psi}\right)\gamma^a\psi+\ov{\psi}\gamma^a\mathcal{D}\psi= dJ_{(V)}^a+{\tilde{\omega}}{^a}_b\,J_{(V)}^b,$$ $$\left(\mathcal{D}\ov{\psi}\right)\gamma^a\gamma^5\psi+\ov{\psi}\gamma^a\gamma^5\mathcal{D}\psi= dJ_{(A)}^a+{\tilde{\omega}}{^a}_b\,J_{(A)}^b,$$ where $\mathcal{D}\ov{\psi}:=(\mathcal{D}\psi)^{\dag}\gamma^0$ and $\tilde{\omega}{^a}_b$ represents a modified connection on the RC space. Straightforward calculations show that these equations are satisfied if and only if $$\tilde{\omega}{^a}_b={\omega^a}_b+\lambda\,\delta^a_b,\qquad \mathbb{B}=\frac{1}{2}\lambda\,{\bf 1}+i\mu_1{\bf 1}+i\mu_2\gamma^5,$$ where $\lambda:=2Re{\left(}{\chi}{\right)}$, $\mu_1:=Im{\left(}{\chi}{\right)}$, $\mu_2:=Im(\kappa)$ are real–valued one–forms. Note that the one–forms $\mu_1$ and $\mu_2$ do not influence the resulting connection on the RC space. If non–gravitational interactions were included, the components of these one–forms could be hidden in the gauge fields corresponding to the localization of the global symmetry of the change of phase $\psi\rightarrow e^{i\alpha}\psi$ and the approximate symmetry under the chiral transformation $\psi\rightarrow e^{i\alpha\gamma^5}\psi$. In order not to involve non–gravitational interactions, one needs to set $\mu_1$ and $\mu_2$ to zero. According to the ideas presented at the beginning of this report, $\lambda$ should be determined by $\omega$ and $e$ in such a way that it is a scalar (compare (\[AandB\]) and the remarks concerning the dependence of $\mathbb{B}$ on $\mathbb{A}$ and $e$). What is more, the procedure is expected to be free of the ambiguity. To see that all the requirements can be accomplished, note that the divergence $\partial_{\mu}V^{\mu}d^4x=-\star(dx_{\mu})\wedge dV^{\mu}$ will pass into $-\star(e_a)\wedge{\left(}{dV^a+{\tilde{\omega}}{^a}_bV^b}{\right)}$. Hence, (\[TVtetr\]) implies that the procedure will yield unique results for generic $\omega$ if and only if $\lambda=\mathbb{T}$, where $\mathbb{T}=T_ae^a$ is the torsion–trace–one–form, which is indeed a scalar under local Lorentz (or Poincar[é]{}) transformations $\omega\rightarrow \Lambda\omega\Lambda^{-1}-d\Lambda\Lambda^{-1}$, $e\rightarrow\Lambda e$. Hence, there exists precisely one coupling procedure which is free of the ambiguity and satisfies all the requirements. From the perspective of the base manifold $\mathcal{M}$, it seems that the procedure could be stated briefly by saying that the modified connection $\tilde{\omega}{^a}_b={\omega^a}_b+\mathbb{T}\delta^a_b$ should be used in MCP, instead of the original metric connection $\omega$ entering $\mathfrak{L}_G$. 
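To spell out the cancellation behind the condition $\lambda=\mathbb{T}$, here is a short check in the conventions used above (taking $\star(e_a)=e_a\lrcorner\epsilon$, so that $e^b\wedge\left(e_a\lrcorner\epsilon\right)=\delta^b_a\,\epsilon$), relying only on (\[TVtetr\]): $$-\star(e_a)\wedge\left(dV^a+\tilde{\omega}{^a}_bV^b\right)=-\star(e_a)\wedge\left(dV^a+{\omega^a}_bV^b\right)+\lambda_aV^a\,\epsilon=d\left(V\lrcorner\epsilon\right)-\left(T_a-\lambda_a\right)V^a\,\epsilon\ ,$$ which is a pure differential for arbitrary $a$, $b$ in (\[V\]) precisely when $\lambda_a=T_a$, i.e. $\lambda=\mathbb{T}$.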
However, it would not be clear then how the new connection is to be implemented on spinors (the simple substitution $\omega\rightarrow\tilde{\omega}$ in $D\psi$ would not work well). What is more, there are other possibilities of modifying the connection so that its application in MCP guarantees uniqueness. The simplest way to achieve this would be to subtract the contortion tensor. This would result in the Levi–Civita connection, reducing the formalism effectively to the second–order one. The torsion would entirely disappear from the theory. A less drastic possibility could be to retain only the antisymmetric part of the torsion tensor by adopting $\tilde{\omega}_{ab}=\stackrel{\circ}{\omega}_{ab}-\frac{1}{2}T_{[abc]}e^c$, where $\stackrel{\circ}{\omega}$ is the Levi–Civita part of $\omega$ and $T_{abc}$ the torsion of $\omega$. For the Dirac field, all such possibilities necessarily violate one of the assumptions supporting our approach (the two that were mentioned produce $\mathbb{B}$ that does not take values in the orthogonal complement of $Ran(\pi)$ – this makes it impossible to read out the connection $\omega$, that ought to be used in the construction of $\mathfrak{L}_G$, from given $\mathcal{A}=\mathbb{A}+\mathbb{B}(\mathbb{A},e)$). A different approach is possible, in which the corrected connection takes values in an extension of the original Lie algebra. One should specify what kind of extensions are allowed, how the original connection is to be retrieved from the extended one, and how to establish the dependence of the Yang–Mills fields of the extension on those of the original theory. In the case discussed here, extending $so(1,3)$ by dilatations would work well. However, the details of such an abstract approach ought to be considered with care and this will not be done in this brief report. The new connection $\tilde{\omega}$ on $\mathcal{M}$ is not metric. One could hope that $\omega$ could be obtained from $\tilde{\omega}$ as its metric part. This is however not the case. Let us recall that the coefficients ${\Gamma^a}_{bc}$ of any connection can be decomposed as $${\Gamma^a}_{bc}=\stackrel{\circ}{\Gamma}{^a}_{bc}+{K^a}_{bc}+{L^a}_{bc},$$ where $\stackrel{\circ}{\Gamma}{^a}_{bc}$ is the Levi–Civita part determined by the metric $g=\eta_{ab}e^a\otimes e^b$, $K_{abc}:=\frac{1}{2}(T_{cab}+T_{bac}-T_{abc})$ the contortion and $L_{abc}=-\frac{1}{2}{\left(}{\nabla_bg_{ca}+\nabla_cg_{ba}-\nabla_ag_{bc}}{\right)}$ the nonmetricity. The contortion of $\tilde{\omega}$ is related to that of $\omega$ by $\tilde{K}_{abc}=K_{abc}+\eta_{cb}T_a-\eta_{ca}T_b$. The metric part of $\tilde{\Gamma}_{abc}$ is therefore equal to $\Gamma_{abc}+\eta_{cb}T_a-\eta_{ca}T_b$, and not to $\Gamma_{abc}$. Acknowledgements {#acknowledgements .unnumbered} ================ I wish to thank the referee of PRD for bringing the solution based on the anti–symmetric part of the torsion to my attention and for other important remarks that improved this report. I am also grateful to Wojciech Kami[ń]{}ski, Jerzy Lewandowski and Andrzej Trautman for helpful comments. This work was partially supported by the 2007-2010 research project N202 n081 32/1844, the Foundation for Polish Science grant ”Master”. [99]{} C. Yang and R. Mills, “Conservation of isotopic spin and isotopic gauge invariance”, Phys. Rev. [**96**]{}, 191 (1954). R. Utiyama, “Invariant theoretical interpretation of interaction”, Phys. Rev. [**101**]{}, 1597 (1956). T. Kibble, “Lorentz invariance and the gravitational field”, J. Math. Phys.  [**2**]{}, 212–221 (1961). M. 
Kazmierczak, “Nonuniqueness of gravity induced fermion interaction in the Einstein-Cartan theory”, Phys. Rev. D [**78**]{}, 124025 (2008) \[arXiv:0811.1932\]. M. Kazmierczak, “Einstein–Cartan gravity with Holst term and fermions”, Phys. Rev. D [**79**]{}, 064029 (2009) \[arXiv:0812.1298\]. A. Saa, “Propagating torsion from first principles”, Gen. Rel. Grav.  [**29**]{}, 205 (1997) \[arXiv:gr-qc/9609011\]. A. Saa, “Volume–forms and minimal action principles in affine manifolds”, J. Geom. Phys.  [**15**]{}, 102 (1995) \[arXiv:hep-th/9308087\]. T. Boyadjiev, P. Fiziev and S. Yazadjiev, “Neutron star in presence of torsion-dilaton field”, Class. Quant. Grav.  [**16**]{}, 2359 (1999) \[arXiv:gr-qc/9803084\]. P. Fiziev and S. Yazadjiev, “Solar System Experiments and the Interpretation of Saa’s Model of Gravity with Propagating Torsion as a Theory with Variable Plank “Constant” ", Mod. Phys. Lett.  [**A14**]{}, 511 (1999) \[arXiv:gr-qc/9807025\]. R. Mosna and A. Saa, “Volume elements and torsion”, J. Math. Phys.  [**46**]{}, 112502 (2005) \[arXiv:gr-qc/0505146\]. F. Hehl and B. Data, “Nonlinear spinor equation and asymmetric connection in general relativity”, J. Math. Phys.  [**12**]{}, 1334 (1971). F. Hehl and P.  von der Heyde, “Spin and the structure of space–time”, Ann. Inst. Henri Poincaré  [**A19**]{}, 179 (1973). D. Kerlick, “Cosmology and particle pair production via gravitational spin–spin interaction in the Einstein–Cartan–Sciama–Kibble theory of gravity”, Phys. Rev. D [**12**]{}, 3004 (1975). H. Rumpf, “Creation of Dirac Particles in General Relativity with Torsion and Electromagnetism I, II, III”, Gen. Rel. Grav. [**10**]{} 509, 525, 647 (1979). A. Perez and C. Rovelli, “Physical effects of the Immirzi parameter”, Phys. Rev.  D [**73**]{}, 044013 (2006) \[arXiv:gr-qc/0505081\]. M. Kazmierczak, “On the choice of coupling procedure for the Poincar[é]{} gauge theory of gravity”, (2009) \[arXiv:0902.4432\]. [^1]: More precisely, in a generic case $\rho$ is a representation of the universal covering group of $G$, which may not be a representation of $G$ itself. [^2]: In the case of non–gravitational interactions, this frame can be fixed once and for all and the dependence on $e$ does not have to be considered. In the case of gravity, an orthonormal cotetrad can be constructed from the Poincar[é]{} gauge fields. It could be then interpreted as a part of $\mathbb{A}$, if the representation $\pi$ of the Poincar[é]{} algebra was faithful. However, physical matter fields usually transform trivially with respect to translations and representations $\pi$ are not faithful. it is therefore necessary to assume separately that $\mathbb{B}$ depends on $e$.
--- abstract: 'Several recent studies have shown that about half of the massive galaxies at $z\sim2$ are in a quiescent phase. Moreover, these galaxies are commonly found to be ultra-compact with half-light radii of $\sim1$ kpc. We have obtained a $\sim29$ hr spectrum of a typical quiescent, ultra-dense galaxy at $z=2.1865$ with the Gemini Near-Infrared Spectrograph. The spectrum exhibits a strong optical break and several absorption features, which have not previously been detected in $z>2$ quiescent galaxies. Comparison of the spectral energy distribution with stellar population synthesis models implies a low star formation rate (SFR) of $1-3\rm~M_{\odot}\,yr^{-1}$, an age of $1.3-2.2$ Gyr, and a stellar mass of $\sim2\,\times\,10^{11}\,M_{\odot}$. We detect several faint emission lines, with emission-line ratios of /, /, and / typical of low-ionization nuclear emission-line region. Thus, neither the stellar continuum nor the nebular emission implies active star formation. The current SFR is $<1\%$ of the past average SFR. If this galaxy is representative of compact quiescent galaxies beyond $z=2$, it implies that quenching of star formation is extremely efficient and also indicates that low luminosity active galactic nuclei (AGNs) could be common in these objects. Nuclear emission is a potential concern for the size measurement. However, we show that the AGN contributes $\lesssim8\%$ to the rest-frame optical emission. A possible post-starburst population may affect size measurements more strongly; although a 0.5 Gyr old stellar population can make up $\lesssim10\%$ of the total stellar mass, it could account for up to $\sim40\%$ of the optical light. Nevertheless, this spectrum shows that this compact galaxy is dominated by an evolved stellar population.' author: - 'Mariska Kriek, Pieter G. van Dokkum, Ivo Labbé, Marijn Franx, Garth D. Illingworth, Danilo Marchesini, & Ryan F. Quadri' title: 'An ultra-deep near-infrared spectrum of a compact quiescent galaxy at $\lowercase{z}=2.2$' --- INTRODUCTION {#sec:intro} ============ The first massive, quiescent galaxies ($>10^{11} M_{\odot}$) arose when the universe was only $\sim$3 Gyr old [e.g., @la05; @kr06b; @rw08] or perhaps even earlier [e.g., @br07; @man09; @fo09; @mo05; @wi08]. Remarkably, these galaxies already form a red sequence at $z\sim2.3$ [@kr08b]. The relatively young ages ($\sim0.5$ Gyr) and post-starburst spectral shapes [@kr06b; @kr08b] of the $z\sim2.3$ red-sequence galaxies suggest that a significant fraction of the stars have formed over a short timescale in an intense starburst. Sub-millimeter bright galaxies are possible candidates to represent this vigorous phase of star formation [e.g., @ch04; @ta08; @wa08]. These dusty starburst galaxies have observed star formation rates (SFRs) of several hundreds up to a thousand solar masses a year. The exact mechanism responsible for transforming such active systems into quiescent galaxies is still subject to debate [e.g., @cr06; @bo06; @na07; @de08; @ho08]. Not all local early types were already massive, quiescent systems at these epochs [e.g., @vd06; @ent09a]. The majority of them quench or assemble into more massive systems at later times, and the number density of the massive end of the red sequence at $z\sim2.3$ is only $\sim1/8$ of the local value [@kr08b]. Furthermore, the future evolution of massive quiescent galaxies at $z\sim2.3$ is still unclear. Their evolved stellar populations suggest that they passively evolve into their local analogs. 
However, their strong size and slow color evolution contradict this picture. Recent morphological studies show that massive quiescent systems at $z\sim2$ are remarkably compact with effective radii of $\sim1$ kpc [e.g., @tr06; @tr07; @zi07; @to07; @lo07; @vd08b; @ci08; @fr08; @dam08; @we08; @sa09]. Local early types of similar stellar mass are about a factor of 5 larger [e.g., @vd08b]. Thus, these high-redshift galaxies must evolve significantly after $z\sim2$, probably by inside-out growth, primarily through minor mergers [e.g., @be09; @na09; @we09; @ho09b]. In addition, the slow color evolution of the red sequence from $z\sim2.3$ to the present implies that passive evolution alone cannot explain the observed color–redshift relation [@kr08b]. However, both the size and color evolution studies are hampered by many uncertainties and detailed, crucial information on the early phases of massive, early type galaxies is still lacking. The constraints on the stellar populations and SFRs at high redshifts are poor, even for galaxies with spectroscopy and mid-infrared photometry [e.g., @kr08a; @mu09]. Thus, it is still unclear how “dead” $z>2$ quiescent galaxies really are. Also, there are no dynamical mass measurements available for quiescent galaxies beyond $z=2$. Consequently, all stellar mass estimates are photometric, and thus suffer from uncertainties in the derived stellar populations and from assumptions in the metallicity and the initial mass function (IMF). Moreover, spectroscopic redshifts are extremely difficult to obtain for quiescent galaxies without emission lines. Using optical spectroscopy, [@ci08] have derived several absorption-line redshifts, but due to the relative faintness of quiescent galaxies in the rest-frame UV these observations require $\sim100$ hr of integration and result in incomplete samples. Near-infrared (NIR) spectroscopy allows the detection of the “bright” rest-frame optical continuum emission, but deep spectra are expensive due to the lack of multiplexing, the bright NIR background, and strong OH lines. Thus, so far no rest-frame optical absorption lines have been detected in NIR spectra of $z>2$ quiescent galaxies, and previous redshift determinations rely on the detection of the Balmer and/or 4000 Åbreak with uncertainties of $\Delta z/(1+z) < 0.019$ [@kr08a]. In this paper we present a $\sim29$ hr Gemini Near-Infrared Spectrograph [GNIRS: @el06] spectrum of a compact, quiescent galaxy at $z=2.1865$, allowing a detailed study of the stellar population and the detection of any rest-frame optical emission and absorption lines. Moreover, it gives us a glance into the future, as this is the deepest single-slit NIR spectrum ever taken of a $z>2$ galaxy. Throughout the paper, we assume a $\Lambda$CDM cosmology with $\Omega_{\rm m}=0.3$, $\Omega_{\Lambda}=0.7$, and $H_{\rm 0}=70$ km s$^{-1}$ Mpc$^{-1}$. All broadband magnitudes are given in the Vega-based photometric system. TARGET SELECTION AND DATA {#sec:data} ========================= The target was chosen from our GNIRS NIR spectroscopic survey for massive galaxies at $z\sim2.3$ [@kr08a]. All galaxies were originally selected from the Multi-Wavelength Survey by Yale-Chile (MUSYC), which provides us with deep optical-IR photometry [@ga06; @qu07; @ent09b; @da09; @ma09]. These “shallow” spectra [typically $\sim$3 hrs of integration, see @kr08a] allowed us to derive continuum redshifts and classify the galaxies. 
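For reference, the adopted cosmology can be made concrete with a short sketch (Python with astropy, which is not part of the analysis in this paper; the numerical values are those quoted in the text, and the formation-redshift step simply inverts the age–redshift relation):

```python
import astropy.units as u
from astropy.cosmology import FlatLambdaCDM, z_at_value

# Cosmology adopted in the text: Omega_m = 0.3, Omega_Lambda = 0.7, H0 = 70 km/s/Mpc
cosmo = FlatLambdaCDM(H0=70.0, Om0=0.3)

z = 2.1865                                   # emission-line redshift of 1255-0
print(cosmo.age(z))                          # ~3.0 Gyr, the age of the universe quoted in the text

# Stellar ages of 1.3-2.2 Gyr then correspond to roughly z_form ~ 4-7, as quoted in the text
for stellar_age in (1.3, 2.2):
    t_form = cosmo.age(z) - stellar_age * u.Gyr
    print(stellar_age, z_at_value(cosmo.age, t_form))

# Angular scale: an effective radius of 0.78 kpc corresponds to ~0.09 arcsec at this redshift
print(0.78 * cosmo.arcsec_per_kpc_proper(z))
```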
We selected 1255-0 from the nine massive quiescent galaxies presented in [@kr06b], because of its redshift (the optical break falls in the $J$ band), and its visibility at the time of our GNIRS run. The galaxy is not brighter than the other candidates. Also its effective radius ($r_e$) of 0.78 kpc is very similar to the median $r_e$ (0.9 kpc) of the other massive, quiescent galaxies in our sample [see @vd08b]. Figure \[fig:im\] shows the image of 1255-0 as obtained by the Near-Infrared Camera and Multi-Object Spectrometer (NICMOS). Altogether, this galaxy seems typical of the general class of quiescent galaxies at $z\sim2.3$. We also consider how this galaxy compares to the general population of massive galaxies at similar redshift (including star-forming galaxies). In Figure \[full\_sample\] we compare the rest-frame UV-NIR spectral energy distributions (SEDs) of all galaxies in the deep MUSYC sample [@qu07; @ma09] with comparable redshift and stellar mass. The redshifts are all photometric, and derived using EAZY [@br08]. The stellar masses are derived using the code described in the Appendix, for the [@ma05] stellar population synthesis (SPS) models, solar metallicity, the [@ca00] reddening law, and a [@kr01] IMF. Figure \[full\_sample\] illustrates that 1255-0 is slightly redder than the average massive galaxy at this redshift. This is expected, given that it was selected to be quiescent. In total we integrated nearly 29 hr on 1255-0 with GNIRS (with individual exposures of 10 minutes) divided over three observing runs in 2005 May, 2006 February, and 2007 March. The integration times and average seeing are given in Table \[tab:obs\]. During the first three nights the conditions were mediocre, with occasional clouds and an average seeing of $1\arcsec$. The weather conditions were excellent during the last two runs, with clear skies and an average seeing of 0.5$\arcsec$. The galaxy was observed in a cross-dispersed mode, in combination with the 32 lines mm$^{-1}$ grating and the 0.675$\arcsec$ slit. The spectral resolution varies between $\sim$900 and $\sim$1050 over the different orders. Observations were done using an ABAB on-source dither pattern, such that we can use the average of the previous and following exposures as a sky frame [@vd04; @kr06a]. Acquisition was done using blind offsets from nearby stars. Before and after every observing sequence we observe an A0V star, for the purpose of correcting for telluric absorption. The final spectra of the two stars are combined to match the airmass of the observing sequence. A detailed description of the reduction procedure of GNIRS cross-dispersed spectra is given in [@kr06a]. In summary, we subtract the sky, mask cosmic rays and bad pixels, straighten the spectra, combine the individual exposures, stitch the orders, and finally correct for the response function. The different observing sequences are weighted according to their signal-to-noise ratio (S/N) when being combined. A one-dimensional spectrum is extracted by summing all adjacent lines (along the spatial direction) with a mean flux greater than 0.1 times the flux in the central row, using optimal weighting with the S/N. We extract both a high- and low-resolution spectrum with 10 Å and 50 Å per bin, respectively. The high-resolution spectrum, which is resampled such that no resolution is lost, is used for spectral features, while the low-resolution spectrum is extracted to study the continuum emission. 
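The extraction step just described can be summarized in a short sketch (a schematic Python interpretation, not the actual reduction pipeline of [@kr06a]; the array names, and the reading of "optimal weighting with the S/N" as weighting each selected row by its mean S/N, are assumptions):

```python
import numpy as np

def extract_1d(spec2d, noise2d, threshold=0.1):
    """Schematic S/N-weighted extraction of a 1D spectrum from a 2D frame.

    spec2d, noise2d : arrays of shape (n_rows, n_wavelength), with rows running
    along the spatial direction.  This only illustrates the row selection and
    weighting scheme described in the text, not the full pipeline.
    """
    profile = np.nanmean(spec2d, axis=1)            # mean flux of each spatial row
    central = np.nanargmax(profile)                 # row containing the object center
    keep = profile > threshold * profile[central]   # rows above 0.1 x the central row

    # Weight the selected rows by their mean S/N and normalize the weights;
    # the absolute flux scale is set later by the broadband photometry.
    snr = profile[keep] / np.nanmean(noise2d[keep], axis=1)
    w = snr / snr.sum()

    spec1d = np.nansum(w[:, None] * spec2d[keep], axis=0)
    err1d = np.sqrt(np.nansum((w[:, None] * noise2d[keep]) ** 2, axis=0))
    return spec1d, err1d
```

The absolute normalization of the extracted spectrum is not critical at this stage, since it is set afterwards by the broadband photometry, as described next.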
In order to flux calibrate the spectrum, we derive the spectroscopic $J$, $H$, and $K$ fluxes by integrating over the corresponding filter curves. We derive one scaling factor by comparing the spectroscopic colors with the $J$, $H$, and $K$ broadband photometric data from MUSYC. Finally, we multiply the spectrum by this scaling factor. The low-resolution spectrum is shown in Figure \[full\_spec\]. Note the clear detection of the rest-frame optical break in the $J$ band and the relatively high S/N in the continuum [compared to the spectrum shown in @kr06b]. ANALYSIS ======== We study the properties of this galaxy in two ways, first by measuring and analyzing the spectral features (Sections \[sec:abs\] and \[sec:ion\]) and second by modeling and decomposing the stellar continuum emission (Sections \[sec:con\]–\[sec:agn\]). Finally, in Section \[sec:comp\], we compare the deep spectrum modeling results with our previously published shallow spectrum. Emission and Absorption Features {#sec:abs} -------------------------------- \[sec:em\] We measure the redshift and all emission-line properties by modeling the extracted high-resolution (10Å per bin) one-dimensional spectrum. We detect , , and  in the $K$ band. The bottom panels of Figures \[fig:2dlines\] and \[fig:lines\] show the relevant part of the two-dimensional and one-dimensional spectrum, respectively. For the former we first removed the best-fit continuum model (see Section \[sec:con\]) to make the lines more visible. We detected none of these lines in our shallow spectrum [@kr06b], as they are too faint. We model  and the two  lines simultaneously, by assuming the same redshift and width for all three lines, the best-fit continuum model as derived in Section \[sec:con\], and Gaussian profiles. As the detected emission lines are all faint, it is important to accurately correct for continuum emission (including the Balmer absorption lines). The continuum model is corrected for the spectral resolution of GNIRS and convolved to the same velocity width as the emission-line model. Furthermore, we adopt the ratios of transition probability between the two  lines of 0.34. We derive the best values for the redshift, line width, and the fluxes of the emission lines by minimizing . The uncertainties on the modeling results are derived using 500 Monte Carlo simulations. In the simulations, we perturb the measured spectrum using the noise spectrum. The results are listed in Table \[tab:lines\].  and  are modeled in the same way; thus the redshift, width, and scaling for both lines were free parameters. In the $H$ band we detect no emission lines (see Figure \[fig:lines\] top-right panel). We derive 2 $\sigma$ upper limits using 500 Monte Carlo simulations in which we model the lines assuming the width and redshift as obtained from the  and  lines, and the continuum model. We fit all three expected lines ,  and simultaneously, assuming a ratio of 0.33 between the two lines. Upper limits are derived from the best-fit results of all simulations. In the $J$ band we detect the doublet  (see Figures \[fig:2dlines\] and \[fig:lines\], top-left panels). We do not resolve the two lines separately, and thus we fix the ratio (to 1) and redshift when fitting the lines. The combined fitting parameters are given in Table \[tab:lines\]. For each pair of lines we fit simultaneously, we assume that all line emission originates from the same physical region.  and yielded consistent redshifts, but the different combinations of lines resulted in different widths. 
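To make the simultaneous line fit described above concrete, the following Python sketch (not the code actually used; the rest-frame wavelengths, starting values, and flux units are placeholders to be supplied by the user) fits a primary line plus a doublet with the flux ratio fixed to 0.34, at a common redshift and width, on top of a resolution-matched continuum model, and calibrates the uncertainties by refitting noise-perturbed realizations of the spectrum:

```python
import numpy as np
from scipy.optimize import curve_fit

C_KMS = 2.998e5  # speed of light in km/s

def line_model(lam, z, sigma_kms, f_main, f_doublet, rest_wavelengths, continuum):
    """Primary emission line plus a doublet at a common redshift and width;
    the weaker doublet component is tied to the stronger one with a fixed
    flux ratio of 0.34, as adopted in the text."""
    lam_main, lam_d1, lam_d2 = rest_wavelengths
    model = continuum.copy()
    for lam0, flux in ((lam_main, f_main), (lam_d1, 0.34 * f_doublet), (lam_d2, f_doublet)):
        mu = lam0 * (1.0 + z)                 # observed-frame line center
        sig = mu * sigma_kms / C_KMS          # Gaussian width in wavelength units
        model += flux * np.exp(-0.5 * ((lam - mu) / sig) ** 2) / (sig * np.sqrt(2.0 * np.pi))
    return model

def fit_lines(lam, flux, err, continuum, rest_wavelengths,
              p0=(2.19, 200.0, 1e-17, 1e-17)):   # placeholder starting values
    """Chi^2 fit of (z, width, two line fluxes)."""
    def f(lam, z, sigma_kms, f_main, f_doublet):
        return line_model(lam, z, sigma_kms, f_main, f_doublet, rest_wavelengths, continuum)
    popt, _ = curve_fit(f, lam, flux, p0=p0, sigma=err, absolute_sigma=True)
    return popt

def mc_errors(lam, flux, err, continuum, rest_wavelengths, n_sim=500, seed=1):
    """Perturb the spectrum with its noise array and refit, as described above."""
    rng = np.random.default_rng(seed)
    sims = []
    for _ in range(n_sim):
        try:
            sims.append(fit_lines(lam, flux + rng.normal(0.0, err), err,
                                  continuum, rest_wavelengths))
        except RuntimeError:
            continue                           # skip the rare non-converged realization
    return np.percentile(sims, [16, 84], axis=0)
```

The best-fit widths returned by fits of this kind are the quantities compared between the different line pairs in the discussion that follows.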
This might imply that the lines originate from different processes and regions in the galaxy. However, the lines are all faint, and the errors on the line widths are probably somewhat underestimated because of our fitting procedure (allowing more freedom in the fits would result in larger uncertainties, particularly in the linewidths). Furthermore, we do not know the relative contributions from the two  line. Thus, we cannot draw any firm conclusion from the different line widths. As far as we are aware of, this is the first rest-frame optical spectrum of a quiescent galaxy beyond $z=2$ for which rest-frame optical absorption lines are detected. Figure \[fig:break\] shows that we detect H$\eta$, ,  and /.  and /are also visible in the top-right panel of Figure \[fig:2dlines\]. In Figure \[fig:break\], we also show the best-fit [@bc03] models for fixed solar metallicity and when leaving the metallicity as a free parameter (see Section \[sec:con\]). Furthermore, the absorption lines Mg b $\lambda 5175$ and  are detected in the low-resolution spectrum in Figure \[full\_spec\]. There might also be a hint for  in Figure \[full\_spec\]. However, a significant detection of this line requires an even deeper spectrum. This deep spectrum allows an accurate measure of the strength of the 4000 Å break, which is an indicator of the evolution stage or age of the stellar population. We use the definition of [@ba99] to measure the strength of this break, and find a value of $D_n(4000)=1.40^{+0.03}_{-0.03}$. The corresponding age depends on the star formation history (SFH), the metallicity, and the dust content of the galaxy. For the extreme case of a simple stellar population (SSP) model with solar metallicity, an age of $\sim0.6$ Gyr is needed to produce such a break. Assuming little or no dust, this value can be seen as a lower limit on the population age. A more detailed measurement of the age of the stellar population will be given in Section \[sec:con\]. Ionization Mechanism {#sec:ion} -------------------- Emission-line ratios can be used to study the origin of the ionized emission. In particular, , which reflects both the metallicity and ionization parameter $U$ of a galaxy, is a powerful discriminator. Ratios of  $>$ 1 suggest that an object is ionized by a hard radiation field; H[ii]{} regions are not able to produce such high ratios [@ke01]. The ratio of /[^1] can be used for a similar purpose, as H[ii]{} regions have / $<0.6$. The high values for both  and / of $1.3^{+0.3}_{-0.2}$ and $0.8^{+0.2}_{-0.2}$ respectively show that normal star-forming regions are not the dominant contributor to the line emission in this galaxy. The ratio of / can be used to further characterize the hardness of the ionizing radiation field [@sh90; @kd02].  is not detected in our spectrum, but we derive a 2$\sigma$ upper limit on this parameter. We find log(/) $<-0.89$, implying an ionization parameter $U<10^7$ [@ke01]. This ratio is consistent with the original definition for low-ionization nuclear emission-line regions (LINERs) of /$>$ 1, and so the line emission in 1255-0 is most likely caused by a LINER. The origin of LINER emission has been a subject of debate since its original classification as an active galactic nucleus (AGN) class by [@he80]. 
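Returning briefly to the break-strength measurement above: a minimal sketch, assuming the spectrum is tabulated in $F_\lambda$ and using the narrow rest-frame wavelength windows of the [@ba99] definition, reads:

```python
import numpy as np

def dn4000(lam_rest, f_lambda):
    """Narrow 4000 A break strength, D_n(4000), following Balogh et al. (1999):
    the ratio of the mean F_nu in 4000-4100 A to that in 3850-3950 A (rest frame).
    lam_rest is in Angstrom; f_lambda may be in any consistent F_lambda unit."""
    f_nu = f_lambda * lam_rest ** 2                      # F_nu is proportional to F_lambda * lambda^2
    red = (lam_rest >= 4000.0) & (lam_rest <= 4100.0)
    blue = (lam_rest >= 3850.0) & (lam_rest <= 3950.0)
    return np.mean(f_nu[red]) / np.mean(f_nu[blue])
```

For the spectrum presented here this index is 1.40; its uncertainty can be estimated by repeating the measurement on noise-perturbed realizations of the spectrum, in the same spirit as the emission-line Monte Carlo simulations above.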
Although there is substantial evidence that many LINERs are powered by accretion onto massive black holes, LINER emission can also originate from a young starburst or by shock heating through cloud collisions induced by galaxy mergers or starburst-driven winds [e.g., @ds95]. Because the SFR in 1255-0 is very low (see Section \[sec:con\]), a starburst or starburst-driven wind is unlikely to cause the LINER emission, and thus a low-luminosity AGN is the more likely option. Nonetheless, even if the observed LINER emission does not originate from black hole accretion, it is still the case that normal star-forming regions do not dominate the observed line emission. Finally, we note that the line widths are not necessarily indicative of the depth of the gravitational potential, as in the local universe there is a large scatter between the gas line widths of LINERs and the velocity dispersion of the stars of their host galaxies [@gr05]. Modeling the Continuum Emission {#sec:con} ------------------------------- We study the nature of the stellar population by comparing the spectrum with the SPS models of [@bc03] and [@ma05], using the code described in the Appendix. We assume an exponentially declining SFH with timescale $\tau$, a [@ch03] or [@kr01] IMF, and the [@ca00] reddening law. We derive model spectra for a grid with $\tau$ between 10 Myr and 10 Gyr in steps of 0.1 dex,  between 0 and 3 mag in steps of 0.05 mag, and age in steps of 0.02 dex with a minimum age of 10 Myr and the maximum age not exceeding the age of the universe. The metallicity when fitting the [@ma05] model is fixed to solar ($Z=0.02$), but for the [@bc03] models we vary the metallicity, among subsolar ($Z=0.004$), solar, and supersolar ($Z=0.05$). The redshift is fixed to $z=2.1865$ as derived from the emission lines (see Section \[sec:em\]). We fit the low-resolution one-dimensional spectrum (50 Å per bin in observed frame) and search for the best solution by minimizing . In contrast to our previous studies [@kr06a; @kr06b; @kr08a], we do not mask regions with low atmospheric transmission or strong sky emission when fitting the spectrum. As the S/N of this spectrum is considerably higher than in [@kr08a], we apply a much smaller bin size. Bins in bad wavelength regions will have larger uncertainties, and thus simply have lower weight in the fit. When using larger bins [400 Å in @kr08a] this method is less appropriate, as a bad region will contaminate nearly all bins. To further constrain the SFR, we include the rest-frame UV broadband photometry. Furthermore, we extend the SED into the rest-frame NIR using IRAC photometry [@ma09]. The rest-frame NIR helps to constrain the stellar mass and age of the galaxy [e.g., @la05; @sh05; @wu07; @mu09]. However, as the SPS models are still uncertain in this regime, we fit the galaxy both with and without the IRAC photometry. We derive 68% confidence intervals on all stellar population properties using 200 Monte Carlo simulations, as described in the Appendix. The photometric uncertainties are increased using the template error function, which accounts for uncertainties in the model templates as a function of the rest-frame wavelength [@br08]. Furthermore, we apply the automatic scale option, such that for each simulation the simulated spectrum is calibrated using the simulated $J$, $H$, and $K$ photometry (see Section \[sec:data\] and the Appendix). Thus, the uncertainty in the calibration of the spectrum is explicitly taken into account. 
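The fitting procedure described above is implemented in the FAST code (see the Appendix); the following Python sketch is only meant to illustrate its two central ingredients, the grid-based $\chi^2$ minimization with an analytic mass normalization and the Monte Carlo calibration of the confidence threshold. The construction of the template grid from an SPS library is assumed to have been done elsewhere, and all names are placeholders:

```python
import numpy as np

def fit_grid(obs, err, model_grid):
    """Grid-based chi^2 fit in the spirit of the procedure described above.

    obs, err   : observed fluxes and uncertainties in N spectral bins and/or bands.
    model_grid : array of shape (n_models, N) with template fluxes for a unit-mass
                 population at each grid point in (tau, age, A_V).
    Returns the index of the best-fitting template, the chi^2 of every template,
    and the best-fit normalization (the stellar mass in the units of the templates).
    """
    w = 1.0 / err ** 2
    # For each template the normalization minimizing chi^2 is analytic:
    norm = (model_grid * obs * w).sum(axis=1) / (model_grid ** 2 * w).sum(axis=1)
    chi2 = ((obs - norm[:, None] * model_grid) ** 2 * w).sum(axis=1)
    best = int(np.argmin(chi2))
    return best, chi2, norm[best]

def confidence_threshold(obs, err, model_grid, n_sim=200, level=0.68, seed=1):
    """Calibrate the chi^2 threshold with Monte Carlo simulations: perturb the fluxes
    within their errors, refit, and take the chi^2 value (on the original grid) that
    encloses the requested fraction of the simulated best fits."""
    rng = np.random.default_rng(seed)
    _, chi2_obs, _ = fit_grid(obs, err, model_grid)
    chi2_of_sim_best = []
    for _ in range(n_sim):
        sim = obs + rng.normal(0.0, err)
        best_sim, _, _ = fit_grid(sim, err, model_grid)
        chi2_of_sim_best.append(chi2_obs[best_sim])
    return np.quantile(chi2_of_sim_best, level)
```

The 68% interval on any parameter then follows from the minimum and maximum parameter values among the grid points whose $\chi^2$ lies below this threshold, which is essentially the procedure implemented in FAST.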
In Table \[tab:mod\] we give all modeling results for the different SPS libraries, free or fixed metallicity, and with or without IRAC. In Figure \[full\_spec\], we show the spectrum and best-fit [@bc03] model with solar metallicity. The continuum fitting implies a stellar mass of $\sim2\times10^{11}~M_{\odot}$, a reddening of $A_V=0.0-0.3$ mag, a star formation timescale of $\tau\sim0.3$ Gyr, an age (since the onset of star formation) of $1.3-2.2$ Gyr, and an SFR of $1-3 ~M_{\odot}\,\rm yr^{-1}$. The universe is 3 Gyr old at $z\sim2.1865$, which implies a formation redshift $z_{\rm form}=4-7$. It is remarkable how well constrained the formal confidence intervals are. However, the formal errors do not reflect the true uncertainties properly, as they are dominated by systematic effects such as the assumptions concerning the SPS models, the SFH, metallicity, and extinction law [see e.g., @sh01; @wu07; @kg07; @co08; @mu09 for more discussion on this topic]. For the [@bc03] models, supersolar metallicity is formally preferred over solar. We do not believe that this result is significant, due to the strong degeneracy between age and metallicity and the uncertainties in the SPS models, and so the metallicity could well be lower, e.g., solar. Nonetheless, the different models are surprisingly consistent. The inclusion or exclusion of IRAC has little effect on the modeling results for this deep spectrum [see @mu09 for a general discussion of the inclusion of IRAC data in the modeling of $z\sim2.3$ galaxies with shallower spectroscopy]. The detection of the strong break and absorption features allows the measurement of a stellar continuum redshift. We use the same fitting procedure as described above; only this time, we leave the redshift as a free parameter. Furthermore, we fit only the spectrum; thus, the rest-frame UV and NIR broadband photometric data are not included. In Figure \[fig:chiz\], we show the reduced $\chi^2$ as a function of redshift. We find a stellar redshift of $z=2.1862^{+0.0005}_{-0.0009}$. The emission-line redshift is also indicated in this figure. There is no evidence for a significant redshift offset between the stellar and nebular emission. Nonetheless, we cannot exclude a possible offset, due to the relatively large uncertainties on the continuum redshift. A Recent Starburst? {#sec:bur} ------------------- When modeling the SED with SPS models, we assume an exponentially declining SFH. However, this is a simplification, and more complex SFHs are more realistic. For example, subsequent merging is expected to result in central dissipational starbursts [e.g., @ho09]. This would result in a younger stellar population in the central part of the galaxy, with a lower mass-to-light ratio ($M/L$) than the older underlying population. Similarly, if the galaxy experienced recent star formation in a disk-like component, the galaxy is composed of an old central concentration and a more extended young component. This younger population will contribute relatively more to the observed light than to the stellar mass. In order to assess whether a recent starburst took place and how much it may contribute to the stellar mass of the galaxy, we investigate how much of the light can be accounted for by a 0.5 Gyr SSP model, with 0.25 mag of visual extinction (see Table \[tab:mod\]). We choose a post-starburst instead of an ongoing starburst, as we know that the current global SFR in the galaxy is very low. 
We subtract different mass contributions and apply the same fitting procedure as discussed in Section \[sec:con\]. The maximum contribution of the post-starburst population is set by the rest-frame UV flux, and shown by the red SED in the top panel of Figure \[fig:comp\]. Up to $\sim$10% of the stellar mass may have been formed in a recent starburst. Due to the relatively low $M/L$ of this post-starburst population, the contribution to the $H$-band light is much larger ($\sim40\%$). In the inset of the top panel of Figure \[fig:comp\] we show the reduced $\chi^2$ of the best-fit SPS model to the corrected spectrum and photometry, for different mass fractions of the post-starburst stellar population. Models with a mass fraction between $\sim0$ and $5\%$ provide equally good fits, while higher mass fractions result in worse agreement. The red line indicates the maximum contribution, corresponding to the red SED. Thus, while this galaxy may have experienced a recent starburst, both the light and the stellar mass are dominated by an older stellar population. Nonetheless, it still remains to be explored how much the light distribution may be different from the stellar mass distribution in these compact quiescent galaxies. Continuum Emission from an AGN? {#sec:agn} ------------------------------- In Section \[sec:ion\], we found that the line emission of 1255-0 is of LINER origin. This raises the question of whether an AGN may contribute to the continuum emission of 1255-0. We investigate this by subtracting different AGN contributions and fitting the corrected SED by SPS models. We assume a power-law SED for the AGN. The maximum AGN contribution is set by the rest-frame UV fluxes in combination with the 8 $\mu$m IRAC band, and shown by the green line in the bottom panel of Figure \[fig:comp\]. The corresponding contribution to the $H$-band flux is $\sim8\%$. In the inset of the bottom panel of Figure \[fig:comp\] we show the reduced $\chi^2$ of the best-fit SPS model for different assumed AGN contributions. The green line indicates the maximum contribution, corresponding to the green SED. The fit clearly worsens for an increasing AGN contribution. Altogether, an AGN is unlikely to contribute significantly to the continuum emission, and is limited to a maximum of $\sim$8% to the $H$ band. Comparison to the shallow spectrum {#sec:comp} ---------------------------------- While our 5 hr[^2] shallow spectrum of 1255-0 presented in [@kr08b] provided a similar stellar mass and SFR (when corrected for the difference in the assumed IMF), the age, the dust content, and the continuum redshift differ by $\sim2\,\sigma$ compared to the 29 hrs deep spectrum. Our shallow spectrum of 1255-0 yields a continuum redshift of $2.31^{+0.05}_{-0.07}$, an age of $0.57^{+0.44}_{-0.28}$ Gyr, and an  of $1.2^{+0.6}_{-0.6}$ mag. Thus, this galaxy was previously classified as a dusty post-starburst galaxy, at slightly higher redshift. In the fit to the shallow spectrum the dominant optical break was thought to be the Balmer jump, while in our deeper spectrum the 4000 Å break is found to be the more prominent one. If we fit the shallow spectrum of 1255-0 with the redshift fixed to $z=2.1865$, we find a significantly older age of 2.9 Gyr and a lower  of 0.35 mag. The change in redshift also influences the rest-frame color determination [see @kr08b]. 1255-0 was among our reddest galaxies, with a rest-frame $U-B$ color of $0.36^{+0.05}_{-0.06}$ mag. The deep spectrum yields an $U-B$ color of $0.52$ mag. 
However, if we use the same method as in [@kr08b], thus determining the color of the best fit, we find $U-B=0.30$. This difference is likely caused by the discrepancy between the spectrum and the fit around 1.15 $\mu$m. We do not expect the full sample of shallow spectra to suffer as severely from this degeneracy between redshift and stellar population as 1255-0. In [@kr08a] we apply the same continuum fitting procedure to the emission-line galaxies in our sample, and find a good agreement between the emission-line and continuum redshifts. This sample contained several galaxies with SED shapes similar to those without emission lines. Nonetheless, while it seems more likely that 1255-0 is the largest outlier ($\sim2\sigma$), caution is required. This case illustrates that more deeper spectra are needed. THE STAR FORMATION ACTIVITY IN 1255-0 ===================================== Comparison of the stellar continuum emission with SPS models (Section \[sec:con\]) confirms our earlier results that the SFR in 1255-0 is strongly suppressed [@kr06b]. Depending on the SPS library and the assumed metallicity, the best-fit SFR is $1-3~M_{\odot}\, \rm yr^{-1}$. As expected for a galaxy with low-level star formation, the total dust extinction is low with values of of 0.0-0.3 mag. The low SFR is independently confirmed by the emission-line diagnostics. In the previous section, we noted that star-forming regions cannot be the dominant contributor to the line emission. Nevertheless, if we assume that the detected  emission is caused by just star formation, we can use the calibration $$\label{eq:kennicutt} {\rm SFR} \ (M_{\odot}\,{\rm yr^{-1}}) = 7.9 \times 10^{-42} \ L_{\rm H\alpha} \ {\rm (erg\,s^{-1})}$$ as given by [@ke98] to derive the SFR from the luminosity. Assuming no dust extinction, the observed luminosity would result in an SFR of $4.0^{+0.4}_{-0.6}\,M_{\odot} \rm \, yr^{-1}$ for a [@sa55] IMF. This value would decrease by a factor of $\sim$1.8 for a [@ch03] or [@kr01] IMF, but increases by a factor of $\sim2.3$ if there is 1 mag of visual extinction in the star forming regions. Although we have a good constraint on the total attenuation of the galaxy from the spectral modeling ( $=0.0-0.3$ mag), it is difficult to estimate the extinction in the line-emitting regions without measuring a Balmer decrement. The combination of  and the 2$\sigma$ upper limit on  gives a 2$\sigma$ lower limit on the ratio of / of 3.07. This limit is close to the intrinsic ratio for H[ii]{} regions of 2.76, and thus sets no constraints on the dust content. Nonetheless, even when assuming that the line emission originates just from star formation, and assuming 1 mag of dust extinction in the H[ii]{} regions, the obtained SFR is still only $\sim4~M_{\odot}~\rm yr^{-1}$ (for a Chabrier IMF). The best-fit stellar mass of 1255-0 is $1.7-2.3 \times 10^{11} M_{\odot}$. This value is slightly dependent on the choice for the SPS model, and on whether metallicity is assumed to be solar or left as a free parameter. If left as a free parameter, supersolar is preferred over solar, although we argue in Section \[sec:con\] that this result is not significant. The best-fit star formation timescale is $0.2-0.3$ Gyr, and the best-fit stellar age is $1.8-2.2$ Gyr and $\sim1.3$ Gyr for solar metallicity and supersolar metallicity, respectively. The age of the universe at this redshift is 3.0 Gyr and the formation redshift is $z_{\rm form}\sim4-7$. 
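A short numerical restatement of the SFR estimate based on Equation (\[eq:kennicutt\]) is given below; the input line luminosity is illustrative (of the order measured for this galaxy) rather than the exact tabulated value, and the correction factors are those quoted in the text:

```python
# Illustrative application of the Kennicutt (1998) calibration from Eq. (1).
L_line = 5.0e41                           # erg/s; assumed line luminosity, of the order measured here
sfr_salpeter = 7.9e-42 * L_line           # ~4 M_sun/yr for a Salpeter IMF and no dust
sfr_chabrier = sfr_salpeter / 1.8         # IMF correction factor quoted in the text
sfr_dust_corrected = sfr_chabrier * 2.3   # 1 mag of visual extinction in the line-emitting regions
print(sfr_salpeter, sfr_chabrier, sfr_dust_corrected)
# Either way the result is only a few solar masses per year, which is the basis for the
# statement that ongoing star formation in this galaxy is strongly suppressed.
```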
We study the evolution stage of the galaxy by means of the quenching factor $q_{\rm sf}$, which measures by what factor the SFR has been reduced compared to the past average SFR. We define $q_{\rm sf}$ as follows: $$\label{eq:q} {q_{\rm sf} = \rm 1 - \frac{SFR_{current}}{SFR_{past}} = 1 - \frac{SFR_{current}}{\frac{\it M_*}{age}}}$$ In this equation $M_*$ is not the current stellar mass, but all the mass formed in the galaxy. This quenching factor is related to the ratio of the age to the star formation timescale $\tau$ (i.e., how many e-folding times have been passed since the galaxy was formed) in the following way $$\label{eq:q} {q_{\rm sf} = 1 - \frac{age}{\tau} \frac{e^{-\frac{age}{\tau}}}{1 - e^{-\frac{age}{\tau}}}}$$ Because age/$\tau$ is a relatively well-constrained parameter, $q_{\rm sf}$ is a fairly robust measure for the evolution stage of a galaxy. We note that [@da08] defined a comparable factor: the star formation activity parameter $\alpha_{\rm sf} = (M_{*}/{\rm SFR}) / (t_H-1 {\rm Gyr)}$. Instead of using the best-fit age of the galaxy, this parameter uses the Hubble time ($t_H$) minus 1 Gyr. 1255-0 has a $q_{\rm sf}$ of 0.991$^{+0.007}_{-0.002}$. Thus, the SFR in this galaxy has been reduced by more than 99% and the current SFR is less than 1% of the average past SFR. This implies that the star formation has been strongly quenched since the major star formation epoch. Furthermore, in Section \[sec:bur\] we found that a 500 Myr SSP accounts for $<$10% of the stellar mass, which implies that at least 90% of the stellar mass is in stars older than 0.5 Gyr, and thus formed beyond $z=2.6$. DISCUSSION AND CONCLUSIONS ========================== Due to the introduction of large photometric surveys with deep NIR imaging [e.g., @la03; @fr03], many quiescent massive galaxies beyond $z=2$ have been identified in the past few years [e.g., @fo04; @da04a; @da04b; @la05]. Moreover, these galaxies are typically found to be ultra-compact, with stellar densities that are about 2 orders of magnitude larger than in local early type galaxies of similar mass [e.g., @tr06; @tr07; @zi07; @to07; @lo07; @vd08b; @ci08] Follow-up spectroscopic studies have tried to verify the broadband photometric redshifts and stellar population properties of these ultra-dense quiescent galaxies. This turned out to be extremely difficult, and with optical spectroscopy tens to hundreds of hours are required [@da05; @ci08], due to their relative faintness in the rest-frame UV. Evolved galaxies beyond $z=2$ are much brighter at NIR wavelengths, corresponding to the rest-frame optical. In [@kr06b], we used the Balmer and / or 4000 Å breaks to obtain redshift estimates for a sample of nine massive quiescent galaxies. However, exact redshift measurement from rest-frame optical absorption lines remained out of reach until this paper. By integrating for nearly 30 hr with GNIRS we succeeded in detecting for the first time rest-frame optical absorption lines in an NIR spectrum of a compact, quiescent galaxy beyond $z=2$. This deep spectrum has full NIR coverage ($\sim$1.0-2.4 $\mu$m). In addition to the absorption features H$\eta$, , , /, , and Mg b, we have detected , , , and  in emission. All emission lines are faint with luminosities of $0.1--0.9\times10^{42}\rm\,ergs\,s^{-1}$. The redshifts derived from the stellar continuum and from the emission lines are consistent within the uncertainties. 
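As a back-of-the-envelope check of the quenching factor defined above (using representative best-fit values of age $\simeq2.0$ Gyr and $\tau\simeq0.3$ Gyr rather than the exact fit output): $$\frac{\rm age}{\tau}\simeq\frac{2.0}{0.3}\approx6.7, \qquad q_{\rm sf}=1-6.7\,\frac{e^{-6.7}}{1-e^{-6.7}}\approx1-0.008\approx0.99,$$ in agreement with the quoted $q_{\rm sf}=0.991$ and with the statement that the current SFR is less than 1% of the past average.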
Comparison of the spectral continuum emission and the rest-frame UV-NIR photometry with SPS models implies a stellar mass of $\sim2\times10^{11} M_{\odot}$, a reddening of $=0.0-0.3$ mag, a star formation timescale of $\tau=0.2-0.3$ Gyr, and an age of $1.3-2.2$ Gyr. The results are slightly different for the different SPS models and different assumed metallicities. We find a low SFR of about $1-3~M_{\odot}\,\rm yr^{-1}$, implying that the star formation is strongly quenched and reduced by more than 99% since its major star formation epoch. If this galaxy is typical for quiescent galaxies, quenching of star formation is extremely efficient. The constraints on the SFR are very tight, and compared to the previously published shallow spectrum of this galaxy [@kr08a] they are more stringent by a factor of $\sim$8. We do detect a faint  emission line, which, based on emission-line diagnostics, is not caused by stellar ionization. However, even if we assume that the  line was due to star formation, it would result in a low SFR of $\sim2-4~M_{\odot}\,\rm yr^{-1}$. One possibility is that obscured star formation may have been missed. However, at low redshift there is a strong correlation between the dust-corrected luminosity of  and the bolometric luminosity [e.g., @mo06]. [@re06] found a similar relation at high redshift. Thus, based on the faint  line, the nature of the line emission, and the lack of a detection at 24$\mu$m (I. Cury, private communication), we do not expect this galaxy to host obscured ongoing starburst regions. Although we can confirm our earlier conclusions about the quiescent nature of the stellar population in 1255-0 [@kr06b], we previously underestimated the age. The shallower spectrum showed an apparent Balmer break and our fit preferred a younger dusty post-starburst galaxy. However, our deeper spectrum shows that the galaxy is $1.3-2.2$ Gyr old, and the optical break is dominated by the 4000Å break rather than the Balmer jump. A post-starburst (0.5 Gyr) population can only account for $\lesssim10$% of the stellar mass. The underestimated age in the shallow spectrum is related to the overestimation of the continuum redshift. In [@kr08a] we found a continuum redshift of $2.31^{+0.05}_{-0.07}$, which is within 2$\sigma$ consistent with the emission-line redshift of 1255-0. This may imply that the uncertainties in our previous work are underestimated. However, comparison of emission-line and continuum redshifts for 19 emission-line galaxies in [@kr08a] demonstrated the reliability of the uncertainties of our continuum redshifts, and thus 1255-0 is probably “the” largest (2$\sigma$) outlier. The fact that we previously underestimated the age of the galaxy shows how difficult it is to estimate ages based on “shallow” spectra, let alone broadband photometry. Thus, we cannot exclude that the ages of more quiescent galaxies may have been systematically underestimated in [@kr08a]. The best-fit models imply a formation redshift of $z=4-7$ for 1255-0, and this may indicate that massive galaxies with strongly suppressed star formation exist at even earlier times. The rest-frame optical emission-line diagnostics indicate that 1255-0 most likely hosts a LINER. In [@kr07] we found that at least 4 out of the 11 emission-line galaxies in our massive galaxy sample at $2.0<z<2.7$ host an AGN. This study was based on relatively shallow spectroscopy ($\sim3$ hr per galaxy). For 1255-0, we did not detect any emission lines in the shallow NIR spectrum [@kr06b]. 
Thus, we may find actively accreting black holes in more massive galaxies, when we obtain deeper data. This suggests the possibility that low luminosity AGNs may be very common in these objects. Although an AGN dominates the line emission, its contribution to the continuum emission is very low. In the $H$ band, in which we measured the compact size of 1255-0 [@vd08b] the contribution is $\lesssim8$%. Thus, an AGN could not be the dominant cause for the compact size of this galaxy. A central post-starburst population may have a larger effect on our size measurements. If the stellar population in the center has a lower $M/L$ than the outskirts, the size will appear smaller. Similarly, if the galaxy experienced recent star formation in the outer parts, the $M/L$ will be higher in the center, and thus the size will appear larger. Although a post-starburst population can only account for $\lesssim10$% of the stellar mass, it can make up $\sim40\%$ of the $H$-band flux. Thus, depending on where this post-starburst population is situated, the light distribution in these compact galaxies may be more or less concentrated than the stellar mass distribution, and our size measurements may be over- or underestimated. However, regardless of whether or not a post-starburst or AGN is present, this spectrum shows that this compact quiescent galaxy is not actively forming new stars, and is primarily composed of an evolved stellar population. The future generation of NIR spectrographs promises higher throughput, in combination with multiplexing. This will allow even deeper spectra of larger samples. However, the most spectacular results – in particular with regard to kinematic measurements of quiescent galaxies – are expected to come from NIRSPEC on the [*James Webb Space Telescope (JWST)*]{}. Without the hindrance of the Earth’s atmosphere, it will be possible to obtain significantly deeper spectra in only a small fraction of the exposure time needed for the spectrum presented in this paper. We thank the members of the MUSYC collaboration for their contribution to this work, I. Cury and A. Muzzin for their help with the MIPS imaging, G. Brammer for the photometric redshifts of the MUSYC sample, and J. Greene and A. van der Wel for helpful discussions. This work is based on observations obtained at the Gemini Observatory, which is operated by the Association of Universities for Research in Astronomy, Inc., under a cooperative agreement with the NSF on behalf of the Gemini partnership, and on observations made with the [*Spitzer Space Telescope*]{}, which is operated by the Jet Propulsion Laboratory, California Institute of Technology under a contract with NASA. Support from NASA grant HST-GO-10808.01-A is gratefully acknowledged. G.D.I. acknowledges support from NASA grant NAG5-7697. D.M. is supported by NASA LTSA NNG04GE12G. R.F.Q. is supported by a NOVA postdoctoral fellowship. Fitting and Assessment of Synthetic Templates ============================================= We developed a custom IDL code named FAST, to fit SPS models to broadband photometry, spectra, or both. FAST is compatible with the photometric redshift code EAZY [@br08], such that the format of the input photometric catalog and filter files is similar. Optionally, the photometric redshifts as derived by EAZY can be read in and used by FAST. 
Summarized, FAST reads in a parameter file, which defines the photometric and spectroscopic catalogs, the SPS models [choice from @bc03; @ma05], the IMF[choice from @sa55; @kr01; @ch03], the reddening law, and the fitted grid of stellar population properties. For all properties (age, star formation timescale $\tau$, dust content , metallicity and redshift) the minimum and maximum value, and the step size can be defined. Optionally, FAST can calibrate spectra using the broadband fluxes. FAST generates a six-dimensional cube of model fluxes for the full stellar population grid and all filters, spectral elements, or both. To determine the best-fit parameters, it simply determines the  of every point of the model cube. In case spectroscopic or photometric redshifts are provided, the redshift will be fixed to the closest value in the grid. The confidence levels are calibrated using Monte Carlo simulations. The number of simulations can be defined in the parameter file. The observed fluxes are modified according to their photometric and spectroscopic errors in each simulation. Optionally, a rest-frame template error function [see EAZY, @br08] can be added to the broadband photometric fluxes, to account for uncertainties in the models. In case the automatic scale option is used, the spectrum is scaled individually for each simulation. By doing this, we incorporate the error on the scaling factor[^3]. We determine the best solution for all simulations and define the $\chi^2_{1\sigma}$ ($\chi^2_{2\sigma}$ or $\chi^2_{3\sigma}$) level as the  value in the originally grid that encloses 68% (95% or 99%) of the simulations. The uncertainties on the stellar population properties are the minimum and maximum values that are allowed within the $\chi^2_{1\sigma}$ ($\chi^2_{2\sigma}$ or $\chi^2_{3\sigma}$) threshold. In case photometric redshifts (as provided by EAZY) are assumed, the calculation of the confidence intervals is slightly more complicated. Both codes use different template sets and thus produce different probability distribution functions of $z$. Unfortunately, there is no perfect method to determine the confidence levels for this case. We use the following method, which incorporates the confidence levels of EAZY, and is fast and user friendly. We run all Monte Carlo simulations at the best-fit  as derived by EAZY. We determine $\Delta\chi^2$; the difference between the  value that encloses 68% (or 95% or 99%) of the simulations and the minimum  value at this . Next, we return to the full  grid that resulted from FAST. The confidence intervals for a given parameter are the minimum and maximum values in the full model cube with a  value less than $\chi^2_{\rm min}+\Delta\chi^2$ [*and*]{} a redshift within the 68% confidence interval as provided by EAZY. In essence, we use the EAZY output to limit the redshift range of the solutions allowed in FAST. Note that $\chi^2_{\rm min}$ of the full grid is likely lower than the minimum value of  at . We tested this method using a large number of Monte Carlo simulations that are input in both EAZY and FAST (retaining the combinations of  and perturbed fluxes). In this case we do not use the simulations for calibration, but derive the uncertainties directly from the output. We find good correlations between the uncertainties of the two methods and we believe that the FAST confidence levels are robust and reliable. Balogh, M. L., Morris, S. L., Yee, H. K. C., Carlberg, R. G., & Ellingson, E. 1999, , 527, 54 Bezanson, R., van Dokkum, P. 
G., Tal, T., Marchesini, D., Kriek, M., Franx, M., & Coppi, P. 2009, , in press (arXiv:0903.2044) Bower, R.G., et al. 2006, , 370, 645 Brammer, G., & van Dokkum, P. G. 2007, , 654, L107 Brammer, G., van Dokkum, P. G., & Coppi, P. 2008, , 686, 1503 Bruzual, G. & Charlot, S. 2003, , 344, 1000 Calzetti, D., Armus, L., Bohlin, R.C., Kinney, A.L., Koornheef, J., & Storchi-Bergmann, T. 2000, , 533, 682 Chabrier, G. 2003, , 115, 763 Chapman, S. C., Smail, I, Blain, A., W., & Ivison R. J. 2004, , 614, 671 Cimatti, A., et al. 2008, A&A, 482, 21 Conroy, C., Gunn, J. E., & White, M. 2008, , submitted (arXiv:0809.4261) Croton, D.J., et al. 2006, , 365, 11 Daddi, E., et al. 2004a, , 600, L127 Daddi, E., et al. 2004b, , 617, 746 Daddi, E., et al. 2005, , 626, 680 Damen, M., Labbé, I., Franx, M., van Dokkum, P. G., Taylor, E. N., & Gawiser E. J. 2009, , 690, 937 Damjanov, I., et al. 2009, , 695, 101 Davé, R. 2008, , 385, 147 Dekel, A., & Birnboim, Y. 2008, , 383, 119 Dopita, M. A., & Sutherland, R. S. 1995, , 455, 468 Elias, J. H., et al. 2006, Proc. SPIE, 6269, 139 Fontana, A., et al. 2009, A&A, in press (arXiv:0901.2898) Förster Schreiber, N.M. et al. 2004, , 616, 40 Franx, M., van Dokkum, P. G., Förster Schreiber, N. M., Wuyts, S., Labbé, I., & Toft, S. 2008, , 688, 770 Franx, M., et al. 2003, , 587, L79 Gawiser, E., et al. 2006, , 162, 1 Greene, J. E., & Ho, L. C. 2005, , 627, 721 Heckman, T. M. 1980, A&A, 87, 152 Hopkins, P. F., Cox, T. J., Keres, D., & Hernquist, L. 2008, , 175, 390 Hopkins, P. F., Bundy, K., Murray, N., Quataert, E., Lauer, T., & Ma, C.-P. 2009b, , submitted (arXiv:0903.2479) Hopkins, P. F., Lauer, T. R., Cox, T. J., Hernquist, L., & Kormendy, J. 2009a, , 181, 486 Kannappan, S.J., & Gawiser, E. 2007, , 657, L5 Kennicutt, R. C. 1998, ARA&A, 36, 189 Kewley, L. J., & Dopita, M. A. 2002, , 142, 35 Kewley, L. J., Dopita, M. A., Sutherland, R. S., Heisler, C. A., & Trevena, J. 2001, , 556, 121 Kriek, M., et al. 2006a, , 645, 44 Kriek, M., et al. 2006b, , 649, L71 Kriek, M., et al. 2007, , 669, 776 Kriek, M., van der Wel, A., van Dokkum, P. G., Franx, M., & Illingworth, G. D. 2008b, , 682, 896 Kriek, M., et al. 2008a, , 677, 219 Kroupa, P. 2001, , 322, 231 Labbé, I., et al. 2003, , 125, 1107 Labbé, I., et al. 2005, , 624, L81 Longhetti, M., et al. 2007, , 274, 614 Mancini, C., et al. 2009, A&A, in press (arXiv:0901.3341) Maraston, C. 2005, , 362, 799 Marchesini, D., van Dokkum, P. G., Förster Schreiber, N. M., Franx, M., Labbé, I., & Wuyts, S. 2009, , submitted (arXiv0811.1773) Mobasher, B., et al. 2005, , 635, 832 Moustakas, J., Kennicutt, R. C., & Tremonti, C. A. 2006, , 642, 775 Muzzin, A., Marchesini, D., van Dokkum, P. G., Labbé, I., Kriek M., & Franx, M. 2009, , submitted Naab, T., Johansson, P. H., & Ostriker, J. P. 2009, , submitted Naab, T., Johansson, P. H., Ostriker, J. P., & Efstathiou, G. 2007, , 658, 710 Quadri, R., et al. 2007, , 134, 1103 Reddy, N. A., et al. 2006, , 644, 792 Salpeter, E. E. 1955, , 121, 161 Saracco, P., Longhetti, M., & Andreon, S. 2009, , 392, 718 Shapley, A. E., Steidel, C. C., Adelberger, K. L., Dickinson, M., Giavalisco, M., & Pettini, M. 2001, , 562, 95 Shapley, A. E., et al. 2005, , 626, 698 Shields, G. A., 1990, ARA&A, 28, 525 Tacconi, L. J., et al. 2008, , 680, 246 Taylor, E. N., et al. 2009a, , 694, 1171 Taylor, E. N., et al. 2009b, , submitted (arXiv0903.3051) Toft, S., et al. 2007, , 671, 285 Trujillo, I., Conselice, C. J., Bundy, K., Cooper, M. C., Eisenhardt, P., & Ellis, R. S. 2007, , 382, 109 Trujillo, I., et al. 
2006, , 650, 18 van der Wel, A., Bell, E. F., van den Bosch, F. C., Gallazzi, A., & Rix, H.-W. 2009 , in press (arXiv:0903.4857) van der Wel, A., Holden, B. P., Zirm, A. W., Franx, M., Rettura, A., Illingworth, G. D., & Ford, H. C. 2008, 688, 48 van Dokkum, P. G., et al. 2004, , 611, 703 van Dokkum, P. G., et al. 2006, , 638, L59 van Dokkum, P. G., et al. 2008, , 677, L5 Wall, J. V., Pope, A., & Scott, D. 2008, , 383, 435 Wiklind, T., Dickinson, M., Ferguson, H. C., Giavalisco, M., Mobasher, B., Grogin, N. A., & Panagia, N. 2008, 676, 781 Williams, R. J., Quadri, R. F., Franx, M., van Dokkum, P. G., & Labbé, I. 2008, , 691, 1879 Wuyts, S., et al. 2007, , 655, 51 Zirm, A. W., et al. 2007, , 656, 66 [^1]: For  we take the sum of both lines, while for the ratio of  only  is used [^2]: The weather conditions during the first runs were significantly worse, and the effective exposure time is closer to 1-2 hr. [^3]: Note that we did not apply this method in [@kr08a]; instead, we arbitrarily increased the uncertainties by quadratically adding 10% of the average flux of the spectrum to account for systematic effects.
{ "pile_set_name": "ArXiv" }